Networks and computer systems are becoming increasingly attractive targets to large-scale programmed
attacks such as worms and Distributed Denial of Service attacks (DDoS), which can compromise a vast
number of vulnerable targets in a few minutes. Critical end-user applications vulnerable to such attacks
include e-commerce, e-medicine, command-and-control applications, video surveillance and tracking, and
many other applications. While there is a growing body of research techniques, prototypes, and commercial
products that purport to protect these applications and the network infrastructure on which they rely, there
is little existing scientific methodology by which to objectively evaluate the merits of such claims. Moreover,
thorough testing of a defense system for worms or for attacks on the infrastructure cannot be conducted
safely on a live network without affecting its operation.
To make rapid advancements in defending against these and future attacks, the state of the art in the
evaluation of network security mechanisms must be improved. This will require the emergence of large-scale
security testbeds coupled with new standards for testing and benchmarking that can make these testbeds
truly useful. Current shortcomings and impediments to evaluating network security mechanisms include lack
of scientific rigor; lack of relevant and representative network data; inadequate models of defense mechanisms;
and inadequate models of both the network and the transmitted data (benign and attack traffic). The latter
is challenging because of the complexity of interactions among traffic, topology, and protocols.
The researchers propose to develop thorough, realistic, and scientifically rigorous testing frameworks and methodologies for particular classes of network attacks and defense mechanisms. These testing frameworks will be adapted for different kinds of testbeds, including simulators such as NS, emulation facilities such as Emulab, and both small and large hardware testbeds. They will include attack scenarios; attack simulators;
generators for topology and background traffic; data sets derived from live traffic; and tools to monitor and
summarize test results. These frameworks will allow researchers to experiment with a variety of parameters representing the network environment, attack behaviors, and the configuration of the mechanisms under test.
In addition to developing testing frameworks, the researchers propose to validate them by conducting tests on representative network defense mechanisms. Defense mechanisms of interest include network-based Intrusion Detection Systems (IDS); automated attack traceback mechanisms; traffic rate-limiting to control DDoS attacks; and mechanisms to detect large-scale worm attacks. Conducting these tests will require incorporating real defense mechanisms into a testbed, and applying and evaluating the frameworks and methodologies. Conducting these tests will also help ensure that the testbed framework allows other researchers to easily integrate and test network defense mechanisms of their own.
The research team includes experts in security, networking, data analysis, software engineering, and operating systems who are committed to developing these challenging integrated testing frameworks.
Intellectual Merit: The development of testing methodologies for network defense mechanisms requires
significant advances in our understanding of network attacks and the interactions between attacks and their
environment, including deployed defense technology, traffic, topology, protocols, and applications. It will
also require advances in our understanding of metrics for evaluating defenses.
Education: The research into testing methodologies for network defense mechanisms will involve
graduate students and provide new curriculum material for universities.
Broader Impact: By providing new testing frameworks, the work will accelerate improvements in
network defense mechanisms and facilitate their evaluation and deployment. The researchers will hold yearly workshops to disseminate results and obtain community feedback.
Effective start/end date: 9/1/03 → 8/31/08
National Science Foundation: $2,533,447.00