
A DoS-limiting Network Architecture
by Xiaowei Yang, David Wetherall, Thomas Anderson
Public comments
#1 posted on Apr 17 2008, 02:57 in collection CMU 15-744: Computer Networks -- Spring 08
The aim of this paper is to extend the current Internet architecture to make it more resilient to Denial-of-Service (DoS) attacks.

The key idea in TVA, the architecture described in this paper, is to ensure that a destination doesn't receive packets it doesn't want.
A sender first has to obtain a certificate -- called a "capability" in this paper -- from the receiver, indicating that the receiver is willing to accept its packets, before it can start sending data.
Naturally, this raises the question of how to obtain the capability in the first place. This bootstrapping problem is solved by allowing the first packet (the equivalent of the TCP SYN packet)
to be sent without a capability.
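A rough sketch of this exchange is below; the message classes, the keyed-hash token, and the specific byte/time limits are illustrative assumptions on my part, not the paper's exact wire format.

# Sketch of a capability handshake: a capability-less request, then a grant
# that authorizes a bounded number of bytes for a bounded time.
import hashlib
import hmac
import time
from dataclasses import dataclass
from typing import Optional

RECEIVER_SECRET = b"receiver-side secret"   # stands in for the real key material


@dataclass
class Request:
    """The capability-less first packet (the TCP-SYN equivalent)."""
    src: str
    dst: str


@dataclass
class Capability:
    """What the receiver returns if it is willing to accept traffic."""
    token: bytes        # opaque value the sender attaches to later packets
    valid_bytes: int    # how many bytes the sender may transmit
    expires_at: float   # after this, the sender must request a new capability


def handle_request(req: Request) -> Optional[Capability]:
    """Receiver-side policy: grant a capability or silently ignore the request."""
    if req.src.startswith("10."):          # toy stand-in for a real policy decision
        return None                        # unwanted sender gets nothing back
    expires_at = time.time() + 10.0
    token = hmac.new(RECEIVER_SECRET,
                     f"{req.src}|{req.dst}|{expires_at}".encode(),
                     hashlib.sha256).digest()[:8]
    return Capability(token=token, valid_bytes=32_768, expires_at=expires_at)


cap = handle_request(Request(src="192.0.2.7", dst="203.0.113.1"))
if cap is not None:
    print("data packets may now carry token", cap.token.hex(), "until", cap.expires_at)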

Design
In what follows, I explain some important design decisions in TVA, based on the attack scenarios they were meant to handle.

Attack 1:
The sender attacks the destination using just SYN packets.
Solution:
Prevented by rate-limiting the number of such packets that are forwarded at all points in the network (including the routers); under heavy SYN load, the network as a whole drops request
packets, thereby preventing the destination from getting flooded.
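A token-bucket sketch of that per-link rate limit is below; the share of link capacity reserved for request packets and the link speed are my assumptions, chosen only to make the example concrete.

# Token-bucket rate limiter for capability-less request packets at one link.
import time


class RequestRateLimiter:
    def __init__(self, link_bytes_per_sec: float, request_share: float = 0.05):
        self.rate = link_bytes_per_sec * request_share   # request budget, bytes/sec
        self.tokens = self.rate                          # start with one second of budget
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        """Forward the request packet only if it fits in the reserved budget."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False                                     # drop: request channel saturated


limiter = RequestRateLimiter(link_bytes_per_sec=1_250_000)   # roughly a 10 Mb/s link
print(limiter.allow(64))   # a 64-byte request packet fits within the reserved share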

Attack 2:
The sender prevents legitimate SYN packets from reaching the destination by flooding routers along the path.
Solution:
Prevented by the routers along the path, which identify the source of SYN packets and apply per-source fair queueing.
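Roughly, each router keeps one queue per identified source (path identifier) and serves them round-robin. A toy sketch, not TVA's actual scheduler:

# Per-source fair queueing of request packets, keyed by path identifier.
from collections import defaultdict, deque
from typing import Optional


class FairRequestQueue:
    def __init__(self):
        self.queues = defaultdict(deque)   # path_id -> FIFO of request packets
        self.order = deque()               # round-robin order of active path_ids

    def enqueue(self, path_id: str, packet: bytes):
        if path_id not in self.queues or not self.queues[path_id]:
            self.order.append(path_id)     # source becomes active again
        self.queues[path_id].append(packet)

    def dequeue(self) -> Optional[bytes]:
        while self.order:
            path_id = self.order.popleft()
            q = self.queues[path_id]
            pkt = q.popleft()
            if q:                          # source still has packets: rotate it back
                self.order.append(path_id)
            return pkt
        return None                        # no pending requests


frq = FairRequestQueue()
frq.enqueue("pathA", b"req1"); frq.enqueue("pathA", b"req2"); frq.enqueue("pathB", b"req3")
print(frq.dequeue(), frq.dequeue())        # b'req1' b'req3' -- pathB is not starved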

Attack 3:
The sender spoofs source addresses.
Solution:
Each router along the way adds its own unique hash to the packet before forwarding it. Sources are distinguished based on
the concatenation of these hash values (which identify the path and therefore, to some accuracy, the source); a sketch of this tagging follows below.
Question:
This doesn't prevent a distributed DOS attack, does it?
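A toy sketch of the per-router tagging; the tag size, the HMAC construction, and the inputs hashed are my assumptions, not the paper's exact scheme.

# Each router appends a short keyed hash of its own secret and the incoming
# interface; the concatenation of tags approximately identifies the path,
# and thus the source, even when the claimed source address is spoofed.
import hashlib
import hmac
from typing import List


def router_tag(router_secret: bytes, in_interface: int, src_ip: str) -> bytes:
    msg = f"{in_interface}|{src_ip}".encode()
    return hmac.new(router_secret, msg, hashlib.sha256).digest()[:2]  # 2-byte tag


def stamp_path(packet_tags: List[bytes], router_secret: bytes,
               in_interface: int, src_ip: str) -> List[bytes]:
    """Called by each router on a request packet as it is forwarded."""
    return packet_tags + [router_tag(router_secret, in_interface, src_ip)]


# Two routers on the path stamp the same request; an attacker cannot forge
# the tags added by routers upstream of itself.
tags: List[bytes] = []
tags = stamp_path(tags, b"secret-of-R1", in_interface=3, src_ip="198.51.100.9")
tags = stamp_path(tags, b"secret-of-R2", in_interface=1, src_ip="198.51.100.9")
print(b"".join(tags).hex())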

Attack 4:
Just getting the routers to implement all this :)
Solution:
Caching! Routers cache a nonce->capability mapping when they first see a capability. Future packets just send the nonce (a short random value) instead of the full capability. Capabilities also have
a TTL, after which the sender has to reacquire them.
Question:
Why can't an attacker perform a brute force on the nonce of an already established connection?
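A rough sketch of that caching mechanism; the 64-bit nonce, the TTL, and the eviction policy here are assumptions, not the paper's exact values.

# Router-side capability cache: the first packet carrying a full capability
# installs a (nonce -> capability, expiry) entry; later packets carry only
# the short nonce.
import os
import time
from typing import Optional


class CapabilityCache:
    def __init__(self, ttl_seconds: float = 10.0, max_entries: int = 100_000):
        self.ttl = ttl_seconds
        self.max_entries = max_entries
        self.entries = {}                          # nonce -> (capability, expires_at)

    def install(self, capability: bytes) -> bytes:
        """Seen a full capability: pick a fresh nonce for the sender to reuse."""
        if len(self.entries) >= self.max_entries:
            self._evict_expired()
        nonce = os.urandom(8)                      # 64-bit random handle
        self.entries[nonce] = (capability, time.monotonic() + self.ttl)
        return nonce

    def lookup(self, nonce: bytes) -> Optional[bytes]:
        entry = self.entries.get(nonce)
        if entry is None:
            return None                            # unknown nonce: treat as legacy/low priority
        capability, expires_at = entry
        if time.monotonic() > expires_at:
            del self.entries[nonce]                # expired: sender must re-request
            return None
        return capability

    def _evict_expired(self):
        now = time.monotonic()
        self.entries = {n: e for n, e in self.entries.items() if e[1] > now}

With a 64-bit random nonce and a short TTL, blindly guessing the nonce of an established flow would take on the order of 2^63 attempts before the entry expires, which is presumably the intuition here; the paper's actual nonce length may differ.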

Evaluation:
The authors perform two separate sets of experiments: the first, using ns, shows the behavior of a network using this architecture under different attacks and compares it to other proposed architectures;
the second measures the performance (maximum rate of sending/receiving packets) of an actual machine running the new architecture and compares it with that of the standard TCP/IP stack.
Comments:
While the first set of experiments covered multiple types of attacks, I felt the authors could have added a few more results with different network topologies, just to see the effects of multiple routers on the path, etc.
This wouldn't have taken too much time, so I was somewhat disappointed. The second set of experiments was admittedly simple, since the aim was only to give a proof of concept. However, if I understand Table 1 correctly,
it does not show the absolute processing time for normal IP. Reporting absolute processing times for the new architecture without the corresponding numbers for the existing stack seemed like an incomplete evaluation to me.

Other comments:
I thought the paper did a thorough job of handling backward compatibility. This is a crucial requirement for any novel architecture to be deployed and I thought TVA did a good job at it.
#2 posted on Apr 17 2008, 14:03 in collection CMU 15-744: Computer Networks -- Spring 08
Perhaps I don't quite understand the entire architecture, but it doesn't seem to prevent distributed DoS attacks along multiple paths. They introduce a very slight version of this in simulation using a colluder, but there is still only a single bottleneck link... Or is it that if each attacker looks like a legitimate user it can't do too much harm, and that there will always be a single bottleneck to the receiver...
#3 posted on Apr 17 2008, 14:04 in collection CMU 15-744: Computer Networks -- Spring 08
The article discusses TVA, a new architecture that can be incrementally deployed over the Internet to prevent DoS attacks. The article presents a detailed design, the analysis of many potential types of attack, and simulation results. And, notably, the authors discuss extensively the motivations and trade-offs involved in their decisions (see, for example, the discussion surrounding their choice of design for bootstrapping in Section 3.2).
#4 posted on Apr 17 2008, 14:48 in collection CMU 15-744: Computer Networks -- Spring 08
TVA seems to work better than other architectures and effectively limits the problems caused by DoS attacks, even controlling the impact of legacy traffic.
I believe that one could use the fine-grained capabilities to let the receiver control the amount of bandwidth allocated to each sender, and so avoid a distributed attack.
#5 posted on Apr 17 2008, 14:51 in collection CMU 15-744: Computer Networks -- Spring 08
This paper proposes the use of "capability" tokens as a means of preemptively cutting off attack traffic. By requiring such a token to send normal traffic, and by rate-limiting requests for these tokens, the network can greatly reduce the rate at which malicious hosts can flood the victim.

Obviously, this system is better than what exists now. And it seems to me that it will work well for medium-to-large victims. However, it seems that a sufficiently large botnet would still be able to bring down a small system. If the botnet malware is advanced enough, it can send traffic that is indistinguishable from legitimate traffic (albeit at the same rate as legitimate traffic). If the number of botnet machines is a couple of times larger than the number of hosts that normally use the victim's system, then the botnet would still be able to flood the victim. However, it seems that this kind of attack cannot be defended against by any means, so I don't think it is really a deficiency of TVA.
#6 posted on Apr 17 2008, 15:22 in collection CMU 15-744: Computer Networks -- Spring 08
This kind of approach seems more practical than IP traceback schemes that try to uncover the origin of flooding attacks, even though it cannot remove the source of the problem right away. However, a router failure could seriously affect flows with capabilities if they are put in the low-priority queue after the router restarts; it would be very difficult to send a "demotion event" while a large-scale DDoS attack is going on along the route.
#7 posted on Apr 17 2008, 16:45 in collection CMU 15-744: Computer Networks -- Spring 08
Backward compatibility and incremental deployability are impressive to me. However, I am a bit curious how widely this system needs to be deployed before it becomes useful.

From my experience, a more complex system tends to have more bugs / security holes. So, a real implementation might become an issue even though this architecture looks superior to me.
#8 posted on Apr 17 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
Instead of using traceback, the paper uses an end-to-end token between a sender and a receiver to verify the legitimacy of the packets sent. Because the receiver decides whether or not to accept a packet based on this end-to-end token, I wonder why this method couldn't also work against DoS attacks along multiple paths?
#9 posted on Oct 23 2008, 23:51 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
TVA is an architecture for controlling access to a server. It reminds me of the way that Kerberos tickets and ticket-granting tickets work. To access a resource, a request packet is sent, and the server responds with a capability list. This is attached to further communication from the client to the server, and it is updated periodically using your old ticket (thus the Kerberos similarity).

For tracking hosts as accurately as possible, requests carry the path to the client. The requests are signed by the routers on the path to the server using rotating keys. The cryptography is lightweight and doesn't add much overhead. This also gets around the issue of packets that grow in size and get fragmented on the path to the server: request packets start with a zero-length payload.
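As a rough illustration of the rotating-key signing described above (the key period, the hash inputs, and the two-key validity window are my assumptions, not the paper's parameters):

# Router stamps derived from a secret that rotates periodically, so that a
# captured stamp goes stale after the validity window.
import hashlib
import hmac
import time
from typing import Optional

MASTER_SECRET = b"router master secret"
KEY_PERIOD = 128.0                                  # seconds between key rotations


def current_key(now: Optional[float] = None) -> bytes:
    now = time.time() if now is None else now
    epoch = int(now // KEY_PERIOD)                  # which rotation interval we are in
    return hmac.new(MASTER_SECRET, str(epoch).encode(), hashlib.sha256).digest()


def pre_capability(src_ip: str, dst_ip: str, now: Optional[float] = None) -> bytes:
    key = current_key(now)
    return hmac.new(key, f"{src_ip}->{dst_ip}".encode(), hashlib.sha256).digest()[:8]


def verify(tag: bytes, src_ip: str, dst_ip: str, now: Optional[float] = None) -> bool:
    """Accept stamps minted with the current or the previous key, so a stamp
    stays valid across one rotation boundary."""
    now = time.time() if now is None else now
    for t in (now, now - KEY_PERIOD):
        if hmac.compare_digest(tag, pre_capability(src_ip, dst_ip, t)):
            return True
    return False


tag = pre_capability("198.51.100.9", "203.0.113.1")
print(verify(tag, "198.51.100.9", "203.0.113.1"))   # True within the validity window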

It seems to violate the end-to-end principle. The routers take an active role in modifying special request packets in a new way. It seems to be a small price to pay, because this architecture should be very effective in practice.

One type of attack that I can see is that you could do a DoS on your peers. It seems like it wouldn't be hard to do a DoS attack on gmail.com to prevent people in your network from checking their email.