
Random Early Detection Gateways for Congestion Avoidance
by Sally Floyd, Van Jacobson
Public comments
#1 posted on Feb 03 2008, 02:32 in collection CMU 15-744: Computer Networks -- Spring 08
This paper presents a congestion avoidance mechanism that can be deployed on routers without assuming global cooperation of end users. Because I have been wondering how congestion avoidance/control schemes can handle misbehaving users (who sometimes exploit other users' backoffs), I expected it to present a sophisticated mechanism for this. In section 10, the authors do offer their idea on the matter, along with a theoretical bound on the probability that the number of marked packets exceeds the expected value. However, the paper still doesn't convincingly show how well this would work in practice. What would the false-positive rate be in a real network environment?
#2 posted on Feb 03 2008, 03:40 in collection CMU 15-744: Computer Networks -- Spring 08
The basic assumption of this paper is that the gateway is in an appropriate position to detect incipient congestion early and to identify its sources. The paper shows that this gateway-based congestion control scheme works effectively alongside transport-layer congestion control schemes. Its straightforward algorithm, likely easy implementation, and support for gradual deployment seem advantageous for practical adoption. I am not sure whether RED-style congestion control schemes are widely used in the Internet today. If RED is not widely adopted, what would the discouraging factors in deployment be?
#3 posted on Feb 03 2008, 09:10 in collection CMU 15-744: Computer Networks -- Spring 08
This paper is good and quite comprehensive. In addition to clearly describing how the proposed congestion avoidance mechanism works, the authors address how to set several required parameters, albeit only as rules of thumb. There are also sections describing parameter sensitivity and the goals of the performance evaluation of the congestion avoidance mechanism.

However, regarding the authors' claim that their RED gateway is not biased against bursty traffic, I don't find the conclusion they draw from Fig. 15 entirely convincing. The figure shows quite a few outliers that don't follow the proportional relation between the percentage of dropped packets and throughput, i.e. the dashed line. In addition, regarding the ability to identify misbehaving users, the statistical results in Fig. 16 already rest on the assumption that "the probability that a packet from a particular connection is marked exactly equals that connection's fraction of the bandwidth through the gateway".
#4 posted on Feb 03 2008, 10:04 in collection CMU 15-744: Computer Networks -- Spring 08
Congestion control should have a mechanism for dealing with a mix of small and large packets. RED handles varying packet sizes by providing an option to measure the queue length in bytes instead of in packets; it then scales the probability that each packet is marked by the packet's size. However, this probability increases slowly as the packet count since the last marked packet increases. Does this mean that RED tends to drop many more packets in traffic with lots of small packets? Or could the probability instead increase slowly with the total bytes sent since the last marked packet? This minor change might improve performance for traffic with varied packet sizes.
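A minimal sketch of the byte-mode scaling discussed above. The thresholds `min_th`, `max_th`, and `max_p` follow the paper's parameters, but the numeric values and `MAX_PKT_SIZE` are illustrative, not the paper's recommendations:

```python
# Sketch of RED's initial marking probability p_b and the byte-mode
# scaling by packet size. All parameter values are illustrative only.

MAX_PKT_SIZE = 1500  # assumed maximum packet size in bytes (illustrative)

def initial_mark_prob(avg, min_th=5.0, max_th=15.0, max_p=0.02):
    """p_b rises linearly from 0 to max_p as avg goes from min_th to max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0  # beyond max_th every arriving packet is marked
    return max_p * (avg - min_th) / (max_th - min_th)

def byte_mode_prob(p_b, pkt_size):
    """In byte mode, p_b is scaled by packet size, so a 40-byte ACK is
    far less likely to be marked than a full-size data packet."""
    return p_b * pkt_size / MAX_PKT_SIZE
```

Under this scaling, a connection sending many small packets accrues proportionally fewer marks per packet, which bears directly on the byte-counting versus packet-counting question above.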
#5 posted on Feb 03 2008, 10:53 in collection CMU 15-744: Computer Networks -- Spring 08
The number of packets dropped by a RED gateway that belong to the same connection seems to be proportional to the bandwidth used by that connection. As mentioned before, this is a natural consequence of packets being dropped randomly. Although the results show that this does not work perfectly (figure 15 clearly shows that the number of dropped packets is sometimes larger than expected), RED seems to outperform the previously existing mechanisms.
This relation between dropped packets and bandwidth is also behind the claim that RED can be used to detect misbehaving users. If the number of dropped packets reflects the bandwidth used, the gateway can identify who has more dropped packets and, consequently, who is using more bandwidth. I think that claim is roughly accurate, although the authors should have presented more evidence that it really works.

This paper dates from 1993, and the other reading, about XCP, is from 2002. Since the XCP paper mentions RED gateways, I believe this mechanism has become fairly popular among gateways, or at least an academic reference point among congestion avoidance schemes. On the other hand, both papers seem to agree that RED's parameters must be carefully adjusted for it to work properly.
#6 posted on Feb 03 2008, 13:29 in collection CMU 15-744: Computer Networks -- Spring 08
An interesting exploration of what you can get out of random packet dropping if you do it before you absolutely need to. I like the focus on improving latency by maintaining a small queue. I don't yet entirely understand how the scheme handles bursty traffic fairly (the average queue size ramps up slowly, so the burst doesn't get marked?). I also wonder how this scheme performs with a strictly enforced maximum queue size: can you get away with less total queue memory than drop tail? (Or do you need the same amount to handle traffic spikes, with RED just giving you lower latency?)
#7 posted on Feb 03 2008, 14:53 in collection CMU 15-744: Computer Networks -- Spring 08
I thought an important, if perhaps understated, issue was RED's ability to be deployed gradually. This is a crucial requirement for protocols intended to 'update' the existing Internet infrastructure. Since the Internet is a decentralized system, any design that requires universal compliance (or that performs well only with significant adoption) cannot succeed.
#8 posted on Feb 03 2008, 15:29 in collection CMU 15-744: Computer Networks -- Spring 08
This paper puts a lot of emphasis on not penalizing bursty traffic. The authors present figures showing that a host with bursty traffic gets more throughput under RED than under Drop Tail or Random Drop gateways. But what effect does this have on fairness (e.g. if most connections are bursty and a few aren't, will those few be punished?) or on overall throughput? I understand that if one host keeps sending a lot of packets (so it is "bursty" all the time), the gateway can track that host, but what if multiple such hosts are clogging the gateway?
#9 posted on Feb 03 2008, 15:38 in collection CMU 15-744: Computer Networks -- Spring 08
The authors propose a technique that manages the queue length (through random marking of packets) to avoid congestion. I'm not entirely persuaded by the argument that this scheme provides a rough approximation of "fair" bandwidth allocation. As we discussed for earlier papers, managing fairness at packet granularity is difficult with variable-length packets. Suppose two connections send packets at the same rate, but one sends packets twice the size of the other's. Are both connections still subject to the same probability of having their packets dropped by RED?
#10 posted on Feb 03 2008, 15:56 in collection CMU 15-744: Computer Networks -- Spring 08
The article presents an algorithm for performing congestion detection and control at gateways using random marking/dropping of packets. One aspect of the design that I quite liked is the careful definition of p_b that avoids a geometric distribution of marked packets.
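For readers skimming the thread, the definition in question is p_a = p_b / (1 - count * p_b), where count is the number of packets since the last marked packet. A small simulation sketch (variable names mine) shows why this avoids the geometric distribution: the resulting inter-mark gaps come out roughly uniform instead of geometric with a long tail.

```python
import random

def packets_until_mark(p_b, rng):
    """Simulate packets until one is marked, using the paper's
    p_a = p_b / (1 - count * p_b), where count is the number of
    packets seen since the last marked packet."""
    count = 0
    while True:
        denom = 1.0 - count * p_b
        p_a = min(1.0, p_b / denom) if denom > 0 else 1.0
        if rng.random() < p_a:
            return count + 1  # gap length in packets
        count += 1

# With p_b = 0.1, gaps are uniform over 1..10 (mean 5.5); plain
# Bernoulli marking would give a geometric gap with mean 10 and a
# long tail of closely clustered marks.
rng = random.Random(42)
gaps = [packets_until_mark(0.1, rng) for _ in range(20000)]
```

The clustering of marks under the geometric distribution is exactly what can synchronize many connections at once, which is why the paper takes care to avoid it.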
#11 posted on Feb 03 2008, 15:59 in collection CMU 15-744: Computer Networks -- Spring 08
The basic idea behind this paper is that gateways are better suited to handle congestion control, because they have a better overview of the traffic situation in the network. The authors focus on minimizing the average gateway queue size and at the same time offer better support for bursty traffic (than TCP) and avoid global synchronization. The proposed mechanisms are quite simple and make sense. A nice property of RED is that it can precisely bound the gateway queue size. However, this is done in a very naïve fashion (i.e. beyond some point all packets are dropped).

I liked the implementation section of the paper and I believe that efficient implementation is one of RED’s strongest points. I also found the two-way traffic experiments that lead to ack-compression to be quite interesting.

As mentioned in the paper, deploying RED requires setting values for a number of parameters. It is not clear to me how often these parameters need to be readjusted. How major does a change in the network have to be before new parameters are required? As a side note, I noticed that the authors tended to overexplain concepts and in many cases repeated themselves; I believe the same information could be conveyed in much less space.
#12 posted on Feb 03 2008, 16:30 in collection CMU 15-744: Computer Networks -- Spring 08
First, I found it very interesting to see a randomized protocol that seems to be so successful (I believe this is the first randomized algorithm we have seen so far). I also found it interesting that the equation used to calculate average queue length is very similar to the fictitious play algorithm from game theory/learning theory which I’ve been using in my research. Basically that algorithm involves picking a strategy at time t using the following formula, where BR_(t-1) is the best response to the opponents’ strategies at time t-1: s_t = (1-1/t) s_(t-1) + 1/t (BR_(t-1)).

I also have one question about the average-queue calculation. The authors note that the weighting is exponential: the weight of a given queue-size sample decreases exponentially over time. My question is, why do we care at all about queue sizes from, say, two hours ago? Why not say we only care about the last T time steps, and let avg denote the average queue length over those steps? The biggest problem I see is updating avg, since we'd need to keep the (T+1)-th most recent queue-length measurement around so we could remove it from avg. I guess it is important to be able to compute avg efficiently, because it is computed so often.
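A sketch contrasting the paper's exponential weighted moving average with the windowed average proposed above (the `w_q` and `T` values here are illustrative, not the paper's recommendations). Both are O(1) work per packet, but the window needs O(T) memory precisely because it must evict that (T+1)-th oldest sample:

```python
from collections import deque

def ewma_update(avg, q, w_q=0.002):
    """The paper's filter: avg <- (1 - w_q) * avg + w_q * q.
    Needs only the single running value avg."""
    return (1.0 - w_q) * avg + w_q * q

class WindowedAvg:
    """Average of the last T queue samples. Must retain all T samples
    so the oldest can be subtracted out when it leaves the window."""
    def __init__(self, T):
        self.T = T
        self.samples = deque()
        self.total = 0.0

    def update(self, q):
        self.samples.append(q)
        self.total += q
        if len(self.samples) > self.T:
            self.total -= self.samples.popleft()  # evict oldest sample
        return self.total / len(self.samples)
```

The O(1)-state property of the EWMA is presumably one reason the paper prefers it, beyond the efficiency-of-computation point above.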
#13 posted on Feb 03 2008, 16:42 in collection CMU 15-744: Computer Networks -- Spring 08
One good thing about the paper is that the idea is simple and deploying it incrementally (as Aaron pointed out) looks easy. I believe RED is implemented but disabled in most routers today (correct me if I'm wrong), and this is probably a result of what Cody suspected: how RED breaks synchronization is, I think, not well understood. Besides that, setting the parameters might be another issue.

I concur with Michael that the goals and properties of RED are presented somewhat repetitively (though this does reinforce the concepts for the reader, which is good) and could have been shortened.
#14 posted on Feb 03 2008, 16:46 in collection CMU 15-744: Computer Networks -- Spring 08
I like this paper. Although it is quite long, the idea behind it is simple and neat: drop packets before things get really bad. The average queue size is used as a measure of how busy the link is; if the queue is quite full, sources will very likely experience packet loss soon. Instead of forcing every source to cut its cwnd in half at once, which hurts throughput a lot, RED may help some sources avoid this global synchronization. The RED gateway described in the paper stores no per-flow information, so the drop probability is the same function for every source. My question is, would it be possible to maintain partial information (say, the top 10 flows by traffic) and drop packets with lower probability for small flows?
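This is not something the paper proposes, but as a thought experiment the question above might look like the following sketch, where `TOP_K`, `SMALL_FLOW_FACTOR`, and all names are hypothetical:

```python
from collections import Counter

TOP_K = 10               # hypothetical: number of heavy flows tracked
SMALL_FLOW_FACTOR = 0.5  # hypothetical: drop-probability discount for small flows

class FlowAwareDropper:
    """Keeps partial per-flow state (byte counts) and discounts the
    drop probability for flows outside the top-K heavy hitters."""
    def __init__(self):
        self.bytes_sent = Counter()

    def drop_prob(self, flow_id, pkt_size, base_p):
        self.bytes_sent[flow_id] += pkt_size
        heavy = {f for f, _ in self.bytes_sent.most_common(TOP_K)}
        return base_p if flow_id in heavy else base_p * SMALL_FLOW_FACTOR
```

The obvious costs are the per-flow counters themselves (exactly the state RED was designed to avoid) and the incentive it creates for flows to stay just below the heavy-hitter cutoff.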
#15 posted on Feb 03 2008, 16:56 in collection CMU 15-744: Computer Networks -- Spring 08
The basic idea of RED, controlling a time-averaged queue size at the gateway, is simple, and the paper makes a convincing case that this active, probabilistic packet-dropping scheme is effective in minimizing delay. Although the algorithm can identify misbehaving users, I think greedy users could still get more than their fair share.
#16 posted on Feb 03 2008, 16:56 in collection CMU 15-744: Computer Networks -- Spring 08
I think this paper's proposal is a good one. Unlike the XCP protocol in the other paper, the RED algorithm can be beneficially deployed in just part of the network; it does not require changes to the rest of the network or to end hosts.
#17 posted on Feb 03 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
One other comment about probability: when sending large amounts of data, events that are improbable for any given packet may not be improbable over the life of the connection. Randomly dropping a run of packets from a real-time flow (e.g., streaming video) is possible under this scheme, and that behavior could be detrimental to users of such applications.
#18 posted on Feb 03 2008, 17:45 in collection CMU 15-744: Computer Networks -- Spring 08
The paper proposes a method of congestion avoidance implemented at gateways. The idea is fairly simple and elegant. In my opinion, the two most important points in its favor are that it handles bursty traffic better than its predecessors and that it can be deployed incrementally. As the authors themselves point out, however, it isn't clear how a network will behave when some of the gateways deploy RED and the others use a drop-tail queue.
#19 posted on Feb 03 2008, 23:10 in collection CMU 15-744: Computer Networks -- Spring 08
After reading both papers, I see a basic contrast: TCP with RED uses packet loss as an implicit congestion signal, while XCP uses an explicit congestion signal. Is it possible that XCP sources are more vulnerable than TCP sources to attacks such as the low-rate TCP attack?