
Congestion Avoidance and Control
by Van Jacobson, Michael J. Karels
Public comments
#1 posted on Jan 25 2008, 18:36 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper, Jacobson and Karels propose the TCP congestion avoidance and control algorithms that later became an essential part of the TCP protocol.

In the late '80s, as computer networks were experiencing explosive growth, gateways began to suffer severe congestion from buffer overflow. TCP, which at that time had only flow control and no congestion control scheme, aggressively sent every packet allowed by the sliding window. Consequently, the network bandwidth was often saturated, and the performance of TCP connections suffered. Jacobson and Karels noticed this and proposed an extra window (the congestion window) to make TCP more 'conservative' with respect to the number of packets in transit. Namely, they came up with a slow-start algorithm to gradually increase the amount of data in transit, and a congestion avoidance algorithm for adapting to the path (known as Additive-Increase/Multiplicative-Decrease, or AIMD). If the current congestion window is smaller than some threshold, TCP operates in the slow-start regime; otherwise it operates in congestion avoidance.

They gave iterative update rules for the size of the congestion window which turned out to be very simple and elegant, and could be implemented with very few modifications to the existing TCP code.
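Roughly, the two regimes and the response to loss can be sketched as follows (a minimal paraphrase of the update rules, in units of segments; this is not the actual BSD code, and the variable names are mine):

```python
def on_ack(cwnd, ssthresh):
    """Window growth on each ACK, per the paper's two regimes."""
    if cwnd < ssthresh:
        cwnd += 1.0          # slow start: +1 segment per ACK (doubles each RTT)
    else:
        cwnd += 1.0 / cwnd   # congestion avoidance: about +1 segment per RTT
    return cwnd

def on_timeout(cwnd, ssthresh):
    """Packet loss (timeout) is taken as the congestion signal."""
    ssthresh = max(cwnd / 2.0, 2.0)  # multiplicative decrease: remember half
    return 1.0, ssthresh             # restart from one segment in slow start
```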

This paper is one of the most important papers in the development of TCP. It is easy to read and understand. I really enjoyed the simple but insightful design and the innovative ideas in it.
#2 posted on Jan 27 2008, 13:14 in collection CMU 15-744: Computer Networks -- Spring 08
A well-written and enjoyable romp with lots of interesting footnotes ("silly window avoidance"). The real-world figures go a long way toward convincing me that these algorithms are really effective. Using packet drops as the universal congestion indicator is clearly better than the other paper's recommendation -- if only because it's simply automatic.

Side note: there was a 24c3 talk on making a port scanner work better by adopting TCP congestion control methods (basically by adding "probe" packets into the scan stream so you can detect drops):
http://berlin.ccc.de/~24c3_torrents/24c3-2131-en-port_scanning_improved.mkv.torrent
#3 posted on Jan 27 2008, 13:16 in collection CMU 15-744: Computer Networks -- Spring 08
It was enjoyable to read what people thought about the network congestion problem and how they arrived at simple and practical solutions, which have since become part of today's TCP standards. Still, the solution to the second problem (backoff) rests on the assumption that the computer network is a linear system (with some "approximation"). Just out of curiosity, is it really a linear system? And does recent research also model computer networks as linear systems? I'm not very familiar with this area, but I think today's computer networks are very complex ecosystems in which various entities are interconnected and operate autonomously using various network protocols. To me, it seems like a complex system where a tiny local change might have exponential effects on a large portion of the network.
#4 posted on Jan 27 2008, 16:09 in collection CMU 15-744: Computer Networks -- Spring 08
I enjoyed this paper since it captures a combination of real-world experiences and also discusses actual policies (backed up by analytical arguments) used to cope with network failures due to congestion. The slow-start algorithm is a very simple adjustment that helps to jump-start the "self-clocked" network. Good estimation of round-trip timing is critical to reducing bandwidth waste, and the authors propose a low-cost and effective low-pass filter implementation. The authors also use the congestion avoidance algorithm discussed in the Jain paper and suggest actual constants for the parameters.
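For concreteness, the mean/deviation estimator can be paraphrased like this (a floating-point sketch of the idea; the paper's appendix gives an integer-arithmetic version, and the gains and timeout multiplier below are the commonly cited values, so treat them as assumptions):

```python
def update_rtt(srtt, rttvar, sample, g=0.125, h=0.25):
    """One step of the EWMA 'low-pass filter' on RTT and its mean deviation."""
    err = sample - srtt
    srtt += g * err                    # smoothed round-trip time estimate
    rttvar += h * (abs(err) - rttvar)  # smoothed mean deviation (cheaper than variance)
    rto = srtt + 4 * rttvar            # retransmit timeout; later TCPs settled on 4x
    return srtt, rttvar, rto
```

Using the mean deviation instead of the true variance avoids multiplies and square roots while still tracking variability, which is exactly the low-cost angle the authors emphasize.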

Strengths of this paper:
* Empirical data/experiments illustrating network conversations with no congestion avoidance.
* Coping with malicious users who do not participate in the fairness algorithm
* The authors are well aware of the variability/realism of real resources/conditions, which I found somewhat lacking in the Jain paper

One question for me:
* Why do the authors claim that exponential backoff is the "only" reliable backoff policy in all situations? They seem to answer this in a "squishy" way. If you had more information (beyond the fact that you're retransmitting a lot) about what's actually happening in the system, are there other serialization policies between contending participants that might also work, but with shorter intervals between retransmits than exponential? What about hybrid policies that could adapt to different conditions? (A toy sketch of the exponential policy follows below.)
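For reference, the exponential policy the paper argues for looks like this (a toy illustration, not the BSD timer code; the one-second base and 64-second cap are my assumptions):

```python
def next_rto(rto, cap=64.0):
    """Binary exponential backoff: double the retransmit timeout per failure."""
    return min(2.0 * rto, cap)

rto = 1.0
for attempt in range(6):
    print(attempt, rto)   # 1, 2, 4, 8, 16, 32 seconds
    rto = next_rto(rto)
```

Its appeal is that it needs no information beyond the timeouts themselves; any gentler policy would seem to require some extra signal about the actual contention level, which is what the question above is getting at.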
#5 posted on Jan 27 2008, 16:14 in collection CMU 15-744: Computer Networks -- Spring 08
This paper explains schemes for congestion avoidance and control at the transport endpoints of networks. With little background in networks, I at first doubted that congestion could be avoided and controlled at the endpoints, since it seemed inherently a matter for routers. But this well-written paper gave me a better understanding of how to deal with the congestion problem at the TCP level. One question I came away with is how different current implementations of TCP are from this initial suggestion.
#6 posted on Jan 27 2008, 16:14 in collection CMU 15-744: Computer Networks -- Spring 08
In my opinion, the contribution of this paper is in showing how congestion control can be easily implemented and how it affects the behavior of real systems.

The authors suggest a choice of 1 and 1/2 for the additive increase and multiplicative decrease, respectively. In the appendix, they give a few arguments for why this might be a good choice, but as they admit, they are "handwaving." I was wondering whether others had studied the effects of this choice, and realized that the optional readings BCCA01 and TFRC00 do exactly this (the links to the other two papers appear to be broken).

Edit: The links work now.
#7 posted on Jan 27 2008, 16:22 in collection CMU 15-744: Computer Networks -- Spring 08
The paper gives a very clear picture of what we are trying to achieve in congestion avoidance and control, and uses the simple concept of packet conservation to great effect. The contribution of the paper is probably the way it integrates small, important, desirable units into TCP. The fact that TCP hasn't changed a great deal (if I am not mistaken) from what is in this paper says a lot about its impact. This paper takes a more practical (implicit) approach to congestion control and avoidance than the other paper. Having said that, the two papers together give the reader an understanding of the two ends of the spectrum in achieving the same goal.

I do agree with Jim that the footnotes in this paper (more than any other paper I have read so far) made a really interesting read.
#8 posted on Jan 27 2008, 16:24 in collection CMU 15-744: Computer Networks -- Spring 08
Like others, I also found this paper well-written and its approach elegant.

I was happy to see, in footnote 13, that work has also been done on congestion avoidance for UDP-style traffic.

Like Min Gyung, I'm not clear about the whole 'linear system' assumption, and how well that has held up over time.

BTW, if ARPANET began in 1970, and the first Internet collapse occurred 16 years later in 1986, that means the next Internet crash should've taken place in 2002. Hmm, 2002. Wasn't that the worst year of the Internet stock market bust?!?!
#9 posted on Jan 27 2008, 16:43 in collection CMU 15-744: Computer Networks -- Spring 08
I also really enjoyed the clarity of this paper and thought the figures were very helpful. As a researcher in game theory mostly from an AI perspective, I find it especially interesting to see that the concept of an “equilibrium” is fundamental in this seemingly unrelated area. I look forward to reading about more interactions between game theory and networks and hopefully doing my class project on a related topic.
#10 posted on Jan 27 2008, 16:46 in collection CMU 15-744: Computer Networks -- Spring 08
One of the strongest points of this paper is the abundance of graphs from actual empirical / experimental data. I also liked the way that this paper “blends” with the Jain paper. It is nice to see some of the ideas proposed in the Jain paper are actually implemented and tested in this paper. In a sense, while the Jain paper seemed to be somewhat “theory driven”, this paper seemed to be more “experiment driven”. It was satisfying to see that in many cases both papers converged to the same ideas (additive increase / multiplicative decrease policy).

The fact that the authors begin by presenting data from real measurements and then present the underlying theory makes the ideas suggested in this paper even more convincing. All of the presented ideas (e.g. slow-start, congestion window) are explained very well and make a lot of sense. I liked the simple gateway congestion control mechanism that they proposed in section 4. It rewards nodes that implement congestion avoidance, continues to work fine for the rest of the nodes and also deals with malicious nodes.
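That section-4 mechanism can be caricatured in a few lines (this is my reading of the suggestion, not code from the paper; the packet record and names are hypothetical): when the queue overflows, drop from the connection occupying the most queue slots, since the heaviest user is the likeliest cause of the congestion.

```python
from collections import Counter, namedtuple

Packet = namedtuple("Packet", ["conn", "payload"])  # hypothetical packet record

def enqueue(queue, pkt, capacity):
    """On overflow, penalize the connection holding the most queue slots."""
    if len(queue) >= capacity:
        heaviest = Counter(p.conn for p in queue).most_common(1)[0][0]
        queue.remove(next(p for p in queue if p.conn == heaviest))
    queue.append(pkt)
```

Flows that back off when they see drops shrink their share and so stop being dropped, which is what rewards congestion-avoiding senders and penalizes greedy or malicious ones.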
#11 posted on Jan 27 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
This is a classic paper, with 3,541 citations according to Google Scholar.

(to be edited)
#12 posted on Jan 27 2008, 17:01 in collection CMU 15-744: Computer Networks -- Spring 08
The researchers who wrote this paper figured out what the key problems were with the existing implementation of TCP flow control and devised solid ways of fixing them. The details, such as how to compute the variance of round-trip time or the size of the congestion window, don't get in the way of understanding the high-level concepts. I think this was a good, interesting paper.
#13 posted on Jan 27 2008, 17:24 in collection CMU 15-744: Computer Networks -- Spring 08
A comment on the argument on page 6: the expected round-trip time, as well as its standard deviation, scales with 1/(1-\rho), so the variation in the round-trip time is not negligible. From queueing theory, as \rho approaches 1, the standard deviation approaches the expected round-trip time.
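To spell out the queueing step (assuming the standard M/M/1 model, which is my assumption, not the commenter's): the sojourn time T is exponentially distributed with rate \mu - \lambda, and an exponential distribution has standard deviation equal to its mean, so

```latex
E[T] = \frac{1}{\mu - \lambda} = \frac{1/\mu}{1 - \rho},
\qquad
\sigma_T = E[T] = \frac{1/\mu}{1 - \rho}.
```

Both blow up like 1/(1-\rho) as the load \rho = \lambda/\mu approaches 1, which is the commenter's point.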
#14 posted on Jan 29 2009, 18:00 in collection 15744 Spring 2009
The authors present a set of algorithms (or modifications to TCP) to deal with a real-world problem: network throughput collapse resulting from congestion.

The first thing that struck me when reading this paper was the fact that it was in response to a problem that had manifested itself in the real-world. The 'sudden factor-of-thousand drop in bandwidth' was an issue that had to be addressed, and the solutions proposed in the paper were all in response to this urgent problem.

In designing the solutions, the authors appear to be guided by the need to change the current protocol as little as possible. Rather than adding a bit to messages (and thus requiring re-implementation on all existing gateways), they preferred to use packet drops as indications of congestion.

I felt that this use of packet drops as a free indication of congestion was ingenious. Networks are able to drop packets without severely compromising reliability or throughput, while simultaneously informing endnodes of congestion. Endnodes are also able to probe the network for congestion simply by sending more packets. Thus, the scheme allows the network to be almost fully utilized, almost all the time.

The one thing I would have liked to see more of in this paper is how the authors discovered the root cause of the problem outlined in the opening paragraph. While it was mentioned that 4.3BSD TCP was misbehaving and that it could be `tuned to work better under abysmal network conditions', I did not find any insight into how this conclusion was reached.
#15 posted on Sep 22 2009, 20:04
The congestion avoidance algorithm used in this paper is exactly the algorithm described in "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks", but this paper addresses more practical concerns about implementing it. For example, to keep the algorithm "self-clocked", it increases cwnd by 1/cwnd on each ACK. This raises an interesting fairness question: is it possible to steal bandwidth by simply not following the congestion avoidance algorithm in the TCP protocol?
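To see why the per-ACK increment of 1/cwnd amounts to additive increase, note that about cwnd ACKs arrive per round trip (a quick numerical illustration, in units of segments):

```python
cwnd = 8.0
for _ in range(8):        # roughly one RTT's worth of ACKs at cwnd = 8
    cwnd += 1.0 / cwnd
print(cwnd)               # ~8.95, i.e. about +1 segment per round trip
```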