
Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks
by Dah-Ming Chiu, Raj Jain
Public comments
#1 posted on Jan 25 2008, 16:43 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper, Chiu and Jain analyze a set of simple distributed congestion control mechanisms that rely on binary (overloaded/underloaded) feedback. They introduce four criteria - efficiency, fairness, distributedness, and convergence - and consider linear and nonlinear load increase/decrease schemes to be used at each client. Using these criteria, they conclude that, among linear functions, additive increase and multiplicative decrease give optimal convergence to the target total load and to fairness. They give only a partial analysis of nonlinear functions, but conclude that linear control functions are much simpler and thus more practical.
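To make the scheme concrete, here is a minimal sketch (mine, not the paper's; the names Xgoal, a, and b are illustrative) of the linear control the authors analyze: two users adjust their loads in response to a shared binary overload signal, increasing additively when underloaded and decreasing multiplicatively when overloaded.

    def aimd_step(x, overloaded, a=1.0, b=0.5):
        """Additive increase when underloaded, multiplicative decrease when overloaded."""
        return [xi * b for xi in x] if overloaded else [xi + a for xi in x]

    Xgoal = 100.0       # desired total load (the "knee" of the throughput curve)
    x = [5.0, 75.0]     # start far from the fair allocation
    for t in range(500):
        x = aimd_step(x, sum(x) > Xgoal)
    print(x)            # the two loads end up nearly equal; their total saw-tooths
                        # up to Xgoal, drops by half, and climbs again

Every multiplicative decrease shrinks the difference between the two loads by the factor b, while additive increases leave the difference unchanged, which is why the trajectory converges to the fairness line in the paper's vector diagrams.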

Overall, the paper introduces a fairly complete set of criteria with reasonable definitions. The control mechanisms and analysis considered are simple but seem practical, and the insights given by the paper must have had a significant impact on the community.


Questions:

- As the authors mention, the linear control functions are probably the simplest and thus easy to implement in practice. They also suggest that non-linear control functions provide more flexibility but add complexity, and thus are not as practical. What are some other possible congestion avoidance schemes?

- The congestion avoidance framework here is distributed, so each client can choose whether or not to follow the suggested increase/decrease algorithm. Is there enough incentive for each individual to follow the suggested load increase/decrease? For example, given that all other clients will decrease their load after an overload feedback, one may not want to decrease one's own load. In other words, how "game-able" is this system? What if more information were given in the feedback, such as how underloaded/overloaded the network is, or how many clients there are? A strictly-enforced algorithm may be able to use such information to increase efficiency, but can the same be done in a distributed framework?
#2 posted on Jan 27 2008, 01:47 in collection CMU 15-744: Computer Networks -- Spring 08
Actually, the first author of this paper, Dah-Ming Chiu (http://personal.ie.cuhk.edu.hk/~dmchiu/), was my former advisor. I sent him an email asking what comments he has on this paper now. I think it is really interesting to post his reply here (with his permission).
"
Thinking back, in the mid-1980s, there was no standard network (proprietary networks, OSI networks and TCP/IP were all contending); no agreed requirements on how to manage network congestion (delay, throughput, hop-by-hop, or end-to-end...); and there were no networking courses in universities. We were studying congestion control very much from a practical engineering perspective.

Thanks to my PhD training, having been exposed to systems theory and a little bit of decision theory, I came up with this highly abstracted description of the problem and decided to write a paper about it. In retrospect, I think the good thing I did was to provide a clean, easy-to-understand model of congestion control.

I like to view computer networking as having two regimes: at one end, there are abundant resources for user demand, and at the other end, there is contention for scarce resources. In the first regime, computer networking is basically "software engineering" - how to make a modular design for ease of operation, maintenance, evolution, testing, robustness... In the second regime, computer networking can be quite mathematical; it is about QoS, optimization, pricing/economics, stability... Usually people from these two regimes are preoccupied with their own problems, have their own languages, and do not understand each other. The AIMD paper is about a problem in the second regime written in a way that people in the first regime can appreciate.

I wish I had learned more about optimization, economics and control theory when I wrote my paper, so that I could point out that AIMD is but a distributed algorithm to implement a gradient method in optimization; that it is but a special case of the Tatonnement process in a packet transport market with charges; and that it is important to watch out for feedback delay to ensure stability. But then this is probably what research is all about - a process in which knowledge is built up step by step...
"
#3 posted on Jan 27 2008, 12:38 in collection CMU 15-744: Computer Networks -- Spring 08
I enjoyed the many pictures giving vector interpretations of the mathematics. However, throughout I was wondering about Sue Ann's second question. There seem to be two dated assumptions here: namely, that clients have unlimited data to send and that clients are interested in cooperating. The first isn't crippling, because if one client simply stops increasing its traffic the others will pick up the slack, should they need to. The second, however, is crippling, since the reaction to a single client not paying attention to control signals is that that client simply gets all the bandwidth.
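A quick toy simulation (my own construction, not from the paper) illustrates the point: compliant AIMD senders halve their loads on every overload signal while a greedy sender keeps increasing, so the compliant shares are driven toward zero.

    Xgoal = 100.0
    compliant = [30.0, 30.0]
    greedy = 30.0
    for t in range(200):
        overloaded = (sum(compliant) + greedy) > Xgoal
        if overloaded:
            compliant = [xi * 0.5 for xi in compliant]  # obeys the decrease signal
        else:
            compliant = [xi + 1.0 for xi in compliant]
        greedy += 1.0                                   # ignores the signal entirely
    print(round(greedy, 1), [round(xi, 3) for xi in compliant])
    # greedy's offered load grows without bound; the compliant loads collapse toward 0

(In this abstract model nothing caps the greedy sender's offered load; in a real network it would be limited by its access link, but the compliant users are still squeezed out.)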
This also makes me wonder about the 1-feedback-bit restriction; I'm of the mind that the kind of cooperative network this paper postulates may imply centralized control of the actual hardware, in which case such careful protocols are not required. Simply tell each user what his maximum bandwidth is, either at the factory or with periodic broadcasts (I seem to recall cable modems sort of work this way).
#4 posted on Jan 27 2008, 12:45 in collection CMU 15-744: Computer Networks -- Spring 08
This is a very interesting paper from a theoretical point of view.

Even if this is a distributed framework, it is in the best interest of everyone to cooperate (unless they are malicious or malfunctioning participants), since if the network is congested, packets from all sources will be lost. Even in terms of fairness, a user shouldn't try to be too greedy, since that may lead to congestion and therefore to its own packets being lost.
Furthermore, because the users' only feedback from the system is whether the resource is overloaded or not, no one knows what the other users are doing. Therefore, it is not safe for a user to assume that it can get a big portion of the bandwidth by not respecting the algorithm, since it does not know, for instance, the number of other users or whether other participants will try to do the same thing. Even if a user could get more than its fair share, it does not know how much that would cost (if that strategy causes more congestion, the resulting packet losses may have worse consequences than low bandwidth).

However, I think the paper assumes that all users get the same feedback. If we drop that assumption, the system could control fairness, give more bandwidth to higher-priority users (for example), and punish users who are greedy.


#5 posted on Jan 27 2008, 13:29 in collection CMU 15-744: Computer Networks -- Spring 08
I like this clear and insightful modeling of congestion control policies, even though it's simplified in some sense.

Regarding the second problem that Sue suggested, I agree with Jim and Miguel. It seems that a greedy user would be able to hog almost all the network bandwidth in the end. I'm wondering if there has been specific work on greedy participants who do not conform to the cooperative backoff policies. Even 20 years after publication, we still rely on the independent decision making of end users.
#6 posted on Jan 27 2008, 15:05 in collection CMU 15-744: Computer Networks -- Spring 08
Overall, a very solid paper that analytically argues for an additive (linear) increase / multiplicative decrease congestion policy, in order to 1) attain convergence to efficiency and fairness, and 2) be distributed in its implementation amongst multiple users.

Positive points in the paper:
* Well thought-out analytical models.
* Good clarity of thought and excellent visual diagrams to complement the equations.

Questions:
* In a very large system, how to guard against adversaries/malicious users who ignore the "decrease" feedback request from the resource? From an engineering standpoint, can the policy be made to "detect" malicious users and to do something about it?
* In this abstract model, "fairness" is defined as an equal distribution of load amongst the n users. How well can this accommodate heterogeneous allocation of load among users, in which the "optimal" point is no longer a 45 degree line (in Fig. 4) but perhaps another "equi-fairness" line?
* The authors briefly touch upon an issue which I imagine could be a much bigger issue today (when n gets very large). That is, how well would these mechanisms hold up when there is delayed feedback? There is no easy way to instantaneously broadcast feedback information from the "resource" to all N million users. How does feedback delay affect the convergence? (A small sketch follows these questions.) Also, the whole concept of a static "knee" in the curve doesn't seem realistic--#users/resources/topology are constantly changing (so the "efficiency" curves are moving targets in and of themselves). According to the paper, their recommended algorithm was incorporated into the OSI Transport Class 4 networks--is there summary work that looks at how well this algorithm has worked in practice with "real" loads?
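Out of curiosity about the delay question, here is a toy experiment (mine, not the paper's): each user reacts to an overload bit that is `delay` steps stale. Even in this crude setup, the overshoot past the goal, and the depth of the subsequent multiplicative collapse, both grow with the delay.

    def run(delay, steps=400, n=4, Xgoal=100.0, a=1.0, b=0.5):
        x = [Xgoal / (2 * n)] * n
        bits = [False] * (delay + 1)        # history of overload bits
        peak = 0.0
        for t in range(steps):
            bits.append(sum(x) > Xgoal)
            overloaded = bits[-1 - delay]   # users see a stale signal
            x = [xi * b for xi in x] if overloaded else [xi + a for xi in x]
            peak = max(peak, sum(x))
        return peak

    for d in (0, 2, 8):
        print(d, run(d))                    # peak total load grows with the feedback delay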
#7 posted on Jan 27 2008, 15:22 in collection CMU 15-744: Computer Networks -- Spring 08
The theory, math, and graphs in this paper were all awesome. The authors could have written a really esoteric text but they didn't.

On fairness: the first sentence in the introduction seems at odds with the fairness index introduced on page 5. If intermixing of old and new technology is a major feature of modern networks, then is it fair to give an 800 Mbps Cray channel the same share of bandwidth as a 1200 bps packet radio link? Probably not - the Cray channel owner would be furious if you told him that. And, once the radio link admin got his 1200 bits per sec, he wouldn't really care how much the other guy was getting. Yet the fairness index on page 5 is maximized only if all users get "exactly equal allocations".
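For reference, the index in question is f(x) = (sum x_i)^2 / (n * sum x_i^2). A common fix for the heterogeneity problem raised above is to apply the index to normalized allocations x_i / w_i, where w_i is what user i "deserves"; note the weighted variant here is my addition, not something the paper proposes.

    def fairness(x):
        """Chiu-Jain fairness index: equals 1.0 iff all allocations are exactly equal."""
        n = len(x)
        return sum(x) ** 2 / (n * sum(xi * xi for xi in x))

    def weighted_fairness(x, w):
        """Fairness of the ratios x_i / w_i: 1.0 iff x is proportional to w."""
        return fairness([xi / wi for xi, wi in zip(x, w)])

    print(fairness([800e6, 1200]))                          # ~0.5: "unfair" by the raw index
    print(weighted_fairness([800e6, 1200], [800e6, 1200]))  # 1.0: fair relative to capacity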
#8 posted on Jan 27 2008, 15:31 in collection CMU 15-744: Computer Networks -- Spring 08
Overall, I really liked the way the paper was presented, especially figures 5-7 showing why additive inc/dec wouldn't work as well as additive inc/mult dec etc.

One point that I wondered about was the assumption of binary feedback, which the authors themselves discuss at the end of their paper. Is there any later work that looks at this? It would seem that there would be cases where the "knee" isn't clearly obvious, or where there are multiple such inflection points in the network performance graph, which would necessitate more detailed feedback.
#9 posted on Jan 27 2008, 15:58 in collection CMU 15-744: Computer Networks -- Spring 08
I also really enjoyed the paper and agree with a lot of the other comments that it would be interesting to see further analysis of more complex feedback systems and nonlinear approaches. I don’t necessarily like the argument that we should avoid nonlinear schemes just because they are more complicated, which seems to be advocated in the paper. But obviously robustness and simplicity are good attributes to some extent.
#10 posted on Jan 27 2008, 16:13 in collection CMU 15-744: Computer Networks -- Spring 08
I found this paper to be very interesting and believe that the authors did an excellent job of conveying their ideas, especially with the use of graphs that very nicely visualized the various mathematical formulas. I also liked the fact that most of the questions I had while reading the paper appeared in the "Further Questions" section of the paper.

One of my main concerns while reading this paper was the authors' assumption of synchronous operation in the network. On the one hand, the authors do acknowledge that the effectiveness of their solutions is limited to networks with synchronous operation, and they very briefly discuss the impact of asynchronous operation. On the other hand, they do not say to what degree real networks differ from their idealized model. My personal feeling is that most real networks (with asynchronous operation, delayed feedback, etc.) are far more “chaotic”.

A minor drawback of this paper is the lack of data from actual measurements, which would make the suggested congestion avoidance mechanisms more convincing. Such results would also answer the question of whether real networks exhibit behavior similar to the idealized network model that this paper assumes.
#11 posted on Jan 27 2008, 16:31 in collection CMU 15-744: Computer Networks -- Spring 08
It seems like a major vulnerability of the proposed congestion avoidance scheme is that it relies on users to voluntarily obey the congestion signal. Perhaps a more secure way would be this: in the event of impending congestion, individual autonomous systems would throttle their inflow connections and notify each connecting AS of the imposed bandwidth limit. An AS has a business relationship with each connecting network, so interoperability doesn't seem to be a concern. Perhaps bandwidth throttling wouldn't be feasible on LANs, but where it does work, I think it would be preferable to relying on voluntary congestion avoidance.
#12 posted on Jan 27 2008, 16:54 in collection CMU 15-744: Computer Networks -- Spring 08
As Abe and others before him brought up, incentives are a big problem here: as a greedy node in the network, I should send at as high a rate as possible, and this won't cause further congestion for me, because if others are following the protocol, they will back off! That is, everyone following the protocol is not an equilibrium, and will be unstable in the face of malicious (and technically competent) users.

It's not clear how to enforce good behavior without designing some sort of punishment procedure into the protocol. But just as designing protocols that had good properties when operated simultaneously in congested networks was a research frontier in the 80s, designing protocols with the same properties, but that are also best responses to each other, is likely to become a more important problem as the internet grows larger and becomes less friendly.
#13 posted on Jan 27 2008, 16:56 in collection CMU 15-744: Computer Networks -- Spring 08
Sue Ann raised a very good point about the cooperative assumption the authors made in the paper. Without this assumption, how to punish greedy users is an interesting problem to study.

What I learned most from this paper is the clear presentation, especially the way it combines mathematical formulas with very neat graphs.
#14 posted on Sep 22 2009, 19:53
This paper proposes a simple and clear model to analyze the behavior of congestion avoidance algorithms under different parameters. Using the example of two hosts, it argues convincingly that the decrease must be multiplicative to ensure that the system converges to a fair, stable state. I like its analysis using vectors and figures. But I have several questions:
(1) It only analyzes the case of two hosts; what about the case of multiple hosts?

(2) All the actions of hosts are assumed to be synchronous. What if they all run asynchronously? (A toy experiment on both questions follows below.)
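Here is a small sketch of my own (illustrative parameters, not anything from the paper): n hosts run AIMD, but on each tick every host updates only with probability p, crudely modeling asynchronous operation. Even so, the multiplicative decrease still pulls the shares together, since a host above the fair share loses more per decrease than a host below it, so the shares stay comparable, just with more jitter than in the synchronous case.

    import random

    def run(n=8, p=0.6, Xgoal=100.0, a=1.0, b=0.5, steps=3000, seed=1):
        rng = random.Random(seed)
        x = [rng.uniform(0.0, Xgoal / n) for _ in range(n)]
        for t in range(steps):
            overloaded = sum(x) > Xgoal
            for i in range(n):
                if rng.random() < p:        # only some hosts act on this tick
                    x[i] = x[i] * b if overloaded else x[i] + a
        return x

    print([round(xi, 1) for xi in run()])   # comparable shares, oscillating near Xgoal/n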