
Congestion Control for High Bandwidth-Delay Product Networks
by Dina Katabi, Mark Handley, Charlie Rohrs
Public comments
#1 posted on Feb 02 2008, 15:24 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper, the authors propose a new congestion control protocol that performs better than TCP as the bandwidth-delay product increases.

Goals:
- Remain stable and efficient regardless of link capacity, round-trip delay, and the number of sources.
- Maintain no per-flow state in routers and require only a small amount of computation per packet.

The main ideas the authors introduce to achieve these goals are more precise explicit feedback and the decoupling of efficiency control from fairness control.

First, the authors argue that packet loss is a poor signal of congestion: congestion is not the only cause of packet loss, and loss does not convey precise enough information for the sender to react appropriately. They therefore follow the Explicit Congestion Notification (ECN) proposal in providing explicit signaling, but routers report the degree of congestion at the bottleneck rather than ECN's binary congestion indication. Moreover, to avoid maintaining per-flow state in routers, the control state is carried in the packets themselves.
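To make "control state in the packets" concrete, here is a rough sketch (mine, in Python; field names are illustrative, the paper defines the exact header layout) of the per-packet congestion header and how routers along the path narrow the feedback down to the bottleneck's value:

```python
from dataclasses import dataclass

@dataclass
class XCPHeader:
    # Per-packet congestion header (illustrative field names;
    # see the paper for the exact layout and encoding).
    h_cwnd: float      # sender's current congestion window
    h_rtt: float       # sender's RTT estimate
    h_feedback: float  # desired window change; routers may reduce it

def router_update(hdr: XCPHeader, my_feedback: float) -> None:
    # A router overwrites H_feedback only when its own computed
    # feedback is smaller, so the most restrictive (bottleneck)
    # value survives to the receiver, which echoes it back to
    # the sender in ACKs.
    hdr.h_feedback = min(hdr.h_feedback, my_feedback)
```

This is why routers need no per-flow tables: each packet carries the cwnd and RTT the router needs to compute its share of feedback.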

Second, they introduce the idea of decoupling efficiency control from fairness control, which allows more efficient use of network resources and more flexible bandwidth allocation schemes. The efficiency controller uses a Multiplicative-Increase Multiplicative-Decrease (MIMD) law, increasing the traffic rate in proportion to the spare bandwidth in the system so that it converges quickly on high-capacity links. The fairness controller uses an Additive-Increase Multiplicative-Decrease (AIMD) law similar to TCP's, which converges to fairness.
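To make the split concrete, here is a minimal sketch (mine, not the authors' code) of the efficiency controller's aggregate feedback, using the paper's constants alpha = 0.4 and beta = 0.226:

```python
def aggregate_feedback(spare_bw: float, queue: float, avg_rtt: float,
                       alpha: float = 0.4, beta: float = 0.226) -> float:
    # Efficiency controller: phi = alpha * d * S - beta * Q, where
    # S is the spare bandwidth, Q the persistent queue, and d the
    # average RTT over the control interval. Positive phi means
    # flows should speed up; negative means slow down.
    return alpha * avg_rtt * spare_bw - beta * queue
```

The fairness controller then divides phi among packets AIMD-style: equal per-flow shares when phi is positive, and shares proportional to each flow's current rate when phi is negative.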

Overall, this paper presents a new congestion control protocol that can handle high bandwidth-delay product networks, and it also proposes a deployment path for the new protocol.

Issues:
- Their approach is quite router-friendly, because no per-flow state is maintained in routers and only a small amount of computation is required per packet. However, most modern routers try to avoid modifying packet contents, since the extra memory accesses add delay. In this protocol, the feedback field in each packet's header has to be modified after a small computation. This might lengthen the delay at each router along the path and lead to much longer RTTs overall.

- Unlike many papers, this one provides an in-depth proof of how its parameters are chosen. However, it only shows that the alpha parameter should lie between 0 and 0.56; it remains questionable why alpha is set to 0.4 and how different values of alpha affect the overall performance of the protocol.

- The paper mentions that "TCP might waste thousands of RTTs ramping up to full utilization following a burst of congestion." Is it possible to fine-tune TCP so that it becomes more aggressive on a high-bandwidth network? For example, it could try to increase the window by more than one packet at a time.
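As a back-of-the-envelope illustration (my numbers, not the paper's), one can count the RTTs that loss-free additive increase needs to reach a target window:

```python
def rtts_to_reach(target_cwnd: int, start_cwnd: int = 1,
                  increase_per_rtt: int = 1) -> int:
    # RTTs for TCP-style congestion avoidance (no losses) to grow
    # the window from start_cwnd to target_cwnd packets.
    rtts, cwnd = 0, start_cwnd
    while cwnd < target_cwnd:
        cwnd += increase_per_rtt
        rtts += 1
    return rtts

# e.g. a 10 Gb/s path at 100 ms RTT with 1500-byte packets needs a
# window of roughly 83,000 packets; at +1 packet per RTT that is
# ~83,000 RTTs (over two hours), versus a tenth of that at +10.
```

Of course a fixed larger increment just shifts the problem; the increment would somehow have to scale with the bandwidth-delay product.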
#2 posted on Feb 03 2008, 09:41 in collection CMU 15-744: Computer Networks -- Spring 08
The experimental results suggest that XCP is more efficient than other router-based congestion control schemes. (And to me it also looks promising, even though it is not backward compatible.)

Regarding the detection of misbehaving entities, the papers we've got so far focus mostly on misbehaving end users. (XCP relies on the end user's response to its feedback.) But, in router-based congestion control mechanisms, we can also think about misbehaving routers giving positive feedback only to a specific end user. (This is not just a hypothetical matter. There have already been issues about compromised and thus misbehaving routers.) Would it be easy to detect those routers?
#3 posted on Feb 03 2008, 12:00 in collection CMU 15-744: Computer Networks -- Spring 08
Decoupling the Efficiency Controller from the Fairness Controller is a very appealing idea. It allows researchers to focus on one of the controllers, which hopefully leads to better, more efficient and flexible results. It also makes the protocols easier and more intuitive to understand.

I agree with the authors when they say that "packet loss is a poor signal of congestion". Besides the fact that packet loss is a binary signal (the authors claim it is not enough), in today's networks (wireless and mobile networks, for instance) one cannot state anymore that a loss is probably due to congestion. I think that this, together with the fact that TCP tends to become more unstable as the bandwidth-delay product increases, is a good reason to choose XCP over TCP.
#4 posted on Feb 03 2008, 12:51 in collection CMU 15-744: Computer Networks -- Spring 08
From simulation results shown in the paper, XCP seems to be a very promising protocol for congestion control. Has the protocol really been implemented?

Also, I wonder, looking at the simulation results with multiple congested queues (Figure 9): why wouldn't XCP, which has the lowest average queue size and never drops any packet, achieve the highest utilization? The authors attribute this to "bandwidth shuffling", but it's still unclear to me.
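For what it's worth, here is my reading of the shuffling rule from the paper (the constant gamma = 0.1 is theirs) as a one-liner:

```python
def shuffled_traffic(input_traffic: float, phi: float,
                     gamma: float = 0.1) -> float:
    # h = max(0, gamma * y - |phi|): the amount of bandwidth
    # simultaneously withdrawn (as negative feedback) and
    # reallocated (as positive feedback) each control interval,
    # so fairness keeps converging even when the aggregate
    # feedback phi is near zero.
    return max(0.0, gamma * input_traffic - abs(phi))
```

My guess is that this continual withdraw-and-reallocate churn is why a link can sit slightly below full utilization even with empty queues, but I'd welcome a better explanation.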
#5 posted on Feb 03 2008, 14:10 in collection CMU 15-744: Computer Networks -- Spring 08
XCP would certainly be a boon in a cooperative network environment where one could easily perform an entire-network upgrade at once (either by flipping a switch on XCP-ready routers or by building a parallel network and suddenly swapping); but the question of deployment is still a tricky one. As an earlier commenter mentions, modifying packet headers is a potentially expensive operation -- though if no header traversal is required to find the field, maybe not such a bad one.
I imagine that, in practice, most really-high-bandwidth links (i.e. undersea cables) are actually parceled out to various telecoms in lower-bandwidth chunks; in this case it is easy to schedule packets because you have a few queues, each with a given traffic cap -- and you push the rate control off onto the telecoms' routers.
#6 posted on Feb 03 2008, 15:21 in collection CMU 15-744: Computer Networks -- Spring 08
Again, it is good to see that 'gradual deployment' has been thought out -- this seems very important. What would be interesting is an analysis of this intermediate phase during the gradual deployment -- how will the network behave as the fraction of users using XCP gradually increases? If there is a choice to be made between XCP and TCP, which would deliver better performance to the user at each point in this curve? Is there a `point of no return' after which everyone would naturally switch to XCP, or (worse), might there be a point at which XCP adoption would stop, because the incentives to use TCP would increase?
#7 posted on Feb 03 2008, 16:11 in collection CMU 15-744: Computer Networks -- Spring 08
The Internet's increasing bandwidth definitely does pose a problem for TCP, given that the Round-Trip Time latency is not going to get much shorter. However, I'm not convinced that it would lead to 'instability' in the network -- to me, it seems that the increasingly large numbers of statistically independent flows would tend to maintain stability.

As others have pointed out, the lack of backward compatibility with TCP is also a problem. Getting all the ISPs and a great majority of end hosts to support something other than TCP as the preferred transport protocol isn't easy. Cf. the adoption of IPv6.
#8 posted on Feb 03 2008, 16:18 in collection CMU 15-744: Computer Networks -- Spring 08
The authors propose to replace TCP with XCP, a new protocol that uses network resources more efficiently. XCP enables more communication between a sender and routers through a new header structure. As others have mentioned above, one of the strengths of their design is that it can be gradually deployed. One other point that struck me is that the protocol critically relies on honest data from its senders to guarantee proper behavior. The authors mention briefly how this can be enforced in Section 7, but I wonder if a more in-depth study of the question would be needed to make sure greedy senders can't steal more than their share of bandwidth under this protocol.
#9 posted on Feb 03 2008, 16:26 in collection CMU 15-744: Computer Networks -- Spring 08
The authors argue that XCP provides more precise, explicit congestion feedback and separates utilization control from fairness control. It does not keep per-flow state and is efficient and scalable.

I like the experiment showing XCP is TCP-friendly. However, a main deployment concern is whether XCP routers could provide reasonable performance in a network of mixed XCP and TCP routers rather than a pure XCP one.
#10 posted on Feb 03 2008, 16:40 in collection CMU 15-744: Computer Networks -- Spring 08
I really enjoyed this paper. Not only were the proposed ideas very interesting, but they were also presented very clearly and concisely. I believe that, in section 2, the authors did a very good job of capturing the inefficiencies of TCP and the previously suggested congestion control schemes. The decoupling of efficiency and fairness makes a lot of sense, since these two goals are somewhat orthogonal and require different policies (Efficiency: MIMD, Fairness: AIMD). A very important aspect of XCP is the minimization of dropped packets. Dropped packets cause inefficiencies, not only for the network, but for the endpoint hosts as well (e.g. in the form of retransmissions).

The only part that was not quite convincing was the deployment strategy. The formation of “router islands” (as in CSFQ) seems unrealistic, since it would require large-scale cooperation among ASes. On another note, it seems that XCP assumes that all packets of a flow follow a single path (which is probably true for the majority of networks). I wonder how multipath routing affects the performance of XCP. Overall, a very solid paper.
#11 posted on Feb 03 2008, 16:42 in collection CMU 15-744: Computer Networks -- Spring 08
It's probably too new to have been deployed (2002)? It seems like TCP-friendly XCP should be easy to deploy. But from the stories we heard from Srini, it doesn't seem like an ISP would have a clear incentive to try this new thing (i.e., can you sell this to a company?).
#12 posted on Feb 03 2008, 16:53 in collection CMU 15-744: Computer Networks -- Spring 08
A very interesting paper that decouples congestion control from fairness in order to address the shortcomings of TCP. Essentially, you don't want additive increase when there is a lot of available bandwidth, since it will take forever to reach full utilization; but you still want AIMD for convergence to fairness. These two mechanisms are separated: the efficiency controller figures out the total change in bandwidth, and the fairness controller figures out how to divide the spare bandwidth evenly among the flows. While TCP gets bashed, are there things one could do within TCP to improve the "slow" start -- perhaps the source could notice that it is taking a long time to ramp up to peak bandwidth and adjust its additive increase on subsequent restarts?
#13 posted on Feb 03 2008, 16:58 in collection CMU 15-744: Computer Networks -- Spring 08
The paper, and the reviews so far, seem to suggest that XCP is superior to TCP both theoretically and experimentally. Does TCP have any advantages over XCP other than the turnover cost of switching (which people have pointed out could be done gradually)?

I also liked the fact that tools from control theory were used in the analysis to actually come up with a theoretical basis for the different feedback loops.
#14 posted on Feb 03 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper, a new TCP-like protocol is proposed, with many redesigned window-control algorithms. From the beginning, the design of XCP seems to put a lot of focus on stability. I don't quite understand the proof there.
#15 posted on Feb 03 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
I think the paper is really cool, in that it gives a good theoretical treatment to congestion control using explicit control (though the idea is not completely new -- ECN). The paper also tries to make it practical by including short paragraphs based on CSFQ and how it is TCP-friendly (which was nice). Still, I don't believe fully in this type of an explicit feedback scheme getting deployed on a large-scale for the reasons pointed out in the previous lecture.
#16 posted on Feb 03 2008, 17:56 in collection CMU 15-744: Computer Networks -- Spring 08
I think the paper did a good job of explaining why TCP doesn't do well in some settings and how and why XCP does better. However, while they did include a section on gradual deployment, to show that they had considered the issues, their suggested approaches aren't really feasible (at least a couple of people before me have pointed out the issues). Improving TCP while at the same time providing backward compatibility seems to be a huge problem. I wonder if there has been any work after this which tackles this problem head-on.