
Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks
by Ion Stoica, Hui Zhang, Scott Shenker
Public comments
#1 posted on Jan 28 2008, 02:42 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper, the authors propose a new architecture and set of algorithms to provide 'reasonably' fair queueing that is simpler to implement and run. Their architecture does not require per-flow state to be maintained at all routers; only 'edge' routers do that job. The other (core) routers keep no per-flow state and instead approximate fair allocation using rate labels that the edge routers compute and insert into packet headers. Simulations are then carried out to show that the architecture does seem to achieve this two-fold goal.
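
To make the division of labor concrete, here is a rough per-packet sketch of the core-router behavior as I understand it (my own Python rendering, not the authors' pseudocode; 'label' is the flow arrival rate the edge router stamped into the packet header, and 'alpha' is the core router's running estimate of the fair share rate):

    import random

    def core_forward(label, alpha):
        """Return the outgoing label if the packet is forwarded, or None if dropped."""
        # Flows whose labeled rate exceeds the fair share are thinned probabilistically,
        # so their expected forwarding rate is min(label, alpha), with no per-flow state.
        # Assumes label > 0, since edge routers only stamp positive rates.
        drop_prob = max(0.0, 1.0 - alpha / label)
        if random.random() < drop_prob:
            return None                  # dropped
        return min(label, alpha)         # relabel so downstream routers see the post-drop rate

The appeal is that a flow's expected rate through the router is min(label, alpha), i.e. roughly its max-min fair share, while the only state the core router maintains is the aggregate fair-share estimate.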

While the paper is easy to read, it leaves a lot of questions unanswered. Also, the technique of using estimates to save on complexity is not new. And I wish the graphs were more readable.

Issues:

1. The use of exponential averaging in section 2.2.1 is not well justified; other smoothing models could have been considered, and the choice of this particular exponential weighting is not motivated either (see the formula I've written out after this list).

2. The question of latency posed towards the end of the paper is actually very important and should have been analyzed in the paper itself. Without that analysis, we can't really say how much we gain by lowering the complexity.

3. The numbers towards the end of section 2.2.2 (1%, 25%, etc.) seem hand-wavy. They need to be supported by some convincing theory.
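
(For reference, my reading of the estimator in question: on each packet arrival from flow i, the edge router updates its rate estimate as r_i^{new} = (1 - e^{-T_i/K}) \cdot (l_i / T_i) + e^{-T_i/K} \cdot r_i^{old}, where T_i is the time since flow i's previous packet, l_i is the packet length, and K is the averaging constant. The complaint is that this particular exponential weighting is asserted rather than compared against alternative smoothers.)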

It would have been nice to see some confidence intervals in the numerical results. Overall, the results seem just as the paper intended them to be - reasonable.
#2 posted on Jan 29 2008, 12:02 in collection CMU 15-744: Computer Networks -- Spring 08
The paper makes two assumptions: that achieving fair allocation is important, and that existing methods for achieving it are infeasible. Section 4 addresses the first assumption pretty well, although the authors note “the matters addressed in this section are rather controversial and this overview unavoidably reflects our prejudices.” I’m curious what is so controversial and what some arguments are in favor of allowing unfair allocations/unfriendly flows.
#3 posted on Jan 29 2008, 13:48 in collection CMU 15-744: Computer Networks -- Spring 08
It is an interesting point to differentiate greedy users from unfriendly users. Unfriendly users only attempt to maximize their own throughput and do not care about the packet drop rate. The discussion in Sec. 4.3 on punishment is quite interesting.
#4 posted on Jan 29 2008, 14:24 in collection CMU 15-744: Computer Networks -- Spring 08
While I am skeptical of the packet re-writing component of this approach and of how much trust it distributes (any edge router can make its traffic seem "just a bit more special" by doing slightly wrong flow estimation), I appreciate the relative simplicity of the approach. Indeed, it might make sense to deploy such a system if you were, for example, a big telecom company managing a network of many different links and routers and controlled all entry and exit nodes.
#5 posted on Jan 29 2008, 14:38 in collection CMU 15-744: Computer Networks -- Spring 08
I thought their estimates of flow and fair share rates were pretty reasonable. But for both congestion control papers from last lecture, this paper, and probably all of TCP in general, the end behavior seems to be the same oscillating throughput picture. Is that the best we can do when faced with the uncertainty in flows? Even assuming that exponential back-off and weighting schemes are the best choice, what should the constant/base be? For example, in section 2.7 they give guidelines for what the constant K in the exponential weighting should be, but there must be a better way to optimize this value...

As for Sam's question - it seems like this sort of congestion control by fair queueing may not be as efficient in terms of overall throughput as something simpler like drop-tail FIFO, where packets are dropped as they arrive at a full queue. Though I can't draw a conclusion from their simulation results...
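
(To expand on the fair-share estimation, if I remember section 2.2.2 correctly: the router keeps exponentially averaged estimates of the aggregate arrival rate A and the accepted rate F, and when the link of capacity C is congested it applies the multiplicative correction \alpha_{new} = \alpha_{old} \cdot C / F, falling back to the largest recently observed flow rate when uncongested. So the oscillation question is really about how quickly this correction, together with the averaging constant K, settles.)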
#6 posted on Jan 29 2008, 15:22 in collection CMU 15-744: Computer Networks -- Spring 08
Overall I think this is a good paper. It starts by giving a nice, gradual overview of the suggested architecture and algorithm. I think the authors did a good job of conveying their ideas by first presenting the idealized fluid model and then transitioning to the packetized, buffered, internet-like environment. The results are presented carefully, and I liked that the authors were cautious about their claims (if only the presented graphs were easier to read...). This paper also helped me to better understand and compare the presented queueing algorithms: FIFO, RED, FRED, DRR.

One minor concern I had while reading this paper had to do with the formation of the CSFQ islands. It seems that this scheme performs better for reasonably large islands, because then the ratio of core to edge routers is larger, allowing for a greater number of stateless, fast core routers. As the authors state, this would require different ASes (e.g., ISPs) to cooperate and merge their router islands. However, as we saw in the Gao-Wang paper, ASes are usually very competitive and avoid cooperation.
#7 posted on Jan 29 2008, 15:53 in collection CMU 15-744: Computer Networks -- Spring 08
Overall, the paper feels solid and gives reasonable arguments and evidence for stateless core routers. The authors show through simulations of various topologies that CSFQ comes close to DRR (which they use as a realistic upper bound).

However, the paper's entire premise rests on the implementation complexity of alternative solutions. While they do compare many different approaches (e.g., RED, FRED), they give no sense of the actual cost savings. There is a single section (2.5) on this, but it's hardly quantitative. From an engineering perspective, what kind of savings are we looking at (area? performance?): an order of magnitude, or incremental '10%' improvements?

Another thought: is it difficult, from a maintenance point of view, to maintain a heterogeneous set of routers and to re-define interiors and edges when making changes to the network?
#8 posted on Jan 29 2008, 15:56 in collection CMU 15-744: Computer Networks -- Spring 08
Doing classification at the border routers seems a very good idea, in my opinion. Getting perfect fairness is not really essential; as long as the distribution of bandwidth is roughly reasonable and ill-behaved hosts aren't given excessive bandwidth, the system seems okay.

Although the paper mentions that some streaming multimedia applications don't implement congestion control, it seems that they could respond to congestion by sending data at reduced quality and thereby suffer fewer dropped packets.
#9 posted on Jan 29 2008, 16:00 in collection CMU 15-744: Computer Networks -- Spring 08
It seems that unfriendly end hosts harm their own throughput, so why are there still unfriendly hosts? Could an end host balance friendliness and throughput by itself (rather than leaving it to the routers)?
#10 posted on Jan 29 2008, 16:25 in collection CMU 15-744: Computer Networks -- Spring 08
This paper, along with the fair queueing algorithm paper, describes strategies that are complementary to end-system congestion control schemes. The authors argue that CSFQ wins in terms of implementation cost and scalability while sacrificing perfect fairness of bandwidth allocation. The problem of large latency between the points of estimation and the points of congestion might need more explanation, although its effects weren't visible in their simulation results.
#11 posted on Jan 29 2008, 16:32 in collection CMU 15-744: Computer Networks -- Spring 08
I liked how the paper took a step towards making a useful idea implementable. However, I wish they had spent some more time on details -- the choice of islands and "core" routers -- and on how these choices affect the performance of CSFQ. For example, one appealing approach seems to be the "all core" design, and it isn't clear how well CSFQ would do in such a setting.
#12 posted on Jan 29 2008, 16:42 in collection CMU 15-744: Computer Networks -- Spring 08
Based on two assumptions, 1) that fair allocation mechanisms are important to congestion control and 2) that the obstacle to adopting fair allocation is its complexity, the paper proposes an architecture that achieves approximately fair bandwidth allocations without requiring all routers to keep per-flow state. While the first assumption is addressed in the last section of the paper, the second is left untouched. So, with current technology, is complexity a real hindrance to implementing fair queueing?

Also, from the previous lecture, if I didn't misunderstand, I heard that queueing in routers today is still FIFO. Has fair queueing ever had a real implementation? And if not, what are the reasons for that?
#13 posted on Jan 29 2008, 16:44 in collection CMU 15-744: Computer Networks -- Spring 08
I think Hetu raises a very important point. I'm also interested in the choice of the fraction of "edge" versus "core" routers. What is the trade-off, and what is its impact on implementation complexity?
#14 posted on Jan 29 2008, 16:46 in collection CMU 15-744: Computer Networks -- Spring 08
I thought the paper was a really good attempt at making the theoretical idea of having routers do fair queueing more practical, by proposing an architecture where we can get approximate fairness. The discussion section raised some interesting questions (fire-hose applications), though it was a tad too long.
#15 posted on Jan 30 2008, 13:28 in collection CMU 15-744: Computer Networks -- Spring 08
This paper is interesting. Unlike RED, it uses per-flow rate information from each source together with the shared bandwidth, which is a reasonable idea and is why it can outperform RED and FRED. Again, my question is: if \alpha(t) is replaced by some function other than the fair share rate, is it possible to implement notions of fairness other than max-min fairness? Basically, elastic and inelastic traffic have different QoS requirements.
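
(For context, my understanding is that on a congested link of capacity C the fair share \alpha(t) is the unique solution of \sum_i \min(r_i(t), \alpha(t)) = C, which gives exactly the max-min fair allocation. If I recall correctly, the paper's weighted CSFQ extension effectively changes this cap by dividing each flow's label by a weight w_i, yielding weighted max-min fairness; other fairness criteria would presumably require a different functional form at this point.)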
#16 posted on Sep 28 2008, 20:45 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
End-to-end congestion control is improved when routers implement mechanisms that allocate bandwidth fairly. This protects well-behaved flows and allows various end-to-end congestion control policies to co-exist within a network. The most effective approaches to fair bandwidth allocation use per-flow queueing mechanisms, where routers maintain state and perform operations on a per-flow basis. These approaches are costly. This paper examines an approach that allocates bandwidth in an approximately fair manner while still allowing FIFO queueing in routers and maintaining no per-flow state.

The paper was pretty enjoyable and well written. Due to my limited networks background, the empirical results and graphs in section 3 weren’t so straightforward to follow. However, I found section 4 really interesting. I think this section was well placed within the paper. It gave sort of a recap of the overall problem they are trying to tackle.

I wish there was more information about the effects of the ratio of edge to core routers within islands.