
Analysis and Simulation of a Fair Queueing Algorithm
by Alan Demers, Srinivasan Keshav, Scott Shenker
Public comments
#1 posted on Jan 28 2008, 14:44 in collection CMU 15-744: Computer Networks -- Spring 08
The paper studies the design requirements of gateway queueing algorithms for congestion avoidance, and presents a modified, job-size-aware algorithm as an improvement over round-robin based schemes. The algorithm is designed to allocate bandwidth and buffer space fairly over each source-destination pair. Simulation results show that the proposed fair queueing algorithm, combined with proper flow control algorithms, provides more balanced throughput allocation and possibly smaller latency than the FCFS algorithm in a number of scenarios involving one or multiple gateways.

An interesting feature of the proposed algorithm is that greedy users are punished in both bandwidth and round-trip time, irrespective of their flow control algorithm. They suffer significant packet loss due to limited buffer space, and this leads to lower priority for their subsequently arriving packets. On the other hand, the paper makes the important point that gateway queueing algorithms must be combined with intelligent flow control algorithms at the sources. For example, the performance of the FCFS queueing algorithm differs markedly under different flow control algorithms in Scenario 2 (cf. Table III).

The paper provides much more detail about the algorithm's theoretical properties and its simulated performance than the previous fair queueing paper (Nagle, 1987), which probably accounts for its higher citation count. However, the mathematics in the first half of the paper could be made more readable by eliminating some unnecessary notation and adding more intuitive descriptions alongside the formulas. In Table IV, the throughput for FTP 2 under FQ is orders of magnitude smaller than for the other sources. Is this an error?

Points to ponder:

Is the definition of "fairness" really fair? In Scenario 5, the fourth receiver-sender pair, which is farthest apart, gains in throughput/latency at the expense of the other three pairs. But should all pairs receive equal resources at all? Also, consider a greedy user who wants to transmit data from A to B: if he can get control of a number of machines C_1, ..., C_N, then he could still realize his goal by using these N machines to relay the packets.

The packet sizes of the FTP/Telnet sources are fixed in the simulation environment. What if they were sampled randomly from a probability distribution? How would the performance of non-preemptive and preemptive queueing algorithms differ in this case?
#2 posted on Jan 29 2008, 14:09 in collection CMU 15-744: Computer Networks -- Spring 08
I'll admit that in reading this paper I kind of got lost in the middle with its charts upon charts. However, the paper does mention [but does not answer] two concerns that I thought were quite reasonable:

(a) What is a 'User'? A source-destination pair seems like a simple thing, but what of a mainframe or login server? Or a proxy? And what of a router "out in the midst of the internet" that might have to handle packets from all IP addresses to all IP addresses?

(b) Tying directly into my previous "in the midst of the internet" question, there is concern about router speeds and the resources required to make this work -- if nothing else, it's certainly more complicated than an FCFS queue. (I mean, obviously it can't be that hard, since my OpenWRT-running Linksys is doing fair queuing -- though I'm not entirely sure which algorithm.)
#3 posted on Jan 29 2008, 15:03 in collection CMU 15-744: Computer Networks -- Spring 08
I'll be honest that I didn't follow this paper 100%. Some portions of the text could have been explained first without any equations, using examples, and then followed up with a more rigorous analysis.

I noticed that the "approximation" of bit-by-bit round-robin scheduling relies on tracking the past F values (which are kind of like timestamps) for each independent conversation. This requires maintaining local state at the gateway. The algorithm also assumes a fixed number of conversations, each statically assigned to some entity. However, I am guessing that in a real implementation you would virtualize many independent channels onto a smaller number of physical channels -- and the assignments change all the time. How does this affect the internal state of the gateway, as well as fairness?
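
To make the state question concrete, here is a minimal sketch (my own, not from the paper) of how a gateway might keep per-conversation state lazily: an entry is created on the first packet of a (source, destination) pair and dropped once the pair has been idle for a while. The dictionary layout, the idle timeout, and all names here are assumptions for illustration only.

```python
import time

IDLE_TIMEOUT = 30.0  # seconds; an arbitrary choice for this sketch

class FlowState:
    def __init__(self):
        self.last_finish = 0.0             # F value of this conversation's latest packet
        self.last_seen = time.monotonic()  # when we last saw a packet from this pair

flows = {}  # (src, dst) -> FlowState

def state_for(src, dst):
    """Look up, creating on first packet, the state for conversation (src, dst)."""
    key = (src, dst)
    st = flows.get(key)
    if st is None:
        st = flows[key] = FlowState()
    st.last_seen = time.monotonic()
    return st

def expire_idle():
    """Drop state for conversations that have been silent longer than the timeout."""
    now = time.monotonic()
    for key in [k for k, st in flows.items() if now - st.last_seen > IDLE_TIMEOUT]:
        del flows[key]
```

Whether expiring an idle conversation's F value is harmless, or lets a flow "reset" its place in line, is exactly the kind of fairness question the dynamic setting raises.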
#4 posted on Jan 29 2008, 15:13 in collection CMU 15-744: Computer Networks -- Spring 08
I think that the preemptive aspect of the otherwise round-robin scheme is a good one. It automatically gives lower latency to low-bandwidth interactive sessions than to high-bandwidth file transfer sessions.

The notion of 'user' in the paper (based on source/destination end hosts) may be appropriate inside local networks, but I don't think it is the best one for routing between autonomous systems. If a tier-1 provider network has $N$ customer networks, then a better notion of 'user' would be the customer networks, and a 'fair' allocation would seem to be one in which each customer network receives bandwidth in proportion to what it pays for, although I'm not sure how peering should be fairly handled.
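
As a rough illustration (my own sketch, not something from the paper), payment-proportional sharing could fold into the same finish-time machinery by stretching each packet's tag by the inverse of the customer's weight; everything else about the queueing would stay the same.

```python
# Hypothetical weighted variant of the finish-tag rule, for illustration only:
# a customer network with weight w (say, proportional to what it pays) has its
# packets' virtual lengths shrunk by 1/w, so it drains roughly w times faster.
def finish_tag(prev_finish, round_number, packet_size, weight):
    start = max(prev_finish, round_number)  # can't start before the previous packet ends
    return start + packet_size / weight     # weight = 1 recovers the unweighted rule
```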
#5 posted on Jan 29 2008, 15:17 in collection CMU 15-744: Computer Networks -- Spring 08
Also, I agree with Eric regarding the equations. They needlessly obscure a lot of what could be explained in plain English, and the parts that really benefit from the symbolic notation aren't necessary to understanding the main thrust of the article and could probably be put in an appendix.
#6 posted on Jan 29 2008, 15:52 in collection CMU 15-744: Computer Networks -- Spring 08
I thought that the paper raised interesting questions, but could have done a better job of both stating and defending its case. As pointed out by others, the equations were unnecessarily complex. They could have been replaced with simple asymptotics, with the exact formulae moved to an appendix. Many of the results shown as tables would probably have been better presented as bar plots/histograms.

Another point I would like to raise is their evaluation. While the choice of "benchmark scenarios" does help in understanding the behavior of the algorithm, the utility of deploying it is probably best measured by testing on more realistic networks.
#7 posted on Jan 29 2008, 16:02 in collection CMU 15-744: Computer Networks -- Spring 08
While I did find this paper a little confusing in organization/clarity, I liked the discussion of compatibility between flow control and queuing algorithms, and the demonstration that the queuing algorithm can influence end hosts' behavior. Also, I appreciate the other paper more after reading this one.
#8 posted on Jan 29 2008, 16:34 in collection CMU 15-744: Computer Networks -- Spring 08
I found it particularly difficult to find my way through the middle sections of the paper, and even now I am not sure I understood every bit of detail. The math could have been presented better, as others have pointed out. Nevertheless, the paper offers some important extensions to Nagle's proposal, such as promptness allocation and accounting for packet lengths (though the latter looks pretty obvious).

However, as the second paper points out, I don't think such stateful routers, scheduling packets on a per-flow basis, would be efficient. From that angle, it is probably not that practical.
#9 posted on Jan 29 2008, 16:37 in collection CMU 15-744: Computer Networks -- Spring 08
The three interesting points I find in this paper are the definition of "fairness", the punishment of greedy users, and the need to combine an appropriate flow control algorithm with the queueing algorithm at the gateway.
#10 posted on Jan 29 2008, 16:44 in collection CMU 15-744: Computer Networks -- Spring 08
This paper begins by defining a three-fold metric for evaluating queueing algorithms (bandwidth, promptness, buffer space) and then goes on to argue that Fair Queueing is better than previous queueing algorithms.

I will start by saying that I found the discussion about the definition of a user to be quite interesting. For the cases examined, the authors seem to present solid results showing FQ to be superior to other queueing algorithms. However, I believe the studied cases are very limited and the scenarios do not capture traffic variations over time. Firstly, Telnet and FTP represent only a small fraction of network traffic. Secondly, even for a single flow, traffic patterns do not remain the same over time.

As the authors also mention, a weak point of this paper is that it does not examine the implementation cost/feasibility of the suggested FQ algorithm. Would FQ be fast enough to be implemented in the internet’s core routers? Probably not: this is the exact issue that the other assigned paper (Stoica, Shenker, Zhang) tries to address, and according to that paper, keeping any kind of per-flow information seems to lower router performance.
#11 posted on Jan 29 2008, 16:56 in collection CMU 15-744: Computer Networks -- Spring 08
I agree with several people here that the paper would be much easier to understand if the basic ideas were explained before their mathematical notation. Also, I don't quite understand the meaning of the "more than 1" throughput of FCFS shown in Figure 1.

I think one important conclusion from the paper is that fair queuing by itself does not provide adequate congestion control; it must be combined with intelligent flow control.


#12 posted on Jan 30 2008, 12:45 in collection CMU 15-744: Computer Networks -- Spring 08
I am interested in the part of this paper concerning fairness. I agree with the authors that each definition of 'user' for bandwidth allocation has its limitations. This paper is from the late 1980s. In the mid '90s, F. Kelly proposed using utility functions to represent how fair an allocation is. That work gave a spectrum of different utility functions, each associated with a particular notion of fairness - including the max-min fairness used in this paper. I am wondering whether proportional fairness could be implemented by such a queueing algorithm.
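
For reference, the spectrum I have in mind is usually written as the α-fair utility family (the exact parameterization below is Mo and Walrand's later generalization of Kelly's framework, quoted here only as a reminder):

```latex
\[
  U_\alpha(x) =
  \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[4pt]
    \log x,                         & \alpha = 1,
  \end{cases}
\]
% alpha = 0 corresponds to throughput maximization, alpha = 1 to proportional
% fairness, and alpha -> infinity approaches the max-min fairness used in the paper.
```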

The simulation part of this paper also looks good and reasonable. The simulation tries to capture the interaction between the queuing algorithm adopted at the gateway and the flow control algorithm at the source. I really like the style of the simulation.
#13 posted on Feb 10 2010, 22:55 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
The paper presents fair queuing - an approximation of bit-by-bit round-robin servicing of packet flows - as an alternative queuing algorithm for routers. Distinct flows are identified by the source/destination address pair, and inbound packets for each flow are placed in a separate queue. Ideally, one bit from each flow would be transmitted in round-robin fashion, but packets cannot be split, so fair queuing approximates this ideal algorithm. Based on the complex math described in the paper, calculating which queue to service next seems computationally expensive. Also, selecting the appropriate queue from which to send the next packet is made more challenging by queues that do not always have packets waiting to be sent - for example, the queue of packets for a telnet session.

The analysis of the algorithm provides a variety of simulations, but is lacking in two regards. First, the simulation results are measured in packets instead of bytes. This contradicts the earlier requirement for fair queuing to be fair when packets are of varying lengths. Using packet granularity does not evaluate the ability of the algorithm to satisfy this requirement. Second, the authors do not provide any situations in which fair queuing would lead to behavior worse than FCFS. Is this possible? Based on the comment that other scenarios "involve complicated dynamic effects" which "challenge our understanding of the collective behavior of flow control algorithms," it seems likely there are situations in which FQ is worse than FCFS.
#14 posted on Feb 11 2010, 14:30 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
The definition of fairness is fair, but it assumes that the resource (presumably bandwidth) can be arbitrarily divided in space. The reality, however, only supports resource sharing in a time-division manner -- it's a scheduling game.

Roughly speaking, both FIFO and round robin can be seen as plausible *definitions* of fairness too -- if our imagination doesn't go beyond the discrete real world. The brilliance of this paper is the message that reasoning about an idealized scheme can lead to great algorithms.

The hypothetical bit-by-bit round-robin algorithm illustrates how the defined fairness works. From this model we can extract the single quantity that decides the optimal schedule: IdealFinishTime := IdealStartTime + PacketSize. The only rule is "pick the remaining packet with the smallest IdealFinishTime." The ideal times follow the clock in the imaginary BR world. In actual implementations, they need to be computed only when a new packet arrives.
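
Here is a minimal sketch of that rule under my own simplifications: the heap and the per-conversation bookkeeping are mine, and the "round" variable below just follows the finish tag of the packet being sent (a shortcut resembling self-clocked fair queueing) rather than the paper's exact R(t), which advances at a rate inversely proportional to the number of active conversations.

```python
import heapq
from collections import defaultdict

class FairQueueSketch:
    """Toy scheduler: always send the queued packet with the smallest IdealFinishTime."""

    def __init__(self):
        self.last_finish = defaultdict(float)  # per-conversation F of its latest packet
        self.round = 0.0                       # crude stand-in for the bit-by-bit clock R(t)
        self.heap = []                         # (finish_tag, seq, conversation, packet)
        self.seq = 0                           # tie-breaker so the heap never compares packets

    def on_arrival(self, conv, packet, size):
        # IdealStartTime = max(this conversation's previous finish, the current round)
        start = max(self.last_finish[conv], self.round)
        finish = start + size                  # IdealFinishTime = IdealStartTime + PacketSize
        self.last_finish[conv] = finish
        heapq.heappush(self.heap, (finish, self.seq, conv, packet))
        self.seq += 1

    def next_packet(self):
        # "Pick the remaining packet with the smallest IdealFinishTime."
        if not self.heap:
            return None
        finish, _, conv, packet = heapq.heappop(self.heap)
        self.round = max(self.round, finish)   # simplified clock advance (see note above)
        return conv, packet
```

Used as a toy: create a FairQueueSketch, call on_arrival() for each incoming packet with its conversation id and length, and call next_packet() whenever the output line is free.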