
Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism
by David D. Clark, Scott Shenker, Lixia Zhang
Public comments
#1 posted on Feb 10 2008, 03:39 in collection CMU 15-744: Computer Networks -- Spring 08
This paper adds some meat to the discussion in the other Shenker paper we read.

Again, the paper observes that we currently have separate networks for data and voice, and that we could take advantage of vast economies of scale if only we could combine them. However, the two networks were designed for very different types of traffic, and the current Internet, while satisfactory for data transfer, is not sufficient for real-time applications (like video and voice).

The paper observes (really, predicts) that most real-time applications will be 'playback applications': since there is inevitably some packet delay, they buffer data up to a playback point. Packets that arrive before the playback point can be used, while those that arrive after it are useless. In order to set the playback point, however, these applications need a bound on the delay they can expect. The network can also give priority to 'late' packets that are likely to miss their playback point if delayed further.
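To make the playback-point mechanism concrete, here is a rough sketch in Python of how such a receive buffer might behave (the class and method names are mine, not the paper's):

```python
import heapq

class PlaybackBuffer:
    """Toy model of a playback application's receive buffer."""

    def __init__(self, playback_offset):
        # playback_offset: how long after a packet's generation time its
        # playout is scheduled -- this is the playback point.
        self.playback_offset = playback_offset
        self.queue = []   # min-heap of (playout_time, seq, payload)
        self.seq = 0      # tie-breaker so payloads are never compared

    def on_packet(self, gen_time, arrival_time, payload):
        playout_time = gen_time + self.playback_offset
        if arrival_time > playout_time:
            return False  # missed its playback point: useless
        heapq.heappush(self.queue, (playout_time, self.seq, payload))
        self.seq += 1
        return True       # arrived in time: will be played

    def play_due(self, now):
        # Pop and return every buffered packet whose playout time has come.
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(heapq.heappop(self.queue)[2])
        return out
```

Everything hinges on choosing playback_offset well, which is exactly why the application needs some bound on the delay it can expect.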

One of the main contributions of the paper is to suggest a dichotomy: some applications will be intolerant of delay and so will have rigid playback points (since adaptive playback points will necessarily make mistakes as latency decreases and then increases again). Here, the authors offer remote surgery as an example of such an application. Other applications will be more tolerant of delay and can afford to set their playback points adaptively. The intolerant applications will require guaranteed delay bounds, whereas the tolerant ones can settle for predicted bounds.

The paper presents scheduling algorithms for each type of traffic, and then gives a method for combining both types over the same network.

Issues:

The idea of guaranteed versus predicted delay makes concrete Shenker's proposal to offer tiers of service quality from which users can choose. However, the paper again fails to address the issue of incentivizing users to report their types honestly, except by suggesting that traffic be charged at differing rates. Yet Shenker raised compelling problems with charging for Internet traffic, so the problem of incentivizing users to truthfully report their delay tolerance remains open if we wish to maintain the unlimited-usage nature of the Internet.
#2 posted on Feb 10 2008, 10:38 in collection CMU 15-744: Computer Networks -- Spring 08
The reason adaptive applications have a smaller playback point than rigid ones is that the latter base their playback time on the worst-case scenario (an upper bound on the maximum possible delay), while adaptive applications can shift their playback point to reflect the current delay.
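A crude way to see the difference in code (the EWMA-plus-margin estimator below is my own illustration, not the paper's exact rule):

```python
# Rigid application: the playback point is pinned to the network's
# advertised worst-case delay bound, however pessimistic that bound is.
def rigid_playback_point(worst_case_bound):
    return worst_case_bound

# Adaptive application: track the delays actually being observed and sit
# just above them, so the playback point shrinks when the network is
# behaving well and grows when it is not.
class AdaptivePlaybackPoint:
    def __init__(self, alpha=0.9, margin=4.0):
        self.alpha = alpha     # weight given to past observations
        self.margin = margin   # safety multiplier on the deviation
        self.avg = 0.0         # smoothed delay estimate
        self.dev = 0.0         # smoothed deviation (jitter) estimate

    def observe(self, delay):
        self.dev = self.alpha * self.dev + (1 - self.alpha) * abs(delay - self.avg)
        self.avg = self.alpha * self.avg + (1 - self.alpha) * delay

    def playback_point(self):
        return self.avg + self.margin * self.dev
```

The adaptive point makes mistakes precisely when the delay distribution shifts faster than the estimator can track it, which is the failure mode intolerant applications can't accept.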


This Predicted Service concept is interesting. I think that most phone calls, for example, belong to this class of service: we don't want big delays during the conversation, but we tolerate losses (if we miss what the other person says due to losses, we can just ask them to repeat it).
#3 posted on Feb 10 2008, 12:40 in collection CMU 15-744: Computer Networks -- Spring 08
I'm not sure those packet scheduling mechanisms would work in today's Internet environment. Real-time voice/video applications like Skype basically run on P2P overlay networks, which sometimes leads to path diversity even within a single communication session. Additionally, users don't want to reveal what sort of applications they are using, for privacy reasons (especially for real-time voice/video communication).
#4 posted on Feb 10 2008, 15:00 in collection CMU 15-744: Computer Networks -- Spring 08
The observation that real-time applications can be adaptive is an important one; for instance, YouTube and other Flash video sites carry traffic that should work more or less in real time, yet buffering isn't really a problem for them. Whether we really need to know what kind of delay to expect before sending traffic is a different question -- this seems to be exactly the sort of thing endpoints can handle on their own.

Also, really, would you want to trust your remote surgery to Internet traffic? What if a router crashes, or hasn't implemented the queueing discipline properly?
#5 posted on Feb 10 2008, 15:09 in collection CMU 15-744: Computer Networks -- Spring 08
The basic idea of the paper is that in order to guarantee QoS (i.e., bounded delay), a network needs to be able to 1) isolate the different classes of flows -- guaranteed traffic, predicted traffic, and datagrams -- and 2) apply a different sharing method to each, e.g., WFQ, FIFO, priority, or FIFO+.
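For reference, here is a toy version of the WFQ side (simplified: real WFQ maintains a system-wide virtual clock, which I approximate here with the finish time of the last packet served; names are mine):

```python
import heapq

class WFQScheduler:
    """Simplified weighted fair queueing: always serve the queued packet
    with the smallest virtual finish time, so each flow's share of the
    link is proportional to its weight."""

    def __init__(self):
        self.last_finish = {}  # flow id -> virtual finish of its last packet
        self.vtime = 0.0       # crude stand-in for the system virtual clock
        self.heap = []         # (finish_time, seq, packet)
        self.seq = 0           # tie-breaker for equal finish times

    def enqueue(self, flow, weight, length, packet):
        # A packet's service starts at the later of 'now' (in virtual
        # time) and the finish of the same flow's previous packet.
        start = max(self.vtime, self.last_finish.get(flow, 0.0))
        finish = start + length / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, packet))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, packet = heapq.heappop(self.heap)
        self.vtime = finish
        return packet
```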

However, what is still not clear to me is the authors' argument in section 5 for why FIFO is a more efficient sharing scheme than WFQ, especially: "Consider what happens when we use the FIFO queueing discipline instead of WFQ. Now when a burst from one source arrives, this burst passes through the queue in a clump while subsequent packets from the other sources are temporarily delayed; however, [this delay] is much smaller than the delay that the bursting source would have received under WFQ."
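For what it's worth, my reading of the FIFO+ variant they propose is roughly this sketch (field names and the running-average update are my own; the paper's header encoding differs):

```python
import heapq
from dataclasses import dataclass

@dataclass
class Packet:
    payload: object
    offset: float = 0.0  # cumulative (my delay - class average delay) upstream

class FifoPlusQueue:
    """One hop's shared queue under FIFO+: packets are ordered by when an
    'average' packet with the same history would have arrived, so a packet
    that was unlucky at earlier hops gets served as if it arrived earlier."""

    def __init__(self, alpha=0.99):
        self.heap = []        # (effective_arrival, seq, real_arrival, packet)
        self.seq = 0
        self.alpha = alpha
        self.avg_delay = 0.0  # smoothed per-hop delay for this class

    def enqueue(self, packet, arrival):
        effective = arrival - packet.offset
        heapq.heappush(self.heap, (effective, self.seq, arrival, packet))
        self.seq += 1

    def dequeue(self, now):
        if not self.heap:
            return None
        _, _, arrival, packet = heapq.heappop(self.heap)
        delay = now - arrival
        # Update the class average, then charge the packet with how far it
        # deviated from that average at this hop.
        self.avg_delay = self.alpha * self.avg_delay + (1 - self.alpha) * delay
        packet.offset += delay - self.avg_delay
        return packet
```

So jitter gets smoothed across the aggregate rather than bounded per flow, which I take to be the efficiency claim.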
#6 posted on Feb 10 2008, 16:22 in collection CMU 15-744: Computer Networks -- Spring 08
The paper classifies applications into intolerant & rigid versus tolerant & adaptive. Based on that, two service models are proposed in addition to the best-effort datagram service model.

The remote surgery example sounded too far-fetched, and though FIFO+ looks like a simple extension to FIFO, doesn't it need much more complex queue management (simplicity being FIFO's main advantage)?
#7 posted on Feb 10 2008, 16:56 in collection CMU 15-744: Computer Networks -- Spring 08
This paper argues that applications can be made "real-time" over an unreliable, packet-switched network. This is supported by making applications adaptive, i.e., by adjusting their playback point to achieve a reasonable level of quality. Compared to intolerant applications using guaranteed service, the authors argue that adaptive applications can in some cases perform better by optimistically adjusting their playback point at run time. The paper made a pretty good prediction of how the Internet today supports a variety of real-time applications (e.g., VoIP, video conferencing, remote desktop), although it doesn't look like its mechanisms were actually deployed (e.g., the jitter slack in the packet header).
#8 posted on Feb 10 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
I had the same question as Rathapon (Comment 6). Even if FIFO is more efficient than WFQ, wouldn't FIFO lead to unfairness in the long term? Worse, ill-behaved applications could easily take advantage of this.
#9 posted on Feb 10 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
It looks like this paper was actually written in 1992 -- three years before the other Shenker paper. This might help address Aaron's point that this paper also fails to address the issue of providing incentives for users to report their types honestly. It is kind of interesting to see this chronology: this paper, which is much more technical, actually predates the other, less technical and more visionary paper.
#10 posted on Feb 10 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
This 1992 paper discusses integrating real-time applications into the packet-based Internet. Fifteen years later, such applications are celebrated by millions of users on Skype and YouTube. I'm wondering how the ideas presented in the paper are relevant to the history and future of today's technology.
#11 posted on Feb 10 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
The integration of traditional telephony with the packet-switched Internet described in this paper is now actually happening, with Skype and other VoIP services. From personal experience, it seems VoIP works pretty well even without explicit QoS controls. Not as good as the traditional landline telephone network, but probably on par with cell phone service.

This paper, like the earlier one, still doesn't seem to propose a realistic way of accounting for the cost of converting existing networks to a new architecture, a cost that I imagine would be quite high.
#12 posted on Feb 11 2008, 12:04 in collection CMU 15-744: Computer Networks -- Spring 08
Honestly, I don't like this paper. Although it makes a successful prediction that real-time applications would occupy a large part of Internet traffic, I don't think the proposed architecture really contributed to the boom of VoIP or streaming. In my opinion, the success of those multimedia applications is due to improvements in multimedia technology (Flash, which YouTube uses, dominates the Internet) and to multipath routing at the application layer (BitTorrent and its follow-ups). Even Skype operates on a decentralized, distributed model rather than the more traditional client-server model.

I also don't think the scheduling algorithms will be deployed by ISPs.