
Fundamental Design Issues for the Future Internet
by Scott Shenker
Public comments
#1 posted on Feb 08 2008, 00:46 in collection CMU 15-744: Computer Networks -- Spring 08
This is a visionary paper on the architectural issues that should be addressed in designing the future Internet, where demand from real-time network applications will keep growing. The author argues that new service models are needed to give different classes of applications different qualities of service, thus achieving higher total utility for the network. He also compares admission control and overprovisioning as ways of resolving the overload problem.

As real-time applications like video and audio emerge, the current Internet architecture faces two fundamental problems: large variations in packet delay prevent these real-time applications from working properly, and the new applications take unfair shares of bandwidth from traditional data applications that conform to congestion control. Fair queuing and modifying the applications themselves might be considered as remedies, but the author points out that neither solves the problems adequately. Instead, he proposes modifying the basic Internet architecture: extending the service model beyond the single "best effort" class and allowing admission control on application requests. To formalize the design goal of this new architecture, the author defines the total utility of the network as the sum of the utilities delivered to each application.

To support the need for an extended service model, the author provides a simple comparison of two service models applied to two kinds of applications (a data application and a real-time application). In this example, the prioritized service model achieves higher overall network utility than the current best-effort model by giving different qualities of service to different kinds of applications. However, the extended service model raises another question: "how does the architecture decide which service to give a particular flow?" Two approaches are mentioned: implicit (the network infers the service class) and explicit (the application requests the service class).
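To make this concrete, here is a minimal sketch (the utility shapes and numbers are my own illustration, not the paper's) of how a two-class priority model can beat best effort in sum utility when one flow is delay-sensitive and the other is elastic:

    # Illustrative sketch; the utility functions and delays are hypothetical.
    def u_realtime(delay_ms):
        # Real-time app: utility falls off sharply with queuing delay.
        return max(0.0, 1.0 - delay_ms / 100.0)

    def u_elastic(delay_ms):
        # Elastic data app: barely cares about moderate delay.
        return max(0.0, 1.0 - delay_ms / 1000.0)

    # Best effort: both flows see the same average queuing delay.
    best_effort = u_realtime(80) + u_elastic(80)    # 0.20 + 0.92 = 1.12

    # Priority service: the real-time flow is served first and sees low
    # delay; the elastic flow absorbs the extra queuing.
    prioritized = u_realtime(10) + u_elastic(150)   # 0.90 + 0.85 = 1.75

The elastic flow loses a little utility from the extra delay, but the real-time flow gains much more, so the total rises.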

Lastly, the author discusses the relative cost-effectiveness of network overprovisioning and admission control, and argues that overprovisioning cannot be cost-effective in networks with high variation in traffic load.
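A rough back-of-the-envelope reading of that argument (my own hypothetical numbers, not the paper's): the cost gap between the two approaches grows with the traffic's peak-to-mean ratio.

    # Hypothetical illustration of the cost argument.
    mean_load_gbps = 1.0
    peak_to_mean = 5.0      # highly bursty traffic
    headroom = 1.2          # 20% safety margin

    # Overprovisioning must buy capacity for the peak; a network with
    # admission control can provision nearer the mean and turn away the
    # excess during rare peaks.
    overprovisioned = mean_load_gbps * peak_to_mean * headroom   # 6.0 Gb/s
    admission_controlled = mean_load_gbps * headroom             # 1.2 Gb/s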


Issues

- The author provides a simple and very clear comparison between the service models by defining a utility function. In this example, the utility function depends on queuing delay only. However, what other factors would be important to capture in today's Internet environment?

- The author predicted that networks relying on overprovisioning, being less cost-effective, would lose in the market. Yet ISPs still prefer overprovisioning to admission control. Why is that? Is it because overprovisioning is actually cheaper, or because implementing admission control is impossible in the already-ossified Internet architecture?

- These days, the dominant traffic on the Internet is P2P, which is an 'elastic' data application. Are the author's arguments more or less valid in today's Internet environment? (For example, prioritizing real-time applications may introduce no problem since they account for a small portion of traffic; or perhaps the plain "best-effort" model fits today's networks well enough.)
#2 posted on Feb 10 2008, 02:27 in collection CMU 15-744: Computer Networks -- Spring 08
This 1995 paper offers a series of predictions about the future growth of the internet and suggests structural changes that will need to be made to accommodate that growth. The paper is both remarkably prescient and amusingly dated.

Emphasizing the growth of the internet in the ten years from 1985 to the writing of the paper, Shenker highlights evidence of its popularity: Newsweek has a regular column about the internet; Time featured "the internet" on its front cover. Yet the author also makes startlingly accurate predictions, from the widespread growth of video and voice to the inevitability of spam email. The paper considers how to adapt to these changes.

The current design of the internet offers "best effort" service -- there are no latency or arrival guarantees. As a result, applications such as file transfer and email have been designed to be resilient to delay and packet loss. But Shenker predicts (correctly) that new applications like video and voice will require real-time data transfer, which will be problematic for two reasons: they will not be resilient to delay, and so will experience poor performance; and they will not back off in the face of congestion, and so will "steal" bandwidth from data applications.

In order to give intuition for these problems and his design approach, Shenker considers the vector of service characteristics delivered to each user (network latency among other features), and assigns to each user a distinct utility function of this vector. He then defines the sum over all users of their utilities to be the objective that the network designer should be maximizing.
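In symbols (my notation, inferred from the summary above): if flow $i$ receives a service vector $s_i$ (delay, bandwidth, loss, and so on) and values it with a utility function $U_i$, the objective is

    V = \sum_i U_i(s_i)

maximized over the feasible allocations of the network's resources.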

He then proceeds with a simple example demonstrating that offering two-tiered service (say, fast and slow) can improve sum utility if different users have different sensitivities to delay. He observes that one alternative would be a separate dedicated network for high-demand applications, but offers another simple example demonstrating that this is suboptimal (due to wasted bandwidth on each network). The question that remains is: how is network service assigned to flows? Do the flows pick, or does the network?
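The wasted-bandwidth point can be illustrated with a simple queueing calculation (my own sketch, using an M/M/1 mean-delay formula rather than anything from the paper):

    # Rough illustration of why splitting capacity into dedicated per-class
    # networks wastes bandwidth relative to one shared link.
    def mm1_delay(arrival_rate, capacity):
        """Mean delay of an M/M/1 queue, valid while arrival_rate < capacity."""
        return 1.0 / (capacity - arrival_rate)

    total_capacity = 10.0   # packets per ms (hypothetical)
    class_a_load = 3.0
    class_b_load = 3.0

    # Two dedicated networks, each with half the capacity:
    split = mm1_delay(class_a_load, total_capacity / 2) \
          + mm1_delay(class_b_load, total_capacity / 2)              # 0.5 + 0.5 = 1.0 ms

    # One shared link carrying both classes:
    shared = 2 * mm1_delay(class_a_load + class_b_load, total_capacity)  # 2 * 0.25 = 0.5 ms

For the same total capacity, the shared link gives every class lower delay, which is the statistical-multiplexing argument for one integrated network with multiple service classes rather than separate dedicated networks.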

If the network picks the level of service, Shenker predicts problems as the internet grows: routers will only know about certain types of applications, and new, nonstandard applications will not get efficient flow allocation. This is avoided if the flows pick their own level of service, but that brings with it the problem of incentives. In a selfish (and large) world, one cannot expect applications to voluntarily opt for inferior service. Here Shenker does not offer much of a solution: he suggests charging different rates for different services, but warns that this would likely destroy the culture of the internet, discouraging browsing and hindering the free dissemination of information.

Finally, this paper considers the issue of admission control: giving routers the ability to turn away flows of certain types in the face of congestion. Shenker advocates for admission control, again in the name of optimizing sum utility.

Issues:

This paper clearly identifies the incentive problems of soliciting service quality requirements from users, but does not provide a satisfactory solution. What else could be done, if charging for use is distasteful?

Why is optimizing sum utility the best metric? As admission control exemplifies, the optimal sum utility may lead to an unfair allocation, with high bandwidth for some users, and denial of service to others. What other metrics might be appropriate? Minimizing the maximum latency over users?
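For contrast, the two objectives can be written side by side (my notation): the paper's criterion is $\max \sum_i U_i(s_i)$, while a fairness-oriented alternative like the one suggested here is $\max \min_i U_i(s_i)$ (equivalently, minimizing the worst latency when utility decreases in latency). The first may sacrifice a few flows entirely if that raises the aggregate; the second never abandons the worst-off user, at a possible cost in total utility.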
#3 posted on Feb 10 2008, 03:51 in collection CMU 15-744: Computer Networks -- Spring 08
Nowadays, multimedia traffic seems to be handled fairly well by various overlay networks, such as Akamai-style content caching and routing and Skype-style p2p-based VoIP. They may violate the original design philosophy of the Internet in some sense, but aren't they more visionary solutions for real-time applications than new architectures?
#4 posted on Feb 10 2008, 09:56 in collection CMU 15-744: Computer Networks -- Spring 08
Regarding the question of who chooses the service, I agree that applications should explicitly request it -- not only because of the disadvantages of the alternative approach, but because, unlike the author, I think that using different prices for different services as an incentive for good behavior is a good thing. I do not believe that users would necessarily be against this pricing model. For instance, I do not know how cellphone networks work in the US, but in Portugal we have a similar approach where we pay a different price for each service (phone calls, text messages, multimedia messages, Internet access, ...). This is one of the ways the Internet has changed over the years: it used to be a luxury to access the Internet, but nowadays we cannot live without it, and we are willing to pay for better service (and for some users this model could actually be cheaper, if they never use applications that require good quality of service).

In terms of today's technology, I agree with those who say that most of today's applications are still rather "elastic". Even for communication, most people still prefer text (email, for example) to voice communication over the Internet.
#5 posted on Feb 10 2008, 14:18 in collection CMU 15-744: Computer Networks -- Spring 08
This paper is very interesting. I like the way the author justified his arguments about (1) the efficiency of the network design and (2) the possibility of network overload for different applications, using the utility-function concept.

I also believe that QoS indicators other than queuing delay could easily be included in this utility function.
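For example (an illustrative form of my own, not taken from the paper), a flow's utility could weight several indicators at once:

    U_i(d_i, p_i, b_i) = w_d f_i(d_i) + w_p g_i(p_i) + w_b h_i(b_i)

where $d_i$ is queuing delay, $p_i$ loss rate, and $b_i$ throughput; a real-time flow would put most of its weight on delay and loss, an elastic flow on throughput.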
#6 posted on Feb 10 2008, 14:47 in collection CMU 15-744: Computer Networks -- Spring 08
I agree with other comments -- this paper really did foresee what could potentially be big issues facing the internet today (like net neutrality!).

Some other thoughts:
Admission control, while not "officially" deployed, should come about automatically for certain utility-function shapes -- users getting bad service will simply stop using the application (a toy sketch of this idea appears below). In this case, some of the "unfairness" effects in traditional queuing and routing (like shut-out) might well be serving in place of real admission control.
Bandwidth requirements for video are actually somewhat lower than predicted by this paper, in part because low link speeds have motivated more efficient encoding strategies. In other words, applications and users have figured out how to deal with the internet's current best-effort policy without having to hack the basic architecture.
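A toy sketch of that self-limiting effect (entirely my own construction): if each user abandons the application once total delay exceeds their tolerance, the load sheds itself until the survivors get acceptable service, which looks a lot like informal admission control.

    # Toy model: delay grows with the number of active users, and users
    # give up once delay exceeds their tolerance.
    def settle(num_users, delay_per_user=2.0, tolerance=120.0):
        """Shrink the user population until the resulting delay is tolerable."""
        while num_users > 0 and num_users * delay_per_user > tolerance:
            num_users -= 1          # one more user stops using the application
        return num_users

    print(settle(100))   # -> 60: load self-regulates without explicit admission control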
#7 posted on Feb 10 2008, 16:14 in collection CMU 15-744: Computer Networks -- Spring 08
Like some of the other posters, I found the incentive issues presented in the paper to be very interesting. The situation differs from standard economic/mechanism design situations in that the network itself is not trying to maximize its own profit (in terms of payments received); yet without a system of payments to the network, users have no incentive to report their requirements truthfully and network performance will suffer. But as Shenker noted, charging all the users goes against the notion of the internet as being a "gift economy," and would definitely deter users from using it.

On an unrelated note, I appreciated the author's sense of humor throughout the paper (e.g., "this author also pleads guilty to this crime" (pg 1177), "in this section the larceny is especially egregious" (1179)).
#8 posted on Feb 10 2008, 16:23 in collection CMU 15-744: Computer Networks -- Spring 08
It is interesting to see that some of the issues this paper raises are still relevant today (how does the internet handle video streaming or applications with real-time requirements?). This paper does a good job of identifying potential problems for the internet. However, it seems that even today none of the suggested extensions to the internet model have been adopted; for instance, the internet still offers only best-effort service. Moreover, it is quite interesting that most of the applications the author suggests would require better service seem to have found ways to work efficiently without requiring any modification of the internet's current service model.
#9 posted on Feb 10 2008, 16:32 in collection CMU 15-744: Computer Networks -- Spring 08
Like other comments above, I question whether the sum of utilities of users is the best measure of the total utility of a system, but I definitely agree with the author of the article that the value of a network should be measured from the end user's point of view.
#10 posted on Feb 10 2008, 16:33 in collection CMU 15-744: Computer Networks -- Spring 08
While offering different qualities of service (QoS) is an interesting idea, the paper didn't really seem to propose realistic ways of preventing applications from always requesting the highest level of service. Charging based on *actual* usage wouldn't be transparent enough to the customer unless there were, e.g., a physical switch to turn a request for higher QoS on or off. Perhaps providers could allocate a fraction of the provided bandwidth as "high-QoS" bandwidth for applications that are delay-sensitive but low-bandwidth, like voice data or multiplayer network games. For example, an ISP might offer customers a 768 Kb/s DSL line with up to 56 Kb/s allowed as high-QoS. If there's enough demand, users who need more high-QoS bandwidth could be given the option to purchase a higher maximum high-QoS bandwidth.
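A sketch of how that split might be enforced (hypothetical code of my own; the 768/56 Kb/s figures are just the example above): a strict-priority scheduler where the high-QoS class is metered by a token bucket so it can never starve the best-effort class.

    from collections import deque

    class CappedPriorityLink:
        """Hypothetical two-class scheduler: high-QoS traffic is served first,
        but only up to a token-bucket cap, so best-effort traffic always keeps
        the remaining capacity."""

        def __init__(self, link_kbps=768, high_qos_kbps=56):
            self.link_kbps = link_kbps
            self.high_kbps = high_qos_kbps
            self.high_q = deque()       # delay-sensitive packets (sizes in kilobits)
            self.low_q = deque()        # best-effort packets
            self.high_tokens = 0.0      # kilobits the high-QoS class may still send

        def enqueue(self, packet_kbits, high_qos=False):
            (self.high_q if high_qos else self.low_q).append(packet_kbits)

        def tick(self, seconds=1.0):
            """Serve one interval: high-QoS first (within its cap), then best effort."""
            # Refill the bucket, capping the burst at two intervals' worth of tokens.
            self.high_tokens = min(self.high_tokens + self.high_kbps * seconds,
                                   2 * self.high_kbps)
            budget = self.link_kbps * seconds      # kilobits the link can carry now
            sent = []
            while self.high_q and self.high_q[0] <= min(self.high_tokens, budget):
                pkt = self.high_q.popleft()
                self.high_tokens -= pkt
                budget -= pkt
                sent.append(("high", pkt))
            while self.low_q and self.low_q[0] <= budget:
                pkt = self.low_q.popleft()
                budget -= pkt
                sent.append(("low", pkt))
            return sent

In this sketch, high-QoS packets always jump the queue, but on average they can never take more than 56 Kb/s, so the elastic traffic is guaranteed the rest of the line.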

Regarding admission control: the idea of refusing a flow, at least at this point in time (maybe it was different in 1995), is a bad one IMHO. Today, Internet users (like phone users) expect not to be blocked from access except under extraordinary conditions or network failures.
#11 posted on Feb 10 2008, 16:36 in collection CMU 15-744: Computer Networks -- Spring 08
I think that bandwidth is not as expensive as it was 10+ years ago, especially for PC users. On the other hand, the number of users and hosts is also growing fast, and the recent popularity of P2P applications is creating a huge amount of traffic, so the internet is still as crowded as it once was, if not more. So the incentives for extending the service model still make good sense, but implementing the author's ideas, such as admission control, may not be smooth, because ISPs have their own business concerns, and the internet has grown so large that even an incremental change of protocol or design would take a long time.
#12 posted on Feb 10 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
Like many others, I really liked the paper for the way the author considered various points in the design space and gave arguments based on tradeoffs.

The basic idea of designing an architecture around making users happy, rather than around network-centric properties, was nice.
#13 posted on Feb 11 2008, 13:17 in collection CMU 15-744: Computer Networks -- Spring 08
This paper is much easier to read because it avoids cool but vague new terminology. Basically, I like this paper.

The utility function proposed here (I am not sure whether Shenker was the first to use such functions) seems to have spawned thousands of follow-ups -- F. Kelly's work is the most famous. The issues discussed in this paper, such as incentive mechanisms, stability, and admission control, have all become very hot research topics. I think the most valuable part of this paper is that it asks many good questions.

This paper was written in the mid-90s, more than ten years ago. Now we all know that ISPs are too lazy and conservative to turn on those new options. So, given the rapid development of application-layer research, when we examine these ideas, can we find a way to achieve these goals without major changes to the Internet framework?