
End-To-End Arguments In System Design
by Jerome H. Saltzer, David D. Clark, David P. Reed
Public comments
#1 posted on Sep 04 2008, 13:01 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
So this paper is a well-thought-out argument that "high-level" functionality should be at the end nodes and not the intermediate nodes of the network. The authors do a very good job of giving specific examples to support their case, and these examples span multiple disciplines within networking. They also give some counterexamples, such as the discussion of voice packets, which shows that the overhead of a strictly end-to-end approach would be counterproductive for real-time voice.

On the more negative side, I would say that this really isn't a research paper; it is more of a position paper. I was a bit confused by the section on "secure transmission of data," which I attribute to the fact that a lot has changed in the world of security since this paper was published.

This is a paper that is always talked about in networking; however, I feel that many people take the argument to an extreme. I really like the line "Thus the end-to-end argument is not an absolute rule, but rather a guideline..." As networking moves toward more complex applications, I believe the strict rules of end-to-end semantics will have to be relaxed. For example, it is beneficial to split connections at the access point where a wired LAN meets a wireless LAN (split-TCP).
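Roughly what "splitting at the AP" means, as a hedged sketch of my own (addresses, ports, and names are placeholders, not from the paper): the relay terminates the wireless-side TCP connection locally and opens a separate wired-side connection toward the real server, so each segment recovers its own losses.

import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 9000)          # wireless-facing side (placeholder)
UPSTREAM_ADDR = ("server.example", 80)   # wired-facing side (placeholder)

def pump(src, dst):
    # Copy bytes one way until EOF, then signal EOF to the other side.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle(client):
    # The relay ACKs the wireless client on its own and retransmits
    # independently on the wired side; per the paper, this can help
    # performance but does not replace an application-level check.
    upstream = socket.create_connection(UPSTREAM_ADDR)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

with socket.socket() as srv:
    srv.bind(LISTEN_ADDR)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()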

Overall this is a well-written paper with a well-thought-out argument. Some of the examples turned out to be right on the money as the "Internet" developed after the paper was published.
#2 posted on Sep 13 2009, 04:15
It is impressive that the paper pulled off a clear discussion of a problem that is abstract and somewhat hard to define. Through various examples, the authors very successfully made the point that functions placed in the data communication system often have to be duplicated at the higher level anyway. Their arguments on each functionality (careful file transfer, secure transmission, etc.) were convincing and clearly supported by examples and suggestions about moving the functionality to the higher level.

However, because the authors concentrated on attacking one functionality at a time, I feel they downplayed the advantage of low-level functions in reducing the burden of developing applications. Adding each functionality at the application level sounds simple enough, but it also means developers have to implement, customize, and thoroughly test all of these functionalities for every new application they work on. Perhaps end-to-end system design will yield less redundancy in programs, but it will increase redundancy in the development process. This aspect makes the argument seem a bit outdated, as human labor is becoming more and more expensive compared to machine labor.
#3 posted on Sep 13 2009, 14:08
This paper argued that in computer networks, functions that require application-specific knowledge should be implemented by the applications on the end hosts, not by the network itself. A good example, "careful file transfer," is used to illustrate the idea of the end-to-end argument, and this argument has served very well in designing the Internet. However, as the authors wrote, the end-to-end argument is "not an absolute rule but rather a guideline." In recent years, a number of new requirements on the Internet and its applications might best be fulfilled by adding new mechanisms rather than strictly following the end-to-end argument. For example, NAT is used to temporarily ease the shortage of IP addresses, and its implementation violates the end-to-end argument. Another example is today's multimedia services on the Internet, which demand higher transfer throughput than before; one solution is to install intermediate storage sites that cache the video content close to the end user, which is not a simple end-to-end structure but a two-stage delivery via intermediate servers. After all, the primary point of the paper is sound and still very useful for today's computer system design.
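For concreteness, here is a minimal sketch (my own illustration, not code from the paper) of the end-to-end check behind "careful file transfer": the endpoints compute a digest over the file itself and retry the whole transfer if it does not match, regardless of which component corrupted the data.

import hashlib

def end_to_end_digest(path, chunk_size=1 << 20):
    # Digest computed by the application over the entire file on disk.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def careful_transfer(send_file, sender_path, receiver_path, max_attempts=3):
    # send_file is whatever transport you like; the correctness check is the application's.
    expected = end_to_end_digest(sender_path)
    for _ in range(max_attempts):
        send_file(sender_path, receiver_path)
        if end_to_end_digest(receiver_path) == expected:
            return True      # end-to-end check passed
    return False             # report failure to the user, as the paper suggests

The point is that the same check covers disk errors, host crashes, and network corruption alike, which is why duplicating it inside the network buys only performance, not correctness.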
#4 posted on Jan 20 2010, 01:20

#5 posted on Jan 21 2010, 02:35 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
The paper strongly advocates the principle of making the low levels as lightweight as possible. The authors provide two justifications (feasibility and necessity) and one point of compromise (performance).

It's common to see a system designer tempted to implement certain functionalities at low levels in order to enhance system reliability or support application logic. There are really two types of hazards threatening reliability: those visible to the low level in question, e.g. faults in the physical layer; and those indistinguishable from valid data (again, to the eyes of the low level), e.g. high-level errors violating application logic.

[FEASIBILITY] The low level may be able to reduce or even eliminate the former type of hazards; application logic-related functions, however, are beyond its capability. They have to be handled by the application itself.

[NECESSITY] Since the way the application ensures reliability usually does not depend on the actual cause of a failure, the first type of hazard is caught as well. As a result, whatever the lower level does is redundant.

[PERFORMANCE] Nevertheless, the paper does admit that in certain cases it is desirable to push some reliability measures into the lower levels. The reason is that performance (i.e., success rates) may be significantly improved, especially when the first type of reliability hazard is substantial.
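A quick back-of-the-envelope illustration of that performance point (my own numbers, not the paper's): if each of n packets independently survives with probability 1 - p, a purely end-to-end retry of the whole file succeeds on a given attempt with probability (1 - p)^n, so the expected number of whole-file attempts is 1 / (1 - p)^n.

def expected_whole_file_attempts(packets: int, per_packet_loss: float) -> float:
    # Expected number of times the entire file must be re-sent if the only
    # recovery mechanism is the application's end-to-end check and retry.
    p_file_ok = (1.0 - per_packet_loss) ** packets
    return 1.0 / p_file_ok

for loss in (1e-6, 1e-4, 1e-2):
    print(f"per-packet loss {loss:g}: ~{expected_whole_file_attempts(10_000, loss):.3g} attempts")

With 10,000 packets the expected number of attempts stays near 1 while the per-packet loss is tiny, but blows up once it is not, which is exactly why it pays to let the lower levels keep the error rate low even though the final check stays at the ends.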

*MODULARITY* The paper claims that the only justification for disobeying the end-to-end principle is performance. There is actually another point that goes against its strong belief: system modularity. When designing a networking system, it may be the case that all anticipated applications require the same infrastructural functionality (e.g., encryption), and it would be suboptimal not to implement it at the lower levels. Speaking of security/encryption, the threefold argument in the section "Secure transmission of data" is not really convincing IMHO.
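For reference, the application-level alternative the paper has in mind looks roughly like the sketch below, assuming the third-party Python cryptography package (any authenticated-encryption scheme would do): the key stays with the two endpoints, and the receiver's decrypt step also checks authenticity, which is the part the paper argues the application must do no matter what the lower levels provide.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()       # shared out of band by the two endpoints only
sender = Fernet(key)
receiver = Fernet(key)

ciphertext = sender.encrypt(b"application data")   # this is all the network ever sees
try:
    plaintext = receiver.decrypt(ciphertext)       # decrypts and authenticates
except InvalidToken:
    plaintext = None                               # tampered with or corrupted in transit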
#6 posted on Jan 21 2010, 03:29 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
It's been more than 25 years since this paper was published, but the authors' thoughts on system design hold good even today. Most of it would seem like common sense now, but it's an impressive paper nevertheless.

- When developing a framework, one shouldn't try to envision all its possible applications. Doing so would only result in over-complicating it for the user. So, keep it simple: provide a basic set of functions that can be built upon.
A very evident networking example is UDP and TCP. Since the latter provided way too much functionality (reliability at the cost of delay), applications like voice/video needed a lighter-weight transport that focused primarily on speed (UDP).

- I really liked the MIT example where the gateway developed a transient error, thus corrupting the data. Such scenarios are never obvious unless you encounter them.

- Many of the examples seem like common sense now, but given the age of the paper, they were definitely brilliant ones. Using lower-level functions to reduce the frequency of errors, and using application-level functions to deal with the residual problems the lower level cannot address (since they are introduced by other components' failures), summarizes the paper for me!

*There is one small nugget I have to offer, though. When a presumably unnecessary/redundant feature is implemented at a lower system level, it allows for hardware acceleration in the future if it proves useful (which cuts down the delay).
Generally, implementations start at the software level, and when speed becomes a criterion/barrier, the natural instinct is to implement the same thing in hardware or to do away with it.
Hence, a blanket assumption that the feature isn't worth implementing at a lower level isn't the smartest thing to do.

~Raja
#7 posted on Jan 21 2010, 14:16 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
* Given that almost a quarter of a century has passed since the paper first appeared, it is quite interesting to see how things have evolved over the years using this guiding principle. The paper is quite well written, and most of the points seem like common sense now. In fact, I missed the date on my first reading and was wondering what happened to TCP. I am still surprised, though, that it is not mentioned, although the basic functionality of TCP had already been proposed by that time. I guess the variants with explicit congestion control/avoidance (Reno, Vegas) came much later.

*However, had I been in a physical and mental state to comprehend this paper way back in 1984, I would have been somewhat critical of it. For example, my 1984 self would not have understood ensuring reliability using checksums in their file transfer example. I would have argued that the checksum still needs to be "sent" via the underlying communication subsystem, so what is the guarantee that the checksum didn't get tampered with in transit? Maybe we needed another checksum for the checksum?

*Again, way back in 1984, from the examples, it seems that their argument for having the application layer ensure reliability anyway is heavily based on threat two, "transient errors."
To me, it would not make sense to use disk failures as an argument for how to design a network. Sure, they were a problem in 1984, but back then I would have argued for focusing on reducing disk failures and other such errors, rather than assuming they will always be there, that bits can flip somewhere along the way, and that therefore some checking functionality has to live in the application nonetheless.

*Again, purely from my 1984 self's point of view, I would have liked them to elaborate further on the "Guaranteeing the FIFO message delivery" section and the SWALLOW system, which aren't discussed much relative to the rest of the sections.

*I do, however, fully agree with the encryption example, which I also believe is something the applications need to handle on their own.

- Overall, a good read. The trade-off between functionality and performance is something to take away from this paper.