
Development of the Domain Name System
by Paul V. Mockapetris, Kevin J. Dunlap
Public comments
#1 posted on Mar 28 2008, 18:19 in collection CMU 15-744: Computer Networks -- Spring 08
This historic paper describes the evolution, surprises, successes, and shortcomings of DNS for the DARPA Internet as of the late 1980s. It's interesting that when the paper was written, not all hosts on the Internet supported this new mechanism for resolving domain names. The writing style of this paper is much like a discussion report, but that's quite nice in my mind.

The most important idea of DNS is decentralized maintenance: it is a distributed database whose control is delegated in a hierarchical fashion. Performance should be scalable and reasonably robust, and query results should be globally consistent.

The name structure of DNS is a variable-depth tree with labels. The domain name of a node is the concatenation of all labels on the path between the node and the root (e.g. for db.cs.cmu.edu, we have root → edu → cmu → cs → db). Delegation is realized by creating a subtree (a zone) under the owner node (e.g. db.cs.cmu.edu is created by the parent organization that owns cs.cmu.edu).
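The leaf-first label order can be sketched in a few lines of Python (the function and names here are illustrative, not from the paper):

```python
def label_path(domain):
    """Return the labels of a domain name ordered from the root down to the leaf.

    Domain names are written leaf-first ("db.cs.cmu.edu"), so the path
    from the root is simply the reversed label list.
    """
    return list(reversed(domain.split(".")))

print(label_path("db.cs.cmu.edu"))  # ['edu', 'cmu', 'cs', 'db']
```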

Records in the database have fields for type, class, TTL, name, and the corresponding data. For example, MX is the type for mail-exchanger records used in email routing. The number of different types and classes is limited, and introducing new types/classes can be tricky. The TTL determines the frequency of refreshing the cache. For the host-address type, the name field holds a hostname and the corresponding data field holds the host's IP address.
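The record fields described above can be sketched as a small data structure; the field names follow the description in the paper, but this class and the sample values are illustrative, not real DNS code:

```python
from dataclasses import dataclass

@dataclass
class ResourceRecord:
    name: str    # e.g. a hostname
    rtype: str   # e.g. "A" (host address) or "MX" (mail exchanger)
    rclass: str  # e.g. "IN" for the Internet class
    ttl: int     # seconds the record may be cached
    data: str    # e.g. an IP address for an A record

# Hypothetical record for illustration only
rr = ResourceRecord("db.cs.cmu.edu", "A", "IN", 86400, "128.2.0.1")
```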

Questions to ponder:

As the authors state in the paper, caching was critical to achieving reasonable performance in those early years. What is typical cache performance on the modern Internet, e.g. hit rate and speedup?

Caching also leads to transient inconsistencies. How can this effect be eliminated as much as possible?
#2 posted on Mar 30 2008, 08:57 in collection CMU 15-744: Computer Networks -- Spring 08
The authors ask if the DNS was a good idea after all. We need some distributed mechanism to map host names to IP addresses, and after years of using DNS on a fast-growing Internet it seems that DNS does its job well enough.

The TTL value is used to control when a cache entry expires, and it was expected that names would change slowly, so the TTL could be a large number. I think they mention in the paper that problems could arise if someone cached incorrect data with a TTL that is too large, but in most cases administrators end up setting values that are too low, causing worse performance but less inconsistency.
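The TTL trade-off above can be made concrete with a minimal sketch of a resolver-side cache (a toy, assuming a simple name-to-data map; not how any real resolver is implemented):

```python
import time

class TTLCache:
    """Toy resolver cache: entries are served until their TTL elapses."""

    def __init__(self):
        self._entries = {}  # name -> (data, expiry_timestamp)

    def put(self, name, data, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (data, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None or now >= entry[1]:
            return None  # expired or missing: must re-query the authority
        return entry[0]

cache = TTLCache()
cache.put("db.cs.cmu.edu", "128.2.0.1", ttl=60, now=1000.0)
print(cache.get("db.cs.cmu.edu", now=1030.0))  # → 128.2.0.1 (still fresh)
print(cache.get("db.cs.cmu.edu", now=1061.0))  # → None (TTL elapsed)
```

A large TTL means fewer re-queries (better performance) but a longer window in which stale or incorrect data is served; a small TTL inverts the trade-off, which is exactly the tension described above.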

DNS probably does not offer many services because it is easier to implement those services independently than to add a new type of service to DNS.


#3 posted on Mar 30 2008, 13:20 in collection CMU 15-744: Computer Networks -- Spring 08
This paper was a very informative read that describes the initial steps in the development of the DNS. The authors do a good job of presenting the problems they are trying to solve and have convincing arguments and examples that support their design decisions. It was interesting to see that apart from small changes (e.g. the 7 root servers became 13), the DNS core concepts have remained the same for the past 20 years. This fact alone also probably answers the authors’ question about the DNS being a good idea or not.

Below are a few interesting DNS-related links:

DNS response-time statistics
Massive DDoS Attack Hit DNS Root Servers
DNS DDoS Server Attack
Official root servers’ website
DNS Security Extensions

On a side-note I found this comment made by the authors quite amusing: "Documentation should always be written with the assumption that only the examples are read." :D
#4 posted on Mar 30 2008, 14:40 in collection CMU 15-744: Computer Networks -- Spring 08
Whenever I have a chance to read these kinds of old papers about the beginnings of the Internet architecture and protocols, I end up finding that security was not among those pioneers' concerns. As Samir briefly mentioned, a compromised DNS service could enable various malicious attacks by poisoning DNS caches and hijacking all subsequent network connections to servers with those hijacked names. Besides attacks against DNS services themselves, it has recently been found that some malicious web sites contain JavaScript code that exploits visiting web browsers and changes the DNS server entries of the visitors' home routers. This is a really practical attack that works even though the home router's configuration service can be accessed only from behind the firewall.
#5 posted on Mar 30 2008, 14:59 in collection CMU 15-744: Computer Networks -- Spring 08
This paper was a good introduction to DNS and naming issues in distributed computing. I like the three-way classification of issues into surprises, successes, and shortcomings. Without going through the history of a system, it is usually difficult to find a proper justification for the current design, since the requirements and constraints change in every phase of system evolution. Among the lessons learned by the authors, it was also interesting that removing function from a system is often more challenging than getting new features added. Perhaps the most challenging constraint for today's system engineers is that they cannot start over from scratch.
#6 posted on Mar 30 2008, 15:21 in collection CMU 15-744: Computer Networks -- Spring 08
This is a good, clearly-written paper explaining how DNS came about and the important technical details of the system. Delegating authority over subdomains to the owning domain is clearly a win.

IMO, the clearest downside of DNS is that changes can take up to 2 days to propagate across the Internet. A mechanism for faster propagation would be desirable. For example, servers higher up in the DNS hierarchy might actively push new changes to servers beneath them (but this would require maintaining a list of subscribed servers). Or, lower servers might query higher-up servers once every n minutes to ask for new changes.
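The poll-every-n-minutes idea can be sketched as follows (hypothetical functions, not real DNS code): a lower server tracks a zone version number and re-fetches the zone only when the upstream version has advanced.

```python
def maybe_refresh(local_serial, fetch_serial, fetch_zone):
    """Poll the upstream server's zone version; re-fetch only if it advanced.

    fetch_serial and fetch_zone stand in for network calls to the
    higher-up server (hypothetical API for illustration).
    """
    remote_serial = fetch_serial()
    if remote_serial > local_serial:
        return remote_serial, fetch_zone()  # new data available
    return local_serial, None               # nothing changed, skip transfer

# Simulated poll: upstream is at version 5, we only have version 3.
serial, zone = maybe_refresh(
    local_serial=3,
    fetch_serial=lambda: 5,
    fetch_zone=lambda: {"db.cs.cmu.edu": "128.2.0.1"},
)
```

For what it's worth, production DNS later adopted mechanisms along both of the lines suggested above: secondary servers poll the zone's SOA serial number at a refresh interval, and DNS NOTIFY (RFC 1996) lets a primary push a change notification to its secondaries.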
#8 posted on Mar 30 2008, 15:52 in collection CMU 15-744: Computer Networks -- Spring 08
The DNS (Domain Name System) aims to map a host name to its IP address. As the Internet grew in size, DNS evolved from a centralized lookup to a more distributed and hierarchical structure.

The main goals of the DNS design are summarized in the paper as follows:
1) Provide all of the same information as the centralized lookup
2) Allow a distributed maintenance mechanism
3) Impose no obvious size limits on names, name components, data associated with a name, etc.
4) Interoperate across the DARPA Internet and in as many other environments as possible
5) Provide tolerable performance

Fan Guo has made a post on the data format of the DNS, so I will summarize some observations people have made about the current system.

In practice, people were surprised to find that: 1) administrators do not have a good understanding of how to organize their DNS information; 2) there is still considerable delay when querying the system; 3) there is a high rate of negative responses.

On the other hand, the variable-depth hierarchy, organizational structure, datagram access, additional-section processing, caching, and mail-address cooperation are regarded as successful aspects of the system. Meanwhile, the authors consider class growth, the difficulty of upgrading applications, and distributing authority to be the main shortcomings the system should overcome.

I also have a few questions:
1) Is there any security mechanism in DNS to prevent malicious attacks?
2) Part of DNS runs over UDP. Is there a better way to ensure transmission reliability?
#9 posted on Mar 30 2008, 15:58 in collection CMU 15-744: Computer Networks -- Spring 08
Given the lack of control over TTL values (which creates all sorts of problems where resolvers not under your control might have long TTLs), have there been other more reliable schemes proposed for propagating DNS updates?
#10 posted on Mar 30 2008, 16:31 in collection CMU 15-744: Computer Networks -- Spring 08
The paper provides a good introduction to DNS. For out-of-the-field people like me, the summary of issues in the Surprises, Successes, and Shortcomings sections is very useful for understanding what is going on in this area.
#11 posted on Mar 30 2008, 22:52 in collection CMU 15-744: Computer Networks -- Spring 08
It is an interesting paper and easy to read. I think the tree hierarchy is one of its most successful design decisions, since it made distributed maintenance possible at that time. As pointed out in some earlier posts, DHTs nowadays make it possible to go back and rethink this hierarchy. However, I think the tree hierarchy is the easiest way for me to remember host names like csd.cs.cmu.edu. After all, we need to remember those names and type them into our browsers.