
A Delay-Tolerant Network Architecture for Challenged Internets
by Kevin Fall
Public comments
#1 posted on Apr 12 2008, 17:03
DTN Paper Review

This paper discusses a network architecture for interoperability between networks, especially "challenged internets": networks where the typical assumptions of the Internet do not hold. Challenged internets may exhibit some combination of the following problems: low data rates, high latency, disconnection between end hosts, queuing delays much longer than on the typical Internet, end hosts with limited longevity, nodes with limited resources such as memory and processing power, and low duty-cycle operation, meaning there are times when a node is not operating.

The paper briefly argues that link-repair approaches are not enough for handling these networks. Examples of link-repair approaches are Performance Enhancing Proxies (PEPs), Boosters, and application-layer proxies. PEPs and Boosters are agents that modify the data stream between end hosts to mask its poor or unusual performance. The stated problems with these approaches are their specificity, their security implications, and their abandonment of the Internet's fate-sharing model. Another option is to make internet protocols work like email, an asynchronous message delivery system; a stated limitation of email, however, is its lack of dynamic routing.

Their solution is a network service and API that supports non-interactive messaging by combining the delay- and disconnection-tolerant properties of email with some routing capability. They call it the Delay Tolerant Networking (DTN) architecture, an overlay architecture based on message switching.

The architecture has the following properties:

- Operates above the transport protocols of existing networks
- Gateways and Regions: Dissimilar networks are split into regions and DTN gateways are used to communicate between regions.
- Name Tuples – Routing of DTN messages uses name tuples, where the first part of the tuple is the name used to traverse regions and the second part is the name used within a specific region. An address like www.google.com would therefore be the second part of a name tuple.
- CoS – DTN uses classes of service (CoS) similar to the US Postal System. Messages have different priorities, and there is some form of delivery notification between sender and receiver.
- Path Selection and Scheduling – DTN routes consist of time-dependent communication opportunities called contacts. A contact is characterized by its predictability: whether and when a node can be communicated with. The paper does not give a complete description of path selection or scheduling within DTN.
- Reliability – To ensure reliability, nodes in the DTN architecture implement a custody transfer strategy. Custody transfer means that a node accepting a message becomes responsible for its delivery. State, in the form of a copy of the data, is thus delegated across DTN nodes, and the end host does not need to maintain it.
- Convergence Layer – This layer connects DTN to the transport protocol of the underlying network it overlays. Depending on that transport protocol, the amount of implementation required in this layer varies.
- Time Synchronization – Some form of time synchronization is necessary to discard messages that have expired.
- Security – Like a postal service, a message is stamped with credentials that are checked at each DTN hop; public/private key pairs are used to implement this.
- Flow and Congestion Control – Flow control is left to the underlying network's transport protocol. Congestion control is harder for DTN, since it cannot simply drop messages it has taken custody of. Their solution is a shared priority queue for allocating custody storage: clear expired messages first, deny custody transfers for messages that are too large, and sort the remaining messages by priority.
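To make the name-tuple bullet above concrete, here is a minimal sketch of two-part routing: forward on the region name until the message reaches its region, then resolve the entity name locally. The tuple syntax, region names, and gateway table are my own illustrative assumptions, not the paper's actual notation.

```python
# Hypothetical sketch of DTN name-tuple routing (illustrative, not from the paper).
# A name tuple is {region-name, entity-name}: the region part is globally
# routable, while the entity part is only interpreted inside the destination region.

def parse_tuple(name):
    """Split a name tuple like '{sensornet.example, node-17}' into its two parts."""
    region, entity = name.strip("{}").split(",")
    return region.strip(), entity.strip()

def route(name, local_region, gateways):
    """Forward on the region part until we reach the right region,
    then resolve the entity part locally (late binding)."""
    region, entity = parse_tuple(name)
    if region == local_region:
        return ("deliver-locally", entity)
    return ("forward-to-gateway", gateways[region])

# Assumed example gateway table for two regions:
gateways = {"internet.icann.int": "gw1", "sensornet.example": "gw2"}
print(route("{sensornet.example, node-17}", "internet.icann.int", gateways))
# → ('forward-to-gateway', 'gw2')
```

The point of the split is that the second name need not be meaningful (or even resolvable) outside its home region.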
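The custody-storage policy in the last bullet can also be sketched in a few lines. This is my reading of the idea, not the paper's code: purge expired messages first, refuse custody if the message does not fit, and keep stored messages in priority order.

```python
import heapq

# Rough sketch of a shared priority queue for custody storage (illustrative).
class CustodyStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.heap = []  # entries: (priority, expiry, size, msg); lower priority value = higher class

    def purge_expired(self, now):
        """Drop messages whose expiry time has passed, reclaiming their space."""
        live = [e for e in self.heap if e[1] > now]
        self.used = sum(e[2] for e in live)
        self.heap = live
        heapq.heapify(self.heap)

    def accept_custody(self, priority, expiry, size, msg, now):
        """Accept a custody transfer only if the message fits after purging."""
        self.purge_expired(now)
        if self.used + size > self.capacity:
            return False  # deny custody: message too large for remaining store
        heapq.heappush(self.heap, (priority, expiry, size, msg))
        self.used += size
        return True

store = CustodyStore(capacity=100)
print(store.accept_custody(priority=1, expiry=60, size=80, msg="bulk", now=0))    # True
print(store.accept_custody(priority=0, expiry=120, size=40, msg="urgent", now=0)) # False: store full
print(store.accept_custody(priority=0, expiry=120, size=40, msg="urgent", now=90))# True: "bulk" expired
```

Note that unlike a normal router queue, denying custody does not lose the message: the previous custodian simply keeps its copy and retries elsewhere.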

Questions:
Is DTN necessary, or are link-repair approaches sufficient for interoperability between dissimilar networks?
How extensible is DTN to handle other potential challenged internets in the future?
What are some ways to handle congestion control within DTN?
What are some ways to handle or implement path selection and scheduling within DTN?
#2 posted on Apr 13 2008, 10:23 in collection CMU 15-744: Computer Networks -- Spring 08
Despite all its problems, what I think is important about this architecture is that it makes a DTN region look like a single monolithic entity to other regions. Messages should be delivered to a gateway, and that gateway is then responsible for delivering them, since it is in a better position to understand how the region works and what to do.

I don't think that a link-layer approach would be sufficient for communication between the Internet and a wireless sensor network, for instance. Sensor networks may be mobile, nodes are not awake all the time, their range may not be stable due to power constraints, and the way messages should be handled is usually application-specific. For these reasons I think that sensor networks may have higher losses than other wireless networks, and the link layer may not have the information needed to handle the messages.
#3 posted on Apr 13 2008, 12:13 in collection CMU 15-744: Computer Networks -- Spring 08
This is a really interesting idea, and the assumptions about "challenged Internets" also seem reasonable and practical. However, I'm still wondering what the bottom-line network performance and conditions are under which this architecture can operate. For example, if we want to transmit a very large chunk of data, say an HD-quality movie file, would it still be practical to use the DTN architecture? In some extreme cases ("too challenged," like very high network delay and huge non-interactive data to transmit), it would be better to use a physical logistics service to deliver DVDs or hard drives.
#4 posted on Apr 13 2008, 13:10 in collection CMU 15-744: Computer Networks -- Spring 08
This was an interesting "idea" paper that considers more "exotic" networks and their interoperability. I would have liked to see a quantitative evaluation of DTNs. I recognize that this is difficult considering the heterogeneity of all the networks they want to support, but I think even approximations in simulation would have been interesting to see.
#5 posted on Apr 13 2008, 15:19 in collection CMU 15-744: Computer Networks -- Spring 08
This paper addresses an important issue: in the real world, there are often adverse circumstances that result in Internet connections violating some of the standard ideals/assumptions of the Internet. The introduction gives several important examples of these "challenged internets," which makes it clear that they really exist and must be dealt with. It would have been nice to see some experimental results on an actual or simulated challenged internet -- the author suggests that a prototype DTN has been developed, so perhaps there are some experimental results to come in the near future.
#6 posted on Apr 13 2008, 16:16 in collection CMU 15-744: Computer Networks -- Spring 08
The paper provides a well-motivated problem of architecture design of delay-tolerant and disruption-tolerant networks. The approach the author takes is connected to the overlay networks lecture. I found that the discussion on PEP and E-Mail etc. is pretty interesting. Section 4 provides a comprehensive but not in-depth discussion on the overlay architecture. And the RFC text that Samir pointed out is a quite nice follow-up reading.
#7 posted on Apr 13 2008, 16:17 in collection CMU 15-744: Computer Networks -- Spring 08
The article addresses the interesting question of how to build network protocols for use in challenging environments where non-interactive communication is the only type of communication possible. The author mentions a wide range of extremely different types of networks where current protocols are not suitable, and then introduces the delay-tolerant network architecture as a solution for those environments.
#8 posted on Apr 13 2008, 16:45 in collection CMU 15-744: Computer Networks -- Spring 08
This paper presents the Delay Tolerant Network architecture, which aims to connect and provide communication among a set of very diverse networks that may have exceptionally poor and varied performance characteristics. My main problem with this paper is that I was not very convinced that there really is a need for a common architecture connecting all of these "esoteric" networks. For instance, the example presented in figure 1 seemed a bit far-fetched (of course I'm not an expert in this area, so I might be wrong). Apart from the motivation, I was also not very convinced that the proposed mechanisms would work in practice. The presented ideas seemed valid, but there wasn't enough experimental proof to back them up. On the other hand, assuming that such networks are actually going to be widely deployed, I would say that this paper does a good job of discussing the problems and suggesting potential solutions.

As a side-note the XCP protocol which we saw in a previous paper seems to solve some of the problems presented in this paper (e.g. sustain good performance in large-delay satellite links).
#9 posted on Apr 13 2008, 16:54 in collection CMU 15-744: Computer Networks -- Spring 08
Realizing that the so-called "challenged networks" -- characterized by high latency, bandwidth limitations, error probability, limited node longevity, and path instability, all of which deviate from the normal Internet -- are becoming more realistic and more widely used, the author proposes an interesting overlay architecture to deal with this kind of network. However, there is no performance evaluation of the proposed architecture in the paper. Since the paper was published, has there been any work evaluating how well this architecture works?
#10 posted on Apr 13 2008, 19:55 in collection CMU 15-744: Computer Networks -- Spring 08
It seems that in most of the posts people believe this problem is important and practical. I remember reading some papers about DTN before; they even claim that DTN may have applications in transmitting information between spaceships during space travel. Well, maybe they are right, but I am still not very convinced that this problem is realistic.
#11 posted on Nov 10 2008, 10:52 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
The paper addresses multiple problems, mainly that some Internet connections have huge delays or are often disconnected. The author's solution is to adapt network protocols to a message-based architecture.

The closest parallel is email. There is no connection between the endpoints. Instead, a message moves between relays to reach the destination. Optionally, an acknowledgment can be returned.

So in the architecture, responsibility is passed from node to node. Gateway nodes that sit between differing networks need to hold on to messages before passing them on, so storage is a big issue. On receipt, the destination can send back an acknowledgment, and the sender's software handles it as it sees fit.
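The hop-by-hop responsibility described above can be sketched as a toy loop (purely illustrative, with made-up node names; the real protocol involves stored copies and timeouts at each custodian):

```python
# Toy illustration of custody passing node to node: each hop hands
# responsibility to the next, and the final destination returns an
# end-to-end acknowledgment to the original sender.

def forward(path, message):
    """Walk a message along a path of nodes, logging each custody handoff."""
    custodian = path[0]
    log = []
    for next_hop in path[1:]:
        # the custodian keeps its copy until next_hop accepts custody
        log.append(f"{custodian} -> {next_hop}: custody transferred")
        custodian = next_hop
    log.append(f"{custodian}: delivered, ack returned to {path[0]}")
    return log

for line in forward(["sender", "gw-A", "gw-B", "receiver"], "hello"):
    print(line)
```

The storage concern in the comment shows up here: every intermediate custodian must retain the full message until the next hop takes over.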

It is an interesting idea, and the more popular services can work well with it. It fits email and HTTP with little work, and even pull-based services like RSYNC or CVS could be adapted to require no intervention.