
Middleboxes No Longer Considered Harmful
by Michael Walfish, Jeremy Stribling, Hari Balakrishnan, Maxwell N. Krohn, Scott Shenker, Robert Morris
Public comments
#1 posted on Apr 01 2008, 00:54 in collection CMU 15-744: Computer Networks -- Spring 08
The authors give examples and reasons why violating those two architectural principles is bad, and we have gone over this a few times in class, but I still don't quite understand why NATs are so bad. Other than the NAT being a point of failure that violates fate-sharing, and breaking things like Mobile IP, why are NATs so bad again? Their specific complaints in section 2 make sense, but I still don't really understand why violating those architectural principles is a crime. Also, what does the sentence "Layer violations lead to rigidity in the network infrastructure, as the transgressing network elements may not accommodate new traffic classes." mean?
#2 posted on Apr 01 2008, 11:56 in collection CMU 15-744: Computer Networks -- Spring 08
The paper seems pretty compelling: I don't see a reason to stick to the architecture's original principles if DOA can allow the advantages of middleboxes to outweigh their cost. They do note that the increased latency of DOA is the main performance cost; however, it wasn't that clear to me from the discussion in 8.1 exactly how severe this cost is overall. They do say "DOA can add noticeable delays to small data transfers, sometimes tripling their end-to-end latencies," but it would have been nice if they had defined what they meant by "small," and perhaps quantified the overall effect taking into account the actual distribution of data transfer sizes.
#3 posted on Apr 01 2008, 12:43 in collection CMU 15-744: Computer Networks -- Spring 08
I am not sure it would be easy to convince users and administrators that DOA is a good idea, since it does not offer new features and adds performance degradation. It is true that middleboxes may cause problems and the Internet should follow a clean architecture, but the existing solutions are popular and they do not seem to have caused any serious problems so far.
#4 posted on Apr 01 2008, 13:19 in collection CMU 15-744: Computer Networks -- Spring 08
This paper presents DOA (Delegation-Oriented Architecture), which is an extension to the current Internet architecture that facilitates the deployment of middleboxes, such as NATs or firewalls.

Overall I wasn’t very impressed by this paper. As far as the unique addressing problem is concerned, IPv6, which uses 128 bits per address, could also solve this. Being able to place middleboxes that are not on the path that connects the source to the destination also seems like a bad idea, both from a security perspective (it opens up opportunities for DoS attacks) and, more importantly, from a resource utilization perspective. Packets that deviate from their normal path consume extra resources, such as bandwidth or router queue space. Current middleboxes that sit somewhere on the path between source and destination do not suffer from this problem and also have the advantage that traffic must go through them (i.e. there is no way to bypass them). By contrast, DOA relies on the correct operation of the delegation system to guarantee that packets will indeed visit the middlebox before reaching their destination. Finally, although the authors' argument about violating the two tenets of the Internet architecture is valid, I don’t think it alone is strong enough to justify the changes proposed by DOA.
#5 posted on Apr 01 2008, 13:43 in collection CMU 15-744: Computer Networks -- Spring 08
This paper presents an interesting idea and design to address the problems that network 'middleboxes' have. However, the given architecture still requires enormous changes to the current environment, such as modifications to TCP/IP stacks, end-system implementations, DHT deployment, etc. As mentioned in the comments above, it seems that we can still live with those 'evil' middleboxes for now. In addition, the latency overhead of the architecture might cripple some strict real-time services.
#6 posted on Apr 01 2008, 13:44 in collection CMU 15-744: Computer Networks -- Spring 08
In this paper the authors propose a mechanism to extend the current Internet architecture in order to accommodate middleboxes such as NATs and firewalls without violating the "two tenets".

The biggest assumption in this paper is that we should keep our current Internet consistent with its original design principles ("two tenets"):
(1) Every Internet entity has a unique ID to reach;
(2) Network elements should not process packets not addressed to them.

The DOA solution, as I understand it, is actually to add one more layer between the network layer and the transport layer. The IP address is used for routing (in the Internet), the TCP port is used for indicating the service, and the EID is used as the identification of an end host. Since the IP address was originally designed as the unique identity used for routing, this functionality is now split between IP (for routing) and EID (for identification). An end host may map its EID to its delegate's EID if it wants to hide itself on the Internet, or it can use its unique EID so that hosts outside can initiate connections to it.
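A minimal sketch of this delegation chain (all names and the flat table are hypothetical; the paper uses a DHT for the EID-to-address mapping):

```python
import hashlib

def eid(name):
    """Derive a 160-bit EID; DOA EIDs are flat, so a hash stands in here."""
    return hashlib.sha1(name.encode()).hexdigest()

# Hypothetical resolution table, standing in for the DHT. Each record
# maps an EID either to an IP address (a host reachable directly) or to
# another EID (a delegate, e.g. a NAT or firewall forwarding on the
# host's behalf).
resolution_table = {
    eid("host-behind-nat"): ("eid", eid("nat-box")),   # delegated
    eid("nat-box"):         ("ip", "203.0.113.7"),     # reachable
    eid("public-host"):     ("ip", "198.51.100.2"),    # reachable
}

def resolve(e, table):
    """Follow the delegation chain until an IP address is found."""
    kind, value = table[e]
    while kind == "eid":
        kind, value = table[value]
    return value

# Packets for the private host are addressed, at the IP layer, to its
# delegate, while the DOA header still carries the host's own EID.
assert resolve(eid("host-behind-nat"), resolution_table) == "203.0.113.7"
```

The point of the indirection is that the host controls its own mapping: publishing its delegate's EID hides it, publishing its own IP exposes it.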

I think this is a good idea. But compared to the cost of the extension, the benefits are very limited. The costs include:
(1) A global EID distribution and management mechanism.
(2) A potential increase in packet delay.
(3) Security problems (the popularity of NAT is partly because it hides the hosts behind it).
The benefits include:
(1) End-to-end communication. Actually, NAPT technology has already made e2e communication more likely than before.
(2) Saving the port space of the NAT.
(3) VPN. In the paper the authors mention there can be at most one VPN host behind a NAT, since IPsec expects traffic on port 500. But it seems this problem has already been solved by encapsulating the ESP packet in a UDP packet.
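The UDP encapsulation mentioned in benefit (3) is the NAT traversal scheme standardized in RFC 3948: the ESP packet is simply prefixed with an ordinary UDP header on port 4500, so NATs can rewrite ports as usual instead of choking on raw protocol-50 traffic. A sketch:

```python
import struct

NAT_T_PORT = 4500  # UDP port reserved for IPsec NAT traversal (RFC 3948)

def encapsulate_esp(esp_packet: bytes, src_port: int = NAT_T_PORT) -> bytes:
    """Wrap an ESP packet in a UDP header so it can cross a NAT."""
    length = 8 + len(esp_packet)   # UDP header itself is 8 bytes
    checksum = 0                   # RFC 3948 permits a zero UDP checksum
    udp_header = struct.pack("!HHHH", src_port, NAT_T_PORT, length, checksum)
    return udp_header + esp_packet

wrapped = encapsulate_esp(b"\x00" * 32)   # dummy 32-byte ESP payload
assert len(wrapped) == 40
```

Because each tunnel now has its own UDP source port, the NAT can multiplex many VPN hosts behind it, which removes the one-host limitation the paper complains about.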

Above all, I think the idea is good, but the "performance-price ratio" is too low.
#7 posted on Apr 01 2008, 14:35 in collection CMU 15-744: Computer Networks -- Spring 08
I think apart from NATs and Firewalls, there are other examples of middleboxes that have emerged:

* Traffic shapers
* Intrusion detection systems
* Transparent web proxy caches for reducing load

With that being said, I'm not convinced that DOA isn't simply a "band-aid" for IPv4 (i.e., using EIDs to uniquely address all nodes in the system).
#8 posted on Apr 01 2008, 16:05 in collection CMU 15-744: Computer Networks -- Spring 08
I like the fact that the authors applied their proposed architecture to the two well-known network-layer middleboxes, i.e. NATs and firewalls. However, in the evaluation, without any benchmark or reference it is hard to see how costly the price we need to pay (i.e. latency) for using DOA really is.
#9 posted on Apr 01 2008, 16:13 in collection CMU 15-744: Computer Networks -- Spring 08
This is an interesting paper that introduces a new architecture. I would be glad to see more experimental results comparing this design with existing systems.
#10 posted on Apr 01 2008, 16:34 in collection CMU 15-744: Computer Networks -- Spring 08
I didn't know that NATs were such a huge problem, until I read this paper and saw Samir's comments.

In response to a few other comments on performance/price tradeoffs, I think the fact that this architecture will enable other (currently languishing, and possible future) services/applications to work better should not be undervalued.
#11 posted on Apr 01 2008, 16:53 in collection CMU 15-744: Computer Networks -- Spring 08
First off, I don't see so much wrong with configurable NATs -- the problem is users who don't know how to properly configure NATs (and this paper doesn't do much to eliminate that problem). What it does do is somewhat clean up the whole port-overloading thing.

So that's good.

For off-path boxes, however, I'm not so pleased with the solution. MAC'ing packets coming off of an RPF is more-or-less the same as setting up a VPN that terminates at said RPF (though without, y'know, the encryption). It's unsatisfying! With a physically interposed filter, I don't ever see "bad" packets. With this mechanism, I've got to actually do cryptographic work on _every_ packet -- including bad packets!
#12 posted on Apr 01 2008, 16:58 in collection CMU 15-744: Computer Networks -- Spring 08
I think the proposal made in this paper is overkill for what it might be useful for. The paper claims that private IP spaces are not a temporary artifact of the limited IPv4 address space, and while I agree with this, I think there are two separate reasons for having a private space: (1) to overcome the limited IPv4 space, and (2) to shield the machines from the public Internet. Reason #1 will disappear as we migrate to IPv6. For reason #2, you don't want the machines to be addressable from the public Internet, so I don't see how this proposal is useful.

For a few middleboxes such as transparent caches and other performance-related things, DOA may be useful. But a whole new architecture seems like overkill just for this.
#13 posted on Apr 01 2008, 16:59 in collection CMU 15-744: Computer Networks -- Spring 08
DOA is a delegation-based solution and compared with i3, it seems to be easier to deploy but as many of us pointed out, the performance gain may not be really worthwhile.
#14 posted on Oct 15 2008, 03:21 in collection UW-Madison CS 740: Advanced Computer Networking -- Spring 2012
As pointed out in one of the public reviews, the paper is founded on the idea that the two tenets of the Internet, as described by the paper, still need to be preserved to the fullest. I would, however, like to take a similar stance to that public reviewer. IPv4 prevailing with the use of NATs even when IPv6 is already out is a clear indication that NATs as implemented today are a solution that does its job well. (Of course, that's not reason enough not to try to improve it.)

But then, I find the solution furnished by the paper more problematic than the original problem in hand.

Basically, what the paper does is introduce one more layer between IP and TCP. If you examine it closely, the purpose of this layer is actually redundant with IP: to provide unique identifiers for each host on the Internet. But IPv4 uses just 32 bits and its namespace has run out, while an EID uses 160 bits (plus the underlying IPv4 address) and hence offers a much larger namespace. And this redundancy has a significant overhead: the DOA headers. (OK, not all nodes in the network need to support DOA, but the same is true of tunneling IPv6 over IPv4.)
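The header overhead is easy to make concrete with a back-of-the-envelope count. This tallies only address material per packet (the real DOA header carries additional fields, so the true overhead is somewhat larger):

```python
# Bytes of address material per packet: two addresses (source and
# destination) per scheme. DOA over IPv4 carries the two 32-bit IP
# addresses plus two 160-bit EIDs in its own header.
EID_BITS = 160

addr_bytes = {
    "IPv4":          2 * 32 // 8,                      # 8 bytes
    "IPv6":          2 * 128 // 8,                     # 32 bytes
    "DOA over IPv4": 2 * 32 // 8 + 2 * EID_BITS // 8,  # 48 bytes
}

for scheme, n in addr_bytes.items():
    print(f"{scheme}: {n} bytes of address material per packet")
```

So for small packets the addressing overhead alone is several times that of plain IPv4, which is consistent with the latency complaints in the comments above.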

One of the problems mentioned with NATs was running a VPN from behind a NAT. But IPsec-based VPNs seem to me to be slowly losing out to things like OpenVPN, which has no problem operating behind NATs.

Another proposal of the paper is the filter architecture, the RPF. This is sort of a middlebox that doesn't really sit in the middle, and hence isn't very appealing to me. As another public comment points out, DOA relies on the correct functioning of the proposed mechanism to ensure packets actually visit the middlebox before reaching the destination. This is wasteful of resources, introduces bottleneck points, and is potentially vulnerable to attacks where the MAC is fabricated. As the above-mentioned public reviewer points out, a better approach might be to set up a tunnel between the destination and the filtering host.

Another question I have is about scalability. NATs actually provide a nice way of containing/cutting down on the number of globally visible addresses. But when those numbers get exposed through EIDs (which cannot be aggregated via things like supernetting/CIDR), I wonder how scalable the EID tables of the DHTs will be.