
A High-Throughput Path Metric for Multi-Hop Wireless Routing
by Douglas De Couto, Daniel Aguayo, John Bicket, Robert Morris
Public comments
#1 posted on Feb 23 2008, 22:06 in collection CMU 15-744: Computer Networks -- Spring 08
It is a really interesting observation that wireless links exhibit asymmetric delivery ratios (even though I'm not sure it can be generalized). So, regarding the issue with d_r and d_f, I think the independence assumption and the definition of ETX are fair in the paper. As we see in the measurement results, those two values don't have to show strong correlation.
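Just to spell out the metric we're discussing, here is a tiny Python sketch of it (the delivery ratios below are made up, not from the paper):

    # ETX of one link, from the measured forward and reverse delivery ratios.
    # The paper assumes the data packet (forward) and its ACK (reverse) succeed
    # independently, so one attempt "works" with probability d_f * d_r.
    def link_etx(d_f, d_r):
        return 1.0 / (d_f * d_r)

    # ETX of a route is just the sum of the link ETX values.
    def path_etx(links):
        return sum(link_etx(d_f, d_r) for d_f, d_r in links)

    # Made-up delivery ratios: an asymmetric link gets penalized even though
    # its forward direction looks fine.
    print(link_etx(0.9, 0.9))   # ~1.23 expected transmissions
    print(link_etx(0.9, 0.3))   # ~3.70 -- the weak reverse direction hurts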
#2 posted on Feb 24 2008, 09:41 in collection CMU 15-744: Computer Networks -- Spring 08
I enjoyed reading this paper and liked the fact that they addressed many of the reasons why a low hop count may not be the best choice.

ETX works well most of the time, but with large packets the detection of asymmetric links seems to be the main reason for the improvement. In those cases, the handshaking approach to asymmetric links may be an interesting alternative to ETX (unfortunately, the authors compared the two using only small packets).
#3 posted on Feb 24 2008, 13:18 in collection CMU 15-744: Computer Networks -- Spring 08
This paper presents ETX, a transmission-count metric for achieving high throughput in multi-hop wireless networks. The authors argue that the shortest path, found by the minimum hop-count metric, is not always the fastest one. ETX also takes into account asymmetric links and interference on multi-hop paths.

The use of 802.11b broadcasting for avoiding link-layer retransmission effects and determining the actual link loss ratios was interesting. In section 3.2 the authors mention: "If the best path has four or more hops, ETX may choose a slower path that has fewer hops, since the increased number of transmissions required by extra hops does not slow down throughput beyond three hops." I would expect things to be the other way around. If the number of transmissions does not affect throughput beyond three hops, why not choose the longer, higher-throughput path? Would this also affect the scalability of ETX? In very large networks, where many paths have more than 4 hops, would ETX still make the best routing choice?
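To make the hop-count vs. ETX trade-off concrete, a toy comparison (the delivery ratios are made up, not from the paper):

    def link_etx(d_f, d_r):
        return 1.0 / (d_f * d_r)

    # One lossy hop vs. two nearly perfect hops (made-up delivery ratios).
    one_hop  = [(0.5, 0.5)]                   # path ETX = 4.0
    two_hops = [(0.95, 0.95), (0.95, 0.95)]   # path ETX ~ 2.22

    print(sum(link_etx(f, r) for f, r in one_hop))
    print(sum(link_etx(f, r) for f, r in two_hops))
    # ETX prefers the longer route here because it expects fewer total
    # transmissions -- exactly the case the minimum hop-count metric misses.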
#4 posted on Feb 24 2008, 15:01 in collection CMU 15-744: Computer Networks -- Spring 08
At first I was worried about the possible overhead of estimating ETX for each link, but since ETX has a simple definition, this doesn't seem to be too much of a problem. However, I don't know whether a separate load-balancing algorithm on top of ETX would be good enough; it seems inevitable, though, since ETX doesn't take load/congestion into account unless we recalculate it very frequently (and thereby observe congestion). This would be an annoyance in a real network...
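For reference, the estimation itself is basically just counting broadcast probes in a sliding window, something like the sketch below (the probe period and window length are my reading of the paper; treat the constants as assumptions):

    # Delivery ratio from counting received broadcast probes:
    # r(t) = count(t - w, t) / (w / tau), where tau is the probe period and
    # w the averaging window (roughly 1 s and 10 s in the paper, I believe).
    TAU = 1.0    # average seconds between probes (jittered in practice)
    W   = 10.0   # averaging window in seconds

    def delivery_ratio(probe_arrival_times, now):
        expected = W / TAU   # probes we should have seen in the window
        received = sum(1 for t in probe_arrival_times if now - W < t <= now)
        return received / expected

    # Each receiver measures d_f of its neighbor's link this way; the reverse
    # ratio d_r comes back piggybacked on the neighbor's own probes.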
#5 posted on Feb 24 2008, 16:01 in collection CMU 15-744: Computer Networks -- Spring 08
The paper proposed a new metric for routing in wireless networks, ETX, which takes two important characteristics of the wireless channel, namely loss and asymmetry, into consideration.

About the way ETX is calculated, I agree with Min Gyung that the assumption of independence between the forward and reverse delivery ratios seems reasonable; note from Fig. 4 that not all of the links have positive or negative correlations.

However, can anybody tell me the reason for asymmetry in wireless links? Is it mainly because of the interference level perceived by the receivers at the two ends of the link?
#6 posted on Feb 24 2008, 16:07 in collection CMU 15-744: Computer Networks -- Spring 08
The authors successfully motivated the introduction of a new routing metric to replace the old hop-count metric in 802.11b wireless networks. The ETX metric is a relatively simple idea: predict the number of transmissions using a geometric-distribution assumption and independence of forward and reverse deliveries. It is easy to compute and is additive for a multi-link route. It makes sure that extremely asymmetric links are not favored and that one-hop routes with perfect links are still the best choices. And it could cover around half of the performance gap between the old metric and the best static multi-hop route.
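A quick sanity check of the geometric-distribution step (p = d_f * d_r is the paper's independence assumption; the link numbers here are made up):

    # Expected number of attempts for a geometric distribution with per-attempt
    # success probability p: sum over k of k * (1-p)^(k-1) * p, which is 1/p.
    def expected_attempts(p, max_k=10000):
        return sum(k * (1 - p) ** (k - 1) * p for k in range(1, max_k + 1))

    p = 0.9 * 0.3                        # d_f * d_r for a made-up asymmetric link
    print(expected_attempts(p), 1 / p)   # both ~3.70, i.e. the link's ETX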

A minor issue is in the first paragraph of Sec. 2.2: the second sentence states that the 100 pairs are randomly selected from 812 total pairs. Why didn't they use all the pairs to plot the figure?
#7 posted on Feb 24 2008, 16:16 in collection CMU 15-744: Computer Networks -- Spring 08
I agree with the observation in the OP that the expected probability that a transmission is successfully received and acknowledged is not necessarily d_f x d_r. If the authors are assuming that the probability a packet arrives successfully and the probability that an ACK packet is received successfully are independent (as they seem to be doing), they should state that and give an argument for that decision. Especially in wireless networks with crazy combinations of interference/capture/etc., it is not obvious why independence is a reasonable assumption, and I agree that the two rates should be positively correlated.
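To illustrate why this matters, a rough simulation with a made-up shared-loss model (nothing like this is in the paper): when forward and reverse losses are positively correlated, P(data and ACK) is no longer d_f x d_r, so 1/(d_f x d_r) mis-estimates the transmission count.

    import random

    # Made-up model: a shared "channel is bad" event causes loss in both
    # directions at once, inducing positive correlation between d_f and d_r.
    def simulate(n=200_000, p_bad=0.3):
        fwd_ok = rev_ok = both_ok = 0
        for _ in range(n):
            bad = random.random() < p_bad            # shared fade/interference burst
            f = random.random() < (0.5 if bad else 0.95)
            r = random.random() < (0.5 if bad else 0.95)
            fwd_ok += f
            rev_ok += r
            both_ok += f and r
        d_f, d_r, p_true = fwd_ok / n, rev_ok / n, both_ok / n
        print("d_f * d_r =", round(d_f * d_r, 3),
              " true P(data and ACK) =", round(p_true, 3))

    simulate()
    # The product underestimates the true success probability here, so an ETX
    # built on independence would somewhat overestimate the transmission count.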
#8 posted on Feb 24 2008, 16:27 in collection CMU 15-744: Computer Networks -- Spring 08
This paper takes a very sensible approach: it observes that link quality is not solely determined by the number of hops in the network! Coming from outside of the area, this seems like it should be clear, but it seems that this was a standard assumption in the networking literature because it is roughly true in wired networks. Given that the assumption fails to hold in wireless networks, it is not surprising that this more direct measure of path quality yields substantial improvements -- since users are now optimizing a quantity that more closely approximates their utility.
#9 posted on Feb 24 2008, 16:58 in collection CMU 15-744: Computer Networks -- Spring 08
I'm surprised that a metric like ETX was only proposed relatively recently. As the article explains, using minimal hop-count is setting yourself up for failure: it selects for a path with few hops but poorer connections rather than a path with more hops but better connections. It's also nice to see some real experimentation as opposed to just simulation. On the downside, I must say that those CDF plots are awfully peculiar and aren't readily interpretable.
#10 posted on Feb 24 2008, 17:52 in collection CMU 15-744: Computer Networks -- Spring 08
I thought the paper did a thorough job (real testbed instead of simulations etc) of evaluating a simple hypothesis: shortest paths don't necessarily have the best throughput.

The paper shows that choosing the path with the best expected transmission count does much better than just choosing the path with the fewest hops. A question I had while reading the paper was -- and I don't know if this is infeasible in wireless networks -- why limit yourself to the best single path? It seems that choosing a couple of good paths and doing some sort of multi-path routing would do a better job (in terms of reliability) than just choosing one path. Am I missing something obvious, or is this really difficult to do in practice?
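For what it's worth, here is a rough sketch of what I mean by "a couple of good paths", using networkx purely as a stand-in (the topology and ETX values are made up; nothing like this is in the paper):

    import itertools
    import networkx as nx

    # Toy topology with made-up ETX link weights.
    G = nx.Graph()
    G.add_edge("A", "B", etx=1.2)
    G.add_edge("B", "D", etx=1.3)
    G.add_edge("A", "C", etx=1.5)
    G.add_edge("C", "D", etx=1.4)

    # Take the k lowest-ETX simple paths instead of just the single best one.
    k = 2
    for path in itertools.islice(nx.shortest_simple_paths(G, "A", "D", weight="etx"), k):
        cost = sum(G[u][v]["etx"] for u, v in zip(path, path[1:]))
        print(path, round(cost, 2))

The catch, I suspect, is that in a dense mesh the "good" paths share nodes or at least radio range, so they interfere with each other and you don't get anything close to additive throughput.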

p.s.

Of course, we could go fully Bayesian and choose a distribution over paths... but that might be overkill :)
#11 posted on Feb 25 2008, 01:46 in collection CMU 15-744: Computer Networks -- Spring 08
I like the idea. Min-hop routing is a greedy policy and it might work well with few competitors. If every user is using the greedy strategy, I am sure the total throughput will be worse than with strategies that are less greedy but more adaptive. From my point of view, ETX is designed with this kind of property in mind.

Questions: 1) From the experiment results, the improvement in throughput is not as significant as I expected. Is there any reason? 2) If the routing path changes too often, will it cause routing instability?