Infranet: Circumventing Web Censorship and Surveillance
by Nick Feamster, Hari Balakrishnan, Magdalena Balazinska, David Karger, Greg Harfst
Details
Type: inproceedings
Booktitle: Proceedings of the 11th USENIX Security Symposium, August 5–9, 2002, San Francisco, CA
Year: 2002
Publisher: USENIX
Pages: 247–262
Authors: Nick Feamster, Magdalena Balazinska, Greg Harfst, Hari Balakrishnan, David Karger (MIT Laboratory for Computer Science)
URL: http://pages.cs.wisc.edu/~akella/CS740/S10/740-Papers/F+02.pdf

Abstract: An increasing number of countries and companies routinely block or monitor access to parts of the Internet. To counteract these measures, we propose Infranet, a system that enables clients to surreptitiously retrieve sensitive content via cooperating Web servers distributed across the global Internet. These Infranet servers provide clients access to censored sites while continuing to host normal uncensored content. Infranet uses a tunnel protocol that provides a covert communication channel between its clients and servers, modulated over standard HTTP transactions that resemble innocuous Web browsing. In the upstream direction, Infranet clients send covert messages to Infranet servers by associating meaning to the sequence of HTTP requests being made. In the downstream direction, Infranet servers return content by hiding censored data in uncensored images using steganographic techniques. We describe the design, a prototype implementation, security properties, and performance of Infranet. Our security analysis shows that Infranet can successfully circumvent several sophisticated censoring techniques.
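The abstract's description of the upstream channel (hidden bits conveyed by which innocuous URL the client requests next) is concrete enough to sketch. Below is a minimal illustration; the candidate URL list, the 3-bits-per-request encoding, and the padding are assumptions made for this sketch, not Infranet's actual codebook or negotiation protocol. The point is only that the request sequence itself, rather than any request body, carries the covert data.

```python
# Sketch of the upstream channel described in the abstract: the client conveys
# hidden bits through WHICH innocuous URL it requests next. The URL set and
# chunk size are illustrative assumptions, not Infranet's actual codebook.

CANDIDATE_URLS = [  # 8 links visible on the cooperating site -> 3 bits/request
    "/index.html", "/news.html", "/sports.html", "/weather.html",
    "/about.html", "/images/logo.gif", "/contact.html", "/archive.html",
]
BITS_PER_REQUEST = 3  # log2(len(CANDIDATE_URLS))


def encode_message(message: bytes) -> list[str]:
    """Client side: map each 3-bit chunk of the covert message to one URL to request."""
    bits = "".join(f"{byte:08b}" for byte in message)
    bits += "0" * (-len(bits) % BITS_PER_REQUEST)  # pad to a whole number of chunks
    return [
        CANDIDATE_URLS[int(bits[i:i + BITS_PER_REQUEST], 2)]
        for i in range(0, len(bits), BITS_PER_REQUEST)
    ]


def decode_requests(urls: list[str], message_len: int) -> bytes:
    """Server side: recover the covert message from the observed request sequence."""
    bits = "".join(f"{CANDIDATE_URLS.index(u):0{BITS_PER_REQUEST}b}" for u in urls)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, message_len * 8, 8))


if __name__ == "__main__":
    secret = b"GET http://censored.example/"
    request_sequence = encode_message(secret)
    assert decode_requests(request_sequence, len(secret)) == secret
    print(f"{len(secret)} bytes hidden in {len(request_sequence)} HTTP requests")
```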
What I see here is paper-making: clever but ad hoc hacking. Some random thoughts follow.
Papers begin with motivation, and this paper doesn't have a strong one. For HTTP, the obvious, simple, principled, well-established solution is SSL. But the paper rules it out solely on the grounds that SSL could *presumably* be blocked by the censor. This not-so-convincing argument serves as the main justification for the necessity of this work.
One trick of paper engineering is to cook up a list of design "goals" that are perfectly satisfied by whatever is being proposed, e.g., 'plausible deniability'.
The 'sophisticated' system described in the paper might have made a great application (e.g., a browser plugin). Unfortunately, it not only demands a lot from the client but also too much effort from the server. It simply won't fly.
Censors block not only sensitive content but sometimes also certain sources (specific URLs or entire domains). URL blocking cannot be solved by SSL, but it is partially addressed in this paper. However, the server-enabled solution looks rather unwieldy. Instead of permuting a static set of URLs, why not let the client use a random salt and the public key to generate URLs dynamically (see the sketch below)? BTW, with HTTPS the request URL is encrypted, even though the destination host itself remains visible to the censor.
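For what it's worth, here is a stdlib-only sketch of that dynamic-URL idea. It swaps the public-key operation mentioned above for a pre-shared secret purely to keep the example self-contained; the URL shape, salt size, and one-byte payload per request are all hypothetical.

```python
import hashlib
import os

# Sketch of the suggestion above: rather than permuting a fixed URL set, the
# client mints fresh, unpredictable-looking URLs on the fly. A shared secret
# stands in for the public-key step so the example stays in the standard library.

SHARED_SECRET = b"negotiated-out-of-band"  # hypothetical stand-in for the public-key step


def mint_url(hidden_byte: int) -> tuple[str, bytes]:
    """Client: fold one covert byte and a random salt into a plausible-looking URL."""
    salt = os.urandom(8)
    digest = hashlib.sha256(SHARED_SECRET + salt + bytes([hidden_byte])).hexdigest()
    # The salt travels inside the URL so the server can recompute the digest.
    return f"/articles/{salt.hex()}/{digest[:16]}.html", salt


def recover_byte(url: str):
    """Server: brute-force the single covert byte by recomputing candidate digests."""
    _, _, salt_hex, tail = url.split("/")
    salt = bytes.fromhex(salt_hex)
    expected = tail.removesuffix(".html")
    for candidate in range(256):
        digest = hashlib.sha256(SHARED_SECRET + salt + bytes([candidate])).hexdigest()
        if digest[:16] == expected:
            return candidate
    return None  # not a covert request; serve normal content


if __name__ == "__main__":
    url, _ = mint_url(0x42)
    assert recover_byte(url) == 0x42
    print("covert byte recovered from", url)
```

The throughput here is tiny (one byte per request) and the server pays a 256-way recomputation per URL, so this is only meant to illustrate that dynamically generated URLs could replace a static, permuted set.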