At my job, I get involved in trying to solve a lot of hairball problems that seem obscure and bizarre. It’s the nature of what I do.

Over the last 3 weeks, some issues that we have been investigating as independent performance-related trends merged into a single meta-issue. I can’t go into the details right now, but what is clear to me (and some of the folks I work with are slowly starting to subscribe to this view) is that the background noise of Web 2.0 services and traffic has started to drown out, and, in some cases, overwhelm the traditional Internet traffic.

Most of the time, you can discount my hare-brained theories. But this one is backed by some really unusual trends that we found yesterday in the publicly available statistics from the public Internet exchange points.

I am no network expert, but I am noticing a VERY large upward trend in the volume of traffic going into and out of these locations around the world. And these are simply the public peering exchanges; it would be interesting to see what the traffic statistics at some of the Tier 1 and Tier 2 private peering locations, and at some of the larger co-location facilities, look like.

Now to my theory.

The background noise generated by the explosion of Web 2.0 (i.e. “Always Online”) applications (RSS aggregators, update pings, email checkers, weather updates, AdSense stats, etc., etc.) is starting to have a significant impact on the overall performance of the Internet as a whole.
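
To make that background noise concrete, here is a back-of-the-envelope sketch. The numbers are entirely hypothetical, chosen only to show how quickly steady polling adds up; the point is the shape of the math, not the specific figures.

    # Hypothetical illustration: steady-state request rate from always-online polling clients.
    # All three inputs below are assumptions, not measurements.
    clients = 10_000_000        # assumed number of always-online clients
    feeds_per_client = 8        # assumed feeds/widgets each client polls
    poll_interval_s = 30 * 60   # assumed polling interval: every 30 minutes

    requests_per_second = clients * feeds_per_client / poll_interval_s
    print(f"{requests_per_second:,.0f} requests/second of pure background polling")
    # With these assumptions: roughly 44,000 requests per second, around the clock,
    # whether or not anything has actually changed.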

Some of the coal-mine canaries, organizations that have extreme sensitivity to changes in overall Internet performance, are starting to notice this. Are there other anecdotal/quantitative results that people can point to? Have people trended their performance/traffic data over the last 1 to 2 years?

I may be blowing smoke, but I think that we may be quietly approaching an inflection point in the Internet’s capacity, one that sheer bandwidth itself cannot overcome. In many respects, this is a result of the commercial aspects of the Internet being attached to a notoriously inefficient application-level protocol, built on top of a best-effort delivery mechanism.

The problems with HTTP are coming back to haunt us, especially in the area of optimization. About two years ago, I attended a dinner run by an analyst firm where this subject was discussed. I wasn’t as sensitive to strategic topics then as I am now, but looking back, the issues raised that evening have come to pass.

How are we going to deal with this? We can start with the easy stuff.

  • Persistent Connections
  • HTTP Compression
  • Explicit Caching
  • Minimize Bytes
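
As a rough sketch of what the first three items (and, indirectly, the fourth) look like from a client’s point of view, here is a small Python example. The URL is hypothetical, and the use of the third-party requests library is simply an assumption for illustration; any HTTP client that supports keep-alive, gzip, and conditional requests would do.

    import requests

    # A single Session reuses TCP connections across requests (persistent
    # connections / keep-alive) instead of paying connection setup each time.
    session = requests.Session()

    url = "https://example.com/feed.xml"   # hypothetical resource

    # First fetch: the client advertises gzip/deflate support by default
    # (HTTP compression), so the server can send a smaller response body.
    resp = session.get(url)
    etag = resp.headers.get("ETag")
    last_modified = resp.headers.get("Last-Modified")

    # Later fetches: explicit caching via a conditional GET. If nothing has
    # changed, the server answers 304 Not Modified with no body at all,
    # which is the ultimate way to minimize bytes on the wire.
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified

    resp = session.get(url, headers=headers)
    if resp.status_code == 304:
        print("Not modified: nothing re-downloaded")
    else:
        print(f"Changed: fetched {len(resp.content)} bytes")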

The hard stuff comes after: how do we fix the underlying network? What application protocol is going to replace HTTP?

Comments? Questions?