Netli, Linux Take Web to Warp Speeds - page 2

The Joys of Sublight

  • July 7, 2003
  • By Brian Proffitt

The company's name is Netli, a three-year-old start-up from Palo Alto, California. Like a lot of Silicon Valley companies, it's founded on big dreams and high technology. But unlike some SV firms, Netli is already delivering its product to its customers--big names like Nielsen, HP, and Millipore--customers who need lightning-fast connectivity for their Web pages in browsers anywhere on the planet.

Adam Grove, Netli's CTO and co-founder, is one of the people with the big dream, and when you talk to him you get the sense that much of what he and the company are doing draws on a lot of technical know-how and more than a little common sense.

Netli addresses a universal problem for those whose revenue and lifeblood are tied to the Internet: the problem of delays in delivering Web content.

According to Grove, there are essentially three things that cause delivery delays on the Internet. First, there are the server delays, inherent in the software and hardware of the Web server itself. You can juice up a Web server to the nth degree, but there will always be a tiny, sub-second delay in getting pages out the virtual door when a request comes in.

On the other end of the connection are the last-mile delays. These are the delays caused by the type of connection closest to the end-user's computer. Dial-up is still very prevalent around the world, but for some Web pages and applications, even DSL and cable connections can slow delivery down.

And in between are what Grove refers to as the middle-mile delays. The middle mile is where the data does most of its traveling as it crosses the planet. It is, roughly, the stretch between the Web server's ISP and the end-user's ISP. It is here, in the middle mile, that the third category of delays appears: the distance-induced delays.

Distance-induced delays can be caused by many things: router delays, traffic congestion, packet loss. Any number of things can cause lost or misrouted data packets. Now, the nature of the TCP/IP protocol that most of the Internet uses for communication is such that whenever a packet is lost, the packet is retransmitted by the Web server at the request of the end-user's machine. This is the reason the Internet works so well. Unless there is some sort of overload or mechanical problem with the server, you can be reasonably sure a request to that Web server will eventually get a page to your browser.
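In other words, reliability is bought with extra round trips: every dropped packet has to cross the middle mile again before the page is complete. The toy model below is only a rough sketch of that retransmit-until-delivered idea--it is not how TCP actually pipelines and acknowledges segments, and the loss rate and round-trip time are assumed purely for illustration.

    import random

    LOSS_RATE = 0.05       # assumed chance a given transmission is dropped en route
    ROUND_TRIP = 0.20      # assumed middle-mile round-trip time, in seconds

    def deliver_packet() -> float:
        """Resend one packet until it gets through; every lost copy costs another round trip."""
        time_spent = 0.0
        while True:
            time_spent += ROUND_TRIP
            if random.random() >= LOSS_RATE:
                return time_spent   # the packet (and its acknowledgement) made it
            # otherwise the data never arrived, so it is sent across the middle mile again

Even a small loss rate means that, over thousands of packets, a noticeable fraction of them pay for two or more crossings instead of one.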

But this compulsive completeness is also a big reason why the Internet can get so slow, even over backbone connections, Grove explained. He described an example of how this happens: delivering a 70-KB Web page with 25 objects from a server in Atlanta to an end-user in Tokyo. In the first mile (Web server to the Web server's ISP), the round-trip delivery time would be 0.25 seconds. In the last mile, from the Tokyo ISP to the Tokyo end-user, the round trip is 0.1 seconds. In the middle mile, from ISP to ISP, the round-trip time should ideally be 0.2 seconds. But because of packet drops and congestion, the middle-mile trip is actually made 31 times--which jacks the duration of the middle-mile leg up to 6.2 seconds. What should be a 0.55-second trip now takes 6.55 seconds.
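The arithmetic behind Grove's example is simple enough to spell out. The sketch below uses the leg times and the 31 middle-mile crossings from the example above; the constant and function names are illustrative, not anything from Netli.

    # Round-trip times from Grove's Atlanta-to-Tokyo example, in seconds.
    FIRST_MILE_RTT = 0.25    # Web server to its ISP
    MIDDLE_MILE_RTT = 0.20   # server's ISP to end-user's ISP (one ideal crossing)
    LAST_MILE_RTT = 0.10     # Tokyo ISP to the Tokyo end-user

    def page_delivery_time(middle_mile_crossings: int) -> float:
        """Total page delivery time for a given number of middle-mile round trips."""
        return FIRST_MILE_RTT + middle_mile_crossings * MIDDLE_MILE_RTT + LAST_MILE_RTT

    print(page_delivery_time(1))    # ideal case: 0.55 seconds
    print(page_delivery_time(31))   # with drops and congestion: 6.55 seconds

The first and last miles barely change; it is the repeated middle-mile crossings that account for nearly all of the 6-second difference.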

This is, of course, just one example, but Grove maintains that it is indicative of a problem that plays out constantly on the Internet. The traffic delays inherent in the middle mile are what drag transmission times down for transoceanic or transcontinental server requests and deliveries.
