In today's tip, we'll explore Web transactions from the networking point of view.
Numerous factors contribute to network latency. Moreover, apart from the client's own link to the Internet and the server's, most of these factors are outside any one party's control (except, of course, for the custodian of any particular link that HTTP requests traverse on their way from client to server, and that HTTP replies and other communications traverse on the way back). Some or all of the following factors come into play when looking at the network's role in client-server communications on the Web:
- From the client's perspective, one of the most important network characteristics is the speed of its connection to the Internet. As with most networking, this is a clear case where "faster is better," which explains why so many clients are adopting higher-bandwidth technologies such as cable modem and DSL in place of ISDN or POTS lines.
- Likewise, the server's link to the Internet is an important factor. Because servers handle so many concurrent user sessions, providers use both "close to the source" placement (such as co-location) and multi-site server-mirroring strategies, backed by high-bandwidth links (such as multiple T-3 connections), to improve network response time.
- At the client end of the connection (or one hop away from it), caching services can also deliver significant performance improvements. This helps explain why so many ISPs and corporate Internet link architectures include caching servers: they lower the number of outgoing HTTP requests that must actually traverse the Internet.
- The links between the "last mile" (sometimes called the "closest link") on the client end and its counterpart on the server end are often represented as a cloud. This convention reflects the lack of determinism in the paths between client and server, and a general lack of knowledge about what's going on inside the cloud itself. That opacity helps explain why major Web sites so often contract with response-time monitoring services: by simulating typical client requests, locations, and loads, those services give site operators a sense of how well the cloud is behaving at any given moment.
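The monitoring idea in the last bullet can be sketched in a few lines: issue a request the way a simulated client would, and record how long the round trip through the cloud takes. This is a minimal illustration, not any particular monitoring service's method; the function name and the returned fields are invented for the example.

```python
import time
import urllib.request


def probe(url, timeout=5.0):
    """Time one simulated client request (a single HTTP GET) to a URL.

    Hypothetical sketch of what a response-time monitor measures; the
    url and timeout arguments are illustrative, not from the article.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()           # drain the body so transfer time counts too
            status = resp.status
    except OSError as exc:        # DNS failure, timeout, refused connection...
        return {"url": url, "ok": False, "error": str(exc)}
    return {"url": url, "ok": True, "status": status,
            "seconds": time.monotonic() - start}
```

A real monitoring service would run probes like this from many geographic locations and under varying load, then aggregate the timings; this sketch shows only the single-request measurement at the core of that process.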
Interestingly, the common threads in managing the network portion of the client-server link might best be described as "pedal to the metal where possible" and "monitor and control whatever you can." These are the strategies that motivate caching on the client side, and mirroring, load-balancing, access management, and performance monitoring on the server side. Note also that such sophisticated server-side strategies are both expensive and complex; thus, they are far more likely to be deployed by managed hosting services or by corporations and organizations large enough to afford them. Single-server sites must usually rely on whatever services their providers incorporate into their environments (one step up the food chain, as it were) rather than implementing mirroring, monitoring, and load- or access-balancing strategies themselves.
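To make the server-side mirroring and load-balancing idea concrete, here is a toy round-robin dispatcher over a pool of mirrored servers, the simplest of the balancing policies such deployments use. It is a sketch only, assuming an in-process picker; the class name and the mirror hostnames are hypothetical.

```python
import itertools


class RoundRobinBalancer:
    """Toy round-robin dispatcher over a pool of mirror servers.

    Illustrative only: real load balancers also weigh server health,
    current load, and client affinity, none of which is modeled here.
    """

    def __init__(self, mirrors):
        if not mirrors:
            raise ValueError("need at least one mirror")
        # itertools.cycle repeats the pool endlessly in order.
        self._cycle = itertools.cycle(mirrors)

    def pick(self):
        # Each call hands the next request to the next mirror in turn,
        # spreading load evenly across the pool.
        return next(self._cycle)


# Usage: successive requests rotate through the (hypothetical) mirrors.
balancer = RoundRobinBalancer(["mirror-a.example", "mirror-b.example"])
targets = [balancer.pick() for _ in range(4)]
```

Round-robin is attractive precisely because it needs no coordination or feedback from the mirrors, which is also why production balancers layer health checks and load metrics on top of it.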
Ed Tittel presides over LANWrights, Inc., a company that specializes in Web markup languages and tools, with a decidedly Microsoft flavor. He is also a co-author of more than 100 books on a variety of topics, and a regular contributor to numerous TechTarget Web sites.
This was first published in June 2002