Bandwidth and Latency: The Dynamic Duo of Network Performance
Gilbert Held
The ability to transmit information in a timely and accurate manner is governed by many factors. You can quickly judge the number of variables that network analysts and designers have to consider simply by scanning one of the many books that deal with network testing and troubleshooting. Assuming you're in relatively good physical shape and are able to lift these hefty books, you'll note such topics as protocol decoding, network driver settings, and buffer storage allocation, as well as enough mnemonics and technical terms to make even an experienced system administrator cringe. Although those topics are important for testing and troubleshooting network-related problems, there are two key variables often overlooked whose control can alleviate a variety of network performance-related problems. Those variables are bandwidth and latency, which I will focus on in this article.
By understanding the characteristics and measurements associated with each variable, you can enhance your network planning and design efforts, isolate certain network-related problems, and learn techniques that lessen the adverse effect of these variables on network performance. Because the use and meaning of these variables can vary based upon the type of network you are working with and can affect different applications in different ways, it is probably best to begin with a few definitions. I will then discuss how the basic definitions can vary with the type of network as well as with the use of a specific network structure.
Definitions
In an engineering class, bandwidth is defined as the range of frequencies available to convey a signal. Thus, you will often encounter the following equation in textbooks:
B = f2 - f1
where B represents bandwidth and f2 and f1 represent the higher and lower frequencies, respectively. One engineering principle is that the attainable signaling speed is proportional to bandwidth, so a higher signaling speed becomes obtainable when the range of usable frequencies expands. This explains why, for example, you can transmit at a higher data rate on a fiber cable than a metallic cable . . . or can you? In actuality, bandwidth governs the signaling rate, with a higher bandwidth supporting a higher signaling rate in Hz. The actual data transmission rate obtainable on a transmission facility depends upon the encoding method used. If you can pack more bits into each signal change, you can obtain a higher data transmission rate than if you use an encoding method that packs fewer bits into each signal change. For example, two common modem signal encoding methods are tribit and quadbit encoding. If the modem can generate a 2400 Hz signal using quadbit encoding, it can transmit at a data rate of 2400 Hz * 4 bits/Hz, or 9600 bps.
If the modem uses a tribit encoding technique, it would be limited to a transmission rate of 2400 Hz * 3 bits/Hz, or 7200 bps. Based upon this information, bandwidth technically references a range of frequencies. However, like many technical terms, bandwidth has taken on an alternate meaning: it is commonly used to specify a data transmission rate expressed in bps, although from a purely technical standpoint, its use as a bps measurement isn't exactly kosher.
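To make the arithmetic concrete, here is a short Python sketch of the calculation; the function name is mine, invented for illustration:

    def data_rate_bps(signaling_rate, bits_per_signal_change):
        # The data rate is the signaling rate multiplied by the number of
        # bits the encoding method packs into each signal change.
        return signaling_rate * bits_per_signal_change

    print(data_rate_bps(2400, 4))   # quadbit encoding: 9600 bps
    print(data_rate_bps(2400, 3))   # tribit encoding: 7200 bps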
A second network variable that is important to network performance is latency. Latency can be defined as the delay in a signal reaching its destination due to such circuit parameters as inductance, resistance, and capacitance experienced on a transmission facility. Although latency usually refers to the delay in a signal reaching its destination, it can also refer to the delay a frame or packet experiences when it arrives at a router or similar communications device just after the device has begun servicing another packet. As the router processes the first packet (such as by pulling it out of its buffer and placing it bit-by-bit onto its serial interface for transmission via a WAN), the succeeding packet is delayed in its placement onto the WAN. Like bandwidth, latency is used in different ways, but it generally denotes a delay in the arrival of individual bits, frames, or packets. Now let's examine the effect of these variables on network performance.
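As a rough illustration of how these delays combine, the following Python sketch approximates one-way latency as the sum of propagation, serialization, and queuing delays; the function, the assumed propagation speed of roughly 2 x 10^8 meters per second, and the sample figures are illustrative assumptions, not measurements:

    def one_way_latency(distance_m, packet_bytes, line_rate_bps,
                        queued_bytes=0, propagation_m_per_s=2.0e8):
        # Signal travel time over the circuit.
        propagation = distance_m / propagation_m_per_s
        # Time to clock the packet's bits onto the line.
        serialization = packet_bytes * 8 / line_rate_bps
        # Wait behind bits already buffered ahead of this packet.
        queuing = queued_bytes * 8 / line_rate_bps
        return propagation + serialization + queuing

    # A 1500 byte packet on a 64 Kbps circuit spanning 1000 km,
    # queued behind one earlier 1500 byte packet:
    print(one_way_latency(1_000_000, 1500, 64_000, queued_bytes=1500))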
Effect on Network Performance
Bandwidth can be considered in a manner similar to the diameter of a pipe, which regulates the potential flow of water. Thus, the higher the bandwidth, the higher the potential flow of data. This means that bandwidth regulates your organization's capacity to transmit data on a local area network and between LANs via a wide area network. It also regulates your ability to move data from your personal computer via a dial-up connection to an Internet Service Provider. Thus, bandwidth is usually used synonymously with network capacity and, unlike the weather, which many persons talk about but few can alter, there are a variety of tools and techniques that can be used to increase the bandwidth of a network or transmission facility.
The effect of latency upon network performance can vary based upon the type of data you are transmitting and the type of latency affecting the transmission. When latency results in portions of data being delayed from their optimal arrival time, the degree of delay can result in the generation of one or more bit errors. Figure 1 illustrates the displacement of a series of bits by time. Note that the displacement of a bit by more than one-half its bit duration normally results in receiver sampling generating a bit error. This in turn is sufficient for an entire frame or packet of information to be considered received in error when cyclic redundancy checking is performed. Depending on the protocol employed for the data transfer, the frame or packet may simply be discarded. Higher layer protocols at the end point in a network will then assume responsibility for retransmission (Frame Relay), or the network node will note the error and request the originating node to correct the error via retransmission (X.25 network). For both situations, the displacement of a single bit can result in a packet or frame containing thousands of characters being retransmitted.
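The following Python sketch mimics that behavior, using the standard library's binascii.crc32 as a stand-in for whatever CRC a real protocol employs; the frame contents and the position of the displaced bit are invented for illustration:

    import binascii

    frame = bytes(1500)                      # a 1500 byte frame, all zero bits
    fcs = binascii.crc32(frame)              # checksum computed by the sender

    damaged = bytearray(frame)
    damaged[700] ^= 0x01                     # one bit sampled in error by the receiver
    if binascii.crc32(bytes(damaged)) != fcs:
        print("FCS mismatch: the entire frame is treated as received in error")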
The actual displacement of bits by time is a gradual process referred to as jitter, and test instrumentation is readily available to measure jitter on digital transmission facilities such as T1 lines. Another latency term to know is wander, which refers to a very slow buildup of bit delay that results in periodic errors whose cause may be difficult to detect with certain test instrumentation. Latency can also affect network performance when a packet or frame transporting one type of data arrives at a bridge, router, or gateway slightly ahead of a frame or packet containing delay-sensitive information, such as digitized speech or SNA traffic that must arrive at its destination within a certain period of time after a request is issued. To illustrate the effect of packet or frame latency, see Figure 2, which shows a router or Frame Relay Access Device (FRAD) being used to provide a transmission capability for conventional data packets or frames generated by stations on one LAN. Figure 2 also shows frames or packets transporting digitized speech generated via a server connected to a second LAN that supports a connection to a PBX.
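Test instrumentation aside, the underlying jitter computation is simple; in the following sketch the arrival-time offsets are invented for illustration:

    # Peak-to-peak jitter: how far arrival times stray from their nominal positions.
    nominal = 1.0 / 1_544_000                        # nominal bit time on a T1 line, in seconds
    offsets = [0.0, 1e-8, -2e-8, 3e-8, -1e-8]        # invented displacements for illustration
    arrivals = [i * nominal + d for i, d in enumerate(offsets)]

    deviations = [t - i * nominal for i, t in enumerate(arrivals)]
    print(max(deviations) - min(deviations))         # 5e-08 seconds peak to peak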
Note that in the lower part of Figure 2, the transmission of a frame or packet carrying a relatively long Information field of data is shown being placed onto the serial access line routed to a TCP/IP or Frame Relay network between digitized voice packets or frames. Suppose the access line operates at 64 Kbps, and the packet transporting data has an Information field 1500 bytes in length. Without considering the overhead associated with the packet or frame header and trailing Frame Check Sequence (FCS) fields, the delay resulting from a 1500 byte Information field being serviced between two relatively short packets or frames transporting digitized voice would become 1500 bytes * 8 bits/byte / 64,000 bps, or .1875 seconds. When you add several slight routing delays resulting from the flow of digitized speech through the network, this is probably sufficient to make reconstructed speech sound awkward to the human ear. Thus, latency, or the delay resulting from the processing of time-insensitive frames or packets between a sequence of frames or packets transporting time-sensitive information, can have a significant effect upon certain types of time-sensitive data. Now that I've mentioned some of the problems associated with the dynamic duo of network performance, let's look at techniques that can be used to minimize their effect upon network performance. In doing so, I will discuss both LANs and WANs, as both variables affect performance on both types of networks.
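You can verify the arithmetic with a few lines of Python, using the 64 Kbps line rate and 1500 byte Information field from the example:

    # Serialization delay a voice packet suffers waiting behind a long data frame.
    information_field_bytes = 1500
    access_line_bps = 64_000

    delay = information_field_bytes * 8 / access_line_bps
    print(delay, "seconds")   # 0.1875, before any header, FCS, or routing delays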
Enhancing Bandwidth
From a technical perspective, you cannot change the bandwidth of an existing LAN or WAN transmission facility. Although you could rip out your existing LAN and replace your cabling, or replace an existing wide area network circuit with a higher capacity transmission facility, to do so would probably be both disruptive and costly. Thus, most network managers and LAN administrators look for alternatives.
In a WAN environment, there are several techniques you can consider using to obtain additional transmission capacity at an economical cost. Two favorite techniques are the use of compression-performing network devices, such as Data Service Units (DSUs), and the use of bandwidth-on-demand multiplexers. The use of compression-performing equipment is almost as good as watching Siegfried and Roy make tigers and elephants disappear.
A compression DSU operates similarly to other compression-performing products, examining an input data stream for redundancies and eliminating them via one or more compression algorithms. Each digital circuit requires the use of a DSU to convert the unipolar signaling associated with computer equipment into the bipolar signaling required on digital circuits. To make more efficient use of the digital circuit, the compression DSU also compresses the computer data before placing it on the line as bipolar data.
If you are using 56 Kbps digital leased lines, the use of compression-performing DSUs can result in a variable data transfer rate between 2 and 4 times the line operating rate, with the actual transfer rate dependent upon the susceptibility of the data to compression. Thus, you should be able to transfer data at between 112 Kbps and 224 Kbps via a 56 Kbps transmission line. By paying the one-time cost of the compression DSUs (around $1000 per device), you can delay or avoid a costly network upgrade - an upgrade that for a long-distance circuit might result in a monthly expenditure equal to the cost of the compression-performing DSUs. An investment that can be recaptured in such a short period of time is sure to be a hit with your company's CFO.
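A quick sketch of those figures; the $2000 monthly upgrade cost is an assumption chosen to mirror the comparison above, not a quoted tariff:

    # Effective transfer rate on a 56 Kbps line with a compression DSU,
    # using the 2:1 to 4:1 range cited in the text.
    line_rate_bps = 56_000
    for ratio in (2, 4):
        print(ratio, "to 1 compression:", line_rate_bps * ratio, "bps effective")

    # Rough payback period for a pair of ~$1000 DSUs against an avoided
    # circuit upgrade assumed to cost about $2000 per month.
    print(2 * 1000 / 2000, "months to recapture the investment")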
A second popular method to minimize the cost of additional WAN bandwidth is the use of bandwidth-on-demand inverse multiplexers. Such devices are commonly used to monitor the utilization level of a circuit and initiate one or more calls via the switched digital network when extra bandwidth is required. Figure 3 illustrates the use of a pair of bandwidth-on-demand inverse multiplexers to interconnect two geographically separated locations. Note that many inverse multiplexers can be programmed to dial up from one to 24 switched digital circuits that actually represent channels or time slots on a T1 access line. Since the cost of switched digital service is based on call duration, it is often more economical to use the network configuration shown in Figure 3 instead of installing a second digital leased line or upgrading the existing line to a higher operating rate.
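The decision logic inside such a device amounts to a threshold test. The following sketch is a simplified model; the dial threshold, the assumption that traffic spreads evenly across active channels, and the function name are all mine:

    def channels_needed(utilization, dial_threshold=0.8,
                        base_channels=1, max_channels=24):
        # Keep dialing additional switched 64 Kbps channels until the traffic,
        # spread across all active channels, falls below the dial threshold.
        channels = base_channels
        while utilization > dial_threshold and channels < max_channels:
            utilization *= channels / (channels + 1)
            channels += 1
        return channels

    print(channels_needed(0.95))   # a saturated line triggers one extra switched call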
In regard to LANs, there are also several techniques that can add network bandwidth for a minimum expenditure of time and funds. Those techniques include the use of bridges and switches to segment networks into smaller entities. The top portion of Figure 4 illustrates the use of a bridge to segment a network into two parts. If the LAN is a 10 Mbps Ethernet 10BASE-T network, such segmentation results in two independent segments, each operating at 10 Mbps. Although, on paper, segmentation via the use of a bridge appears to double available bandwidth, when a station on one segment requires communications with a station on another segment, we are back to a 10 Mbps bandwidth, since only one operation can occur through the bridge at a time. Since inter-segment communications can occur quite frequently, the bandwidth gain obtainable through the use of a bridge is typically significantly less than a doubling of capacity. However, if you can segment a network by department and add a server to each segment as indicated in the top right portion of Figure 4, you can minimize inter-segment communications, enabling total bandwidth to approach a doubling of the bandwidth of a single network.
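One way to put a rough number on that gain is to interpolate between the two cases just described; the linear model and the traffic fractions below are illustrative assumptions, not measurements:

    def effective_capacity_mbps(segment_mbps=10, inter_segment_fraction=0.2):
        # Rough interpolation between the two cases in the text: twice the
        # segment capacity when no traffic crosses the bridge, the capacity
        # of a single segment when all of it does.
        return segment_mbps * (2 - inter_segment_fraction)

    print(effective_capacity_mbps(inter_segment_fraction=0.1))   # approaches 20 Mbps
    print(effective_capacity_mbps(inter_segment_fraction=0.9))   # little better than one LAN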
In the lower portion of Figure 4, the use of a 10/100BASE-T segment switch is shown. In this example, 10 workstations are placed on each of five network segments, while two servers are directly connected to the switch on 100 Mbps ports. This configuration enables two workstations to simultaneously access two different servers, and because each server is connected via a 100 Mbps port, each can respond to a client request more quickly than if it were connected to the switch via a conventional 10 Mbps port. The servers can use buffer memory in the switch to temporarily store responses destined for lower operating rate clients and more quickly become available to service new client requests. With the per-port cost of many LAN switches approaching $100, the LAN switch represents a very reasonably priced mechanism for providing multiple simultaneous client-server access to network workstations that require communications with different types of servers. Now that we have an appreciation for techniques to enhance bandwidth on WANs and LANs, let's consider methods to minimize latency.
Minimizing Latency
The most common way to reduce latency or network delay is to add transmission capacity. This can be expensive, but it does not have to be if you can juggle existing network operations to better accommodate delay-sensitive traffic. For example, assume your organization needs to transmit SNA traffic over a Frame Relay network along with inter-LAN communications. One possibility is to adjust your use of the Committed Information Rate (CIR) for the Permanent Virtual Circuits (PVCs) used to transport SNA and inter-LAN traffic, providing a sufficient CIR for SNA so its delay through the network is minimized. If you are transmitting inter-LAN traffic and digitized voice, the previously described technique by itself would probably be insufficient to prevent latency problems from adversely affecting the reconstruction of digitized speech. The arrival of a lengthy packet or frame transporting data at a FRAD or router just before one transporting digitized voice could delay the transmission of the voice packet, resulting in an awkward period of silence at the destination while the voice sample is reconstructed into its analog form. Recognizing this problem, several equipment vendors introduced FRADs and routers that fragment lengthy packets and enable users to assign priority to the queues used for storing different input data sources. Through the use of fragmentation and the prioritizing of input data sources, you can minimize the effect of latency on time-sensitive information.
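The following sketch combines the two techniques; the fragment size, priority classes, and frame sizes are invented for illustration:

    import heapq
    from itertools import count

    VOICE, DATA = 0, 1          # lower number = higher priority queue
    MAX_FRAGMENT = 128          # fragment size in bytes, chosen for illustration
    queue, seq = [], count()

    def enqueue(priority, frame):
        # Fragment long frames so no single transmission monopolizes the access
        # line, then place each piece on the priority queue (FIFO within a class).
        for i in range(0, len(frame), MAX_FRAGMENT):
            heapq.heappush(queue, (priority, next(seq), frame[i:i + MAX_FRAGMENT]))

    enqueue(DATA, bytes(1500))   # a long data frame arrives first...
    enqueue(VOICE, bytes(80))    # ...but the voice traffic still goes out first
    priority, _, piece = heapq.heappop(queue)
    print("first on the line:", "voice" if priority == VOICE else "data", len(piece), "bytes")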
Although there are many variables that affect network performance, bandwidth and latency can be considered a dynamic duo. If you can gain control of their characteristics, you can gain control of the performance of your network. By employing "smart networking" and using some of the tools and techniques mentioned in this article, you can significantly enhance the performance of your network.
About the Author
Gilbert Held is an internationally known author and lecturer specializing in data communications and personal computing technology. Gil is the author of over 30 books and 200 technical articles. He can be reached on the Internet at GHELD@MCIMAIL.COM.