Switching by the Layer
When I was growing up in Brooklyn, I remember TV commercials referring to a brand of cigarettes with the slogan, "I'd rather fight than switch!" Today, cigarette advertisements on television are banned, and most network administrators have either changed or are considering changing their LAN infrastructure to a switch-based environment, effectively discarding years of prior investment in shared media technology. What started as a gradual migration to switch-based equipment during the early to mid-1990s is now a full-scale migration affecting the LAN infrastructure of most organizations.

The benefits of LAN-based switching are considerable for end users, but for some manufacturers dependent upon the sale of shared media hubs, the migration of customers to the new technology almost put them out of business. This effect is obvious when you compare the current price of shares in several former hub manufacturers to the price just a few years ago. As other manufacturers produced switching equipment to satisfy the requirements of end users, product differentiation became an important consideration. Some equipment manufacturers focused their attention on switch connectivity options, providing Fast Ethernet, Gigabit Ethernet, FDDI, and ATM backbone connectivity. Other equipment manufacturers enhanced the switching capability of their products, effectively moving up the protocol stack to support switching at the network layer (layer 3) or the transport layer (layer 4) of the International Organization for Standardization (ISO) Open Systems Interconnection (OSI) Reference Model. As you might expect, some vendors incorporated both increased connectivity and enhanced switching capability into their product offerings.
In this article, I focus on different types of LAN switching, starting with the data link layer (layer 2). Because certain switch features have different effects at different layers, I also focus on different switching methods that represent those switch features. The best method to approach LAN switching is by obtaining an appreciation for its capability, so I first compare and contrast switching and shared media operations. Then I illustrate the basic types of switching techniques and switching methods and move up the protocol stack to explore switching at layers 2 through 4.
The Rationale for Switching
It is easy to obtain an appreciation for the rationale behind the development of switches by first examining the throughput obtainable on a shared media-based network. Consider the upper portion of Figure 1, which illustrates a bus-based shared media Ethernet LAN with 20 stations, including a mix of servers and workstations.
Although a bus-based shared media LAN is shown in the top portion of Figure 1, this schematic is also applicable to a conventional hub-based network infrastructure, where frames transmitted into one port of the hub are broadcast out of all other hub ports, as shown in the lower portion of the figure. For both bus-based and hub-based shared media networks, only one network station at a time can transmit, with the resulting transmission either flowing on the cable for all other stations to receive (Figure 1a) or being repeated onto all other ports (Figure 1b). Of course, only the station whose address matches the destination address in the frame actually copies the frame off the network.
When one station on a shared media network is active, it has the ability to use all available network resources. In the case of a 10Base-T Ethernet network, this situation enables a station to continuously transmit data without the possibility of collisions that cause a random exponential backoff algorithm to be executed and adversely affect throughput. Thus, when only one station is transmitting, it is possible for data transfer to approach 10 Mbps. However, because Ethernet includes a gap of 9.6 µs between frames, the actual transfer rate will be slightly under the 10 Mbps operating rate. Assuming that two stations have files to transfer, they obviously cannot simultaneously transmit data successfully or a collision would occur. Thus, they must share the bandwidth, resulting in each station having an average data transfer capability of 5 Mbps. The average data transfer rate on a shared media network then becomes operating rate/n, where n represents the number of active stations on the network.
To illustrate the effect of station activity, consider a 10 Mbps Ethernet network with 40 active stations. The average data transfer rate then becomes 10 Mbps/40, or 250,000 bps, which is not significantly greater than the throughput of a V.90 dial modem with compression enabled. While an average data transfer rate in the hundreds of thousands of bits per second was sufficient during an era when terminal devices primarily displayed alphanumeric characters, the migration to Windows-based terminal devices and the transfer of graphic images when viewing Web pages can easily saturate a shared media network. Recognizing this problem, communications equipment developers reinvented the concept behind telephone company central office switches, applying switching to local area networks.
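The shared media arithmetic above can be sketched in a few lines of Python (the function name is mine, for illustration only):

```python
def average_rate_bps(operating_rate_bps: int, active_stations: int) -> float:
    """Average per-station transfer rate on a shared media LAN:
    the operating rate divided by the number of active stations."""
    return operating_rate_bps / active_stations

# 10 Mbps Ethernet shared by 40 active stations -> 250,000 bps per station
rate = average_rate_bps(10_000_000, 40)
```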
LAN Switch Fundamentals
A LAN switch can be considered to represent an n by m device, with n and m representing numbers of I/O ports that are cross-connected via a switching architecture. That architecture can range from a high-speed bus to a matrix of crossed wires, with the latter illustrated in Figure 2. The switch in Figure 2 is a 16-port switch whose architecture is fabricated as a 4 by 4 crossbar. Because routing a frame through a switch requires the use of two ports, a maximum of 16/2, or 8, frames can be simultaneously routed through the switch shown in the figure. If a computer with a 10Base-T network adapter is connected to each port on the 16-port switch, the theoretical data transfer capability becomes 16/2 * 10 Mbps, or 80 Mbps. I use the term theoretical because most transmissions represent portions of a client-server session. This means that if you connected 14 workstations and 2 servers to a 16-port switch, the switch would probably be limited to a maximum of two simultaneous client-server frame flows, resulting in a practical maximum data transfer through the switch of 20 Mbps. While 20 Mbps is certainly less than the theoretical maximum, it is still significantly above the average transfer rate obtainable on a shared media network. Thus, the rationale for the rapid acceptance of LAN switches can be explained in one word -- performance.
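The port-pairing arithmetic above can be expressed as a small sketch (hypothetical helper names, not from any product):

```python
def max_simultaneous_flows(ports: int) -> int:
    # Each frame in transit occupies one input port and one output port,
    # so at most ports/2 frames can cross the switch at once.
    return ports // 2

def theoretical_capacity_bps(ports: int, port_rate_bps: int) -> int:
    """Upper bound on aggregate throughput: every port pair busy at once."""
    return max_simultaneous_flows(ports) * port_rate_bps

# 16-port switch with 10 Mbps ports -> 8 flows, 80 Mbps theoretical capacity
capacity = theoretical_capacity_bps(16, 10_000_000)
```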
Layer 2 Switching Techniques
Because layer 2 switches represent the first series of LAN switches to be marketed, I initially focus on this category of communications product. At layer 2, a switch makes its forwarding decision based upon the destination address contained in each frame. The operation of most layer 2 switches resembles that of multiport bridges, with the switch learning the addresses of stations connected to each port by examining the source address of frames transmitted into each port. This allows the switch to construct an address-port table that is used to route frames entering the switch on one port through the switch, so that they are output onto the correct destination port. The actual switching technique used by layer 2 switches falls into one of three methods -- cut-through, store-and-forward, or hybrid.
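The address learning and forwarding behavior described above can be sketched roughly as follows, with frames simplified to source/destination address strings and all names my own. Unknown destinations are flooded to all other ports, as a transparent bridge would do:

```python
class Layer2Switch:
    """Minimal sketch of multiport-bridge style address learning."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # learned mapping: MAC address -> port number

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list:
        # Learn: associate the frame's source address with its arrival port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out exactly one port;
        # an unknown destination is flooded to all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]
```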
A cut-through switch needs to read only a small portion of a frame, up to and including its destination address, before it can initiate switching. For example, Figure 3 illustrates the format of an Ethernet frame. Note that a cut-through switch only has to read the preamble and start-of-frame delimiter fields prior to being able to read the destination address field value. This means that only 14 bytes of the frame (the 7-byte preamble, the 1-byte start-of-frame delimiter, and the 6-byte destination address) need to be read before a cut-through switch can initiate its switching logic. This also means that the switch delay, or latency, is minimized; in fact, another name for cut-through switching is on-the-fly switching. Although cut-through switching minimizes delay, it cannot verify the integrity of a frame, because switching begins as soon as the destination address is read -- before the Frame Check Sequence arrives. Additionally, because a switching decision is made after only the first three fields in the frame are read, a cut-through switch cannot perform filtering on anything other than the destination address.
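The byte offsets involved can be made concrete with a short sketch (field lengths per the standard Ethernet frame format; function names are mine):

```python
# Field lengths, in bytes, at the start of an Ethernet frame on the wire.
PREAMBLE = 7   # alternating-bit synchronization pattern
SFD = 1        # start-of-frame delimiter
DEST_ADDR = 6  # destination MAC address

def cut_through_decision_offset() -> int:
    """Bytes a cut-through switch must receive before it can look up
    the destination address and begin forwarding."""
    return PREAMBLE + SFD + DEST_ADDR

def extract_destination(raw: bytes) -> bytes:
    # The destination address occupies bytes 8 through 13 of the raw stream.
    return raw[PREAMBLE + SFD : PREAMBLE + SFD + DEST_ADDR]
```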
A switch operating in a store-and-forward mode stores an entire frame in memory. As the frame is being stored, it will use the destination address as a mechanism to determine where it should be switched; however, the actual switching operation cannot occur until the full frame is stored and some additional processing occurs. That additional processing commonly includes a Cyclic Redundancy Check (CRC) computed on the frame, which is compared to the CRC stored in the frame's Frame Check Sequence field. If the two do not match, the frame is considered to be in error and is dropped.
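The integrity check can be sketched as follows. This is an illustration only: Python's zlib.crc32 uses the same IEEE CRC-32 polynomial as the Ethernet Frame Check Sequence, but real switch hardware computes the check in silicon over the exact on-the-wire bit ordering:

```python
import zlib

def frame_is_valid(stored_frame: bytes, received_fcs: int) -> bool:
    """Store-and-forward integrity check: recompute the CRC over the
    stored frame and compare it with the Frame Check Sequence value.
    A mismatch means the frame is in error and should be dropped."""
    return (zlib.crc32(stored_frame) & 0xFFFFFFFF) == received_fcs

data = b"example frame contents"
good_fcs = zlib.crc32(data) & 0xFFFFFFFF
```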
Because the frame is stored in a store-and-forward switch, filtering can occur on any field in the frame; however, latency can be considerably greater than when the switch operates in a cut-through mode. In fact, latency on a store-and-forward switch is variable and depends upon the amount of data transported in the Information field (which for Ethernet can vary from a minimum of 46 bytes to a maximum of 1500 bytes). For example, a frame transporting 46 bytes of data has 26 bytes of overhead, resulting in a minimum frame length of 72 bytes. At a 10-Mbps operating rate, the delay per bit is 1/10,000,000, or 1 x 10^-7 sec. Thus, waiting for 72 bytes to be stored results in a minimum latency of 72 bytes * 8 bits/byte * 1 x 10^-7 sec/bit, or 57.6 µsec. Similarly, a maximum length frame containing 1500 bytes in its Information field has a total length of 1526 bytes, including 26 bytes of overhead, resulting in a latency of 1526 bytes * 8 bits/byte * 1 x 10^-7 sec/bit, or 1220.8 µsec. Although store-and-forward switches can perform integrity checking and filtering, their variable latency typically precludes their use for transporting many time-sensitive multimedia applications.
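The latency computation above generalizes to any Information field length, as this small sketch shows (helper name and the 26-byte overhead default are drawn from the worked example):

```python
BIT_TIME_SEC = 1e-7  # duration of one bit at a 10 Mbps operating rate

def store_and_forward_latency_usec(info_bytes: int,
                                   overhead_bytes: int = 26) -> float:
    """Time, in microseconds, to clock an entire frame into a
    store-and-forward switch before forwarding can begin."""
    total_bytes = info_bytes + overhead_bytes
    return total_bytes * 8 * BIT_TIME_SEC * 1e6

min_latency = store_and_forward_latency_usec(46)    # 72-byte frame: 57.6 µs
max_latency = store_and_forward_latency_usec(1500)  # 1526-byte frame: 1220.8 µs
```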
Recognizing the fact that cut-through and store-and-forward operations are better suited for different applications, manufacturers introduced hybrid switches. Some hybrid switches can be set for either a cut-through or store-and-forward mode of operation. Other hybrid switches support a dynamic mode of operation, initially operating in the cut-through mode and computing the frame error rate on the fly. If the error rate reaches a predefined threshold value, the switch changes its mode of operation to store-and-forward, which allows erroneous frames to be discarded instead of forwarded. Now that we have an appreciation for basic layer 2 switching techniques, let's briefly examine two methods of switching prior to moving up the protocol stack.
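The dynamic mode change can be sketched as a simple state machine. The threshold, the minimum sample size, and all names here are my own assumptions, since vendors choose their own criteria:

```python
class HybridSwitch:
    """Sketch of a dynamic hybrid switch: begin in cut-through mode,
    track the observed frame error rate on the fly, and fall back to
    store-and-forward once errors cross a threshold."""

    def __init__(self, error_threshold: float = 0.01, min_sample: int = 100):
        self.mode = "cut-through"
        self.error_threshold = error_threshold
        self.min_sample = min_sample  # frames to observe before deciding
        self.frames = 0
        self.errors = 0

    def record_frame(self, had_crc_error: bool) -> None:
        self.frames += 1
        if had_crc_error:
            self.errors += 1
        if self.mode == "cut-through" and self.frames >= self.min_sample:
            if self.errors / self.frames >= self.error_threshold:
                self.mode = "store-and-forward"
```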
There are two basic switching methods -- port switching and segment switching. A port switching method supports only one address per port. In comparison, a segment switching method supports multiple addresses per port, which enables a network segment to be connected to a switch port. While a segment-based switch also allows a single device to be connected to a switch port, the reverse is not true; that is, you cannot attach a LAN segment to a port-based switch.
Figure 4 illustrates an example of a segment-based switch. Note that in this example two servers are directly connected to switch ports, while other servers remain connected to their segments, each of which is in turn connected to a switch port. The servers on a segment could represent departmental servers, while the servers connected to individual ports could represent enterprise servers.
Layer 3 Switching
Layer 3 of the OSI Reference Model represents the network layer. Thus, a layer 3 switch operates at the network layer. This normally means the switch must have a routing capability, as network addresses in packets, instead of frame addresses, are now processed. While this may appear to be a minor difference, the switch actually has to look deeper into each frame when performing layer 3 switching. To understand why, consider Figure 5, which illustrates the relationship of TCP/IP protocol suite packet formation to its transportation within a LAN frame. Note that a TCP or UDP header prefixes application data at layer 4 in the protocol stack, with the port number in the header identifying the application. The IP header is then prefixed to that header and contains the source and destination IP addresses of the packet, enabling routers to perform their job. However, when an IP datagram reaches a LAN, it must be encapsulated within a LAN frame. This means that to perform layer 3 switching, the switch must look further into each frame, past the LAN header, to read the network header. It also means that latency increases; for this reason, layer 3 switches are not capable of reaching the frame processing rates of layer 2 switches.
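The extra parsing depth can be illustrated with a sketch that skips past a 14-byte Ethernet header (destination MAC, source MAC, EtherType) to reach the IPv4 addresses behind it. The sample frame is fabricated for illustration:

```python
import struct

def parse_l3_fields(frame: bytes):
    """Skip the Ethernet header and read the IPv4 source and destination
    addresses, which sit at offsets 12 and 16 of the IP header."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:  # only IPv4 handled in this sketch
        return None
    ip = frame[14:]
    src = ".".join(str(b) for b in ip[12:16])
    dst = ".".join(str(b) for b in ip[16:20])
    return src, dst

# A minimal illustrative frame: two 6-byte MAC addresses, the IPv4
# EtherType (0x0800), and a bare-bones 20-byte IPv4 header.
sample = (bytes(12) + b"\x08\x00" + bytes(12)
          + bytes([10, 0, 0, 1]) + bytes([192, 168, 1, 2]))
```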
While a layer 3 switch can be considered to perform routing as it operates on network addresses, on a LAN it uses port-address tables in a manner similar to a layer 2 switch; however, network addresses are normally configured whereas a layer 2 switch learns layer 2 addresses. In this mode of operation a layer 3 switch may not perform true routing, and its cost can be reduced by eliminating support for many routing protocols.
A second type of layer 3 switch is a special purpose router that uses application-specific integrated circuit (ASIC) hardware instead of a microprocessor-based engine to perform routing. Designed for use on wide area networks, such layer 3 WAN switches are commonly referred to as switching routers or router switches, and they typically employ one or more techniques to expedite the flow of packets without having to look into each packet header, which delays packet processing.
Over the past few years, several techniques have been implemented by different vendors as mechanisms to expedite layer 3 switching. Each of these techniques is based upon the use of a tag or label to expedite the flow of packets through the switch. Under this type of switching, a lookup of the destination address is performed once per flow, with a flow representing a series of related packets. Once the destination port is identified, a tag or label is prefixed onto each packet by the layer 3 router switch. The tag or label facilitates the flow of packets through a WAN, because subsequent switching routers can examine the new header and compare it to a table entry to make a routing decision without having to look deep into each packet -- an important consideration in an era of network congestion. Unfortunately, tags and labels have so far represented proprietary solutions to traffic congestion. Fortunately, the Internet Engineering Task Force (IETF) has published a Request for Comments (RFC) covering Multiprotocol Label Switching (MPLS) that, when standardized and implemented by vendors, could considerably enhance the flow of traffic across the Internet and private intranets.
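The once-per-flow lookup can be sketched as follows. This is a generic illustration of the tag/label idea, not any vendor's scheme or the MPLS specification; all names are mine:

```python
class LabelSwitch:
    """Sketch of tag/label switching: the first packet of a flow triggers
    a full destination lookup; subsequent packets carry a short label
    that indexes directly into a forwarding table."""

    def __init__(self, routing_table: dict):
        self.routing_table = routing_table  # destination -> output port
        self.label_table = {}               # label -> output port
        self.next_label = 1

    def classify_flow(self, destination: str) -> int:
        # Performed once per flow: full routing lookup, then label assignment.
        port = self.routing_table[destination]
        label = self.next_label
        self.next_label += 1
        self.label_table[label] = port
        return label

    def forward_labeled(self, label: int) -> int:
        # Performed per packet: a single table index, no deep header parsing.
        return self.label_table[label]
```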
Layer 4 Switching
Layer 4 switching represents the newest wrinkle in switching technology. A layer 4 switch is a routing switch that looks further into each packet to determine the composition of transport layer fields and makes routing decisions based upon the contents of those fields. Layer 4 switching is currently applicable only to the TCP/IP protocol suite, with vendors fabricating equipment that examines TCP and UDP port values as a mechanism for routing decisions. Because the routing switch examines TCP and UDP port values, it can determine the type of traffic carried by a packet. This means a layer 4 routing switch can be used as a mechanism to balance incoming traffic among a group of servers in a server farm connected to a switch, similar to Figure 6. To my knowledge, this is the primary function of a layer 4 routing switch.
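The load balancing described above can be sketched by routing on the destination port and then spreading flows round-robin across a server farm. The round-robin policy and the server names are illustrative assumptions; products use various policies:

```python
class Layer4Balancer:
    """Sketch of layer 4 server load balancing: the TCP/UDP destination
    port selects a service, and flows for that service are spread
    round-robin across the servers in its farm."""

    def __init__(self, farms: dict):
        self.farms = farms  # destination port -> list of server addresses
        self.counters = {port: 0 for port in farms}

    def pick_server(self, dest_port: int) -> str:
        servers = self.farms[dest_port]
        choice = servers[self.counters[dest_port] % len(servers)]
        self.counters[dest_port] += 1
        return choice
```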
There are currently no standards concerning layer 4 switching. In fact, one vendor bases its product upon layers 2 and 4 and does not perform layer 3 routing. Instead, the switch operates as a bridge and transfers packets based upon their layer 2 addresses. This bridging, however, means that the switch requires a separate router to establish a connection to the Internet if you want to employ the device as a traffic load balancer similar to the configuration in Figure 6.
One interesting application for layer 4 switches is as an access control mechanism. Under heavy traffic conditions, the use of a router's access list capability to bar certain types of traffic can significantly impede router throughput, especially when the access list contains a significant number of statements. In this situation, an ASIC-based layer 4 switch can be used as a back-end security policy implementer behind a router.
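The access-list style of filtering amounts to a first-match rule scan over protocol and port values, as this small sketch suggests (rule shape and default-deny behavior are my assumptions, modeled loosely on router access lists):

```python
def build_filter(rules):
    """Build a packet filter from access-list style rules.
    Each rule is (action, protocol, dest_port); the first matching
    rule wins, and an unmatched packet is denied by default."""
    def allowed(protocol: str, dest_port: int) -> bool:
        for action, proto, port in rules:
            if proto == protocol and port == dest_port:
                return action == "permit"
        return False
    return allowed

# Permit Web traffic, explicitly deny Telnet, deny everything else.
acl = build_filter([("permit", "tcp", 80), ("deny", "tcp", 23)])
```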
When examining the potential use of switches in a network, it is important to understand that, generally, the higher the layer supported, the deeper the switch must examine fields within frames or packets. This means that when a switch incorporates a routing capability, the higher the layer processed, the lower the processing capability of the switch. Because a switch permits multiple simultaneous transmissions between pairs of ports, its capability is significantly beyond that of a shared media hub. This is why many organizations have used their checkbooks as votes to literally switch, rather than share.
About the Author
Gilbert Held is an award-winning lecturer and author. Gil is the author of over 40 books and 300 technical articles. Some of Gil's recent titles include Voice over Data Networks covering TCP/IP and Frame Relay and Cisco Security Architecture (co-authored with Kent Hundley), both published by McGraw Hill Book Company. Gil can be reached at: email@example.com.