Cover V06, I04

New Storage Interfaces

Greg P. Schulz

Interesting things are occurring in storage technology: the cost per megabyte (MB) is dropping, drives are getting faster, and bandwidths are increasing. Meanwhile, applications require ever larger amounts of storage and bandwidth, resulting in an explosion of storage and I/O growth. This article centers on the storage interfaces that enable data to be stored on and retrieved from storage devices. Although these interfaces can also be used to access other peripherals (scanners, CD-ROM), my primary focus will be on storage devices. Network interfaces such as Ethernet, FDDI, Token Ring, Fast Ethernet, and ATM can also serve as storage transports for distributed or network file systems such as NFS; however, this article focuses on technology used primarily for storage.

Parallel SCSI

One of the most popular storage interfaces for high-performance systems, servers, and workstations is Parallel SCSI. Parallel SCSI (SCSI-1) was standardized in 1986 to support up to seven devices on an 8-bit bus operating at up to 5 Mbytes/second. Parallel SCSI was enhanced in 1992 with SCSI-2 (the current standard), which implemented Fast and Wide transfers (a maximum bandwidth of 20 Mbytes/second), Command Tag Queuing, Parity Checking, and connectors smaller than the SCSI-1 Centronics 50 connectors.

Parallel SCSI, generally referred to simply as SCSI, is named for its parallel architecture, in which data bits are sent in parallel over a series of wires to maximize performance. In narrow mode, 8 data bits are sent in parallel, while wide transfers send 16 bits in parallel. With wide (16-bit) SCSI, there are twice as many data bits and data wires, doubling the maximum bandwidth. Wide transfers improve performance during the data transfer phase for large-I/O applications. The maximum bandwidth numbers include data transmission, command execution, timing, and other overhead; for example, 15-18 Mbytes/second is a more realistic bandwidth for a Fast Wide (FW) parallel SCSI interface than the 20 MB/second maximum. Parallel SCSI bus distance varies from 3 to 20+ meters depending on attributes such as Fast, Wide, Single-Ended, or Differential. Differential parallel SCSI operates over longer distances (up to 20 meters) using a wider cable with extra wires to compare differences in signals and make adjustments if needed. (See Table 1.)
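The relationship between bus width, clock rate, and the peak rates quoted above can be sketched as simple arithmetic. This is illustrative only, using the article's numbers; it is not part of any SCSI implementation:

```python
# Illustrative sketch: how bus width and clock rate combine into the
# peak SCSI transfer rates quoted above.

def scsi_peak_bandwidth(bus_width_bits, clock_mhz):
    """Peak rate in Mbytes/second: one transfer per clock cycle,
    bus_width_bits of data per transfer."""
    return (bus_width_bits / 8) * clock_mhz

narrow_scsi1 = scsi_peak_bandwidth(8, 5)     # SCSI-1: 5 Mbytes/second
fast_wide = scsi_peak_bandwidth(16, 10)      # Fast Wide: 20 Mbytes/second

# Real-world throughput is lower once command execution, timing, and
# other overhead are included; 15-18 Mbytes/second is typical for FW.
assert narrow_scsi1 == 5 and fast_wide == 20
```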

While parallel SCSI has its shortcomings, there is also room for improvement in the form of Ultra-SCSI and the new serial storage interface technologies (SSA and Fibre Channel). Parallel SCSI will be around for some time, as more devices and systems are added daily, and the new storage interfaces discussed here provide backward compatibility with existing SCSI devices using bridge technology.

Ultra-SCSI or FAST20

Ultra-SCSI, or FAST20, implements the SCSI-3 protocol, doubling the maximum bandwidth to 40 MB/second and increasing the number of devices to 15, compared to 7 for Fast Wide SCSI-2. Ultra-SCSI is parallel SCSI with faster performance achieved through a new signaling and clocking technique. SCSI-3 implements several new features, including support for Ultra-SCSI, Fibre Channel, SCAM (automated configuration without the need for device jumpers), and new device connectors. Ultra-SCSI can coexist with existing parallel SCSI; however, the maximum speed is then limited to that of parallel SCSI-2, or 20 MB/second. For example, consider an HP T500 server running HP-UX 10.2 with a Fast Wide Differential (FWD) SCSI adapter that has both a SCSI RAID (FWD) device and an MTI Ultra-SCSI RAID device on the same bus. The Ultra-SCSI device will transfer at the lower speed of the HP FWD adapter (20 MB/second) rather than at native Ultra-SCSI speed (40 MB/second). These limitations apply only when SCSI-2 devices are mixed on the same bus using the same controller; when SCSI-2 devices are on separate controllers from Ultra-SCSI devices attached to an Ultra-SCSI adapter, these limitations do not exist. Some Ultra-SCSI points to consider include:

1. Some vendors, such as Silicon Graphics Inc. (SGI), have indicated they will not implement Ultra-SCSI, opting instead for Fibre Channel. Ultra-SCSI storage devices can still be used on SGI systems by attaching them to a Fast Wide SCSI host adapter. Conversely, Quantum has yet to indicate any plans for migrating its DLT tape drive technology, popular in high-capacity tape libraries, to Fibre Channel.

2. Verify which version of your operating system is necessary to use Ultra-SCSI. Also, see whether any patches or drivers are needed and whether any device configuration restrictions exist.

3. Ultra-SCSI has tighter signal requirements due to its speed and faster clock cycle (carefully watch your cable lengths).

4. Adhere to Ultra-SCSI configuration guidelines by placing older, slower SCSI devices and non-disk devices (CD-ROM, scanners, etc.) on a separate host adapter. Pay attention to proper termination and types of terminators used for maximum performance.

Ultra-SCSI exists today with storage products available from vendors such as Seagate (with Barracuda LP drives) and Quantum, two of the largest drive manufacturers. Ultra-SCSI provides a good "mid-life kicker" for parallel SCSI by doubling the bandwidth and increasing the number of devices. The future of Ultra-SCSI calls for doubling the bandwidth to 80 MB/second and implementing Low Voltage Differential SCSI (LVDS) to combine the performance of Ultra-SCSI with longer distances. This new version has been referred to as Ultra2-SCSI, and some LVDS technology is now under development by companies such as Symbios Logic.

New Serial Storage Interfaces

The new serial storage interfaces include SSA and Fibre Channel, which take aim at the weaknesses of existing parallel SCSI and other storage interfaces. Features found in serial interfaces include fiber and other media, dual-porting or dual-pathing for fault tolerance, higher performance, and support for more devices over longer distances. Unlike parallel interfaces, which send data bits over individual parallel wires, serial technology sends data bits in series, as packets, over a single wire, simplifying data transmission and reducing overhead. Although this may seem slower, by simplifying the underlying signaling and timing mechanisms, serial technology can far outperform existing parallel technology.

Serial Storage Architecture (ANSI X3T10.1)

Serial Storage Architecture (SSA) was designed by IBM, in partnership with other vendors, as a new open storage interconnect to replace parallel SCSI and other older, slower interfaces. One of SSA's key attributes is "Spatial Reuse," which enables more traffic to exist on a bus and increases the aggregate bandwidth. Unlike parallel SCSI storage, which is configured as a bus or chain, SSA devices are configured in loops, with daisy-chained connections between devices, and do not require arbitration. Each node or SSA device can support two loops, each with two links (in and out) capable of transmitting concurrently, thereby increasing bandwidth (Spatial Reuse). However, the maximum bandwidth of any given link is only 20 Mbytes/second (the speed of FW parallel SCSI), or half that of Ultra-SCSI (40 MB/second).

Improvements in packaging have enabled SSA devices to be hot-swappable and autoconfigurable. SSA-to-parallel-SCSI bridges, such as those from Vicom, allow existing parallel SCSI devices to be accessed from an SSA system and vice versa; when using a bridge device, however, "Spatial Reuse" does not apply to the parallel SCSI devices. A single SSA loop can support 127 devices (compared to 15 for parallel SCSI) with distances up to 15 meters between devices. A 4-pin cable replaces the 50-68 pin SCSI cable, simplifying installation and configuration. Some host adapters, such as those for IBM RS6000 systems, provide connections for up to four loops.

The future of SSA calls for doubling the speed from 20 MB/second per link to 40 MB/second, or from a total bus bandwidth (using "Spatial Reuse") of 80 MB/second to 160 MB/second. Some vendors suggest a practical limit of 96 devices per adapter. Benchmarks by IBM have shown about 35 MB/second at the processor using realistic workloads, compared with about 12 MB/second using the FWD SCSI (20 MB/second) adapter on an RS6000. Note that several drives must be accessed to achieve these data rates. Although 35 MB/second is considerably lower than the 80 MB/second maximum, the Microchannel bus on the RS6000 appears to be the bottleneck; additional IBM laboratory testing has shown that PCI-based adapters may be able to process 70 MB/second at the adapter level.

Some SSA points to be aware of include:

1. SSA is well suited for multi-host or clustered systems in which connectivity and aggregate storage bandwidth are important. SSA is not well suited for single-stream bandwidth intensive applications.

2. Current host adapters involve a tradeoff: consolidating a larger number of parallel SCSI adapters onto fewer SSA adapters can create performance bottlenecks.

3. Existing parallel SCSI devices can be bridged to SSA using a Vicom bridge; however, "Spatial Reuse" does not apply to these devices.

4. Although SSA is not widely accepted by the industry as a whole, some users of IBM AIX/RS6000 systems are implementing SSA storage arrays such as the IBM 7133.

5. SSA uses an addressing scheme that requires knowledge of the path and route to the destination. If the path changes (e.g., when a drive is removed), the address must be recalculated. An analogy would be giving directions to a cab driver: instead of telling the driver your final destination, with SSA you tell the driver to go two blocks, turn left, go three blocks, turn right, go two more blocks, and stop at the third house on the left. This works well as long as no detours are required; however, the cab driver is not allowed to optimize the route.

6. SSA products are shipping from several vendors, including IBM (disk drives and controllers), Xyratex (disk drives), and Vicom (bridges), among others.
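The path-based addressing described in point 5 can be sketched as follows. This is a hypothetical illustration of the concept, with invented device names, not actual SSA frame format or routing code:

```python
# Hypothetical sketch of SSA-style path addressing (point 5 above):
# a frame is addressed by how far it must travel along the loop, not
# by a device name. Device names here are invented for illustration.

loop = ["disk_a", "disk_b", "disk_c", "disk_d"]

def hops_to(target):
    """The 'address' is a hop count along the current loop layout."""
    return loop.index(target) + 1

addr = hops_to("disk_c")   # 3 hops with today's topology
loop.remove("disk_b")      # topology change: a drive is removed

# The stale hop count now lands on a different device entirely, so the
# host must recalculate the path, just as the cab driver cannot reroute.
assert loop[addr - 1] == "disk_d"
```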

Fibre Channel (ANSI X3T11)

Work began in 1988 to develop a replacement for existing storage interfaces, including parallel SCSI, HIPPI, and ESCON. Fibre Channel can be used as a storage interface as well as a network interface for protocols such as TCP/IP. Like parallel SCSI, Fibre Channel is evolving into several topologies, including Point-Point, Fabric, and Arbitrated Loop, at various speeds. Fibre Channel is capable of speeds up to 1 Gbit/second (100 Mbytes/second) simultaneously in each direction and supports multiple protocols such as IP, SCSI, HIPPI, and IPI. The spelling "Fibre" rather than "Fiber" is intentional: Fibre Channel does not require fiber optics and can also be implemented using copper wire. Fibre Channel devices utilize a GLM, or Gigabaud Link Module, which enables easy changeover from copper to optical media, as well as increases in speed or bandwidth, such as migrating from quarter speed to full speed (1 Gbit).

Fibre Channel establishes a point-point connection between devices, providing full bandwidth (100 Mbytes/second) to each device. Only one device, however, can transmit or receive over a link at a time. Although this provides higher throughput than SSA, multiple devices wanting to transmit or receive at the same time may have to wait for the bus. Each Fibre Channel port has a send and a receive connection, which can be either wire or fiber depending on the medium in use. Dual loops can also be implemented for higher aggregate bandwidth and reduced contention. Fibre Channel defines several port types, N_PORT, F_PORT, NL_PORT, and FL_PORT, which are used to establish links in point-point, arbitrated loop, and fabric topologies. An N_PORT is a node or device port that connects either to an F_PORT, which is part of the fabric, or to another N_PORT, as in a point-point topology. The N_ prefix indicates a node or device, and the F_ prefix indicates that the port is part of the fabric. Arbitrated loop ports are referred to as FL_PORTs and NL_PORTs; the L in the name indicates that the port can handle the arbitrated loop protocol.
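The link bandwidth figures above, including the 400 Mbytes/second dual-loop, full-duplex case discussed later, reduce to simple arithmetic. This sketch is illustrative best-case math, not a model of real contention or protocol overhead:

```python
# Illustrative arithmetic for Fibre Channel bandwidth: each link runs
# at 100 Mbytes/second per direction, and dual loops double the number
# of links available. Contention and overhead are ignored here.

def fc_aggregate_bandwidth(loops=1, full_duplex=False, link_mb_s=100):
    """Best-case aggregate across loops and transmit/receive directions."""
    directions = 2 if full_duplex else 1
    return loops * directions * link_mb_s

assert fc_aggregate_bandwidth() == 100                           # one link
assert fc_aggregate_bandwidth(loops=2, full_duplex=True) == 400  # dual loop
```

As with SSA's Spatial Reuse figure, the aggregate number assumes traffic is spread across loops and directions; a single device still sees at most one link's bandwidth.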

Point-point, arbitrated loop, and fabrics may be combined, for example, in a work group on an arbitrated loop with nodes connected to a fabric using a switched network to the rest of the company. A particular node may have local disk devices attached via an arbitrated loop with a direct connect over an N_PORT to a fabric. Likewise, a disk array may be attached to a workstation or server via a point-point connection for high availability. The port limit for point-point is two ports connected via a single link.

Fabric devices are referred to as elements, which can be thought of as switches or routers. A fabric can consist of a single fabric element or several. This enables new technology to be introduced in the middle of the fabric without disrupting nodes on its outer edge, much as the phone company can transparently change out its internal network while ensuring that the signals arriving at your phone remain the same. In a fabric, any node can talk to any other node; the fabric performs the required switching and routing, thus providing a peer-peer service. This enables multiple protocols to exist at the same time on the fabric or loop, with the individual nodes responsible for speaking the proper protocol or command set, similar to a telephone conversation, in which it is the people on the line who must understand each other.

Fibre Channel points to consider include:

1. Early Fibre Channel products, such as storage devices from Sun and HP, use slow point-point fiber (25 MB/second) connections and are not compatible with FC-AL (Fibre Channel Arbitrated Loop).

2. Although Fibre Channel can operate at up to 400 Mbytes/second in a dual-loop, full-duplex topology, with individual device or link bandwidth of 100 Mbytes/second, only one device or node can utilize a given link or bus at a time. FC-AL implements a fairness algorithm, similar to FDDI's, to ensure that all devices obtain equal access to the bus. Even so, you should configure systems properly to avoid congestion.

3. Fibre Channel with various topologies presents configuration issues that you will need to plan and implement carefully. Several host adapters are available for different host buses, including S-Bus, HP-PB, and PCI. Fibre Channel variables include FC-AL, point-point, speed, fabric, and medium (i.e., copper, Multi-Mode fiber (MMF) or Single Mode fiber (SMF)).

4. Verify which version of your operating system is required for device or adapter support, and determine if any special patches or drivers are needed.

5. Fibre Channel uses a flexible addressing scheme that relies on knowledge of the destination and leaves the routing up to the network. Using the cab driver example, Fibre Channel works on the premise that you give the cab driver the address of your destination, and he or she is free to determine the best route to get there. Fibre Channel packets contain a destination that is used by Fibre Channel components to ensure delivery. Knowledge of the actual route is not needed by the sender, only the destination address.
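The destination-based addressing described in point 5 contrasts with SSA's path-based scheme and can be sketched as follows. Device names and port labels are invented for illustration; this is not actual fabric routing code:

```python
# Contrasting sketch (point 5 above): Fibre Channel-style destination
# addressing. The sender supplies only a destination identifier; the
# fabric owns and maintains the route.

fabric_routes = {"disk_a": "port_1", "disk_c": "port_2"}

def deliver(destination):
    """The fabric looks up the route; the sender never sees it."""
    return fabric_routes[destination]

assert deliver("disk_c") == "port_2"

# A topology change updates only the fabric's own tables; the sender's
# address for disk_c is unchanged, like a cab driver taking a detour
# while you give only the street address.
fabric_routes["disk_c"] = "port_9"
assert deliver("disk_c") == "port_9"
```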

Storage Interface Comparison

Here are some questions and issues to consider when evaluating and selecting a storage interface:

  • Does your host platform support Ultra-SCSI or will it?

  • Does your host platform support Fibre Channel and, if so, which variant (FC-AL, Point-Point, Fabric)?

  • Which version of your operating system is required to utilize a new bus technology?

  • How will your application benefit from a new I/O bus (Performance, Distance, More Devices)?

  • Do your devices support any of these new protocols?

  • Do you currently have performance problems that will be addressed by new technology?

  • Do you currently have cabling or distance issues?

  • Are you out of device ports or adapter connections?

With these and other questions in mind, you should be able to apply the information in Table 2 to your configuration.

Beyond Fibre Channel and SSA

A new storage initiative between the Fibre Channel and SSA camps has resulted in a cease-fire in the storage interface wars. This initiative is being led by Adaptec, Seagate, and IBM, with several other system vendors participating in the review process. The result will be a hybrid Fibre Channel that includes the best of Fibre Channel and SSA while retaining backward compatibility. Although details are still being worked out, Seagate has indicated that future drives will be compatible with both Fibre Channel and the new protocol. The new interface does not yet have a formal name, but many have nicknamed it FC-EL (for Fibre Channel Enhanced Loop), although this name is being discouraged by the combined groups. Perhaps by the time you read this, a formal name will have been assigned, along with ANSI approval or assignment to a committee.

Summary

Although parallel SCSI has its shortcomings, it should continue as a popular storage interface for some time. Ultra-SCSI should become popular as a high-performance storage interface and as a method of accessing existing parallel SCSI devices. The new serial storage interfaces (SSA and Fibre Channel) should gain popularity, with support for existing parallel SCSI devices via bridges. SSA and Fibre Channel each have benefits and weaknesses that need to be considered with respect to your unique environment: SSA has higher aggregate bandwidth for clustered or multiple systems (Spatial Reuse), while Fibre Channel has higher individual device bandwidth. Around the end of this decade, a new hybrid serial storage interface combining the best of SSA and Fibre Channel should be available, providing backward compatibility for Fibre Channel devices directly and for SSA devices via a bridge or converter.

Applications needing backward compatibility with existing parallel SCSI devices (including disk, RAID, and tape) will find Ultra-SCSI a good fit. Similarly, for systems that do not yet support Ultra-SCSI, SSA, or Fibre Channel adapters, Ultra-SCSI devices can provide a performance improvement at the device level. Applications requiring high data throughput from individual storage devices, such as RAID arrays, will find Ultra-SCSI and Fibre Channel better suited than SSA; these include high-bandwidth database, imaging, batch processing, and video applications. SSA, however, is better suited for applications requiring simultaneous access to several devices from one or more hosts. Examples include host-based RAID, databases with data spread over several individual drives, and clustering.

Finally, keep in mind that there is not necessarily a right or wrong adapter or interface as long as it meets the needs of your applications, environment, and service criteria.

About the Author

Greg Schulz has a Master's in Software Engineering from the University of St. Thomas and a Bachelor's in Computer Science. Greg is a Sr. Systems Consultant with MTI Technology Corporation, working with clients to design and implement high-performance storage and backup solutions. Greg can be reached at gschulz@ccmgate.mti.com.


     


