Cover V06, I10

Artecon's LynxStak and LynxArray RAID Systems

The flexible design of these units is attention getting, but can they perform like the big boys?

While helping to collect information and examine products for the Sys Admin Desktop RAID Product Survey (May 1997, page 35), I found Artecon's Lynx line particularly intriguing. Although it is not uncommon for drive housings to be interchangeable between models in a given manufacturer's rack-mount or pedestal product lines, the apparent flexibility of the Artecon design caught my eye. Having evaluated other RAID products (most of which started at fairly large configurations) for other clients over the years, I was also impressed by the low entry point of the Artecon line. A minimal configuration for the LynxStak, for example, might be a controller and two drives - a convenient desktop configuration for a power user or project leader requiring the data security provided by RAID. But could this tyke perform like the big boys? I was pleased to get the chance to find out when the units were selected by the editorial staff for review.

If, like me, you have been working in the industry for a while, you may have encountered Artecon in the past through OEMs and resellers and come to think of the company as something of a SCSI specialty house. I had associated Artecon mainly with products such as external SCSI diskette drives and other external storage devices. In researching the company for this review, however, I found Artecon to have a broader base. Formed in 1984, the company's primary focus is actually providing turnkey systems integration solutions to so-called Fortune 1000 companies, telcos, government agencies, financial firms, and universities. Artecon has subsidiaries in England, France, Japan, and the Netherlands, and sells through reseller channels worldwide. Artecon's telco and Internet-related products fall into three categories: workstation and server rack solutions, power products, and mass storage. All of this suggests that the company's experience is in fairly sophisticated environments, working with demanding customers - facts that I found evidenced in the design and execution of the two products reviewed here.

Installation

Installation of both the LynxStak and the LynxArray is straightforward. The products are shipped in assemblies that are easy to deal with. Disks, for example, are shipped separately in shock-absorbing packaging, cables and connectors are boxed in appropriate groups, and so forth. Although most of the assembly process is obvious for an experienced hardware person or system administrator, instruction booklets provide the necessary information to assemble the modules into the ordered configuration.

Although the LynxArray is a fairly conventional rack-mounted disk array housing, the LynxStak is rather unusual in its design and warrants a more detailed discussion. The basic building block of the system is the Lynx enclosure - a durable plastic housing that is designed to accommodate any standard 3.5" or half-height 5.25" device. In the case of the LynxStak, each enclosure includes a power supply, and internal circuit boards with Single Connection Architecture (SCA) connectors instead of cable assemblies. The drive bays accommodate Artecon's standard drive sled for hot-swap drives, and a locking device is provided to prevent inadvertent removal of the disk. Although the power supplies in the test units were fixed, Artecon indicates that the configurations now shipping include hot-swappable power supplies for greater maintenance flexibility.

The non-marring feet on each unit fit into corresponding depressions in the top of the unit below, allowing the units to be stacked firmly together. Additionally, two slider bars on each side of the Lynx enclosure can be pushed up to interlock with the unit above. Two-connector rigid SCSI jumpers plug into the backs of the units, providing the interconnect between succeeding units. Similar AC jumpers provide power up the stack, allowing a single cord to provide electrical power to the RAID controller and disks. Although the SCSI interconnect could be accomplished with conventional, short SCSI cables, the Artecon jumper blocks provide a much more compact arrangement.

Prior to shipping the unit, Artecon had determined what type of host I would be using for testing. Because I was using a SPARC-compatible system that had only a standard SCSI-2 external port, Artecon supplied an Ultra/Wide SCSI host adapter from Performance Technologies, Inc. (PTI, Rochester, New York) in order to take full advantage of the Wide Ultra SCSI bus in the Lynx units. The PTI adapter installs in the same manner as any SBus card. After installing the card, you must install the PTI drivers and either perform a reconfiguring reboot or rebuild the kernel, depending on the OS you are running. My test system runs Solaris 2.5, so I was able to use the Solaris pkgadd utility and simply reboot with the -rv flags to reconfigure the kernel. Note that after installing the PTI adapter, special commands that are included with the PTI software must be used to examine SCSI devices at the PROM-level ok prompt. The Sun commands, probe-scsi and probe-scsi-all, work only with Sun SCSI adapters.
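On Solaris 2.5, the driver installation and reconfiguring reboot boil down to a couple of commands. The package name and media path below are placeholders for illustration, not the actual names from the PTI distribution:

```shell
# Install the PTI driver package from the distribution media
# (PTIdrvr and /cdrom/cdrom0 are placeholders -- see the PTI manual
# for the actual package name and location):
pkgadd -d /cdrom/cdrom0 PTIdrvr

# Reboot with the reconfigure (-r) and verbose (-v) flags so the
# kernel probes the new adapter and builds its device nodes:
reboot -- -rv
```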

Configuration of the RAID array itself can be accomplished in either of two ways. The RAID controller unit has a front-panel LED display through which the unit's PROM-based configuration menu can be accessed. A serial connector is also provided in the form of a three-connector mini stereo jack on the front of the controller. A serial cable is supplied to provide a standard DB-9 serial connection to a terminal or PC running VT-100 terminal emulation software. I used the front panel LED display to configure the array during the initial installation, but used the serial connector for array management during the testing. The LED menu is explained in the accompanying documentation and is easy to follow. The controller software includes default RAID configurations that it derives based on the number of drives included in the array. This allows you to set up the array quickly and verify that everything is operational. In a three-drive configuration, for example, the quick setup allows the drives to be initialized as RAID 0 (striping data across all three drives), RAID 1 plus a spare (one primary drive, a mirrored drive, and the spare), or RAID 3 or RAID 5 (striping with parity). The same quick setup routine is also available through the terminal interface.

Documentation

Documentation for the Lynx systems includes the PTI installation manual for the Ultra/Wide adapter, a Lynx 2000 Series user's manual (LynxStak), and a LynxRAID user's manual. The Lynx 2000 manual provides mechanical information about the Lynx enclosures used in the LynxStak, while the LynxRAID manual details the RAID configuration process and the commands available through the front-panel LED display and the terminal interface. The Artecon manuals are informal and somewhat terse pamphlets written to the level of a reseller's experienced installation personnel. For example, the LynxRAID user's manual, which is the primary source of information about configuring and managing the array, shows the various menu options available under both the LED and terminal interface menu structures. However, the manual stops short of explaining most of the menu subchoices or offering guidelines for appropriate selections. A seasoned system administrator with past experience configuring RAID arrays should have little trouble following the basic instructions, but calls to Artecon's technical support may be required for guidance on more sophisticated configuration issues.

Operation

Ease of Use

Once the RAID configuration is completed and file systems are created on the array using conventional Solaris commands, the arrays function like any conventional file system. The Wide Ultra SCSI interface on the Artecon arrays was noticeably faster on large file operations than the older SCSI-2 system drive, however. By default, the Lynx controller software is optimized for sequential writes on large files, such as those produced by CAD and graphics applications. If the unit is going to be used for applications such as database files, the firmware allows the optimization to be shifted to random I/O. The desired type of optimization can be selected during the initial configuration, prior to the creation of the RAID set. The parameter can also be modified later, although doing so requires recreating the RAID set and restoring files from a backup.
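Putting a file system on a configured logical drive involves nothing Lynx-specific. A minimal sketch follows, with an assumed device name - the actual controller and target numbers depend on the adapter slot and the SCSI ID set on the Lynx controller:

```shell
# Build a UFS file system on the array's logical drive
# (c1t0d0s2 is an assumed name -- verify with format(1M)):
newfs /dev/rdsk/c1t0d0s2

# Mount it like any local disk:
mkdir -p /array
mount /dev/dsk/c1t0d0s2 /array
```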

To test performance on the LynxStak, I used the same bigdisk benchmark that Sys Admin's sister publication, UNIX Review, employs to test server disk I/O systems and RAID arrays. This benchmark performs a series of disk I/O operations at various data block sizes ranging from 16 bytes to 16KB, working to and from a 128MB data file. Reads and writes are tested for both sequential and random operations, with the throughput being measured in KB/second. Figure 1 depicts the results for the LynxStak, running at firmware level 1.21J, attached to the Performance Technologies PT-SBS450 Ultra SCSI host adapter with the Lynx controller optimization set for sequential operations. As shown in the graph, performance for the three-disk LynxStak configuration peaked at the 1KB data block size and again at 16KB, with the high point being over 14MB/sec.
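The bigdisk program itself is not reproduced here, but a rough sequential pass in the same spirit can be improvised with dd: move a 128MB file through the file system at a fixed block size and time the transfer. This is only a sanity check, not the benchmark - random I/O is not exercised, and the read timing is distorted by the OS buffer cache:

```shell
# Sequential write and read of a 128MB file at a 16KB block size,
# roughly mirroring the largest block size used by the benchmark.
TESTFILE=/tmp/ddtest.dat
BS=16k COUNT=8192                 # 16KB blocks x 8192 = 128MB

time dd if=/dev/zero of=$TESTFILE bs=$BS count=$COUNT
time dd if=$TESTFILE of=/dev/null bs=$BS
```

Divide 128MB by the elapsed seconds reported for each pass to get an approximate KB/sec figure; remove the test file when finished.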

My test host was a SPARCstation 5 upgraded with the Ross HyperSPARC motherboard and dual 166MHz Ross HyperSPARC CPUs. To ensure that I was putting as much stress on the disk I/O capabilities of the Artecon array as I could, I did two concurrent runs of the benchmark on different portions of the array's filesystem, taking advantage of the dual HyperSPARC CPUs in the host. Each run included three iterations of each test in the benchmark, with the average of the three iterations taken as the result. The graphs reflect the combined throughput of the two concurrent runs.

During the testing of the Lynx arrays, the host was also used as part of the testing infrastructure for the Fast Ethernet hubs and switches for the product survey in this issue (see page 21). As part of that configuration, a Sun Microsystems SunSwift Fast Wide SCSI-2 host adapter with a 100Base-Tx Ethernet port was added to the host. The multiple SCSI host adapter configuration provided greater flexibility in testing and pointed out some anomalies in the operation of the Performance Technologies adapter. The initial benchmark runs for the LynxArray attached to the Performance Technologies adapter were below my expectations. Moving the array to the SunSwift adapter provided results that were in line with what I anticipated, however. Figure 2 reflects the performance of the LynxArray, running firmware level 1.21P, attached to the SunSwift adapter. The LynxArray RAID configuration was a five-disk array, plus a spare, running at RAID level 5. The array was further partitioned into two file systems, each mapped to a different host LUN. Performance for that configuration peaked at the 8KB data block size at a throughput of over 16MB/sec for two concurrent runs of the benchmark.

The test configuration also included an external Exabyte Mammoth 8mm tape drive that fits neatly into the stack in its Lynx enclosure but has a separate SCSI connection to the host. I found the Mammoth to be relatively fast (almost 3MB/sec using tar), but somewhat sensitive to timing issues. For example, the blocking operation provided by tar worked without a hitch. Doing a similar backup with cpio, however, failed unless the -B option was used. This behavior appears to be as much a function of the Solaris SCSI tape driver (the potential problem is noted in the online man page for st) as a characteristic of the Mammoth drive. The Exabyte Mammoth option from Artecon is an acceptable backup alternative for relatively small arrays. For larger configurations, however, an external tape library is probably a better solution.
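The blocking difference is easy to see from the two command lines involved. The sketch below uses file targets so it can be run anywhere; on the review system the target was the Mammoth's tape device, and the device name in the comments is illustrative:

```shell
# Sample data to archive (file targets used here for illustration;
# on the test system the target was the tape device, e.g. /dev/rmt/0).
mkdir -p /tmp/demo
echo "sample data" > /tmp/demo/file1

# tar blocks its output by default (20 x 512-byte records):
tar cf /tmp/demo.tar -C /tmp demo

# cpio writes 512-byte records unless -B raises the block size to
# 5120 bytes; without -B, writes to the Mammoth failed:
(cd /tmp && find demo -print | cpio -oB > /tmp/demo.cpio)
```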

Ease of Administration

Basic configuration and administration of the array can be accomplished through the front-panel LED display. The terminal interface, however, provides more complete information and is easier to follow. Although the serial port on the array could be connected to the serial port on the host, I simply connected a laptop computer running Windows 95 and Procomm Plus 95. Firmware upgrades are also installed via the serial port on the array.
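For those who do connect the array's serial port to the host rather than a PC, the Solaris tip utility serves as the terminal emulator. A minimal sketch, assuming the host's second serial port carries the connection:

```shell
# Add an entry to /etc/remote for the array console at 9600 bps
# (the device name /dev/term/b is an assumption -- use whichever
# serial port the supplied cable is attached to):
#
#   lynx:dv=/dev/term/b:br#9600

tip lynx      # connect to the array's menu; type ~. to disconnect
```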

Evaluation

Installation of the Lynx arrays is easy and straightforward. The only complications likely to arise stem from the very flexibility of the arrays, such as configuring a dual-host attachment. The SCSI jumper blocks on the LynxStak are particularly convenient, resulting in a compact arrangement that is devoid of the tangle of SCSI cables normally associated with individual external drives. Lynx rates an excellent on installation.

Documentation for the arrays, as noted earlier, is written for experienced installers, not end-users or novice administrators. While technically accurate and reasonably well illustrated, the manuals do not provide background information at the level of technical depth that would be helpful in determining the best choice of configuration options. Based on Artecon's target markets, I would not expect the Lynx documentation to be glossy end-user books, but additional technical information would be helpful, even for experienced administrators. The Lynx documentation rates average on the Sys Admin scale.

The Lynx controller's front-panel LED and terminal interface provide considerable ease of administration. Although the terminal interface is more complete and easier to follow, the LED display allows basic administrative functions to be performed quickly and easily without the hassle of attaching a terminal or PC. A graphical interface, called RAIDvisor, is currently provided by Artecon for their Windows NT models, and a UNIX version is expected to be shipping by the end of September this year. The current LED and terminal interfaces for administering the Lynx arrays rate a good on the Sys Admin scale for an entry-level system. In addition to the graphical interface that may be shipping by the time you read this, I would also like to see more control over the stripe size in the array. Being able to control the stripe size separately for each partition within a logical drive would allow you to tune that partition in accordance with how it is being used. The terminal interface also lacks the system diagnostics and operational statistics features found in more sophisticated systems.

Ease of Use and Functionality both rate excellent. The system feels fast to the user and is operationally transparent. The interchangeability of modules between the tested models is also a plus for organizations that expect growth and want an additional level of investment protection. Larger configurations, with more advanced module features, are available in the Series 5000 (up to 391GB) and Series 7000 (up to 783GB) LynxArray models.

Performance results of both the LynxStak and LynxArray are excellent for entry-level systems. The three-disk LynxStak configuration provided peak throughput of over 14MB/sec in my tests, and the five-disk LynxArray configuration peaked at over 16MB/sec. Although the Ultra SCSI host adapter from Performance Technologies should have provided greater throughput, the SunSwift combination Fast Wide SCSI-2/Fast Ethernet adapter from Sun Microsystems, Inc. excelled in my testing.

Platform Support by the various Lynx models is good. Artecon formally supports the major commercial UNIX versions including AIX, HP-UX, IRIX, Solaris, and SunOS, in addition to MacOS and Windows 95 and Windows NT. The Lynx systems should also function properly on other UNIX variants (such as Linux), however, because they use standard SCSI host interconnects and require no host-resident software for basic RAID controller operations.

Standards Conformance is also excellent. In addition to conforming to the conventional SCSI and EIA standards expected in rack-mount storage, the LynxArray has also received NEBS (Network Equipment Building System) certification for its ability to withstand a Zone 4 earthquake (between 7.0 and 8.3 on the Richter scale) and up to 8,000 volts of direct electrical contact (equivalent to the centerpost rackmount being struck by lightning), with no permanent loss of data or functionality.

Overall, the LynxStak and LynxArray provide robust feature sets for entry-level RAID systems. The LynxStak is an innovative solution for small capacity desktop RAID, with considerable room for growth. Both units reflect solid engineering and perform well. Although larger and more sophisticated systems are certainly available (including some from Artecon), an overall rating of excellent is appropriate for these units in their entry-level category.