Editor's Forum
Although the Web phenomenon has brought the joys of the Internet to a much broader audience, and has allowed us to drink more coffee while waiting for poorly designed Web pages to load, it has also brought significant changes to the way we administer systems. Increased concern about security is one obvious effect, and we will discuss some aspects of that topic in next month's May issue on network security. One of the more dramatic behind-the-scenes effects of the Web, however, is in the area of backups and how we go about performing them.
As more organizations embrace the Web as part of their corporate strategy, whether for information distribution or real e-commerce, larger segments of our system population shift to 24x7 operation. The corresponding lack of administrative downtime has changed the way we approach the question of backups. Greater and more precise planning is required to ensure our ability to recover data in the event of a system component failure or a malicious, Internet-based attack. More complex backup strategies also frequently place additional load on the network, because creating additional streams of backup data is one way of minimizing the time backups take to complete. Rearranging network backups by using subnets and VLANs can help optimize your backup plan, but in many cases will not provide a complete solution.
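To put the relationship between parallel streams, the backup window, and network load in rough numbers, consider the following back-of-the-envelope sketch in Python. The figures (a 400GB nightly backup set and roughly 5MB/sec of effective throughput per stream) are purely illustrative assumptions, not measurements; the point is simply that the backup window shrinks roughly in proportion to the number of concurrent streams, while the aggregate load placed on the network grows by the same factor.

# Back-of-the-envelope model of parallel backup streams. It ignores
# contention on the network and on the tape drives, so the numbers are
# optimistic; all inputs are illustrative assumptions.

def backup_window_hours(total_gb, streams, mb_per_sec_per_stream):
    """Estimate wall-clock hours to move total_gb using concurrent streams."""
    total_mb = total_gb * 1024
    aggregate_rate = streams * mb_per_sec_per_stream  # MB/sec, no contention
    return total_mb / aggregate_rate / 3600

DATA_GB = 400        # assumed size of the nightly backup set
PER_STREAM = 5.0     # assumed effective MB/sec per stream

for n in (1, 2, 4, 8):
    print("%d stream(s): ~%.1f hours" % (n, backup_window_hours(DATA_GB, n, PER_STREAM)))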
While gigabit Ethernet is an attractive solution because it promises greater throughput with existing network protocols, some question whether a collision-based approach offers the best alternative. The competitors to gigabit Ethernet receiving the most attention lately, of course, are Fibre Channel and Fibre Channel-based Storage Area Networks (SANs). Fibre Channel offers gigabit speeds, or more properly 100MB/sec transmission rates, and can extend over larger areas than conventional high-speed network protocols - up to 10km between end-points through a Fibre Channel switch. Although much of the attention focused on SANs has been associated with making large disk arrays accessible to larger populations of users, it is little wonder that backup vendors Legato and Veritas, among others, are at the forefront of establishing industry standards for the management of SANs.
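For a sense of what that nominal 100MB/sec figure means for bulk data movement, here is a quick, purely illustrative calculation. It assumes a single transfer gets the full nominal payload rate, which real backups rarely will, and the dataset sizes are arbitrary examples.

# What Fibre Channel's nominal 100MB/sec payload rate implies for moving
# bulk data. Dataset sizes are arbitrary examples; real transfers will see
# somewhat less than the nominal rate.

FC_RATE_MB_PER_SEC = 100.0

for dataset_gb in (50, 250, 500, 1000):
    seconds = dataset_gb * 1024 / FC_RATE_MB_PER_SEC
    print("%5d GB at 100MB/sec: roughly %.0f minutes" % (dataset_gb, seconds / 60))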
Until the SAN came along, there were limited connectivity options available for massive backups with large tape libraries and silos. You attached the library to your largest server (with multiple SCSI channels), attached it to the fastest network you could afford, or did both. In any case, one of the underlying assumptions was that the differing file system structures on different servers (UNIX and NT, for example) would be handled separately by orchestrating tape selection through the enterprise-level backup software. SANs, however, open the potential for the same disk-storage devices to be used by clients of different architectures. Doing so requires another layer of software that can deal with the various file-structure requirements. While designing such intermediate software is obviously a non-trivial task, it is likely a more realistic approach than expecting OS vendors to agree on a single, universal file system.
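What such an intermediate layer might look like is, of course, up to the vendors. The sketch below is only a hypothetical illustration of the general shape (a platform-neutral metadata record plus per-platform adapters), not any product's actual interface.

# Hypothetical sketch of a cross-platform mediation layer for shared SAN
# storage: a neutral metadata record plus per-platform adapters. This is
# illustrative only and does not reflect any vendor's actual software.

class FileRecord:
    """Platform-neutral description of a stored object."""
    def __init__(self, path, size_bytes, owner, permissions):
        self.path = path
        self.size_bytes = size_bytes
        self.owner = owner
        self.permissions = permissions  # generic string each adapter interprets

class UnixAdapter:
    def to_native(self, record):
        # Map the generic permissions onto UNIX-style mode bits and ownership.
        return {"path": record.path, "mode": record.permissions, "owner": record.owner}

class NTAdapter:
    def to_native(self, record):
        # Map the same record onto an NT-style, ACL-oriented representation.
        return {"path": record.path, "acl": ["%s:%s" % (record.owner, record.permissions)]}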
What we want to avoid, however, is having multiple SAN-management protocols promoted by various splinter groups within the industry. If SANs are to provide realistic solutions for our heterogeneous storage needs, the industry must agree on a single set of interfaces that meets cross-platform requirements. Then, storage vendors can produce different implementations of the APIs and compete on the basis of functionality, rather than underlying protocols. Should the industry fail to agree on such a single, universal protocol for SAN management, the system administration community will, once again, be left holding the complexity bag. Storage vendors should be encouraged to accept nothing less than a single standard here.
Sincerely yours,
Ralph Barker