Why SANs?
W. Curtis Preston
Storage area networks are just the latest fad. I wouldn't bother learning about them, and I certainly wouldn't try to be an early adopter of SAN technology. That is, as long as you don't mind missing out on the biggest paradigm shift since the advent of the LAN. I predict that in just a few years, SANs will be as ubiquitous as LANs. Just as you assume today that any server that needs access to other computing resources needs to be put on a LAN, you will assume that any server that needs access to storage resources will need to be connected to the SAN. Large disk arrays and tape libraries will become commonplace, and anyone connecting to the SAN will be able to share these resources. I believe that this paradigm shift will create storage possibilities that we haven't yet imagined, just as the invention of LAN/WAN technologies eventually gave birth to what we now know as the Internet.
Although the above statements may sound a little optimistic, I hope I've piqued your interest in SANs just a little bit. In this first installment of a multi-part series of articles on SANs, I will examine how we got here: what kind of trouble we've gotten ourselves into by allowing the storage size of the average server to grow exponentially. Then I will examine how SANs can make things a whole lot better. By the end of this article, you should know why SANs are becoming popular: they are just the right solution at just the right time. You should also know the difference between a SAN, a LAN, and a NAS.
In the premier edition of this lost+found column, I discussed how the capacity of tape drives has grown significantly over the past few years. There's a reason for this. In less than 10 years, the storage requirements of the average server have grown more than ten-fold! Eight years ago, we referred to our 7-GB host as a monster, because it was so big. Now the average size of the hosts that backup and recovery systems are being asked to back up is somewhere between 50 and 100 GB, and 500-GB and 1-TB hosts are becoming commonplace.
How We Used to Do It
A long time ago in a data center far away, we had servers that were small enough to fit on a tape. In fact, you could often back up many systems to the same tape. This type of data center led to a backup system design like the one in Figure 1. Many or most systems came with their own tape drive, and that tape drive was big enough to back up that system, and possibly big enough to back up other systems. All we needed to perform a fully automated backup was to write a few shell scripts (or load AMANDA from http://www.amanda.org) and swap out a few tapes in the morning.
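For readers who never worked in such a data center, a nightly backup script for this kind of setup could be as simple as the sketch below. This is only an illustration; the device names, filesystem list, and use of ufsdump are assumptions based on a typical Solaris-style host, so adjust them for your own systems.

    #!/bin/sh
    # Nightly backup of local filesystems to the locally attached tape drive.
    # /dev/rmt/0n is the no-rewind device, so each dump lands on the same
    # tape in sequence. Device and filesystem names are examples only.
    TAPE=/dev/rmt/0n

    mt -f $TAPE rewind || exit 1

    for fs in / /usr /var /home
    do
        # Level-0 dump of each filesystem; substitute dump, tar, or cpio
        # as appropriate for your operating system.
        ufsdump 0uf $TAPE $fs || echo "dump of $fs failed" | mail -s backup root
    done

    mt -f $TAPE rewoffl    # rewind the tape and take the drive offline

Swap in a fresh tape in the morning, and the cycle repeats.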
In those days, bandwidth was not a problem for several reasons. The first reason was that there just wasn't much data to back up. Even if the environment consisted of a single 10-Mb hub chock full of collisions, there just wasn't that much data to send across the wire. The second reason was that many systems could afford to have their own tape drives, so we didn't need to send any data across the LAN.
Gradually, many people began to outgrow these systems. Either they got tired of swapping that many tapes, or they had systems that wouldn't fit on a tape any more. The industry needed to come up with something better.
Things Got Better; Then They Got Worse
A few early innovators came up with the concept of a centralized backup server. Combining this with a tape stacker made life manageable again. All you had to do was spend $5,000 - $10,000 on backup software and $5,000 - $10,000 on hardware, and your problems were solved. Every one of your systems would be backed up across the network to the central backup server, and you just needed to install the appropriate piece of software on each backup client. These software packages were even ported to all kinds of platforms, which meant that all the systems shown in Figure 2 could be backed up to the backup server, regardless of operating system.
Then we began to experience a different problem. People began to assume that if you bought a piece of client software, all your backup problems would be taken care of. As networks grew larger, it became more and more difficult to back up all the systems across the network in one night. Of course, switching from shared networks to switched networks helped a lot, as did Fast Ethernet (100 Mb), followed by 400-Mb connections created by trunking four 100-Mb connections into a single pipe. But some people still had systems that were too large to back up across the network, especially once they started installing very large database servers that ranged from 100 GB to 1 TB.
Backing Up Today Without SANs
A few backup software companies tried to solve this problem by introducing the media server. In Figure 3, the central backup server still controls all the backups, and still backs up many clients via the 100-Mb or 1000-Mb network. However, you can now attach a tape library to each of the large database servers, allowing these servers to back up to their own locally attached tape drives instead of sending their data across the network.
This solved the immediate bandwidth problem, but it introduced significant costs and inefficiencies. Each one of these servers needed a tape library big enough to handle a full backup. Such a library can cost anywhere from $50,000 to $500,000, depending on the size of the database server.
The inefficiencies are a little less obvious than the costs. Many servers of this size do not need to do a full backup every night. If the database software is capable of performing incremental backups, you may need to perform a full backup only once a week, or even once a month. This means that the rest of the month, most of the tape drives in this library are going unused.
Another thing to consider is that the size of the library (specifically, the number of drives it contains) is often driven by the restore requirements, not the backup requirements. For example, a recent client of mine had a 600-GB database that they needed backed up. Although they did everything in their power to ensure that a tape restore would never be necessary, the requirement we were given was a three-hour restore window. If we had to go to tape to do the restore, it had to be done in less than three hours. Based on that, we bought a 10-drive library that cost $150,000. Of course, if we could restore the database in three hours, we could back it up in three hours. However, this means the $150,000 library sat unused approximately 21 hours per day. That is the second inefficiency introduced by the concept of the media server: the hardware remains unused much of the time.
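To see where the 10-drive figure comes from, consider the back-of-the-envelope arithmetic (the per-drive speed below is an assumed, ballpark number for tape drives of that class, not a measured figure). Restoring 600 GB in three hours requires roughly 200 GB per hour, or about 55 MB per second of aggregate throughput. If each drive can stream somewhere in the neighborhood of 5-6 MB per second, you need on the order of ten drives running in parallel to meet the three-hour window.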
We needed a way for a large database server to back up to a locally attached tape drive that could also be shared with another large server when that server needed a locally attached drive of its own. This was not possible until now.
Enter the SAN
To share resources, you need a network. To share computing resources, the industry developed the LAN. The LAN allowed you to access any computing resource (computer) from any other computing resource. This introduced the concept of a shared resource, where one computing resource could be shared by multiple clients.
One of the best examples of a shared resource is the Network File System, or NFS. NFS allows dozens, or even hundreds, of clients to store data on a file system that appears to be a local file system, but actually resides on an NFS server. Once a client stores a file on this NFS file system, another client can read or write to it as if it were stored on its own local disks. It's hard for those of us who came to the industry after the introduction of NFS to understand what a revolutionary concept this was. NFS solved a myriad of problems and introduced a number of opportunities that were not envisioned before its introduction.
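If you have never stopped to think about how transparent this is, consider that a single command is all it takes to make a remote file system appear local. The server and path names below are hypothetical, and the exact syntax varies by UNIX flavor (this is the Solaris form):

    # Mount the exported /export/home file system from the (hypothetical)
    # NFS server "nfshost" so that it appears under /home on this client.
    mount -F nfs nfshost:/export/home /home

    # From here on, ordinary commands treat it like a local file system.
    ls /home

Everything under the mount point then behaves as if the disks were inside the client.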
Just as the LAN allowed the introduction of NFS, the SAN has introduced a new concept: the network storage device, or NSD [1]. An NSD (as I'm calling it) is a raw storage device (such as a disk, optical, or tape drive) that is connected to a SAN and appears as a locally attached device to any computer connected to the SAN. To put it in simpler terms, connecting a tape drive to a SAN allows any computer attached to that SAN to perform a backup to that tape drive just as if that tape drive were physically attached to that computer via a SCSI cable.
SAN or LAN?
Many people look at the networks depicted in Figure 4 and ask, "What's the difference between a LAN and a SAN?" The answer is really easy: the protocol that they use. Systems on the LAN use IP, IPX, and other typical network protocols to communicate with each other. Systems on the SAN use the SCSI protocol (typically sent over Fibre Channel) to communicate with the Network Storage Devices (NSDs).
The term SCSI can be a bit confusing because the acronym refers both to the physical medium (a SCSI cable) and to the protocol that carries the traffic. Perhaps a comparison will help. IP traffic is sent over copper twisted-pair cables and fiber [2] via the Ethernet protocol. SCSI [3] traffic is sent over copper SCSI [4] cables via the SCSI protocol. In this case, the SCSI protocol is performing essentially the same duties as the IP and Ethernet protocols. SCSI traffic also travels over fiber via the SCSI protocol running on top of the Fibre Channel protocol. In this case, the SCSI protocol is performing essentially the same duties as IP, and the Fibre Channel protocol is performing essentially the same duties as the Ethernet protocol.
A LAN is a collection of servers, clients, switches, and routers that carry data traffic via IP, IPX, Ethernet, and similar protocols, usually IP running over Ethernet. A SAN is a collection of storage devices, servers, switches, and routers that carry data traffic via the SCSI and Fibre Channel protocols, usually SCSI running over Fibre Channel.
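Laid out as simplified protocol stacks, the comparison looks like this:

    Direct attach:  block I/O  ->  SCSI protocol  ->  copper SCSI cable
    SAN:            block I/O  ->  SCSI protocol  ->  Fibre Channel  ->  fiber
    LAN:            application data  ->  IP  ->  Ethernet  ->  twisted pair or fiber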
In the days of RFS (the predecessor to NFS), you could RFS-mount a tape drive, making that tape drive appear as if it were locally attached, but this was not a SAN by today's definition. The data was sent to and from the RFS-mounted tape drive via the UDP and IP protocols, which carry a high overhead compared to SCSI. Each block of data must be split into packets by the sending server, have a packet header put on each one, and then be reassembled by the receiving server back into a data block. A SAN allows you to use the relatively lightweight SCSI protocol to write to a tape drive or disk drive as if it were locally attached, without the overhead of IP. Again, the difference between a LAN and a SAN is the protocol they use: LANs use IP over Ethernet (or similar protocols), and SANs use SCSI, usually over Fibre Channel.
SAN or NAS?
I also thought it was important to describe how a SAN differs from a NAS. Since the acronyms look so similar, the terms are often mistaken for one another, especially because they started appearing in trade publications at roughly the same time. They actually refer to completely different things. To explain, I'll start with another history lesson.
First, there was NFS, which allowed several UNIX clients to access one file system via the LAN. Then came CIFS [5], which allowed several Windows or OS/2 clients to access one file system across the network. A Network Attached Storage (NAS) server is simply a box that is designed from the bottom up to provide nothing but NFS and CIFS services. (Originally, NAS servers provided only NFS storage, but they have recently started supporting CIFS.) You can see a NAS server in Figure 4 providing NFS services to the three servers above it via the LAN.
A NAS makes a file system on the other side of a LAN (running IP, etc.) appear as if it's locally attached. The SAN depicted in Figure 4 makes a device on the other side of the SAN appear as if it's locally attached.
Backing Up Today With SANs
Now that I've explained what a SAN is and is not, how do you actually build one, and does it make backing up data any easier?
Looking at all the SAN vendor Web sites listed on http://www.backupcentral.com/hardware-san.html could leave you quite confused trying to understand all of the possible combinations of storage available with a SAN. SAN backup applications are relatively simple, however. We want to make an NSD (the tape library) accessible to several storage initiators (the backup clients). In order to do this, we will need a SAN router that can talk SCSI on one side and Fibre Channel on the other side, such as the ones shown in Figure 5. We will need SAN hubs or switches to connect the host bus adapters (HBAs) to the routers. Finally, we will need a backup product that understands SANs. It will act as the traffic cop to all of the clients that will now compete for this central storage resource.
Once the tape libraries in Figure 5 are connected to the SAN routers, and the SAN switches or hubs are connected to both the routers and the servers, all servers see the tape drives in both tape libraries as if they are physically attached to them, because they are physically attached. Both libraries are physically attached to every server connected to the SAN. You can then do whatever you need to do on those servers to get them to recognize the tape drives. For example, on a Sun server, you would enter drvconfig and tapes. On an HP-UX server, you would enter insf. Other operating systems may require a reboot.
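As a concrete illustration, the sequence on a Sun server might look something like the following (device names are examples, and newer Solaris releases combine the rescan steps into devfsadm):

    # Rescan for the newly visible SAN-attached tape drives (Solaris).
    drvconfig
    tapes

    # The drives in the shared libraries now appear as ordinary tape devices.
    ls /dev/rmt
    mt -f /dev/rmt/0 status    # confirm the host can talk to the first drive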
Now that all drives are visible to all servers, you could have a mess if they all start trying to write to those drives simultaneously. What you need is some software to act as a traffic cop. This software is provided by your backup software vendor. (Such products will be explored in next month's article.)
These software packages assign storage resources to clients based on their needs at any particular point. This is called dynamic drive allocation, or drive sharing, depending on which vendor you are talking to. I personally prefer the term dynamic drive allocation, since drive sharing is often confused with library sharing. Library sharing is simply connecting a single library's tape drives to multiple machines, allowing them to share the library. This can be done without a SAN router. Each backup client can then share the library, but they cannot share the drives. Drive sharing, or dynamic drive allocation, actually allows two different hosts to use the same physical drive at different times, based on their needs. In the configuration in Figure 5, all tape drives in both libraries can be dynamically allocated to any server connected to the SAN. Since there is a separate path to each library, you also have redundancy. If one library, router, or switch malfunctions, drives in the working library can be assigned to the servers that need them.
How Dynamic Drive Allocation Works
Once all clients have physical access to all tape drives within the library, it is possible to dynamically allocate tape drives to the clients that need them. How does this work? Suppose that the tape library in Figure 5 has six tape drives in it. The first thing you would do is stagger the full backups of the three clients in Figure 5 so that no two systems try to do a full backup on the same night. Incremental backups can (and should) be performed every night, since they do not usually require the same amount of throughput that a full backup does.
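A staggered schedule for the three clients might look something like the sketch below (the host names are hypothetical), with every host getting a nightly incremental and the full backups rotating through the week:

    Host        Mon    Tue    Wed    Thu    Fri
    dbserver1   full   incr   incr   incr   incr
    dbserver2   incr   full   incr   incr   incr
    dbserver3   incr   incr   full   incr   incr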
On a night when no full backups are supposed to occur, each client will get an equal share (two) of the six available drives for its incremental backup. On a night when a full backup is supposed to occur, the two remaining systems will still perform incremental backups, and will get two drives each. The system doing the full backup will also get two drives. So far, things aren't much better than the pre-SAN days. However, the incremental backups will finish far sooner than the full backup. The dynamic drive allocation software can then reassign those four drives to the full backup, giving it a three-fold increase in potential transfer rate, since it can now use all six tape drives. This is the difference between a full backup that takes 24 hours and one that takes eight hours.
With a SAN, all drives get used to their fullest potential. All backups are local and don't affect the LAN. The bandwidth is limited only by the backplane of your server and the capacity of the switches, routers, and libraries you purchase. The best part, though, is that when a restore happens, the server that needs the restore can have access to every tape drive in both libraries. This allows for fast restores without having to buy a 20-drive library for every server. Life is good.
The next issue of lost+found will explore the different vendors that make SANs possible. I will discuss the different backup software vendors that offer dynamic drive allocation software, examine the different router and switch vendors that create the physical path between the servers and the storage devices, and also look at VARs that offer complete SAN solutions. Back it up, or give it up.
[1] For those playing at home, I believe this is the first time this term has been used in a publication. If you hear it somewhere else, remember you heard it here first!
[2] Please note that I use the spelling "fiber" when referring to the physical fiber-optic cable, and the spelling "Fibre" when referring to the Fibre Channel protocol.
[3] In this case, SCSI refers to the protocol.
[4] In this case, SCSI refers to the specifications of the copper physical layer over which SCSI traffic can travel.
[5] This was originally referred to as SMB, which actually refers to the protocol that CIFS uses. Incidentally, that is where SaMBa got its name, since it provides SMB services for UNIX servers.
About the Author
W. Curtis Preston is a principal consultant at Collective Technologies (http://www.colltech.com) and has specialized in designing and implementing enterprise backup systems for Fortune 500 companies for more than 7 years. Portions of this article are excerpted from his O'Reilly book UNIX Backup & Recovery. Curtis may be reached at: curtis@backupcentral.com.