Problem Solving with sar
When users complain about performance problems, sys admins may be tempted to prescribe the performance placebo of an increase in RAM. True, RAM is a very important aspect of performance health, but sometimes it has no effect. In this article, I explore the use of sar, your system's cardiac monitor, as a prelude to diagnosing and treating performance ailments in an SCO OpenServer environment.
Many different hardware and software factors influence system performance. The clock speed of the CPU, the speed of the data bus, the sizes of the L1 and L2 caches, the amount of RAM, the access time of the hard disk, and the type of serial controllers are just a few of the hardware components involved. The efficiency and type of application software on the system also color the perception of system performance. A poorly tuned system running small, efficient application programs is always perceived as faster than a well-tuned system running large hogs. However, you usually have little choice about the application software. The other software component of system performance, the kernel, gives you many choices that influence how the system performs.
The main diagnostic tools at our disposal are sar, cpusar, and mpsar. The three are essentially the same command: sar reports activity on a single-processor system, while cpusar and mpsar report activity on multiprocessor systems. Although I am not specifically discussing the multiprocessor versions of the command, most of the following discussion also applies to cpusar and mpsar.
sar examines and reports CPU, memory, I/O, and system call activity. It is actually backed by three different programs on the system: sadc, sa1, and sa2. The sadc program records binary activity data, the sa1 shell script invokes sadc at regular intervals to collect it, and sa2 generates reports from the recorded data.
Open Server does not record system performance data by default. To start accumulating performance data, execute sar_enable -y. This script file places an entry in /usr/spool/cron/crontabs/sys, which collects data every 20 minutes. Reboot the system to start collecting.
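The setup can be sketched as follows. Note that the crontab line is an illustrative guess at the schedule sar_enable installs, not copied from a live system:

```shell
# Turn on periodic activity collection (run as root). sar_enable adds
# sa1/sa2 entries to the sys crontab; reboot afterward to begin collection.
sar_enable -y

# The resulting /usr/spool/cron/crontabs/sys entry looks roughly like
# this (every 20 minutes, sa1 invokes sadc to append binary records):
# 0,20,40 * * * * /usr/lib/sa/sa1
```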
The SCO Performance Guide recommends a five-step process for performance tuning:
1. Define a performance goal for the system.
2. Collect data to get a general picture of how the system is behaving.
3. Formulate a hypothesis based on your observations.
4. Get more specifics to enable you to test the validity of your hypothesis.
5. Make adjustments to the system, and test the outcome of these. If necessary, repeat steps 2 to 5 until your goal is achieved.
I use a modified methodology that is more reactive than SCO's. My methodology assumes you do not regularly monitor the system and are only notified after the users are upset. The SCO methodology assumes you are on-site performing proactive adjustments to the kernel. Although that is how it should be done in a perfect world, my world is less than perfect. Most sites that I encounter are too small to hire a trained system administrator and only have continuing application program support from their software dealer. Operating system support is usually purchased as needed.
The SCO methodology also assumes that a hardware upgrade is the last option. That is appropriate if the technician is an on-staff salaried employee. However, if I believe that $200 worth of memory chips will do as much good for a client as a second or third round of kernel tuning, then I always recommend that the client install the additional memory. I would never recommend that a site install more memory before my first tuning, but the law of diminishing returns usually applies. The greatest performance increases usually occur during the first quick, broad strokes.
Reactive Performance Tuning
1. Look at the soft information. Listen to complaints. Try to define the true problem. Conduct the interview like a lawyer trying to arrive at the truth. Ask a question in more than one way. For example, I had a user once say, "This system is so slow when I'm working from home." After more investigating, I discovered that she actually meant her modem usually didn't make connection with the host on the first attempt - that's what slow meant to her. Admins need to be able to interpret this type of user-speak.
2. Look at the hard information. Take inventory of the system hardware and software. Observe the users working. Review the output of sar.
3. Analyze the information. Compare various alternatives given the current state of the system. For example, increasing the disk buffers on a system that is already low on RAM is not the same as increasing the buffers on a system with RAM to burn. Evaluate the complaints. A system that requires 10 minutes to boot when it is only rebooted once a month is a curiosity, not a major problem. A system that takes two days to perform the daily close of an accounting package is, however, a major problem.
4. Recommend or implement corrective steps.
5. Verify that the corrective steps worked as intended. Repeat as necessary.
Example 1 - Determining Too Little Hardware
The first example is a 40 megahertz 486SX with 16 MB of RAM and an IDE hard drive running a database application with 12 users. Not surprisingly, the users reported very sluggish response. The owner of the business firmly stated that money was too tight for a complete system upgrade. As an alternative to a complete system upgrade, their dealer suggested increasing RAM, but called me before installing it. My first guess was that the dealer was right. If the system was swapping and paging with a relatively slow IDE hard drive, then increasing the RAM would probably give the users better response at the best price. While the system would still be slow by most standards, its moderately increased performance might satisfy the users.
My guess was wrong. Increasing RAM would have been of little help at this site. The output of sar -w indicated that the system was rarely swapping. (Please refer to the sidebar for sample outputs.) Two figures indicate swapping activity: "swpin/s" and "swpot/s." The "swpot/s" field indicates how many transfers per second occurred from RAM out to the swap area. The "swpin/s" field indicates how many transfers per second occurred from the swap area back into RAM. Both figures were extremely small at this site; RAM was not the answer.
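The swap check can be sketched as a small filter. The sar -w sample below is fabricated for illustration (real column layouts vary by release); on a live system you would pipe the output of sar -w into the awk program instead:

```shell
#!/bin/sh
# Sketch: summarize sar -w swap activity. The report text is fabricated.
report='12:00:01 swpin/s bswin/s swpot/s bswot/s pswch/s
12:20:01    0.00    0.0    0.00    0.0      47
12:40:01    0.02    0.1    0.00    0.0      52
13:00:01    0.00    0.0    0.01    0.1      49'

verdict=$(echo "$report" | awk 'NR > 1 {
    ins += $2; outs += $4; n++          # sum swpin/s and swpot/s columns
} END {
    if (ins / n < 0.1 && outs / n < 0.1)
        print "little swapping; more RAM is unlikely to help"
    else
        print "significant swapping; consider more RAM"
}')
echo "$verdict"
```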
The other suspects were the hard drive and the processor. The output of sar -u, which is the default report of sar, says a lot about the performance of both. The "%wio" field indicates the percentage of time the processor sat idle while processes were asleep waiting for I/O. These processes are trying to read from or write to the hard drive, but can't because the drive is busy. The operating system puts each such process to sleep until the drive can handle its request. If "%wio" is consistently higher than 15%, then the system has an I/O bottleneck. At this site, processes occasionally waited for the hard drive, but "%wio" very rarely exceeded 15%. A faster hard drive would have increased system performance, but it was probably not the best buy on a limited budget.
A faster CPU seemed to be the best buy. The "%idle" field, which indicates the percentage of time the CPU is idle, was consistently between 2 and 10 percent. A low "%idle" does not always indicate the necessity of a CPU upgrade. The output of sar -q indicates whether the low idle time is due to normal CPU-intensive processes running on the system, or whether there is simply not enough processing power for the site. Look particularly at the "runq-sz" and "%runocc" fields: "runq-sz" is the number of runnable processes, and "%runocc" is the percentage of time that the run queue is occupied. According to SCO, if "runq-sz" is greater than 2 and "%runocc" is greater than 90%, then you should consider upgrading the processor. Both fields were within those ranges, so a CPU upgrade was indicated. The site upgraded to a 66 megahertz 486 motherboard, and the users noticed an acceptable improvement.
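The two rules of thumb above ("%wio" over 15 percent, "runq-sz" over 2 with "%runocc" over 90 percent) can be sketched as a small decision filter. The numbers below are fabricated for illustration; on a live system you would average them out of sar -u and sar -q reports:

```shell
#!/bin/sh
# Sketch: apply SCO's sar -u / sar -q thresholds. Figures are fabricated.
wio=8        # average %wio from sar -u
idle=5       # average %idle from sar -u
runq=3       # runq-sz from sar -q
runocc=95    # %runocc from sar -q

diagnosis=""
# An I/O bottleneck shows up first as high %wio.
[ "$wio" -gt 15 ] && diagnosis="I/O bottleneck"
# Low idle plus a busy run queue points at the processor instead.
if [ -z "$diagnosis" ] && [ "$idle" -lt 10 ] \
   && [ "$runq" -gt 2 ] && [ "$runocc" -gt 90 ]; then
    diagnosis="CPU bound: consider a processor upgrade"
fi
echo "${diagnosis:-no obvious bottleneck}"
```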
This is a parable of the heart and soul of performance diagnosis and tuning. Performance tuning, by definition, is efficiency. The sys admin must determine how best to use limited resources to do the most work. Think like a miser with both the owner's money and the system's resources.
Example 2 - Using sar to Diagnose Hardware Problems
Sometimes a piece of hardware appears to be broken when it isn't. Replacing the wrong component is expensive for the system owner, frustrating for the users, and embarrassing for the system administrator/consultant. Another site I worked with appeared to have serial port problems. It was a low-end system with about 12 users and 4 serial printers. When the system was first installed, the users, who were unaccustomed to the software, rarely used the system. Usage increased as they became familiar with it. They began to complain of problems that are usually indicative of faulty serial ports: screens would randomly appear to be missing characters, and print jobs would mysteriously hang or wander off task. A local technician was called to check the wiring, but it was fine.
Fortunately, I checked the system with sar before recommending that they replace the multiport cards. After reviewing the output of sar -u and sar -g, another suspect surfaced. The system appeared to be CPU bound (indicated by a consistently low "%idle" in sar -u) and possibly losing interrupts. This condition, called interrupt overrun, occurs when interrupts arrive faster than the CPU can process them. It is indicated by the "ovsiohw/s" field in the output of sar -g, which should never be more than 0. This system sometimes reported a value of 1 or 2.
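The overrun check can be sketched the same way as the swap check. The sar -g sample below is fabricated, and the column layout is an assumption; only the "ovsiohw/s" field is confirmed by SCO's documentation of this report:

```shell
#!/bin/sh
# Sketch: scan sar -g output for serial interrupt overruns.
# The report text and column order are fabricated for illustration.
report='12:00:01 ovsiohw/s ovsiodma/s ovclist/s
12:20:01      0.00       0.00      0.00
12:40:01      1.00       0.00      0.00
13:00:01      2.00       0.00      0.00'

# Count intervals in which ovsiohw/s (column 2) rose above zero.
overruns=$(echo "$report" | awk 'NR > 1 && $2 > 0 { n++ } END { print n + 0 }')
if [ "$overruns" -gt 0 ]; then
    echo "interrupt overruns detected: suspect the CPU, not the serial wiring"
fi
```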
Replacing the old dumb multiport cards with new dumb cards would not fix the problem; it would only prolong the users' frustration. Replacing the slow CPU with a faster one might, by itself, fix the problem. Upgrading the dumb multiport cards to intelligent serial cards might fix it as well. I explained both options to the users. This group was more frustrated than poor, and they wanted a sure thing. The serial problems went away after they upgraded both the CPU and the multiport cards.
This second example demonstrates a key concept of system diagnosis, performance, and implementation for an outside consultant. I could have saved this client a few dollars by silently trying one solution and then another. However, if the first solution didn't work, the operators would have been frustrated while the system limped along, and they would have been completely down again while the system was upgraded during the second attempt. After hearing an honest explanation of the problem, the possible solutions, and the potential shortfalls of each, the owner decided to implement both options. If it isn't your money or time, then explain the issues clearly in terms the client can understand, and let them decide.
Example 3 - Making Kernel Adjustments
This last example is completely different from the first two. This system had plenty of horsepower for the number of users served. It started as a 120 megahertz Pentium with 32 MB of RAM serving only 48 users connected via terminal servers. The users had complained of sluggishness, and the local dealer added an additional 32 MB of RAM, bringing the total to 64 MB. The users still complained of sluggishness, and there was talk of increasing the RAM a third time. Thus, before I started, I was fairly sure that the amount of RAM was not the problem. I set up sar and reviewed the reports. My first guess was correct (which is usually not the case); the output of sar -w indicated that nothing was swapping to disk. Every process in the system was running in RAM. Installing more RAM, obviously, was a total waste of time and money.
The sar reports did, however, indicate that there was a problem with memory - not the quantity, but the allocation. The output of sar -h indicated that the NMPBUF kernel parameter was too low on the system. The NMPBUF parameter sets the number of multiphysical buffers, which sit between memory and various physical devices. These buffers compensate for two different types of hardware deficiencies: disk controllers that do not support hardware-based scatter-gather, and DMA and peripheral devices that cannot address memory above 16 MB. You can set NMPBUF explicitly, or you can let the operating system make its best guess by leaving NMPBUF at 0. The operating system was guessing wrong on this system. The buffers are monitored through the "mpbuf/s" and "ompb/s" fields of sar -h: "mpbuf/s" is the number of scatter-gather buffers allocated per second, and "ompb/s" is the number of times per second that the system ran out of them. This system was regularly running out of multiphysical buffers.
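Buffer-pool exhaustion can be spotted with the same kind of filter used earlier. The sar -h sample below is fabricated and its column order is an assumption; the point is simply that any nonzero "ompb/s" interval means processes were stalled waiting for a buffer:

```shell
#!/bin/sh
# Sketch: watch sar -h for exhaustion of the multiphysical buffer pool.
# The report text and column order are fabricated for illustration.
report='12:00:01 mpbuf/s ompb/s
12:20:01    4.10   0.00
12:40:01    6.30   0.52
13:00:01    5.80   0.31'

# Count intervals in which ompb/s (column 3) rose above zero.
exhausted=$(echo "$report" | awk 'NR > 1 && $3 > 0 { n++ } END { print n + 0 }')
if [ "$exhausted" -gt 0 ]; then
    echo "pool ran dry in $exhausted interval(s): consider raising NMPBUF"
fi
```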
Their DMA and most of their peripheral devices could access memory above 16 MB; however, the disk controller could not perform hardware scatter-gather read/write operations. Scatter-gather increases throughput by combining many small requests into a few larger requests. Hardware-based scatter-gather allows the controller card to perform these operations without additional strain on the CPU. Replacing the controller card was one solution. However, the OS could easily compensate for the existing controller card: its 16 KB scatter-gather buffers could do the job using the excess RAM.
There are two important things to know about multiphysical buffers: they consume a noticeable amount of system RAM, and running out of them is very expensive in performance terms. When the operating system performs software scatter-gather operations and runs out of multiphysical buffers, it puts the calling process to sleep until a buffer becomes available. I raised the value of NMPBUF, relinked the kernel, and waited for a few days. The users noticed that the system was more responsive.
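The adjustment itself can be sketched as follows, assuming the stock OpenServer kernel-relink tools under /etc/conf/cf.d; the exact menu names and flags vary by release, so treat this as an outline rather than a recipe:

```shell
# Run as root. configure(ADM) changes tunable parameters such as
# NMPBUF; link_unix rebuilds the kernel with the new values.
cd /etc/conf/cf.d
./configure      # interactive: select the buffers category, set NMPBUF
./link_unix -y   # relink the kernel, answering yes to the prompts
# Reboot so the new kernel takes effect.
```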
Another thing degrading performance at this site was the number of files in the /tmp directory. A process on the system was creating zero-length files in /tmp, but not erasing them when it exited. Several thousand files were already in the /tmp directory, and the number was growing daily.
I erased the contents of the /tmp directory. Then, because directories never shrink, I removed the /tmp directory and recreated it. Finally, I added rm /tmp/* to the rc file. This automatically clears the /tmp directory when the system is rebooted.
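The boot-time cleanup can be sketched as a small rc fragment. In the real rc file you would operate on /tmp itself; here the logic is wrapped in a function and exercised on a scratch directory so the sketch can be run safely anywhere:

```shell
#!/bin/sh
# Sketch of a boot-time /tmp cleanup. In an rc script you would call
# clear_tmpdir /tmp; the scratch directory below is just a safe demo.
clear_tmpdir() {
    dir=$1
    # Remove and recreate the directory so the directory file itself
    # shrinks back; directory files never shrink in place.
    rm -rf "$dir"
    mkdir "$dir"
    chmod 1777 "$dir"    # world-writable with the sticky bit, like /tmp
}

scratch=/tmp/cleanup_demo.$$
mkdir -p "$scratch"
touch "$scratch/a" "$scratch/b" "$scratch/c"
clear_tmpdir "$scratch"
leftover=$(ls -A "$scratch" | wc -l)
echo "files remaining: $leftover"
rmdir "$scratch"
```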
Estimating the Cost/Benefit of Hardware Upgrades
How do you persuade someone to spend money after you use sar to determine what is needed? Use a cost/benefit analysis. A good cost/benefit analysis has two key features: estimates are given in ranges and the sources for the estimates are clearly documented.
Let's return to the first example - a system with 12 users needing a new motherboard. First, estimate the percentage of time the users are actually using the system. Don't assume they use the computer every second of the day - that's not realistic. Talk to them; observe them working. Operators usually overestimate time spent working on the computer. Suppose you estimate that they devote 40 to 60 percent of their work week to the computer, and that 5 to 10 percent of their computer time is wasted waiting on the slow system. The low estimate is 9.6 hours (12 users * 40 hours * .40 time at computer * .05 time waiting) of waste per week. The high estimate is 28.8 hours (12 users * 40 hours * .60 time at computer * .10 time waiting) of waste per week. Finally, translate hours to dollars. Find the average cost per hour of the operators. This is not their rate of pay (unless they are contract labor); however, you can use pay rate as a rough estimate. Let's add 40% for insurance, taxes, employee benefits, and manager supervision. For example, if the operators make $10 per hour, then estimate the cost per hour as $14 per hour. Given these estimates, the sluggish computer is costing the owners of the company anywhere from $134.40 (9.6 hours * $14/hr) to $403.20 (28.8 hours * $14/hr) per week. If the new motherboard costs around $500, then the break-even point occurs within roughly one to four weeks.
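The arithmetic above can be sketched in a few lines of awk, using the figures from this example:

```shell
#!/bin/sh
# Sketch of the cost/benefit arithmetic, with awk doing the
# floating-point work. All figures come from the example above.
summary=$(awk 'BEGIN {
    users = 12; week = 40; rate = 10 * 1.40   # $/hr with 40% overhead
    low  = users * week * 0.40 * 0.05         # hours wasted, low estimate
    high = users * week * 0.60 * 0.10         # hours wasted, high estimate
    printf "waste: %.1f to %.1f hours/week\n", low, high
    printf "cost: $%.2f to $%.2f per week\n", low * rate, high * rate
    printf "break-even on a $500 board: %.1f to %.1f weeks\n",
           500 / (high * rate), 500 / (low * rate)
}')
echo "$summary"
```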
System Performance and Gemini
According to Greg Forest of SCO, there will be a few changes to system tuning in the next release of SCO UNIX, Gemini. First, there will be more dynamically configured parameters in Gemini because it is based on SVR4 code. Dynamically configured parameters are kernel parameters that can be adjusted while the system is running. Next, Gemini will ship with preconfigured defaults for different types of systems. For example, if you tell it during installation that it will be a database server, then Gemini will select kernel defaults appropriate to a database server.
Other Sources of Information
This article is only a starting point. It is a very basic introduction to system performance, configuration, and diagnosis using sar. For more information, I highly recommend the Performance Guide in the SCO system administrator's documentation set and System Performance Tuning (ISBN 0-937175-22-6) by Mike Loukides, published by O'Reilly.
About the Author
Don is president of Quality Software Solutions, Inc. and the author of TimeClock, TimeClock Lyte, DbDelta, and the Property Presentation System.