
Improving Performance with Web Server Clusters

Francisco M. De La Vega

Web servers are being used throughout corporations as the ultimate user interface. Through corporate Web connections, customers, partners, and internal users have access to a variety of documents and information assets critical to business processes. Internet users can reach online brochures, catalogs, employment offers, customer service desks, and so on. Business partners can access privileged data after appropriate authentication through the corporate Extranet. Employees can interact with corporate databases, data warehouses, transactional systems, groupware and workflow applications, HR documents, and help desks, to name just a few. Each of these Web applications has its own design and architecture constraints, as well as its own set of performance and management problems.

Because of the mission-critical status Web services are attaining within corporations, there is concern about the performance factors that determine the throughput, reliability, and availability of Web servers. There is also concern about how the administrative overhead, routine operation, and configuration of these systems affect corporate IS groups. The experience gained during the past couple of years on this subject provides a technical rationale for the capacity planning and configuration of Web servers. Recent products have profited from that experience and are providing more efficient, manageable servers. New standard protocols (e.g., LDAP) are also providing some relief from the administrative burden of operating distributed Web servers in large corporate networks.

Web Service Architectures

Several architectures can be proposed in the design of Web-based services. At one extreme is the centralized architecture usual for Internet services, where most of the information is provided by one large server that runs many HTTP and server-side processes and contains all the data. As more requests need to be handled, there is the option of upgrading the server to a large multiprocessor system, or of adding individual servers to form a server farm that works as a single logical cluster from the Web browser's perspective. At the other extreme, a distributed network of heterogeneous Web servers (each close to the data source it serves) is typical of very large organizational intranets. In this situation, servers providing services to large numbers of users (e.g., database access) have more capacity, whereas workgroup servers providing information to specific teams can be accommodated on any spare platform.

A Web-based service comprises several items:

  • The Web server per se.
  • Content, in the form of HTML, graphics, sound files, and other multimedia elements.
  • Server-side processing, such as CGI scripts or application servers linking remote or local databases in a three-tier scheme.

Performance and management considerations for each of these items vary due to their different natures. For example, the administrative overhead of managing a distributed Web server architecture is much larger than that of a single large centralized server. For various reasons, large Web server installations tend to distribute the service load among servers specialized for specific tasks, such as HTTP service, CGI processing, media handling, and database management. This increases performance and security because each server can be configured appropriately for the specific task it is performing, and the number of processes running per server can be kept to a minimum.

Installations with diversified data sources tend to provide Web serving close to the data source, allowing faster updating of the information and easier document publishing by authors. Thus, system administrators of medium-to-large networks often deal with administrative tasks related to the configuration, operation, and access control of a number of Web servers on heterogeneous platforms and locations across the organization. From this perspective, it is desirable to consolidate user access management, monitoring, and configuration onto a few platforms in a cluster-like fashion. This cluster concept is different from computer system clustering for parallel or fail-over applications; it pertains to the logical perspective of administration.

Performance Rationale

HTTP is an application-level, object-oriented protocol, currently implemented over TCP/IP. HTTP is stateless, and as such, HTTP transactions are short-lived and not particularly well suited to the connection-oriented TCP protocol. The vast majority of HTTP traffic carries small pieces of information, HTML and graphics files averaging about 15 KB. Thus, Web servers are required to handle many more connection setups and teardowns than one might expect. However, actual per-server usage is not especially heavy and does not impose a severe load. Most typical Internet sites handle a few thousand hits per day, whereas busy ones process a million or more hits per day. Access distribution is different for Internet and intranet servers: the former are usually accessed around the clock, whereas the latter are usually accessed during the business hours of the host location. Thus, the peak rate for a typical intranet site is only a few connections per second [6].

The first generation of HTTP servers forked a server process for every incoming HTTP request. While this implementation is much easier to program, the server spends twice as much CPU time creating and destroying the daemon process as the daemon itself spends processing the request. This is true for the CERN server and the NCSA server up to version 1.4. Second-generation servers reduce the overhead of the forking mechanism through the use of a thread model: each HTTP process manages several threads capable of parallel execution, thus consuming much less CPU time than forking servers. Keep-alive servers provide a further performance increase by allowing multiple file transfers over a single HTTP connection, as described in the HTTP 1.1 protocol specification. This noticeably reduces the overhead associated with transferring all the elements included in a single Web page. This kind of server can service up to 250-300 connections per second, and most recent Web servers adhere to this specification (e.g., Netscape Enterprise 3.0).
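
To make the thread-per-connection and keep-alive ideas concrete, here is a minimal sketch using Python's standard library as a modern stand-in (the servers of the era implemented the same idea in C). The port number and document body are arbitrary choices for illustration:

from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

# Thread-per-connection server: each request runs in its own thread,
# avoiding the fork/exit overhead of first-generation servers.
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent (keep-alive) connections

    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadedHTTPServer(("", 8080), Handler).serve_forever()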

Therefore, from the HTTP point of view, most individual systems can easily handle the load associated with HTTP service, and the performance bottlenecks lie elsewhere. Efforts to increase Web server performance must correctly identify the real limiting steps involved in servicing client requests. The small size and request/response nature of HTTP transactions place a premium on transport and server processing latency.

The available network bandwidth is limited on most servers by the network interface. If a server has a single 10BaseT interface, it is limited to about 1 MB/s of usable throughput. If 13 KB is used as the average document served, the server is limited to about 77 operations per second, well below its theoretical capacity. For Internet servers, the bandwidth problem is worse because most Internet sites use a T1 line limited to 1.5 Mb/s. Even large sites equipped with a 45 Mb/s T3 line are limited to a maximum of about 400 hits/s [7].
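
These figures are easy to check: the sustainable request rate is simply usable bandwidth divided by average document size. A quick sketch (the 13 KB average and the usable-throughput estimates come from the text above):

# Requests per second a link can sustain = usable bandwidth / document size.
AVG_DOC = 13 * 1000  # 13 KB average document, per the text

def max_ops_per_sec(usable_bytes_per_sec):
    return usable_bytes_per_sec / AVG_DOC

print(max_ops_per_sec(1_000_000))       # 10BaseT, ~1 MB/s usable: ~77 ops/s
print(max_ops_per_sec(1_500_000 / 8))   # T1 at 1.5 Mb/s: ~14 ops/s
print(max_ops_per_sec(45_000_000 / 8))  # T3 at 45 Mb/s: ~430 ops/s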

Perceived performance is actually related to the time required to service a request, in which network latency plays a major role. Do not confuse network latency with network bandwidth; the two do not correlate linearly. Each network router on the path between client and server adds latency on the order of 1-2 ms for a fast router, and up to 10 ms for a slow one. For a complex corporate WAN, this can easily add 100-500 ms to packet transmission times (even with plenty of available bandwidth) and cause noticeable delays. Proxies and firewalls are network elements that, if not properly configured, can also add significant latency. However, security may dictate a different cost/benefit perspective, and some overhead must be tolerated.

Finally, access to Internet servers will suffer from network latency problems associated with the random nature of the usage. This behavior typically creates spikes, or so-called congestion storms, throughout the day [2]. This factor is out of our control for the moment, but locating latency bottlenecks and alleviating them within corporate intranets is an attainable goal.

Server-side Processing

Another major performance concern is the server-side processing that many HTTP requests generate. Most of the resources used by Web services are associated with processes behind the HTTP server. A typical example is the CGI scripts used to produce dynamic content on Web pages. Search engines (which can in turn be implemented as CGI scripts) are also very demanding applications and are common on Internet and intranet sites. A CGI script is usually spawned by the HTTP server upon receiving a client request for dynamic content. CGI scripts can be implemented in a variety of languages, ranging from shell and Perl scripts to C/C++ executables and Java servlets. The resources used by these scripts can be significant, especially if each use of the script requires spawning a child process. Like forking HTTP servers, CGI scripts implemented in shell and Perl can generate a lot of overhead on a heavily used server, thus accounting for most of the perceived server response latency on fast intranets. On the other hand, server-side services implemented in C/C++ and Java can listen for requests as persistent processes and benefit from programming models like threading, thereby increasing performance noticeably. These services can be designed as CGI scripts, or built with server-specific APIs (e.g., NSAPI, MSAPI, Apache modules). Perl is an ideal language for implementing the scripts used in many common server-side processes, but Perl execution requires invoking the Perl interpreter. The advent of a Perl compiler would enhance Perl's performance in this arena.
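
For illustration, here is a minimal CGI program of the kind described above, written in Python rather than shell or Perl purely for consistency with the other sketches in this article. The server forks and executes it once per request, which is exactly the per-request process-creation cost at issue:

#!/usr/bin/env python3
# A CGI program: the HTTP server forks and executes this once per request.
import os

print("Content-Type: text/html")
print()  # a blank line ends the CGI response headers
print("<html><body>")
print("<p>Query string: %s</p>" % os.environ.get("QUERY_STRING", ""))
print("<p>Served by PID %d (a new process for every request)</p>" % os.getpid())
print("</body></html>")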

Aside from the implementation details of server-side scripts, the overall architecture of Web-mediated data access applications is a common source of server latency. The common multiple-tier model used to access legacy applications often has several intermediate steps that are more prone to performance problems than the actual Web server interacting with the clients. The tendency in many organizations is to Web-enable legacy applications quickly while preserving the original architecture. This gets things up and running quickly, but often yields systems with too many intermediate interfaces that add considerable latency.

For example, library automation systems have been evolving as client/server applications, usually running on relational DBMSs. The Z39.50 protocol was developed to enable interoperation between different library systems. When the Web revolution caught up with library systems, a Z39.50/HTTP interface was added to allow quick Web access. The latter interfaces are often implemented as CGI scripts (even shell scripts!). The result is a very slow GUI to the library database system, even for LAN access. The lesson here is to avoid quickly hacked legacy access systems. Three-tier systems can be developed and deployed more efficiently using Java servlets or compiled C++ binaries in the form of application servers; such compiled binaries are much more efficient alternatives to the CGI model. Furthermore, there are now fast GUI-based rapid development environments for middle-tier application servers, such as Bluestone's Sapphire/Web, which allow programming and deployment of application servers in Java or HTTP server-specific APIs without spending too much time writing code [5]. This kind of tool allows fast development of Web-enabled database applications without sacrificing performance or scalability.

Web Server Configuration and Capacity Planning

Web server capacity planning is straightforward, since most servers can handle the usual load involved. A target rate of about 10,000 connections per day for intranet servers, and about 20,000 for Internet sites, should be enough for a new service. Normalize these figures to hits per second according to the expected usage pattern (i.e., 24 hours vs. 8 hours), then multiply the rate by four to account for peak periods. This gives numbers on the order of one to a few connections per second for most situations. Assuming no other load, a small UNIX server (e.g., a SPARCstation 5) can accommodate this. If you plan for an order of magnitude or more hits per second, or for intensive server-side processing, make sure you have a Fast Ethernet, FDDI, or ATM network interface, or at least install multiport Ethernet cards.
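
The rule of thumb reduces to a small calculation (a sketch; the traffic targets and fourfold peak factor come from the text above):

# Normalize daily hits to a sustained rate, then multiply by four for peaks.
def peak_conns_per_sec(hits_per_day, service_hours, peak_factor=4):
    return hits_per_day / (service_hours * 3600.0) * peak_factor

print(peak_conns_per_sec(10_000, 8))    # intranet, 8-hour day: ~1.4 conns/s
print(peak_conns_per_sec(20_000, 24))   # Internet, around the clock: ~0.9 conns/s
print(peak_conns_per_sec(200_000, 24))  # an order of magnitude more: ~9 conns/s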

Server configuration involves adjusting kernel parameters for the kind of load to which a Web server is subject. The accompanying sidebar describes the usual server tuning recipes. Monitoring the performance of a specific configuration is important to verify that the capacity planning was correct, and to decide whether further configuration tweaking or a server upgrade is required (see Figure 1).

Minimally, your performance monitoring activities should include the following steps:

  • Check server load and TCP/IP performance on your system.

  • Monitor server port activity (port 80 on many systems). Some Web server software has a monitoring service to check access, errors, and other parameters in real-time (see Figure 1).

  • Monitor HTTP server availability and latency from an external site by periodically requesting a small test document (a minimal probe along these lines is sketched after this list). The full latency of server-side processing can be monitored by periodically issuing requests that require such processing (e.g., database access).
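
As an example of the third step, the following sketch times a complete request/response cycle for a small test document. The URL is a placeholder; in practice you would run such a probe from cron and log the results:

import time
import urllib.request

# Fetch a small test document and time the full request/response cycle.
def probe(url, timeout=10):
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            return resp.status, time.time() - start
    except OSError:  # DNS failure, refused connection, timeout, HTTP error
        return None, time.time() - start

status, elapsed = probe("http://www.example.com/test.html")  # placeholder URL
print("status=%s latency=%.3fs" % (status, elapsed))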

Coping with Administrative Overhead

The administration of several Web servers distributed across the organization causes a certain amount of administrative overhead due to the need to maintain access control lists and configuration files, to perform log monitoring and analysis, and to watch the status of each server. Often, these servers are installed on diverse platforms (several UNIX flavors and Windows NT). While an IS group would be concerned with the general administration, security, and reliability of these servers, local user groups would be concerned only with publishing documents through these servers, or with updating the databases made available by the Web service. Thus, the administration responsibility may be shared by the IS group and some of the users. Access to each server's material would be dictated by the local user group and would be different in every case. Some corporate users would be granted access to certain pages, while others would not. Maintaining different user databases and configuration files under these circumstances leads to significant duplication. On the other hand, server monitoring becomes complex as the number of servers increases if the only way to check status is by inspecting process status and log files on each server.

New and old open protocols and distributed administration strategies are being applied to circumvent these problems. A key service that solves many of the dilemmas faced when administering access control lists for different corporate services is a directory database containing all user profiles and authentication information, such as passwords and cryptographic certificates. Powered by the now widely accepted LDAP (Lightweight Directory Access Protocol, a simplified version of the X.500 directory service), diverse applications can retrieve the user and access control information needed to implement controlled access. The promise of LDAP is the interoperability of all consumers and providers of user information regardless of platform or vendor (there are nevertheless some caveats; see below), thus consolidating these databases. New Web servers and other intranet applications that can access LDAP servers give administrators a powerful tool for reducing administration overhead (see my article on LDAP in the September 1997 issue of Sys Admin). Netscape Suitespot 3 servers are an example of services consolidated by an LDAP server. Netscape Enterprise 3 Web servers can use this service to set up access control lists for Web documents, distributed server administration, and server-to-server interaction.
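
The kind of lookup an LDAP-enabled server performs when authenticating a user or evaluating an access control list can be sketched with the python-ldap module. The host name, DNs, credentials, and directory layout below are placeholders, not a prescribed configuration:

import ldap  # the python-ldap module

conn = ldap.initialize("ldap://directory.example.com")  # placeholder host
conn.simple_bind_s("uid=admin,o=Organization,c=US", "secret")  # placeholder DN/password

# Look up a user entry, as a server would when checking an access control list.
for dn, attrs in conn.search_s("o=Organization,c=US", ldap.SCOPE_SUBTREE,
                               "(uid=jdoe)", ["cn", "mail"]):
    print(dn, attrs)
conn.unbind_s()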

Netscape Enterprise Server Cluster Administration

Netscape Suitespot servers are managed by a specialized HTTP server running as root and bound to an arbitrarily chosen TCP port. Thus, server deployment, configuration, and monitoring can be performed from Netscape Navigator using JavaScript-enhanced HTML forms and Java applets, after simple username/password authentication of the administrator. Because root privileges are needed to add, delete, and modify files under the Netscape server root (/usr/netscape/suitespot by default), access to the administration server is initially restricted, for security reasons, to browsers running on the same host. You can open access for remote administration to specific IP or DNS addresses; just be aware of the effect of IP spoofing on security. Suitespot v3 relies on an LDAP directory server to store all user information. This server can be local or remote and is configured at installation, but it can be changed at any time later from the General Administration console.

The LDAP tree also stores information about the Netscape servers and their administrators. The Suitespot administrator user should be granted full access permissions on the LDAP server in order to modify users and attributes. Access as the "Directory Manager" user can also be provided.

The General Administration console centralizes the administration of all the Suitespot servers found on a particular machine. Thus, it is possible to have an Enterprise Web server and an LDAP server on the same system, and Netscape provides the option of distributing administrative duties so that one user is responsible for the Web server and another for the LDAP server. This is enabled by activating the Distributed Administration feature and specifying an LDAP administrators group (by default, cn=Administrators, o=Organization, c=US).
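
Granting a user these rights then amounts to adding the user's entry to that group in the directory. A hedged sketch with python-ldap follows; the DNs, credentials, and the uniquemember attribute (typical of a groupOfUniqueNames entry) are assumptions about the directory layout, not Netscape-documented steps:

import ldap

conn = ldap.initialize("ldap://directory.example.com")  # placeholder host
conn.simple_bind_s("uid=admin,o=Organization,c=US", "secret")  # placeholder DN/password

# Add a user to the default administrators group (assumes a
# groupOfUniqueNames entry with a uniquemember attribute).
conn.modify_s("cn=Administrators,o=Organization,c=US",
              [(ldap.MOD_ADD, "uniquemember", b"uid=jdoe,o=Organization,c=US")])
conn.unbind_s()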

Users who belong to this group can be granted some administrative privileges on the administration server by means of access control lists (see Figure 2). The users of the system can also be allowed to update their own profiles: when they point their browsers at the administration server port, after authentication they get only a page for editing their own profiles. More granular control over which specific attributes users can change in their own profiles can be set in the LDAP access control lists.

When services are distributed across several platforms, you would normally need to access each server's administration console to configure and monitor it, even when using LDAP as a central user database. It is desirable to accomplish the major administrative tasks from a single console, and for this reason Netscape provides a server cluster administration feature.

In this context, a Web server cluster is a group of Web servers that can be administered from a single Netscape Suitespot administration console. In theory, Suitespot supports clustering of any type of server administered by the Suitespot console (e.g., calendar, LDAP, catalog, etc.), with the constraint that a cluster should consist of servers of similar type, though a single administration server can handle several clusters. In practice, however, only Netscape Enterprise Web servers can use this feature. Although the administrator accesses only the master administration console to manage the server cluster, that console must be able to reach the remote administration servers. Therefore, the administrator authenticates to the master console, and authentication credentials travel through the network to the remote administration servers as needed. To enhance security, SSL can be used to encrypt all traffic between the master and slave administration servers. Servers in a cluster can share configuration files, their status can be monitored, they can be stopped or started, and their log files can be analyzed, all from a single administrative location.

To set up a server cluster using Netscape Suitespot servers, the following steps are needed [3]:

  1. Install the servers on their corresponding remote platforms, including their administration servers. All servers should be version 3.x.

  2. Configure access. A central LDAP server should be used to manage access information and authentication for the administered servers and consoles. Distributed administration can be set up to ease access. Either the administrative user name and password are set identically on all administration servers, or users are added to the Administrators group in the LDAP tree and Distributed Administration is set up accordingly (see Figure 2).

  3. Add servers to the cluster database. Click on "Cluster Management" in the General Administration panel, and then on "Add Server". You need to provide the URL, port number, and protocol of each remote administration server. If you change the configuration of a remote server, do not forget to restart its administration server.

When finished, you should be able to manage the server cluster from the Server Manager forms. Choose "Cluster Management/Cluster control" and select "HTTP Servers". The cluster control panel allows you to check status, load configuration files, retrieve log files, and start or stop any or all of the servers. After a command is issued from this window, a tracking window appears, reporting the status of the commands as they reach the remote servers (see Figure 3). When you request log files, you can read their contents remotely.

Conclusion

Web clusters ease the management of distributed Web servers or server farms by allowing all the major tasks to be performed from a single administrative interface. However, this facility provided in Netscape Suitespot servers is not supported by many of the current shareware or commercial Web servers. Thus, in heterogeneous environments, where distributed servers from different vendors would benefit from centralized administration, an open standard protocol for server administration is desirable. Such an open standard for remote administration already exists: SNMP. Some Web servers can already be monitored by remote SNMP consoles (e.g., Netscape Suitespot). This can integrate the Web servers into enterprise network monitoring, but more detailed configuration and management is still not possible with most products. It is unfortunate that the possibility of using SNMP to control Web servers has not been fully exploited.
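
As a hint of what such SNMP-based monitoring looks like, the following sketch polls a server's uptime with the pysnmp library (a modern Python implementation used here purely for illustration; the host, community string, and use of the generic SNMPv2-MIB are assumptions):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll the server's uptime over SNMP (placeholder host and community string).
errInd, errStat, errIdx, varBinds = next(getCmd(
    SnmpEngine(), CommunityData("public"),
    UdpTransportTarget(("www.example.com", 161)), ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0))))

if errInd:
    print("SNMP error:", errInd)
elif errStat:
    print("SNMP error status:", errStat.prettyPrint())
else:
    for name, value in varBinds:
        print(name.prettyPrint(), "=", value.prettyPrint())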

Other problems in centralizing the management of intranet services derive from the difficult interoperability of LDAP clients and servers. Although LDAP promises interoperability between user databases and their consumers, the extensions vendors make to the LDAP schema to provide special object classes turn configuration and interoperability into a complex issue. For example, every time a new Suitespot server is installed, the object classes and attribute schema of the LDAP directory server must be updated to cover the new server's needs. This update is made directly in the software configuration files, not through LDAP operations. Thus, if the directory server is remote or from another vendor, considerable manual work may be needed for setup.

Changes in the way access control lists are implemented between Netscape Directory Server v1 and v3 also cause problems. For example, the Suitespot Calendar server is hardwired to work with Directory Server v1, and it fails when v3 is used: the "subtree aci" attribute is used for access control lists in v1, whereas simply "aci" is used in v3 (still in beta at this writing). This is one example of the problems LDAP servers from different vendors must overcome to ensure interoperability and the desired reduction in administration overhead.

From the above discussion, it is clear that the HTTP server process itself does not pose many performance problems, and that performance is most often a function of server-side processing, network bandwidth, and latency. So-called combo servers providing several services in one (e.g., Web, mail, FTP, and NFS) will suffer from the performance drain exerted by the other processes; distributing processes enhances performance and provides a more secure operational framework. Finally, the procedures I have outlined above will get you started on the road to improved Web server performance.

References

  1. How to Tune Solaris for Enterprise Server Performance. Netscape Communications Corp. Bulletin available at: http://help.netscape.com/kb/server/966513-73.html
  2. Huberman, B.A. and Lukose, R.M. 1997. Social dilemmas and Internet congestion. Science, 277:535-537.
  3. Managing Netscape Servers. On-line manual included in Netscape Suitespot products. Netscape Communications Corp.
  4. Mosedale, D., Foss, W., and McCool, R. 1997. Lessons Learned Administering Netscape's Internet Site. IEEE Internet Computing, 1(2): 28-35.
  5. Sapphire/Web Application Servers. Whitepaper. Bluestone Software Inc. Available at: http://www.bluestone.com/products/sapphire/papers/appserv050897.htm
  6. Wong, B.L. 1997. Configuration and Capacity Planning for Solaris Servers. Sun Microsystems Press, Prentice Hall.
  7. Wong, B.L. September, 1997. Sizing up your Web server. SunWorld On-line. Available at: http://www.sun.com/sunworldonline/swol-10-1997-sizeserver.html

About the Author

Francisco M. De La Vega is a Sr. Systems Consultant working in the San Francisco Bay Area for Bluestone Consulting, Inc. (http://www.bluestone.com/consulting), a leading IT consulting company based in NJ. He works on UNIX system administration, Internet/Intranet architecture, Web development, and computer security. He can be reached at fvega@computer.org or http://www.paco.com.mx.