


Securing Apache

Kyle Dent

When your Web site is made available to the world or even just to your network (your network might be your world), you are providing not only all the information and features you envisioned for your masterpiece of usability and design, but also a portal from which unsavory characters might launch an attack on your system. In this article, we will look at some of the steps you can take to tighten up your defenses. But be aware that this discussion is only one small part of your overall network security. You must start with a general security plan that covers any number of potential vulnerabilities. It is not worth your effort to secure your Web server when any unsophisticated, would-be hacker can telnet to the same system with a guest account, gain root access, and use your applications and files as a private playground. There are important security concepts, such as backups, auditing, monitoring, and networking, that are critical to the safe operation of your system. Two topics commonly discussed in Web server security, user authentication and SSL, were covered in the February issue of Sys Admin, so I'll focus here on defensive Web administration for the Apache Web server.

In implementing any of these measures, you must always consider the trade-offs between security and cost/convenience. Consider which of the techniques described here will fit into your overall security plan and make sense for the work you and your users are trying to accomplish. Also consider what you are protecting. If you save customers' credit card numbers or other private information, then the strictness of your security plan should reflect that. From the outset, Apache provides some means of protection through its configuration. We have the ability to prevent users from overriding directory settings and to prevent the Web server from following symbolic links, displaying directory listings, and allowing Server Side Includes. We should avail ourselves of all of these protections by including the following lines in the Web server configuration file:

<Directory />
AllowOverride None
Options None
allow from all
</Directory>

Disallowing symlinks prevents users or attackers from creating links within their document areas to other areas of the system that are not meant to be displayed through the Web server (/etc/passwd, for example). This is less of an issue if you run your Web server in a chroot (see below), and you may want to leave symlink following turned on for performance reasons. Again, you must make these decisions based on your security posture and the necessity to meet your users', customers', or boss's requirements. Disallowing directory indexes is a simple way to prevent outsiders from having too much information about your site. It is true that the listing will not be displayed if there is an index file in place, but if that file is renamed or accidentally removed, attackers will have the opportunity to pick up whatever information they can use against your site. Server Side Includes are a risky proposition without providing much value to your site. They also don't do much for your Web server's performance. Unless you have some compelling reason to use this feature, you should consider alternatives. If you need to turn on any options or overrides for a specific area of your Web site, you can do that within another <Directory> section for that directory only.
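For example, if one directory holds files you do want listed, you could re-enable indexes for just that area; the path here is only an illustration:

<Directory /usr/local/web/site/htdocs/downloads>
Options Indexes
AllowOverride None
</Directory>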

Once you have established your Web server's configuration (or a portion of it), you can look at how to protect the configuration and other files from inadvertent or malicious changes. One important security principle that pervades my approach to defensive Web administration is the concept of least privilege. The idea behind least privilege is that access is granted only to those users or processes that actually need it to get the job done. We'll apply this concept first to file permissions.

Many experts recommend that you create an account or a group specifically for Web server administration. Users who are responsible for Web administration will be provided access to the account or added to the group. This might make sense in a situation where several individuals all work on Web server configuration, but do not otherwise have privileges on the system. More likely, however, the people who handle Web administration also deal with system administration and will have access to the root account. If you run the Web server on the standard port 80 or 443 for SSL, you'll need root access anyway to start it up. Another advantage to using the root account is that your normal procedures for using root will be in place for Web administration as well, providing logging and accountability when administrators make configuration or other changes to the Web server.

You must also choose a login id for the Web server process. As indicated, you must be root to start the server, but the configuration file allows you to specify an id for the daemon to switch to for its normal operation (after start up). By default, Apache is configured to run as the user "nobody". However, if there are other "nobody" processes on your system, they might have the same access as the Web server, which is probably not a good idea. A better idea is to create an id and a group used by the Web server exclusively. Let's assume that the configuration files are owned by root and that you've created an id "http" and a group "http" for the Web server itself. Make sure that "http" cannot actually log in on the machine. One way to do this is to set the account's shell to something invalid such as /dev/null.
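On systems that provide the groupadd and useradd utilities, creating the group and account might look like the following (the home directory is just the layout used later in this article):

groupadd http
useradd -g http -d /usr/local/web -s /dev/null http

The corresponding directives in httpd.conf are then User http and Group http.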

Now that we've established who should own the files, we can determine who should have access to them. Least privilege indicates that configuration files should be writable by root and readable by the Web server, but otherwise inaccessible to anyone else. So, for example, if the Web server runs as "http", the permissions on your Apache daemon and configuration file would look like the following:

---x------ root http      httpd
-rw-r----- root http      httpd.conf

If you use other configuration files, they should be set the same as httpd.conf. Thus, if attackers target your system, it will be difficult for them to study your site's setup or modify it. By restricting access to httpd, it will be difficult for somebody to substitute your daemon with a doctored-up version. The Web site files themselves are probably meant to be available to the world. However, if you want to make sure that your documents are only accessible through the Web server or if you have protected documents requiring a password through the Web server, you will probably want to tighten security a bit on the files as well. Certainly your CGI programs warrant protection from unauthorized changes and should be guarded carefully.
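One way to arrive at that listing, assuming those file names, is shown below. (Some older chown implementations expect "root.http" rather than "root:http".)

chown root:http httpd httpd.conf
chmod 100 httpd          # execute only, and only by root
chmod 640 httpd.conf     # read/write for root, read for group http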

Users who are responsible for content should own the files and be able to write to them, and the Web server will need read access. You should have a file listing like the following:

-rw-r----- jsmith    http index.html
-rw-r----- jsmith    http blueball.gif

CGI scripts should also be readable and writable by the account of the user responsible for them, and the Web server must be able to execute them. For interpreted programs, such as shell or Perl scripts, the Web server will need both read and execute permissions. Compiled binaries won't require group read access.

-rw-r-x--- jsmith    http script1.pl
-rw-r-x--- jsmith    http script2.sh
-rw---x--- jsmith    http script3
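The equivalent commands for those files, using the same hypothetical names, would be along these lines:

chown jsmith:http index.html blueball.gif script1.pl script2.sh script3
chmod 640 index.html blueball.gif     # readable by the Web server
chmod 650 script1.pl script2.sh       # read and execute for the server
chmod 610 script3                     # execute only for the server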

Although we've set restrictive permissions on CGI programs, they still present a serious security concern for the Web administrator. All of your carefully crafted configuration files and server installation can be subverted in an instant with a single mischievous or carelessly written CGI program. You should seriously consider requiring a code and security review of all CGI programs installed with your Web server. This is not always practical, of course, so let's consider some other steps you can take to lessen your exposure.

If you are hosting multiple sites, or even multiple directories for various users, consider implementing one of the wrapper programs for executing your CGI applications. These programs allow each script to run as the user responsible for it rather than as the account of the Web server. Apache includes a wrapper called suEXEC that provides exactly this function. Another option is CGIwrap, which provides some additional security checks as well as a method to change the id under which a CGI program runs. If scripts need to write to files, those files can be owned by the user responsible for them, and they will be accessible through that account only. If it is not possible to use a wrapper, files that must be written by the CGI program should be owned by the responsible user and set to group "http" with group write (and read if necessary) permissions set.
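For example, with suEXEC compiled in and installed, CGI programs belonging to a virtual host are executed as the user and group named in that host's configuration. The host name and ids below are hypothetical:

<VirtualHost www.example.com>
User jsmith
Group http
DocumentRoot /usr/local/web/site/htdocs
ScriptAlias /cgi-bin/ /usr/local/web/site/cgi-bin/
</VirtualHost>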

Whether or not you use a wrapper, you should limit CGI execution to specific directories. Although Apache can be configured to allow execution of arbitrary files based on their extension, doing so may open doors that might not otherwise be available to an attacker. This will also help you ensure that only reviewed, approved scripts have been uploaded and that those in place have not been altered.
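In practice, that usually means relying on a single ScriptAlias'ed directory (the path below matches the example later in this article) rather than enabling execution by extension or with Options ExecCGI in users' document areas:

ScriptAlias /cgi-bin/ /usr/local/web/site/cgi-bin/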

If you are using Perl, use taint checks and turn on warnings. The warnings will alert you to potential coding problems or problems you might have with variables. These are the kinds of errors that can be exploited by sophisticated hackers. Taint checks will cause your script to exit with an error message if you try to use values provided directly by a user in potentially unsafe operations, such as passing them to a shell. You can untaint such values by matching them against a regular expression that will presumably ensure their safe use. For example, you might want to use a regular expression that strips out all characters other than letters or numbers before you pass such a value to the shell for sending mail. Enable taint checks and warnings by using the -T and -w flags. Thus, the first line of your scripts would look like:

#!/usr/local/bin/perl -Tw

You would, of course, use your own path to the Perl binary. If you have not been using these flags, many of your scripts will spew error messages instead of spinning like the tops they once were, but it will be well worth your efforts to figure out why Perl is complaining and address any problems. There are several on-line resources that discuss CGI security issues in more detail. Two important sources are the World Wide Web Security FAQ at the World Wide Web Consortium site and the Safe CGI Programming document.
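To make the untainting idiom concrete, here is a minimal sketch; the input handling and the pattern are simplified assumptions rather than production code:

#!/usr/local/bin/perl -Tw
# Minimal sketch: untaint a user-supplied value by accepting only
# letters and digits before handing it to anything external.

my $input = <STDIN>;                      # CGI input is always tainted
defined($input) or die "No input\n";
chomp($input);

my $clean;
if ($input =~ /^([A-Za-z0-9]{1,32})$/) {
    $clean = $1;                          # a regex capture is untainted
} else {
    die "Unexpected characters in input\n";
}

# $clean can now be passed to a mail command or similar without
# tripping Perl's taint checks.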

One of the single most effective defensive techniques you can employ with your Web server is to run it within a chrooted environment. A UNIX chroot(1M) creates a new root for a given command. Starting your Web server with a chroot essentially creates a contained compartment for your Web server, isolating it from the rest of the system. By starting the Web server with its own root, if someone does manage to subvert the Web server protections or an errant CGI program runs amok, the damage is contained to the Web compartment you have created. Your system files and programs will be unavailable to anyone coming through the Web server.

The difficulty in setting up a chroot lies in determining what minimal set of libraries will be required to run your Web server and any other programs that execute as part of your Web site. Unfortunately, this will vary among platforms, so there are no simple instructions for all situations. You may have to review your system's documentation and experiment on your own. On an SGI system using a Stronghold binary obtained from C2 Net, the Web server chroot requires at least the following files in /lib of the Web server's effective root:

-r-xr-xr-t     root sys  libc.so.1
-r--r--r--     root sys  libdisk.so
-r--r--r--     root sys  libgen.so
-r--r--r--     root sys  libm.so
-r--r--r--     root sys  libmalloc.so
-rwxr-xr-x     root sys  rld

The Web server itself requires certain libraries depending on how it was compiled. If you build your own, you may want to build it statically to minimize requirements for the chroot. Sun and Linux provide the command ldd that you can use to determine exactly which shared libraries a particular binary will need in order to execute. SGI uses the command elfdump. Check the man pages on your system for a command like ldd or dump.
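For example, on a system that has ldd, a command like the following (the path to your httpd binary will differ) lists the shared libraries that must be copied into the chroot's lib directory:

ldd /usr/local/etc/httpd/httpd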

In addition to copying libraries into your chroot, you will also have to modify the Web server's configuration file (or files) because its view of the universe has changed. If you previously had a DocumentRoot of /usr/local/etc/httpd/htdocs, but are now starting the Web server chrooted to /usr/local/etc/httpd, you will have to change the DocumentRoot to /htdocs. Any other path information in any of the configuration files will likewise require modifications.

Any CGI programs that use absolute paths will also require changes. If your scripts use Perl or another interpreter, that interpreter and its libraries will need to be installed within the chroot. Consider carefully what you install within your Web compartment. The less that is available, the more secure it will be. Again, you have to weigh the trade-off between security and what you need to get the job done. However, if you decide to allow only compiled CGI programs inside of the Web compartment, the Perl interpreter will not be available to hackers to exploit.

Let's run through an example of setting up a chroot. We'll start with the directory /usr/local/web/ and set the Web server's root to /usr/local/web/site, so we need to create the following directories:

/usr/local/web/site
/usr/local/web/site/htdocs
/usr/local/web/site/cgi-bin
/usr/local/web/site/logs
/usr/local/web/conf

We will also need certain directories for the chroot itself to work. These directories will contain the minimal files to simulate the actual root filesystem on your machine.

/usr/local/web/site/lib
/usr/local/web/site/usr/bin
/usr/local/web/site/etc
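All of these can be created in one step; the paths are the ones used in this example:

mkdir -p /usr/local/web/site/htdocs /usr/local/web/site/cgi-bin \
    /usr/local/web/site/logs /usr/local/web/site/lib \
    /usr/local/web/site/usr/bin /usr/local/web/site/etc \
    /usr/local/web/conf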

Your Web server will have to be able to resolve hostnames to IP addresses, so you'll have to create an appropriate hosts or resolv.conf file in /usr/local/web/site/etc. Remember that you may have to experiment to get everything your system needs, and you may have to include additional files and directories if you run other programs from inside your Web server chroot. Place the necessary libraries and other files into their respective directories.

The directory /usr/local/web/conf will contain our Web server configuration files, but notice that it is outside of the new root we are creating. We'll use a start-up script that will copy the necessary files into the chroot, start the Web server, and then delete the configuration files from the Web server compartment. Thus, once the Web server has started, these files won't be accessible to anyone accessing the site.

Without the chroot, our httpd.conf contained lines like the following:

DocumentRoot /usr/local/web/site/htdocs
ScriptAlias /cgi-bin/ /usr/local/web/site/cgi-bin/
TransferLog /usr/local/web/site/logs/access_log
ErrorLog /usr/local/web/site/logs/error_log
PidFile /usr/local/web/site/logs/httpd.pid

Those lines must be rewritten as:

DocumentRoot /htdocs
ScriptAlias /cgi-bin/ /cgi-bin/
TransferLog /logs/access_log
ErrorLog /logs/error_log
PidFile /logs/httpd.pid

Next, we'll use a script that performs the following steps (a sketch of such a script follows the list):

1. Move the configuration files into place.

2. Execute the command that establishes the chroot and starts httpd.

3. Remove the configuration files.
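Here is one way that script might look. The location of chroot, the placement of httpd inside the compartment, and the conf directory within the chroot are assumptions you will need to adapt to your own system:

#!/bin/sh
# Start Apache inside the chroot described above.

CHROOT=/usr/local/web/site
CONF=/usr/local/web/conf

# 1. Copy the configuration files into place inside the compartment.
mkdir -p $CHROOT/conf
cp $CONF/httpd.conf $CHROOT/conf/

# 2. Establish the chroot and start httpd.  The -f argument is
#    interpreted relative to the new root.
/usr/sbin/chroot $CHROOT /usr/bin/httpd -f /conf/httpd.conf

# 3. Remove the configuration files from the Web compartment.
rm $CHROOT/conf/httpd.conf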

If you are using an SSL server, you will also want to copy the certificate and key files into place and delete them once the server has started. After running the script, if you check your process table, you'll see that the daemon is indeed running. If you create a CGI script to display a file listing and execute it from a browser, you'll see that anything above /usr/local/web/site is completely unavailable. Your Web server's entire world exists below that point.

Now that you have created an isolated compartment, consider very carefully anything you place within it. You should think of that compartment as available to the world, so anything within it is potentially accessible. Also consider that anything within it could be used by attackers against your system. Use this separation to your best advantage. Consider a site that gathers customers' credit card information and saves it on your system. Ideally, you want to capture the information through the Web server, but store the data somewhere other than within your exposed Web compartment. You can achieve this by running a script that is not restricted to the Web compartment, which periodically grabs the information and deletes it from the chroot, making it completely unavailable through any Web server access. You might also create a daemon to serve as a bridge between the Web compartment and some other safe place within your network to store sensitive information. Your CGI scripts could send the information through the daemon, which would then send it to the safe area. By doing this, the data never has to be saved within your Web compartment. If you can sufficiently secure your storage place, possibly not even allowing outside network access, your exposure will be lessened considerably.
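As a rough sketch of the first approach, a job run from cron outside the compartment (the paths and file layout are invented for illustration) could periodically sweep captured data out of the chroot:

#!/bin/sh
# Run outside the chroot: move captured order files out of the Web
# compartment into a directory the Web server cannot reach.
for f in /usr/local/web/site/data/orders/*; do
    [ -f "$f" ] && mv "$f" /var/secure/orders/
done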

If your resources permit, you might also want to isolate an entire machine as your Web server, disallowing any other type of network services on it. You should still follow many of the techniques described above to protect the integrity of the machine, but you can place a dedicated Web server machine outside of your firewall and keep everything else safely behind it.

Conclusion

If you follow all the steps outlined above and implement every security technique possible, you cannot definitively declare that your system is now secure. With security, there are only degrees of more secure and less secure. Your job as administrator is to employ the methods appropriate for your environment and security plan and to maintain a diligent watch for unauthorized access. Monitoring and regular auditing are necessary for the continued "more secure" operation of your Web server. Defensive Web server administration is an on-going task.

Resources

http://www.w3.org/Security/Overview.html

http://www.go2net.com/people/paulp/cgi-security/safe-cgi.txt

http://www.apache.org/docs/misc/security_tips.html

About the Author

Kyle Dent is the founder and owner of SeaGlass Technologies, Inc. a company specializing in secure Web hosting/development and Internet/security consulting. He can be reached at: kdent@seaglass.com.