Cover V08, I08


A Comprehensive Approach to Log Files

Andrew Mickish

My interest in analyzing and archiving log files was inspired mainly by the pursuit of profit. There are other reasons to maintain long-term archives of important files such as system logs: analyzing hacker activity that escaped attention earlier, or gauging usage trends to predict the need for server upgrades and the redistribution of resources.

Mostly, I wanted to document the increasing value of our company's Web site by recording the growing number of hits. This data is essential for showing customers, advertisers, investors, and management how much traffic is flowing through the site in justification of advertising fees and prices for hosting services.

Additionally, after doing informal system administration for years, and occasionally deciphering various arcane configuration parameters for syslog and built-in log rotation systems, I finally decided to approach the problem comprehensively and craft a system that would record and notify me of important system events, and archive the raw data permanently.

What to Log

All services provided by a server ought to be logged and monitored. UNIX hosts offer user services such as Sendmail, Web servers, ftp, telnet, and cron. There are also integrated system services provided by the kernel itself, such as authentication of logins and the spawning of user shells.

Apathy toward the events of a particular service suggests that the service is not being used and should not be provided. The presence of unused services at minimum unnecessarily impacts the performance of the server and obscures which services are actually essential. At worst, it provides a gateway for exploiting security vulnerabilities in the unattended services.

For example, during system installation, it is tempting to install "everything", because this will certainly minimize the time spent tracking down and installing additional services and utilities later as the need arises. However, this choice typically installs and activates historically vulnerable services such as NFS (for accessing the server's file system) and rshd/rlogind (for getting a command prompt on the server without supplying a password).

An administrator who has just realized the importance of this problem can get a sense of which services are offered in several ways:

1. Use ps with verbose options (different for each UNIX, probably something like ps -ef or ps aux or ps auxwww) to show all processes being run by all users.

2. Use netstat -a to see which ports are in a LISTEN state, indicating a service that is waiting for a connection attempt. The ESTABLISHED state indicates that the service is currently in use by a connecting client.

3. Examine /etc/inetd.conf for services that are launched on an as-requested basis by the inetd daemon.

4. Examine the rc startup files (different for each UNIX, probably somewhere like /etc/rc.*).

5. Examine the timestamps, sizes, and contents of log files that already exist on the machine (usually in /var/log or /var/adm).

Although disabling services is beyond the scope of this article, it usually involves commenting out the appropriate line in /etc/inetd.conf, removing the package of the service, or at least moving the startup script out of the rc directory so that it will not be started automatically.

Services that Use syslog

The more established UNIX services like Sendmail, innd (the Internet news daemon), and named (the DNS server daemon) have built-in logging capabilities based on the standard logging facility, syslog. The syslog daemon is itself a service that is used by the others. Though these daemons already ask syslog to make log entries, syslog will ignore the requests unless it has been specifically told how to handle the messages.

For example, the source code for Sendmail is littered with calls to the UNIX C library function syslog(), which sends messages to syslogd consisting of:

1. The type of program requesting the log entry (e.g., mail, news, or a generic daemon).

2. The priority of the message (e.g., info, notice, warn, err, or emerg).

3. The text of the log entry.

The Sendmail daemon sends info entries when it accepts or rejects incoming mail, and sends notice entries when it loses connection with a client. There are hundreds of other occasions when Sendmail makes entries depending on tasks that it succeeds or fails in performing.

Therefore, in the case of syslog-oriented daemons, the issue is not how to get the services to generate log entries, but instead how to configure syslog to handle them.

Configuring syslog

The syslog configuration file is in /etc/syslog.conf, and looks something like this:

# Log everything of level info or higher, except mail
# messages of level info and all authpriv messages.
*.info;mail.!=info;authpriv.!*      /var/log/messages

# Log all the mail messages in one place.
mail.*                              /var/log/maillog

# Log authorization messages in a protected file.
authpriv.*                          /var/log/secure

# Broadcast emergency messages to all users.
*.emerg                             *

The first entry in this configuration causes messages from all services of priority info through emerg to be logged to the file /var/log/messages, except info-level messages from the mail service and any messages from the authentication services. This is reasonable, since Sendmail generates many info entries that would obscure more important messages, and authentication entries may contain sensitive information, including passwords.

The first entry also catches messages sent by daemons that do not have a specific facility reserved for them. The DNS server daemon named is a good example: it may generate hourly statistics about its use (if that feature was enabled when it was compiled). For this reason, it may be desirable to direct daemon messages to a separate log file, and specifically omit the routine ones from the main log file with daemon.!=info.
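Under those assumptions, such an arrangement might look like the following fragment (the file name /var/log/daemon is an invented example):

```
# Exclude routine daemon chatter from the main log...
*.info;mail.!=info;authpriv.!*;daemon.!=info    /var/log/messages

# ...and give the daemon facility a file of its own.
daemon.*                                        /var/log/daemon
```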

The second entry directs all mail messages to /var/log/maillog, and the third entry directs authentication messages to /var/log/secure. In the last entry, the destination "*" means that the message should be shown on the screens of all currently logged-in users as a plea for help.

Multiple machines can be configured to send their log entries to a central syslog server, known as a loghost, rather than keeping logs independently on each distributed machine. When configured correctly, the loghost will listen for UDP packets sent by the other machines, and write the messages to its local log files. The following procedure enables this feature for Red Hat Linux:

1. On the loghost, add the command line parameter -r to the invocation of syslogd in the loghost's rc startup script, and

2. On each distributed machine, add the following entry to syslog.conf:

# Send all messages to the loghost,
# and let it filter them
*.*                          @loghost

However, it should be obvious that enabling this feature opens up the loghost to a denial of service attack and pollution of its logs by an attacker. This feature should only be used when the loghost is behind a firewall, or in conjunction with using kernel firewall utilities on the loghost such as ipfwadm or ipchains to restrict access to specific syslog clients.

The man pages for syslog have a discussion of the syntax, a lot of configuration examples, and a full list of the service types and severity levels. It is necessary to view the man pages for all of syslog, syslog.conf, and syslogd to see all the available information. Remember, syslog is the client library function invoked by each service; syslogd is the daemon configured with syslog.conf.

syslog Troubleshooting

If syslog does not write messages to the expected destinations, there are a couple of things to check.

First, restart syslogd to make it re-read its configuration file. One way to restart syslogd is to send it a hangup signal, with the command kill -HUP <pid> where <pid> is the process id of syslogd obtained from the ps command. Another way, under Linux or Solaris, is to execute the startup script with the restart parameter, as in /etc/rc.d/init.d/syslog restart.

Make sure that the destination file exists, which is a prerequisite for syslogd to modify it. The command touch <filename> will create an empty file. The next time syslogd is restarted, it will log the next appropriate message to the file.

To send messages to another host, the other host must be configured to receive the messages. See man syslogd for specific instructions on how to do this for each specific version of UNIX.

Services that Don't Use syslog

A Web server is a good example of a service that uses its own independent logging mechanism. Web server logs tend to be enormous - on the order of megabytes per day for busy machines. The logs contain the names of all files that were requested and errors encountered while trying to serve requests. Additional configuration will log the type of browser that made the request and the last page that referred the user to the current page.

Logging for the Apache Web server is controlled in httpd.conf, often found in /usr/local/etc/httpd or placed in /etc/httpd/conf by the standard Red Hat RPM distribution. The lines that control what will be written to the regular access logs look like:

LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

CustomLog logs/access_log common
CustomLog logs/referer_log referer
CustomLog logs/agent_log agent

The LogFormat directive defines an alias (here common, referer, and agent) to stand for a cryptic combination of format specifiers: %h is the requesting host, %r is the requested page, and so on. Any of these aliases can subsequently be used in a CustomLog entry, which tells Apache how to write entries to the specified log file. (Note that when the destination does not begin with a slash, the log directory is taken relative to the ServerRoot directory set elsewhere in the configuration file.)

The configuration above directs Apache to write page accesses to access_log. The last page viewed by the browser before requesting the current page will be logged in referer_log, and the type of browser making the request will be logged in agent_log.

An alternative to splitting the entries across multiple log files is to put them all in a single log file. The following definition puts all the information for each request on one line into access_log:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

CustomLog logs/access_log combined

The logging of errors in Apache is reminiscent of syslog, with errors having various severity levels including info, warn, and error. The minimum priority of messages to be sent to the error log is controlled by the LogLevel directive:

ErrorLog logs/error_log
LogLevel warn

Once the Web server is configured to maintain its log files, take care that the log files do not grow out of control and fill up the file system. This can happen rather quickly, not only if the server is busy, but also if there is a configuration problem that causes the error log to grow unexpectedly large. Rather than deleting old log files, they can be compressed and archived systematically, which is a topic discussed at length below.
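A quick check for this condition takes only two commands: compare the free space on the log partition with the sizes of the largest logs (the paths assume the usual Linux layout under /var/log):

```shell
#!/bin/sh
# How full is the file system that holds the logs?
df -k /var/log
# Which logs are the biggest? (Sizes in KB, largest first.)
du -sk /var/log/* 2>/dev/null | sort -rn | head
```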

Monitoring Log Files

Given an assortment of log files on various machines, the best way to stay in touch with what's happening on each machine is to read the most recent entries in each log file. While some administrators might like to watch the live log files scrolling by using tail -f <logfile> on each one, there is a more convenient way to be periodically notified when entries are written.

The script in Listing 1 reports on the entries made in a log file since the last time the log file was checked. (All listings for this article are available from the Sys Admin Web site or via ftp in /pub/sysadmin.)

Usage: <logfile> <logposfile>

The <logfile> parameter is the log file to monitor, and <logposfile> is a writable file where the script stores the length of <logfile> each time it is invoked. The script skips over the part of the log that was previously reported, echoing only the latest entries to stdout.
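The core of such a script can be sketched in a few lines of shell. This is only an illustration of the offset-remembering technique, not the author's listing; the function name checknew is invented here:

```shell
# checknew LOGFILE POSFILE: print whatever has been appended to
# LOGFILE since the last call, remembering the length in POSFILE.
checknew() {
    logfile=$1; posfile=$2
    oldlen=0
    [ -f "$posfile" ] && oldlen=$(cat "$posfile")
    newlen=$(wc -c < "$logfile")            # current length in bytes
    if [ "$newlen" -gt "$oldlen" ]; then
        tail -c +$((oldlen + 1)) "$logfile" # skip the part already seen
    fi
    echo "$newlen" > "$posfile"             # remember where we left off
}
```

Each call prints only the new entries, so a cron job that lets the output flow to mail yields exactly the digest described below.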

When the script runs as a cron job for each log file, the administrator will get mail containing all the entries made since the last invocation.

# Check the system log every day
55 3 * * * /usr/local/bin/ /var/log/messages /var/log/messages.last
# Check the security log every day
56 3 * * * /usr/local/bin/ /var/log/secure /var/log/secure.last
# Check the mail log every day
57 3 * * * /usr/local/bin/ /var/log/maillog /var/log/maillog.last

If the scripts only run daily, it is important to time them to run just before log rotation occurs (see below), so that none of the entries are missed when the current log file is cleared.

While it may seem a daunting task to read the system log file for each machine on a daily basis, sorting messages into different logs according to their priority means that only the non-informational messages need to be read with particular attention, and these can be scanned for new entries by the cron job more frequently (perhaps every minute). The info files can be read independently, to investigate problems detected in the higher-priority files.

This section would be incomplete without mentioning swatch, a utility that scans log files for specific patterns. swatch can sound terminal bells, send mail, or invoke secondary programs whenever it finds a matching pattern (such as "panic") in a log file. swatch is available from:

Configuring swatch for particular services and machines takes a bit of experience in knowing which log entries to expect, and which to be concerned about. Monitoring the raw log files with the script in Listing 1 is the best way to gain this experience.

Rotating Log Files

Log rotation is the process of saving the current log file under a different name and starting a new log file in its place. The standard UNIX configuration keeps three or four generations of log files around, where the current log file is messages, and the next oldest is messages.1, and so on.

Solaris has a straightforward approach to rotating the system log. By default, a cron job in the root account executes the simple shell script below to rename the current system log to a save file while moving existing saved files to older names, consequently throwing away the oldest one:

# Solaris /usr/lib/newsyslog
LOG=messages
cd /var/adm
test -f $LOG.2 && mv $LOG.2 $LOG.3
test -f $LOG.1 && mv $LOG.1 $LOG.2
test -f $LOG.0 && mv $LOG.0 $LOG.1
mv $LOG   $LOG.0
cp /dev/null $LOG
chmod 644    $LOG
kill -HUP `cat /etc/`

Red Hat Linux introduced a significantly more flexible logrotate system, by which whole sets of log files can be controlled by a top-level /etc/logrotate.conf configuration file, and by individual service-specific configuration files in /etc/logrotate.d. By default, any log file whose name appears in a logrotate configuration file will be rotated weekly, keeping four old generations. When the compression feature is enabled in logrotate.conf, it will compress the old logfiles during the rotation. Only one configuration file is responsible for rotating all of the system logs:

# /etc/logrotate.d/syslog
/var/log/messages {
        postrotate
            /usr/bin/killall -HUP syslogd
        endscript
}
# similar entries for /var/log/{secure,maillog,spooler}
Both of these strategies constrain the names of the archived log files to the original name plus a numerical suffix, and keep the archives in the same directory as the current file. It would be much nicer if the archived files could be renamed according to their dates and moved onto a separate partition dedicated to permanent storage. The next section describes how to enhance these log rotation schemes with log file archiving.

Archiving Log Files

The script in Listing 2 moves or copies arbitrary files from one directory into another, while compressing or renaming them according to their timestamp along the way. This script can be used independently, or integrated into a system's existing log rotation scheme.

Usage: --from <srcfile> --to <destdir>
                   [--suffix <pattern>] [--ifexists]
                   [--overwrite] [--compress]
                   [--keepsrc] [--keepdate]
                   [--keepbase] [--keepsuffix]

The optional parameters are all boolean switches that control whether the script checks for the existence of the source and destination files before archiving, and how the archived file is named.

By default, interprets everything after the first dot (.) in a filename as its "suffix" by using the pattern '\..*' to match the suffix. This may not be desirable in a file named boot.log.0, which by default would be archived as boot.19990801. To archive this file as boot.log.19990801, the suffix pattern '\.\d*' would need to be used to identify only the numerical suffix of the file.
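The effect of the two patterns can be checked from the shell. The sed expressions below stand in for the script's Perl-style patterns, and the 19990801 stamp is fixed for illustration (a real run would take it from the file's timestamp):

```shell
# What each suffix pattern strips from the name boot.log.0:
name="boot.log.0"
stamp="19990801"

base=$(echo "$name" | sed 's/\..*$//')       # default pattern '\..*'
echo "$base.$stamp"                          # boot.19990801

base=$(echo "$name" | sed 's/\.[0-9]*$//')   # numeric-suffix pattern '\.\d*'
echo "$base.$stamp"                          # boot.log.19990801
```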

The Solaris newsyslog script (discussed above) can be enhanced not to discard its oldest log file, but instead archive it in a separate directory, renamed by timestamp. Something like the following line could be inserted at the top of the script:

/usr/local/bin/ --from $LOG.3 --to archive \
   --ifexists --compress --keepdate --keepbase

This line moves messages.3 out of the current directory, places it in the archive/ directory, renames it according to its timestamp, and finally compresses it to yield the file archive/messages.19990801.gz.
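Spelled out by hand, the whole operation amounts to a move, a rename, and a compression. The sketch below replays it in a scratch directory standing in for /var/adm; the date stamp is hard-coded here, whereas the script derives it from the file's timestamp:

```shell
#!/bin/sh
# Replay of the archive step in a throwaway directory.
logdir=$(mktemp -d)
echo "old entries" > "$logdir/messages.3"

mkdir -p "$logdir/archive"
mv "$logdir/messages.3" "$logdir/archive/messages.19990801"
gzip "$logdir/archive/messages.19990801"

ls "$logdir/archive"    # messages.19990801.gz
```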

Under Linux, similar instructions can be inserted into logrotate configuration files. Here is a modification to Apache's rotation file:

# Enhanced /etc/logrotate.d/apache
/var/log/httpd/access_log {
        prerotate
            /usr/local/bin/             \
               --from /var/log/httpd/access_log.3 \
               --to /var/log/httpd/archive        \
               --ifexists --compress --keepdate   \
               --keepbase --suffix '\.\d*'
        endscript
        postrotate
            /usr/bin/killall -HUP httpd
        endscript
}
/var/log/httpd/error_log {
        prerotate
            /usr/local/bin/             \
               --from /var/log/httpd/error_log.3  \
               --to /var/log/httpd/archive        \
               --ifexists --compress --keepdate   \
               --keepbase --suffix '\.\d*'
        endscript
        postrotate
            /usr/bin/killall -HUP httpd
        endscript
}

These modifications to Apache's logrotate file rescue access_log.3 and error_log.3 from being rotated out of existence. Instead, the access log is saved into, for example, /var/log/httpd/archive/access_log.19990801.gz where it can continue to be scanned by log file analysis tools.

Analyzing Archived Log Files

Once the log files are archived permanently, the administrator can conduct a retrospective analysis on system activity, attempted security breaches, and configuration problems. With a good bit of data available about what is usual and unusual activity in the system, an administrator can configure swatch (discussed above) to detect and sound an alarm in response to new system events.

A particularly profitable endeavor involving archived Web server logs is to keep running retrospective summaries of all Web server activity since the server's origin. Presenting an up-to-date summary of server activity makes a great impression on customers who are considering advertising on a Web site, and on potential investors who want to take advantage of Web site traffic or even buy the whole domain.

Setting up retrospective summaries involves storing all of the old access logs on-line and processing them hourly or daily with an analysis tool such as Analog:

Analog scans Web server log files and generates a summary of the number of pages and files accessed, organized in several views according to server directory, specific files, browser domains, and browser types.
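Before setting up Analog, a rough version of its per-page counts can be had with awk alone, which is also a handy sanity check that the logs really are in the format the configuration claims. The sample entries below are invented:

```shell
#!/bin/sh
# Tally requests per page from common-format access log lines.
# The request path is field 7: host ident user [date zone] "GET path ..."
log=$(mktemp)
cat > "$log" <<'EOF'
10.0.0.1 - - [01/Aug/1999:03:55:01 -0400] "GET /index.html HTTP/1.0" 200 1043
10.0.0.2 - - [01/Aug/1999:03:55:02 -0400] "GET /index.html HTTP/1.0" 200 1043
10.0.0.2 - - [01/Aug/1999:03:55:03 -0400] "GET /prices.html HTTP/1.0" 200 512
EOF
awk '{count[$7]++} END {for (p in count) print count[p], p}' "$log" | sort -rn
# 2 /index.html
# 1 /prices.html
```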

The Analog configuration file usually resides in /usr/local/etc/httpd/analog/analog.cfg, or may have been placed elsewhere by a contributed RPM. The configuration file must be modified with the locations of the logs it should analyze, and must contain a format string indicating how the Web server logs have been written. The LOGFORMAT directive is reminiscent of the corresponding LogFormat lines in the Apache configuration file:

# /usr/local/etc/httpd/conf/analog.cfg
LOGFORMAT (%S %j %u [%d/%M/%Y:%h:%n:%j] \
          "%j %r %j" %c %b "%f" "%B")

LOGFILE /var/log/httpd/access_log*
LOGFILE /var/log/httpd/archive/access_log*

HOSTNAME "Company X"
LOGO companyx.gif

OUTFILE "/home/httpd/html/stats/current.html"

DNSFILE "/usr/local/etc/httpd/analog/dnsfile.txt"

This configuration file tells Analog how to interpret the log files in the httpd/ and httpd/archive/ directories, and asks Analog to break the summaries into different views with respect to browser types and referring sites. The resulting summary is written to a Web page in the stats/ directory in the active Web site hierarchy.

A final flourish on top of up-to-date summaries is to archive the daily summaries, for retrospective review of growth over the life of the Web site. The following command uses the script (discussed above) to rename the current summary generated by Analog to include its timestamp in its name, so that all the daily summaries will be listed under the stats/ directory:

/usr/local/bin/                      \
   --from /home/httpd/html/stats/current.html  \
   --to /home/httpd/html/stats --keepsrc       \
   --keepdate --keepsuffix --suffix '\.html'

The set of archived summaries will have names like 19990801.html.


Conclusion

It is said that administrators who boast that no one has ever tried to hack into their servers do not know how to monitor their own systems. Common UNIX services already report their health and activities to the system logs, and these are the most important files for an administrator to review on at least a daily basis. The methods discussed in this article for filtering, monitoring, and archiving logs provide the administrator with crucial tools for detecting security threats and general problems with UNIX services.

Don't forget, for a Web site whose revenue depends on server traffic, the skills gained through careful log file management translate quite tangibly into profit.

About the Author

Andrew Mickish graduated from Carnegie Mellon University with a degree in Computer Science. At CMU he worked on the graphical abstraction layer of early Java-like languages. He is currently a software engineer at SneakerLabs, Inc., developing and hosting high-performance Web applications in Java on Linux servers. He can be contacted at