Logging rm and kill Requests
Steven G. Isaacson
Sometimes users take it upon themselves to play system
administrator.
Usually the assistance is welcome. When it's not, you
have to play
detective.
This article describes two logging programs that make
playing detective
easy. One program logs rm requests (see Figure 1 for a sample entry); the other logs kill requests (see Figure 2).
Logging kill Requests
Why log kill requests?
We needed to log kill requests because of a problem
with Informix's
Standard Engine database. When a user runs a program
that accesses
the database, two processes are created: (1) the original
program,
and (2) a daemon that handles the database access. (This
is not true
of Informix's OnLine engine.)
We had difficulty with a particular program "locking
up" and
so users took it upon themselves to kill their process
when the program
appeared to be stuck. Killing your own process generally
isn't a problem,
but in the case of the Informix daemon it is. If you
kill the Informix
daemon when it's in the middle of a transaction, it's
possible to
corrupt the database.
The proper procedure is to kill only the process created from the original program. The accompanying daemon process eventually receives the signal, at which time it does whatever cleanup work it needs to do and then shuts down.
So our problem was one of education.
It's okay to kill your program, we said, but don't kill
the daemon
along with it -- even though you can. The daemon will
die on its
own when it's ready. Most of our users got the message, but a few didn't.
We had to find out who kept killing the engine.
We wanted to know: user id, date and time, and information
about the
process being killed.
The user and group id are obtained from /usr/bin/id, the date and time from /bin/date, and information about the target process from /bin/ps (you have to gather that information before the process is killed, of course). The results are written to a log file, and finally the real kill program is called to do the work (i.e., kill the process). Simple. The kill script is in Listing 1.
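Listing 1 contains the full script. As a rough sketch of the approach, the wrapper looks something like this (the log file pathname and the ps options are assumptions for illustration and may not match Listing 1; ps options in particular vary from system to system):

    #!/bin/sh
    # kill -- log the request, then call the real kill (/bin/rkill).
    LOG=/usr/adm/kill.log

    {
        echo "----"
        /bin/date                       # date and time of the request
        /usr/bin/id                     # user and group id of the requester
        for arg in "$@"                 # record each target process
        do
            case "$arg" in
            -*) ;;                      # skip the signal argument (e.g., -9)
            *)  /bin/ps -fp "$arg" ;;   # process info, before it's killed
            esac
        done
    } >> $LOG 2>&1

    exec /bin/rkill "$@"                # now do the real work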
To install the kill logging script, you first rename the real /bin/kill program to /bin/rkill. Then move the kill
script to /bin/kill, making sure that everyone has execute
permission for kill and write permission for the log
file.
The "real" kill program (/bin/rkill) is called
from the new kill shell script.
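In other words, the installation amounts to something like the following (the script's source name and the permission modes shown are assumptions; adjust them to suit your site):

    mv /bin/kill /bin/rkill         # save the real kill as rkill
    mv kill.sh /bin/kill            # install the logging script in its place
    chmod 755 /bin/kill             # everyone needs execute permission
    touch /usr/adm/kill.log         # create the log file...
    chmod 666 /usr/adm/kill.log     # ...and let everyone write to it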
Periodic checks of the log file told us who needed to
be reminded
about what not to kill.
The same logging technique was developed for rm.
Logging rm Requests
In addition to the information captured in the kill
script
(user id, date and time, etc.), the rm script records
the
current working directory. It does this so that files
referenced relative
to the current working directory can be uniquely identified.
For example, if someone types "rm myfile," you must know the current working directory before you can determine whether /bin/myfile or /usr/sneed/myfile was removed. Of course, the current working directory is irrelevant if the file is referred to by an absolute pathname.
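Listing 2 contains the full script. A minimal sketch (again, the log pathname is an assumption) differs from the kill wrapper mainly in the /bin/pwd line:

    #!/bin/sh
    # rm -- log the request, including the current working directory,
    # then call the real rm (/bin/rrm).
    LOG=/usr/adm/rm.log

    {
        echo "----"
        /bin/date                 # date and time of the request
        /usr/bin/id               # user and group id of the requester
        /bin/pwd                  # current directory, so that relative
                                  # pathnames can be resolved later
        echo "rm $@"              # the command as typed
    } >> $LOG 2>&1

    exec /bin/rrm "$@"            # now really remove the files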
The rm logging script (see Listing 2) is installed in the same way as the kill logging script. First
the real
rm program is moved to a new name (/bin/rrm) and the
rm shell script is copied to /bin/rm. Now whenever
a user types "rm filename," the request is
recorded in the
rm log file.
But there are several problems to be aware of.
Problems
The first problem with any logging program is the log
file. It keeps
growing. Each request writes multiple lines to the log
file, and with
a frequently used command like rm, this can be a serious
problem;
if left unattended, the log file will eventually fill
up your file
system.
A serious problem, but easy to solve.
What you need is a maxtab entry (see "maxtab: Automatic File Pruning," Sys Admin March/April 1993, vol. 2, no. 2). Supply the file name and the maximum number of lines, and a cron job does the rest. We have the kill log file set to 2,000
lines
and the rm log file set to 4,000 lines. This gives us
a rolling
history of approximately the last 285 kill requests
and last
1,000 rm requests.
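maxtab itself is described in the earlier article. If you don't have it, a small script run from cron does the same pruning. The sketch below is a stand-in, not maxtab; the script name, location, and schedule are assumptions, though the line counts are the ones we use:

    #!/bin/sh
    # trimlog -- keep only the last N lines of a log file.
    # Sample crontab entries:
    #   0 2 * * * /usr/local/bin/trimlog /usr/adm/kill.log 2000
    #   0 2 * * * /usr/local/bin/trimlog /usr/adm/rm.log 4000
    LOG=$1
    MAX=$2
    tail -$MAX $LOG > /tmp/trim$$ &&
        cat /tmp/trim$$ > $LOG    # cat, not mv, preserves owner and mode
    rm -f /tmp/trim$$             # (this rm is itself logged -- harmless)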
The second problem is that there is no secret about
what's going on.
The rm and kill logging scripts are world-readable, and anyone can look at them to see how they work.
Soon after one user was "caught" and subsequently publicly flogged (in a nice way) in email, another user began using her own rm script out of her $HOME/bin directory. Her rm script worked just like the newly installed rm logging script (that is, it called the real /bin/rrm program), except that it didn't bother doing any of the logging.
This problem was quietly addressed by writing C versions of the scripts.
The C versions work the same, but the logging feature
is hidden because
the contents of the resulting binary file are not as
obvious as those
of a shell script.
The rewrite of the rm-logging program was straightforward
(see Listing 4
for rm.c). The user id, current working directory,
etc.,
are easily obtainable in a C program, and once that
information is
obtained, it's simply a matter of passing the command-line
arguments
on to the real rm program.
The C version of the kill program (see Listing 3 for kill.c) was also straightforward, since the difficult part --
logging information
about the process about to be killed -- was already
available (see
"sukill: Stopping Unruly Processes," Sys Admin
November/December
1992, vol. 1, no. 4).
There are other problems, too -- for example, the Network
File
System (NFS). If you have access to a file system from
any one of
several machines, then you also have access to several
rm
and kill programs. This means that in order to log all
requests,
you must install the logging programs on all machines.
Also, there may be other kill or rm programs on your
system. We have a program called top that dynamically
displays
the current processes. You can kill processes from within
the top
program -- and bypass /bin/kill logging.
Finally, a C program that calls unlink() or kill() directly is trivial to produce.
In spite of these security shortcomings, the logging programs are valuable: they provide information not previously available, information that can make your job easier, information that makes playing detective as easy as checking a log file.
About the Author
Steven G. Isaacson has been programming professionally
since 1985. He works for FourGen Software,
the leading developers of accounting software and
CASE Tools for the UNIX market. He may be reached at
uunet!4gen!stevei or stevei@fourgen.com.