Questions and Answers
Bjorn Satdeva
UniForum
The UniForum trade show took place in San Francisco,
March 21st to
25th. Because of the size of the show, it is impossible
to visit every
stand on the show floor. This year I had decided to
look for system
administration tools designed to provide overall, site-wide
support
to the system administration team -- what the marketing
people
like to call enterprise system management.
General System Administration Tools
So far I have found only two products, both of which
have been around
for some time, that actually qualify for this category:
Tivoli Systems
and Computer Associates' CA-UniCenter. The Tivoli demonstration
was
very surprising to me, but not in the good sense of
the word. Like
most of the other vendors, they had a small circus going
on in one
corner of the stand, giving what was supposed to be
a technical demonstration.
However, in the best tradition of such marketing shows,
that presentation
did not have much technical content. I was surprised
to learn that
the user administration package, which made up the original
base of
the product, is still presented as an important feature.
In my experience,
user administration has never been a significant part
of the total
system administration work load.
I had quite a bit of difficulty finding someone who
could provide
a more in-depth demonstration of the system (I had to
return three
times), but after seeing the demonstration, I felt that
there was
good reason for this, as the product seems to me to
have very little
of practical value implemented at this time. The exception
to this
is their file distribution system, which is implemented
using a subscription-oriented
strategy. While it will not solve all file distribution
problems,
I know of no other product which solves this specific
kind of file
distribution problem (which is not an easy one to solve).
However,
overall, the current product does not have much appeal
to me, as it
appears not to provide enough benefits to justify the
complication
of supporting a new product.
The second product is Computer Associates' UniCenter.
This is actually
an existing product which was originally written for
the mainframe
environment and later ported to UNIX. When I first encountered
UniCenter, it seemed that everybody involved with the
product believed
that what was good for the business-oriented mainframe
would be good
for the rest of the world. What I saw at UniForum convinced
me that
the people behind UniCenter have since learned that
much of the UNIX
world is different because of the very large installed
base of systems
used for scientific and engineering applications. For
such installations,
many of the solutions in the original UniCenter were
aimed at nonexistent
problems. Now, however, UniCenter has evolved in a direction
where
it can provide a useful contribution to the system administration
team, whether the site is business or scientific/engineering
oriented.
In comparing the two products, I believe that UniCenter
may soon arrive,
while Tivoli is still more of a promise than an actuality.
If I were
to install either product, I would most certainly require
the vendor
to allow for an extended evaluation period, during which
the product
could be tested on my machines by my administrators,
to see if it
would deliver as promised. As part of the evaluation,
I would take
into account the fact that Computer Associates certainly
has the financial
muscle to develop a good product. Whether either company
has the vision
to develop a truly efficient system administration tool
that can be
used to administer a large number of hosts in a networked
environment
still remains to be seen.
Bellcore Pingware
One of the fun things at a show like UniForum is to
visit the small,
inexpensive booths located at the edge of the show floor.
It is very often here that you find the new and exciting
technology that will make headlines in the future. However, this
year, I did not
see anything of particular interest for us system administrators,
as the main topic seemed to be object-oriented technology.
At the Bellcore booth I did find one completely new
product which
looked interesting -- a security analysis tool called
Pingware.
This tool is designed to scan a network for connected
systems, make
a security audit of each system, and create a report
on system vulnerabilities.
It appears to be a useful and much needed tool for legitimate
use
by system administrators, but can also be abused by
crackers. When
I asked about this very touchy and controversial issue,
I got the
impression that the company is struggling, as so many
of us are, to
come to a reasonable and balanced solution. I think
it is very good
that the UNIX community is finally moving away from
the old paradigm
of not disclosing security vulnerabilities, even though
the process
of disclosure is painful because we do not yet have
any good idea
of how this should be done. The black hats have had
this kind of information
easily available for years; only by also making
it available to
the people who have a legitimate use for it will we
be able to establish
any kind of reasonable balance.
To BSDI or Not to BSDI
One last thing: I had very much looked forward to this
year's UniForum
conference as an opportunity to ask the vendors of UNIX
SVr4 for the
PC platform about their thoughts on the resolution of
AT&T's lawsuit
against BSDI (the suit was settled out of court, and
the case was
sealed, but as BSDI is still in business, it is a fairly
good guess
that they, from a practical point of view, won the case).
At UniForum
in 1991, when the suit had just been filed, several
PC UNIX vendors
present at UniForum stated that the BSDI version of
UNIX with full
source code was no threat to them, as the lawsuit would
put BSDI out
of business within months. I had therefore planned to
ask the
same question again, but to my disappointment, none
of those vendors
were present.
SANS III
The Third Conference for System Administration, Networking
and Security
took place in Washington, D.C. in the first week of
April. The conference
was highly successful, doubling the number of attendees
from last
year. It has been very interesting to watch this conference
from year
to year, as it matures. While it is very different from
the LISA conference,
which traditionally takes place in the fall, somewhere
on the West
Coast, it appears to be following the same pattern,
with respect both
to the increasing quality of the papers presented and
the increasing
number of attendees. However, while LISA tends to focus
on the leading
edge of UNIX system administration methods and technology,
SANS focuses
on the practical usability of the tools and methods
presented. And
while LISA is oriented towards scientific and engineering
sites, SANS
has a slant towards the administration of UNIX in a
business environment.
It is not possible to describe all the papers in this
space, but to
give a taste of the conference, I will outline some
of them briefly.
Marcus Ranum's "A Network Perimeter with Secure
External Access"
describes an overall strategy for protecting systems
from various
outside threats. Once again Marcus has successfully
treated a very
fuzzy topic in a systematic manner. Gene H. Kim and
Eugene H. Spafford
wrote of practical experiences with tripwire in the
paper "Experiences
with Tripwire: Using Integrity Checkers for Intrusion
Detection."
Michael Neuman and Gary Christoph described a special
shell providing
restricted root access. Hal Pomeranz presented a re-implementation
of Paul Anderson's disk caching as a simple but effective
method of
reducing NFS traffic in "A New Network for the
Cost of SCSI Cable";
and Michele D. Crabb outlined an overall strategy used
at Ames Research
Center in her "Guarding the Fortress, Efficient
Methods to Monitor
Security on 300 Systems." I also want to mention
Matt Bishop's
talk on common security problems and Dan Geer's talk
on security breaches
in some commercial sites he had experienced. Unfortunately,
neither
talk was accompanied by a paper.
The Proceedings from the SANS III conference are available
from
USENIX, 2560 Ninth St., Ste. 215, Berkeley, CA 94710,
(510) 528-8649.
A Few Updates
Before going on to this issue's questions, there are
a few items of
old business that need to be taken care of.
In the January issue, the explanation of how to do subnetting
contained
an unfortunate mistake. The correct value for the common network
mask is 255.255.255.0.
With respect to the same column, I have been asked if
I know of a
program which can calculate the various addresses which
must be specified
when using subnets. I do not know of any, but if any
reader knows
of one (or has written one), I will publish it here.
I usually use
the UNIX utility bc (an arbitrary-precision calculator). Its ability
to
convert between base 10 and base 16 makes it adequate,
but not necessarily
a user-friendly tool for this purpose.
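For example, a bc session for this kind of netmask arithmetic
might look roughly like the following (the octet values are my own
invented examples, and bc prints each result on the line after the
expression):

   $ bc
   obase=2
   224
   11100000
   obase=10
   ibase=16
   E0
   224
   quit

Setting obase=2 prints results in binary, which shows at a glance
which bits of a mask octet belong to the network part (here, 224
has three network bits and five host bits), while ibase=16 lets
you enter values in hex.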
Again on the same subject, a few readers have asked
how using netmasks
in class B addresses differs from class C. In principle,
there is
no difference. In each case, the network address part
is expanded
at the cost of the size of the host address. The only practical
difference is the point at which the IP address is split between
network address and host address.
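As a concrete illustration, consider these two invented examples:

   class C: 192.9.200.0  mask 255.255.255.224
            24 network bits + 3 subnet bits = 5 host bits
            (30 usable hosts per subnet)
   class B: 130.5.0.0    mask 255.255.240.0
            16 network bits + 4 subnet bits = 12 host bits
            (4094 usable hosts per subnet)

The mechanics are identical in both cases; only the point where
the split begins differs.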
One reader pointed out that with a router which supports
separate
subnet masks for each interface, it is possible to use
different subnet
masks on different subnets. I didn't mention this because
I wanted
to keep a topic which is generally considered to be
very confusing
as simple as possible.
Finally, I have gotten a number of reminders about RFC
1219, "On
the Assignment of Subnet Numbers." The RFCs are always
a good place
to search for information, and I agree with this. However,
the purpose
of the column was to explain how subnets work, and given
the limited
space available, I sometimes have to eliminate material
that would
certainly be included were I covering the same topic
in a book. This
is an unfortunate fact of life.
Several readers noted that the Trojan horse example
described in the
March issue will not work on all systems. I hinted at
that in the
discussion. My goal was not to provide a portable Trojan
horse, but
rather to give a good explanation of why it is a very
bad idea to
have the current directory in the search path. I believe
I made
my point and rest my case.
[Note: This question has been paraphrased from a very
long and very specific one.] Your description of subnets
was very
useful to us. However, we have three class C addresses
here, and two
subdomains. How do I manage to split the name server
between the various
subdomains?
I can understand why this question has arisen, as both subnets
and the name server deal in IP addresses, which makes them seem
similar.
However, while subdomains and subnets may seem similar,
they are very
different, and should not be confused. named does not
understand
subnets, and with good reason: it is not a network management
tool,
but rather an information server, used to map hostnames
to IP addresses
or the reverse thereof. Unfortunately, this doesn't
make it any easier
to administer the subnet and subdomains in a reasonable
manner. The
way I would solve this specific problem would be to
specify both the
host name and the subdomain for each entry in the name
server configuration
files. This will work, as long as both subdomains are
served by one
primary name server.
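As a sketch of what I mean (BIND 4-style configuration; all host
and domain names here are hypothetical, and SOA and NS records are
omitted for brevity), the primary server's named.boot declares
both subdomains, and each zone file entry then simply pairs a
fully qualified host name with its address:

   ; named.boot on the primary server
   primary    eng.foo.com      db.eng
   primary    sales.foo.com    db.sales

   ; db.eng -- hosts in the eng subdomain
   alpha.eng.foo.com.    IN  A  192.9.200.10
   beta.eng.foo.com.     IN  A  192.9.201.10

Note that the subnet boundaries never appear in these files; alpha
and beta can sit on different class C networks and still belong to
the same subdomain.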
You have in the past mentioned two software packages
which can be used to implement a firewall, SOCKS and
the Firewall
Tool Kit. Which of the two is the best?
It depends on your site and its users. Both packages
implement what is usually called a proxy service. SOCKS,
the older
of the two packages, requires that the client software
(for example
ftp) be replaced on all inside hosts with a version
that understands
the SOCKS protocol, which is used to connect to the
firewall, which
in turn makes the connection to the desired host on
the Internet.
I believe that SOCKS is the first publicly available
software to have
implemented this kind of service, and it has worked
very well for
a number of sites. However, because it requires replacement
of the
client, it will not work well at sites where a large
number of PCs
or Macintoshes are used, as there are no clients available
for those
machines (at least not that I know of). In comparison,
the Firewall
Tool Kit is only installed on the firewall; all systems
on the inside
use their usual clients for ftp or telnet. However,
the Firewall
Tool Kit is not transparent to users, as it requires
them to type
slightly different commands than those they would otherwise
use. It
is of course possible to replace the inside clients
which interact
with the toolkit, to make this change invisible, but
you then have
the same software distribution problem you have with
SOCKS.
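To illustrate the difference in commands, a telnet session through
a toolkit-style gateway looks roughly like this (the host names
are hypothetical, and the exact prompt depends on the version
installed):

   % telnet firewall
   tn-gw> connect remote.foo.com
   ... ordinary telnet session with remote.foo.com ...

With SOCKS, by contrast, the user simply types telnet
remote.foo.com, and the replacement client quietly relays the
connection through the firewall.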
In my opinion, the Firewall Tool Kit is the better of
the two packages,
as it not only implements a needed service, but does
so in a well
thought out and very secure manner. In fact, our firewall
was originally
implemented with SOCKS, but we've redone it with the
Firewall Tool
Kit, due to what I think is a better design. The Firewall
Tool Kit
also has the advantage of being able to build on the
experiences gained
from SOCKS. On the other hand, SOCKS provides support
for the very
popular Mosaic program, which is not supported by the
Firewall Tool
Kit. A major negative for SOCKS, however, is that it
has recently
been used by intruders to open up connections from the
outside (pretty
ironic, that the very tools which should protect our
systems are used
to penetrate them). If you are using SOCKS, at least be sure to
run the very latest version of sendmail.
In your Q&A in the March/April 1994 issue of Sys
Admin, you answered a question regarding using rdump
to
a remote tape host without enabling root access from
that
host. I tried to set this up on my systems, but it did
not work. I
created an account "operator" on system_1
and gave
it Group ID sys so that it would have read access to
the disk
devices (which are UID root and GID sys on these systems).
I created the same account on the tape host: system_2.
I can
execute any number of remsh <command> type of
commands from
system_1 to system_2, but rdump fails with
a message: "rresvport: bind: Permission Denied."
Apparently,
only the super-user can obtain a socket with a privileged port bound
to it. I do not know of any way around this limitation
in using rdump,
without being root, that is. Do you?
Check to see if the dump program is SUID root
(some vendors ship it without this). Your conclusion -- that you
need to be root in order to bind a socket to a port below 1024 --
is correct, which is why it is necessary to make dump
and rdump (which are actually the same program, with
a link)
SUID root. This is less of a problem than opening up
general
root access between the systems.
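A quick check looks like this (the path varies from vendor to
vendor -- /etc/dump, /usr/etc/dump, and /usr/sbin/dump are all
common locations -- and the output below is only illustrative):

   $ ls -l /usr/sbin/dump /usr/sbin/rdump
   -rwsr-xr-x  2 root  sys  ... /usr/sbin/dump
   -rwsr-xr-x  2 root  sys  ... /usr/sbin/rdump

The s in the owner execute position is the SUID bit (note also the
link count of 2, reflecting the fact that the two names refer to
the same program). If you see an x there instead, become root and
run chmod u+s on the file.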
About the Author
Bjorn Satdeva is the president of /sys/admin, inc.,
a consulting
firm which specializes in large installation system
administration.
Bjorn is also co-founder and former president of Bay-LISA,
a San Francisco
Bay Area user's group for system administrators of large
sites. Bjorn
can be contacted at /sys/admin, inc., 2787 Moorpark
Ave., San Jose,
CA 95128; electronically at bjorn@sysadmin.com; or by
phone
at (408) 241-3111.