Questions and Answers
Bjorn Satdeva
LISA VI Conference
The USENIX LISA VI system administration conference
is now over. A
number of good papers were presented, but, as always,
many of the
highlights of the conference came in the course of social
contacts
between system administration peers in the hallways
and bars of the
conference hotel. Except for those at very large sites,
system administrators
usually have no peers at work. Users may be friends,
but they cannot
be peers, because their perspectives and needs are so
different from
the system administrator's.
Among the interesting papers presented were Paul Anderson's
"Effective
Use of Local Workstation Disks in an NFS Network,"
Peg Schafer's
"Is Centralized System Administration the Answer?"
Carol Kubicki's
"Customer Satisfaction, Metrics and Measurement,"
and Richard
Elling and Matthew Long's "user-setup: A System
for Custom Configuration
of User Environments, or Helping Users Help Themselves."
Most
interesting to me was Michael A. Cooper's "Overhauling
Rdist
for the '90s," a description of a new version of
rdist,
which was first introduced in 4.3 BSD UNIX. This version
promises
to fix many of the problems in the original rdist that
had
to do with large-scale distribution.
LISA VII in 1993
Although LISA VI is barely over, the work on next year's
LISA conference,
to take place November 1-5 in Monterey, California,
is already underway.
The Call for Papers will be out before the USENIX Winter
conference
in January, but in the meantime, here's a little taste
of what we
are working on. The topic, "The Human Aspect of
UNIX System Administration,"
reflects the fact that system administrators have come
to recognize
that providing good support is not only a technical
task, but also
one which requires dealing with human beings. This is
not to say that
LISA will become an amateur psychology gathering. What
we hope for
is submissions that deal practically with management
of the human
aspect -- through policies, procedures, and improved
forms of communication.
Of course, traditional technical papers will be welcomed
in the usual
fashion.
The Interop Exhibition
The Interop exhibition, one of the major tradeshows
in the UNIX community,
took place in San Francisco the week after LISA VI.
The fact that
this show has grown so much must be proof of the commercial
success
of UNIX. This year was the first in San Francisco --
the show had
been in San Jose, in the heart of Silicon Valley, in
previous years.
The size of the show is now almost intimidating, and
it's packed with
vendors who claim to have all the solutions to one's
problems, if
one will only purchase their application. While many
vendors did indeed
have good solutions to some of the problems, the oft-repeated
claim
of having the one and only solution served to heighten
my natural
skepticism. Taken in moderation, however, the show is
a very good source
of information. I decided ahead of time to focus on
low-priced routers,
and was able to obtain some good information in this
area.
The routers provided by well-known companies such as
Cisco and Wellfleet
are all of the high-performance kind and are priced
accordingly. I
was looking for alternatives, capable of performing
well enough for
a SLIP connection or a 56Kbit leased line and priced
within reason
for smaller companies. I found a couple of possible
solutions.
One of the most fascinating possibilities was a T1 radio-wave
solution
from Cylink. Using this kind of technology, you pay
only the setup
cost and the cost to the Internet service provider,
but no cost for
leased lines from the phone company. Cylink claims that
this technology
works up to a distance of 10 miles, with very slight
deterioration
in bad weather.
Network Application Technology showed an IP router,
the LANB/290 Remote
IP Router, which seems to qualify as one of the lowest-priced
routers
on the market. Each router comes with a LAN connection,
one console
port, and a data link connection for RS-232, RS-449/422,
V.35, or
X.21. It uses PPP over the serial link, and will
support SNMP.
The CMC Network Products unit of Rockwell International
presented
the Net Hopper, a dialup TCP/IP router, which seems
to be positioned
as competition to the NetBlazer from Telebit. With the
modem(s) built
in, the Net Hopper is very competitively priced at $2,000
with one
modem and one LAN connection, or $3,500 with one LAN
and three modems.
CMC claims that the Net Hopper is easier to set up than
a stereo system
(I find this hard to believe, especially since the Net
Hopper
seems to support packet filtering).
Defense Fund for Berkeley UNIX
Barry Shein, president of Software Tool & Die, is
working on creating
a defense fund for the University of California in the
UNIX System
Laboratories copyright suit against BSDI and the University
of California
at Berkeley.
And now to this month's questions.
What is the ARPANET?
The ARPANET no longer exists, so the question must
be rephrased as "What was the ARPANET?" However,
since the ARPANET had a very significant influence on
the
development of the Internet as it exists today, the
question is worth
answering.
The ARPANET was funded by the US Department of Defense
Advanced Research
Projects Agency, ARPA (later DARPA) in the late 1960s.
It was an experimental
network that spanned the United States, and was used
by the government
to share computer resources across the continent. During
the early
1980s, the TCP/IP protocol family was developed, and
made generally
available through the University of California at Berkeley.
TCP/IP
made it easier for many organizations, such as universities,
to connect
to the ARPANET. In just a few years, ARPANET grew from
connecting
relatively few machines to become the backbone of a
large number of
local networks. And the Internet was born. In the 1980s
the ARPA network
experiment was terminated by DARPA, and NSFNET, provided by the National Science Foundation, took over. Today,
the Internet
is made up of many wide area networks, such as NSFNET,
and in fact
covers the entire world.
You often emphasize in your writing and talks the need
for system administrators to be able to deal with people.
Are there
any books or tutorials you can recommend to help me
with this?
Unfortunately, I know of no books or courses that address
these issues. Over the years, I've learned from practical
experience
and mistakes made in the process. The books I've found
useful have
been various books on management, even though the form
of management
practiced by a system administrator in dealing with
users and daily
management is in a somewhat different category. If I
had to recommend
one book, it would probably be Tom DeMarco and Timothy Lister's Peopleware. This
book deals mainly with software development, but it
also contains
a good deal of common sense on how to work with people.
Changes have been slow in coming about. In the beginning,
when I encouraged
people to work on this area, many administrators were
still completely
caught up in the technical issues of the profession.
Now a number
of people have told me they are excited about the theme
of the next
LISA conference, and in just the last few weeks, an
additional SAGE
working group has been created. The new group, sage-managers,
will
explore how, from the system administrator's perspective,
to manage
management.
Your best bet, however, is still to use common sense,
and to be able
to listen to users' and management's wishes and requirements.
In articles on USENET, I often see references to something
called a "firewall." It seems to be something
you need when
you are connecting to the Internet. Could you explain
a bit more?
A firewall is a security tool that you can use to protect
your site from unwanted access from the Internet. Strictly
speaking,
it is not necessary, but it is very much recommended.
The firewall serves two purposes. One is to give you
a very high degree
of control over who and what can access your site; the
second is to concentrate that control at the single point where your site connects to the
Internet. Your site is then rather like a shellfish,
hard on the outside
and soft on the inside.
The firewall itself consists of two items, a router and
a computer,
the latter often referred to as the gateway (as it is
the single point
of access to the Internet). The router, sometimes also
called the
choke, must be set up to ensure that any network packet it lets through comes from, or is destined to, the gateway machine.
This makes
it impossible to connect to or from any machine on your network other than the gateway. In turn, the gateway must be
set up to forward
any packet to or from the Internet in a reasonable
manner -- otherwise,
users will have to log on to the gateway machine itself,
which could
all too easily compromise the security of both the gateway
and your
system. The problems this strategy creates for e-mail
can fairly easily
be resolved through use of aliases and MX records. Problems
with other
services, like ftp, are more difficult, but can be resolved
through
use of a proxy mechanism, such as SOCKS (written by
Michelle and David
Koblas). SOCKS is available by anonymous ftp from s1.gov.
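The choke's forwarding rule can be sketched as a small predicate. This is only an illustration of the logic, not a real router configuration, and the addresses are made-up examples:

```python
# Sketch of the "choke" router's filtering decision: a packet is
# forwarded only if the gateway machine is one of its endpoints.
# The gateway address below is hypothetical.
GATEWAY = "192.0.2.1"

def choke_allows(src, dst):
    """Default-deny filter: pass a packet only if it originates
    from or is destined to the gateway host."""
    return GATEWAY in (src, dst)

# Traffic between the gateway and any host is forwarded...
assert choke_allows("192.0.2.1", "198.51.100.7")
# ...but an outside host can never reach an internal machine directly.
assert not choke_allows("198.51.100.7", "192.0.2.25")
```

Because the rule names only the gateway, adding or removing internal machines never requires touching the router's configuration.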
In the setup described above, the router must be configured to reject all traffic, except that which fulfills certain requirements. It is also
It is also
possible to set up a firewall with a different filtering
mechanism,
where traffic is let through by default and only specific kinds of traffic are denied. I believe this approach to be somewhat less
secure, however,
and a lot more difficult to make functional.
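The difference between the two policies can be made concrete with a short sketch (the port lists are hypothetical; a real router's filter language looks quite different):

```python
# A filter is a default policy plus a list of exception ports.
# Under default-deny, anything not explicitly permitted is dropped;
# under default-allow, anything not explicitly denied gets through.
def filter_packet(port, default_allow, exceptions):
    """Return True if a packet on this port is forwarded."""
    if port in exceptions:
        return not default_allow  # an exception inverts the default
    return default_allow

# Default-deny: only ports on the permit list pass.
PERMIT = {25, 53}                              # SMTP and DNS, say
assert filter_packet(25, False, PERMIT)        # mail gets through
assert not filter_packet(2049, False, PERMIT)  # NFS is blocked

# Default-allow: a service nobody thought to list slips through.
DENY = {2049}                                  # NFS was remembered
assert not filter_packet(2049, True, DENY)
assert filter_packet(6000, True, DENY)         # X11 passes unnoticed
```

The last line shows why default-allow is riskier: every new or forgotten service is exposed until someone adds it to the deny list.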
The above explanation is somewhat simplified due to
space constraints.
For more information, I recommend two good books from
O'Reilly and
Associates, both of which can be helpful in setting
up a firewall.
One is Practical UNIX Security, by Simson Garfinkel
and
Gene Spafford; the second is DNS and BIND, by Paul Albitz
and
Cricket Liu.
About the Author
Bjorn Satdeva -- email: bjorn@sysadmin.com
/sys/admin, inc. -- The Unix System Management Experts -- (408) 241 3111
Send requests for the SysAdmin mailing list to sysadm-list-request@sysadmin.com