Migrating to an Automounted Network Filesystem
Jay Allen
Why use the network file system automounter? Users like
it because their
home directories follow them to all the machines they
log in to.
Developers like it because source code is always stored
in the same
place. System administrators like it because the whole
thing is actually
easier to take care of than tens-to-hundreds of hand-built
/etc/fstab
files. If the network filesystem is your friend in systems
administration, the NFS automounter is your best friend.
Automounter
gives you everything you need plus more network bandwidth,
because it's
also the auto-un-mounter. In this article I'll address
the common
problems, and offer a few suggested solutions for transitioning
your
network filesystem from a static, labor-intensive setup
to a centrally
managed high-performance automounted scheme. (If your
system doesn't
include automounter, see the sidebar "An Automounter
Replacement.")
The transition to a fully automounted network filesystem
can be divided
into several component parts. First, you need to make
decisions about an
administrative information system. This system is what
will allow you to
make changes in as few as three files, but have the
contents of those
files magically transported all over your network. Next,
you will want
to take your first pass at making a few automounter
maps. These first
maps should be made without changing the NFS file servers.
You can use
your experience in making these first maps and testing
the results to
make decisions about how to organize the NFS file server
to maximize the
benefit of automounting. Last, you will spend some time
updating the
client systems and verifying the installation. Iterate
through this
cycle of map changes, NFS file server organization,
and client
verification as needed. At some point your automounter
"transition" will
turn into infrequent maintenance.
How Will You Distribute Automount Maps?
If you have not already done so, one of the first things
you'll need to
do is choose a system to propagate user and file system
information.
This administrative data includes /etc/passwd, /etc/group,
/etc/netgroup, and the automounter maps. One of the
primary goals of an
information system is to reduce the work required to
change NFS mount
points. You want to edit as few files as possible, on
as few machines as
possible. Without such a system, you would have to edit
at least one
file (/etc/fstab) on every machine each time you wanted to make a change to your NFS environment. Ideally, your maps could be maintained in a
few files, on one host. The traditional, and most common,
solution is to
use the Network Information System (NIS). NIS was originally
introduced
by Sun Microsystems, and is now supported by most major
UNIX vendors.
The automounter was also originally written by Sun,
and is designed to
run using NIS. As of this writing, DEC's Ultrix, IBM's
AIX3/AIX4, and
HP's HP-UX all ship with NIS and the automounter. As
you might suspect,
all Sun products include support for NIS and the automounter
as well.
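For example, on a stock SunOS or Solaris NIS master, the usual cycle is to edit the map source file and then rebuild; the exact Makefile targets vary from vendor to vendor, so treat this as a sketch:

master# cd /var/yp
master# make
client% ypcat -k auto.home

The make step rebuilds and pushes (via yppush) any maps whose source files have changed, and ypcat -k on a client is a quick way to confirm that the new entries are visible.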
Some companies have chosen to avoid NIS because of its
reputation as a
security risk. If you have security needs that preclude
using NIS (if,
for instance, your corporate security officer forbids
it) and if you
have a homogeneous Solaris environment, you might be
able to meet your
automounter needs (and mollify the security department)
by using NIS+.
NIS+ can use secure RPC and DES credentials to protect
administrative
data from the kind of casual abuses that NIS is known
for. NIS+ is Sun's
updated version of NIS, with many management and security
enhancements.
It has a very rich feature set, which gives maximum
flexibility but can
be tough to get going. NIS+ has very little in common
with NIS, so if
you plan to "upgrade," go easy, and do a lot
of reading and testing. You
can also get training from the folks who wrote it (in
my opinion, Sun
offers very good training in the management of NIS+).
One thing that has
been missing is support for the NIS+ system from other
UNIX vendors. It
was only recently -- August 1995 -- that SunSoft issued a press release announcing that many of its business partners have licensed the ONC+ technology. (NIS+ is part of a product bundle that SunSoft
calls ONC+.)
Hewlett-Packard, IBM, and Sequent representatives are
all quoted as
planning to include support for NIS+ in future releases
of their
respective UNIX products (HP-UX, AIX, and Dynix/PTX).
A few of the less common ways to distribute the automounter
maps, and
other administrative data, include using rdist, SPM (/etc/passwd|group
only), Carnegie Mellon's SUP, Project ATHENA's HESIOD,
and MIT's
Kerberos5. Combinations of these programs can be used
to push the map
files out to clients, manage user information, and do
it all with
security. If using noncommercial products is a problem,
there are also
several third-party commercial products for managing
problems like this
(Tivoli, and friends). In any case, your goal should
be a system that
allows you to make changes easily and quickly. My test:
can you explain
how it works without a whiteboard? In an email? If the
answer is no,
then you should probably rethink your solution. (For
more suggestions
about NIS alternatives see the sidebar "I Can't
Run NIS or NIS+").
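If you do end up pushing flat files around with a tool like rdist, the Distfile can be short and readable. A hypothetical example that copies local map files to three clients (hostnames and paths are made up):

HOSTS = ( client1 client2 client3 )
MAPS = ( /etc/auto.master /etc/auto.home /etc/auto.direct )

${MAPS} -> ${HOSTS}
        install ;
        notify root ;

Running rdist on the master host then updates only the clients whose copies are out of date.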
Mapping Your Network Filesystems
Automounter uses maps to locate the appropriate remote
NFS file server,
exported filesystem, and mount options. These maps can
be text files or
NIS/NIS+-derived tables. In general, they are constructed
with tuples of
"key value." For example, to map my local
/cdrom directory to the
jukebox CD-ROM server megaCD, I might have a map entry
like this:
/cdrom megaCD:/jukebox/root
This is called a "direct" map because I have
mapped it explicitly
relative to the local root directory "/".
With "indirect" maps, the
automounter manages a single mount point relative to
root, like /pkgs.
The map entries in an indirect map are all relative
to the
automounter-managed mount point and do not include a
reference to the
root directory. To mount my CD-ROM jukebox via an indirect
map, I might
have an entry like:
cdrom megaCD:/jukebox/root
This indirect entry would be located in an indirect map for the /pkgs mount point, so
I would refer to the jukebox via the local path /pkgs/cdrom.
User home
directories are usually mapped with indirect semantics.
Let's back up a bit, and talk about the master automount
map.
Traditionally, this is a file/NIS table called auto.master
(auto_master
on Solaris). auto.master is a map of maps, and does
not contain any
filesystem entries. This is how you tell automounter
where to get maps
(NIS or local files) and which maps are direct or indirect.
An example
of a reference to a direct map is:
/- auto.direct
Most automounters use the special entry "/-"
to indicate a direct map. In this example the auto.direct map would
contain tuples
that all start relative to "/". Because the
reference to auto.direct
does not contain a filesystem path, automounter assumes
that the map can
be found via NIS. The most common indirect map for home
directories,
/homes auto.home
is also derived via NIS. Because it is an indirect map,
none of the
tuples in auto.home includes pathnames relative to root.
So, if the
master automount file has the indirect map entry
/homes auto.home
then a user account entry in auto.home would be
jay homeserver:/export/home/jay
In this case the user's path would be /homes/jay.
Lastly, if you need a special map that does not come
from NIS or you are
not running NIS, you can specify the filesystem path
for the map in the
auto.master entry. For example, if I want to use the
local file
/etc/auto.direct, and not the auto.direct from NIS,
my auto.master entry
would be:
/- /etc/auto.direct
Remember, though, that if the number of hosts with local files
starts to
increase, you may be defeating your central management
scheme. Spend the
time to work around problems -- avoid solutions that
might be expedient,
but costly to maintain long-term. (See Figure 1 for
example automounter
files.)
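For reference, here is a minimal set of maps assembled from the examples above, using the same hypothetical server names; a real installation will have many more entries:

auto.master (the map of maps):
/-      auto.direct
/homes  auto.home

auto.direct (direct map, paths relative to /):
/cdrom  megaCD:/jukebox/root

auto.home (indirect map, keys relative to /homes):
jay     homeserver:/export/home/jay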
Organize the NFS Servers
The most challenging part of a transition to fully automounted
NFS is
the re-organization of the NFS server(s). Like many
aspects of UNIX
administration, this is best done in an iterative fashion.
Make a few
simple maps that use your existing NFS layout. Then
revisit it, looking
for opportunities to consolidate and organize the server.
At Portland
State University, the CS department encourages the use
of packages. The
idea is that shared software is compiled/installed in
such a way that it
can be localized into one directory. For instance, a
program's runtime
binaries, libraries, and configuration files could all
be stored in one
directory, instead of the traditional /usr/local/bin,
/usr/local/lib/,
and ?? or whatever. In the context of the automounter,
this becomes a
great asset. Since everything that is needed to run
a program is
clustered into one directory hierarchy, it is exported
via NFS as a
single mount point. Clients that need the program simply
reference that
one directory, and they have the whole suite of tools.
This is the
easiest way to have several versions of the same tools
running at the
same time, and can be a great aid for making software
upgrades. (Take a
look at the sidebar "The Package Mounting Scheme"
for more on the
PSU-package scheme.)
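As a hypothetical illustration, suppose the GNU tools are installed so that everything lives under one directory on the NFS server:

/volume/sun4/gnu/bin
/volume/sun4/gnu/lib
/volume/sun4/gnu/man

A single entry in an indirect map mounted on /pkgs (call it auto.pkgs) makes the whole suite available:

gnu     nfserver:/volume/sun4/gnu

Clients then find the tools under /pkgs/gnu/bin, and installing a new release is a matter of building it into a new directory and adding one more map entry.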
You also want to think about serving up NFS for different
platforms. The
basic promise of NFS is that the underlying "real"
filesystem is not
important, so your AIX boxes will be happy using Solaris
NFS. In our
shop, we have a few large NFS servers running Solaris,
lots of Solaris
clients, a few RS/6000 clients, and a few SunOS clients.
As a first cut
we considered NFS layouts like /volume/rs6000, /volume/sun/solaris,
and /volume/sun/sunos. Because we also have different versions
of operating
systems (which may or may not be binary compatible),
we ended up with
/volume/rs6000/aix3, /volume/rs6000/aix4, /volume/sun/solaris24,
/volume/sun/solaris23, and /volume/sun/sunos413. You
get the idea. This
kind of structure is very important when you start creating
maps with
variable substitution.
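On a Solaris NFS server, exporting a layout like this might look roughly as follows in /etc/dfs/dfstab (the read-only option is just an example; use whatever matches your site policy):

share -F nfs -o ro /volume/rs6000/aix3
share -F nfs -o ro /volume/rs6000/aix4
share -F nfs -o ro /volume/sun/solaris23
share -F nfs -o ro /volume/sun/solaris24
share -F nfs -o ro /volume/sun/sunos413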
Choose mount points carefully. Mounting a src tree for
Solaris under
/opt/src might make sense, but it might look strange
to have an /opt
directory for the corresponding AIX src tree. I recommend
that you
choose mount points that don't already exist on any
of your machines. In
our shop, we use /homes for the home directories, /pkgs
for indirect
maps that can be package-ized, and /admin for administrative
data.
One of the most powerful features of automounter maps
is their use of
variable substitution. This feature is what allows a
single NFS machine
to serve up common executables to different hardware
types, and even
different operating systems. There are two different
types of variable,
"built-in" and "command line." In
the automount map, you can add $ARCH,
which expands to the output of the arch command. So, if you want
to export gnu tar
to both Sparcstations and RS/6000s, you might have an
NFS server layout
like this: /volume/rs6000/gnu/bin and /volume/sun4/gnu/bin.
The automounter
map would include a line like:
/pkg/gnu nfserver:/volume/$ARCH/gnu/bin
So from the client's perspective, the gnu tar could
be found in
/pkg/gnu/tar, and it would be in the same location whether
you logged
into a Sparc or an RS6000. All the automounters I have
seen have the
ability to internally interpret the $ARCH variable.
Solaris offers the
widest array of internal variables including $OS, $CPU,
$OSREL, and
others. If your UNIX does not have the variety of variables
that Solaris
does, you can still build flexible maps using the second
type of
automounter variable, command-line variables. Automounter
lets you
define any arbitrary variable and value at runtime.
To do this, add
-D var=value
to the command line of your automount daemon. This variable
then becomes
part of the automounter's environment and can be used
in your maps. At
my company we use the $ARCH and $OS variables for Solaris
client maps;
unfortunately, when the AIX boxes started to be integrated,
we found
that AIX's automounter does not support $OS internally.
We already had a
big investment in the ARCH/OS/package layout on the
NFS servers, so
instead of changing the NFS server to meet the needs
of the clients, we
manually define the $OS variable on the command line
as automounter
starts up (in the rc.nfs script).
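On the AIX clients, that amounts to one extra flag when the daemon is started; the line in /etc/rc.nfs might look something like this (the exact path and options vary by AIX release):

/usr/sbin/automount -D OS=aix4

Anywhere $OS appears in a map, the automounter now substitutes "aix4," so the same ARCH/OS/package maps work on both platforms.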
Switching All the Clients
Finally, we have the technically easy but tedious job
of reconfiguring
all the clients. Keep this inevitable part of the job
in mind as you
design your NFS server layout and put together the maps.
Ideally, you
can find a way to make changes on the server side that
allow the old
clients to continue as always, but also support the
new automounted
clients. That way you can make the transition at your
own pace. On the
other hand, you can use the John Wayne technique: do
the whole thing in
one step. Pick a day, a long day, when you can reboot
all the affected
hosts. Plan on having to do a lot of hand editing, and
give yourself
plenty of time. Depending on the complexity of your
environment (and how
much money is at stake if you break it), you may need
to have other
personnel on hand to verify that your changes don't
clobber their
production. In my case, I help support a complex electronic
publishing
system from Xerox. I'm not intimately familiar with
all the details, but
I do know that I provide NFS services to some higher
level file service
(GlobalView from Xerox). I don't personally understand
it, but when I
have to make changes to NFS and the automounter maps,
and everything
continues to run, we all shake hands and smile like
maniacs.
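A quick sanity check on a converted client can be as simple as referencing an automounted path and seeing where it landed (using the example home map from earlier):

client% cd /homes/jay
client% df .

The first command triggers the mount; the second should report homeserver:/export/home/jay as the mounted filesystem.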
The easiest way to "transition" machines is
to install them that way. By
using a standardized install process for new machines,
you can make
automounter schemes part of your company's computer
installation
process. As new clients or servers arrive, spend extra
time on getting
them to use automounter. Once they use automount maps,
you can change
the contents later. The hard part is getting them on
automounter at all.
In our shop, we use the "jumpstart" process
to create new Solaris
systems. It's a great way to install the OS, and apply
your policies, in
a consistent manner.
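As one hypothetical example, a JumpStart finish script can drop the site automounter configuration onto each newly built system (during a custom JumpStart the install target is mounted on /a, and SI_CONFIG_DIR points at the configuration directory; the file names are ours):

cp ${SI_CONFIG_DIR}/auto_master /a/etc/auto_master
cp ${SI_CONFIG_DIR}/nsswitch.conf /a/etc/nsswitch.conf

After the first reboot the machine comes up reading the NIS-distributed maps like every other client.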
About the Author
Jay D. Allen received a BS in Chemistry in 1991 from
Portland State
University. He is currently Lead Systems Engineer at
Blue Cross Blue
Shield of Oregon. You can contact him at jay@fork.com
or via his home
page at http://www.teleport.com/~poppy/cv.html.