Questions and Answers

Amy Rich

Q I recently had one of my machines panic and spit out the following message. I have the root disk mirrored with DiskSuite, which I assume has something to do with the md output:

savecore: [ID 570001 auth.error] reboot after panic: md: state database problem
A The md: state database problem message probably indicates that something happened to one of the mirrored disks, and there weren't enough remaining metadevice state database replicas to reach a majority consensus. The majority consensus algorithm is used because there is no inherent way to determine which metadevice state database replicas contain reliable data. With a majority consensus, a majority of the database replicas must agree with each other before any of them are declared valid. This is why you must create at least three metadevice state database replicas before configuring any metadevice (stripe, concat, or mirror).

When building two-way mirrors with DiskSuite, however, be sure that each physical disk has at least two metadevice state database replicas on it, rather than just three replicas total. If one of the disks dies, the surviving disk must hold at least 50% of the replicas for the configuration to be recoverable. If the surviving disk holds less than 50% of the replicas, there is a chance that all metadevice configuration information will be lost.

If you're creating the databases by hand, do the following (assuming you're mirroring things on /dev/rdsk/c0t0d0 and /dev/rdsk/c1t0d0, keeping your metadevice state databases on slice 7 on both disks):

/usr/sbin/metadb -a -c 2 /dev/rdsk/c0t0d0s7 /dev/rdsk/c1t0d0s7
The -a flag tells /usr/sbin/metadb to add the metadevice state database replicas to the specified slice, and the -c tells /usr/sbin/metadb how many replicas to put on each slice.

Add all the replicas to a slice at once. If you've already created your metadevice state databases and you put the wrong number on each slice, you first need to clear them, and then re-add them. I suggest leaving the slice containing the master replica for last. You can determine which slice that is by running /usr/sbin/metadb -i. The output will look like the following for our example case:

        flags         first blk     block count
     a m  pc luo      16            1034            /dev/dsk/c0t0d0s7
     a    pc luo      16            1034            /dev/dsk/c1t0d0s7
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/lvm/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors
Therefore, you want to leave the one with the m in the flags section for last:

/usr/sbin/metadb -d /dev/rdsk/c1t0d0s7
/usr/sbin/metadb -a -c 2 /dev/rdsk/c1t0d0s7
/usr/sbin/metadb -d /dev/rdsk/c0t0d0s7
/usr/sbin/metadb -a -c 2 /dev/rdsk/c0t0d0s7
Q One of our database servers uses Sendmail to transfer results to another system via a text attachment. I've noticed that a number of these mail messages are rejected because a spurious "!" character appears approximately 990 characters into the line of text data. It looks like it's treating this as some sort of new-line delimiter. What's putting this character in my reports, and how can I get rid of it?

A The "!" in your messages is because you have reached the maximum line length that Sendmail will allow. The SMTP specification (RFC 2821: http://www.rfc-editor.org/rfc/rfc2821.txt) is 1000 characters per line and Sendmail leaves a few characters of extra padding in there. You could try increasing the size (L= in the mailer definition) if you control all machines that the mail will go through, but I wouldn't really recommend it. It may be dangerous to increase the limit, because you may create a buffer overflow. It's expected that the max line limit will adhere to spec.

You can get around this by breaking your message into multiple lines, or by encoding the message (even though it's already ASCII) with something like uuencode and then decoding it on the receiving end. You can also use split to break the message up into multiple chunks.
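
As a rough sketch (the file names, subject, and address here are only placeholders), the uuencode approach might look like this on the sending side:

uuencode results.txt results.txt | mailx -s "database results" user@remotehost

On the receiving side, save the message body to a file and decode it:

uudecode saved-message.txt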

Q I have a dynamically addressed cable modem at home on which I want to run some services (Web, mail, etc.). I can't run an externally visible DNS server on the machine because the IP is dynamic, but I do run various DNS servers (BIND 8) connected to the Net that have static IPs. I also run an internal DNS server (BIND 8) on the machine attached to the cable modem.

I want to run my own dynamic DNS service (as opposed to one of the public ones) so that I can keep my home IP address tied to a hostname. I've seen lots of information on how to use other sites' dyndns services, but how do I set up my own?

A My guess is that you want to set up your own dyndns service for security purposes and so that you can control the zone files. There are a few things you need to do to set up your own secure dyndns server.

On the client (your home machine), you'll need to:

1. Generate a TSIG key.

2. Set up a script that will notify the master name server via nsupdate when your IP changes (assuming you're using DHCP).

On the master nameserver, you'll need to:

1. Create a key entry in your named.conf.

2. Add an allow-update entry for your zone in named.conf.

3. Confirm that the TTL in your zone file for your dynamically addressed hosts is very low.

To create the TSIG key on the client machine, you can run the following command (as root), substituting whatever name you want for <keyname>:

dnskeygen -H 128 -h -n <keyname>
This creates two files -- K<keyname>.+157+00000.key and K<keyname>.+157+00000.private. The base64-encoded key included in these files is what you'll later want to put in the key directive in your named.conf file on your nameserver. I suggest putting these two files in with your zone records on the client machine.

For our example, say that the files are called /etc/namedb/tsig/Kbob.+157+00000.key, which has the following contents:

bob. IN KEY 513 3 157 SzFhe0tSf/s5BAlCn0IWzw==
and /etc/namedb/tsig/Kbob.+157+00000.private, which contains:

Private-key-format: v1.2
Algorithm: 157 (HMAC)
Key: SzFhe0tSf/s5BAlCn0IWzw==
On the master nameserver that'll be handling the externally visible zones, make the following entry in named.conf, using the base64-encoded key from the files you generated on the client:

key bob {
    algorithm HMAC-MD5.SIG-ALG.REG.INT;
    secret "SzFhe0tSf/s5BAlCn0IWzw==";
};
Add the following to any zones that will be updated by the client:

allow-update {
        key bob;
};
Be sure to HUP named on the master nameserver after making the changes to named.conf.
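
With BIND 8, that typically means something like one of the following (the pid file location varies by platform, so check where your named writes it):

ndc reload
kill -HUP `cat /var/run/named.pid`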

Your master nameserver is now ready to accept dynamic updates from your client. To send an update, use the nsupdate command on the client. To do an interactive update, do:

nsupdate -k /etc/namedb/tsig:bob.
which will bring you to a ">" prompt. Say, for example, that you want to add a record pointing mail.yourdomain.com at the IP of your cable modem with a TTL of 10 minutes (600 seconds). The example uses the loopback address, 127.0.0.1, but be sure to put the correct address in when you're making your actual changes. Do the following:

update add mail.yourdomain.com. 600 A 127.0.0.1
Hit return again, on an empty line, so that nsupdate sends the update you entered on the previous line. You can enter multiple lines of changes destined for the same zone file before you hit return the second time, but if you want to make changes to two different zone files, you need to send them as two separate blocks.
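
For example, adding two records to the same zone in a single block might look like this (www.yourdomain.com is just a second hypothetical record; the blank line at the last prompt is what sends the block to the server):

> update add mail.yourdomain.com. 600 A 127.0.0.1
> update add www.yourdomain.com. 600 A 127.0.0.1
>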

After you hit return a second time, a lookup is performed on yourdomain.com to extract the nameserver information, and then that nameserver is contacted and the key authentication takes place. On the nameserver, you should now see a log file appear wherever you keep your zone file. The log file will be called <zone file>.log, and it contains the directives to make the changes you specified with the nsupdate command. Eventually, these changes will be rolled into the actual zone file and the log file will be removed. For the time being, however, these entries are merely in named's cache. See the nsupdate manpage for more information on the nsupdate syntax and usage.
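
To automate step 2 on the client, you can wrap nsupdate in a small script run from cron or from your DHCP client's exit hooks. Here is a minimal sketch; the interface name, hostname, key location, cache file, and the ifconfig parsing are all assumptions you'll need to adjust for your own machine:

#!/bin/sh
# Hypothetical client-side update script -- adjust names and paths for your site.
IF=hme0                       # interface attached to the cable modem (assumption)
HOST=mail.yourdomain.com.     # record to keep pointed at the dynamic IP
KEYDIR=/etc/namedb/tsig       # directory holding the K<keyname> files
CACHE=/var/tmp/last-ip        # where we remember the last address we sent

# Pull the current address off the interface; the awk field to print
# depends on your OS's ifconfig output format.
CUR=`/usr/sbin/ifconfig $IF | awk '/inet / { print $2 }'`
OLD=`cat $CACHE 2>/dev/null`

if [ "$CUR" != "$OLD" ]; then
    nsupdate -k ${KEYDIR}:bob. <<EOF
update delete $HOST A
update add $HOST 600 A $CUR

EOF
    echo "$CUR" > $CACHE
fi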

Q I'm running Solaris 8 on two E220Rs. One of the hosts has a tape drive, and the other one is using the remote tape drive to back up data. Instead of using rsh, I want to send my backups over the pipe with OpenSSH, but performance is miserable. Should I be doing hardware compression on the tape drive, or should I be doing software compression? Should it be on the client or the tape server?

A First, whether or not you should do hardware or software compression depends on where your bottleneck is (client, network, or tape). I'd suggest doing a few benchmarking tests before deciding where to add the compression. Here are some examples:

The absolute minimum time it will take to do a full dump of /:

/usr/bin/time /usr/sbin/ufsdump 0cf /dev/null /
The minimum time it takes to do a dump of / with software compression (gzip):

/usr/bin/time /usr/sbin/ufsdump 0cf - / | gzip -1 -c > /dev/null
You can also trade compression speed against the size of the compressed dump: gzip -1 is the fastest, and -9 gives the best compression.

Also be aware that the ssh encryption will add some overhead. Do some benchmarking to see how much time delay you're experiencing due to ssh encryption with and without software compression. You may want to test with ssh compression (-C) enabled and disabled, too. When you enable ssh, use blowfish encryption (-c blowfish) for a significant speed increase.
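
For example, you might time the dump over ssh with the data thrown away on the far side, both with and without gzip in the pipeline (tapehost being your tape server):

/usr/bin/time sh -c "/usr/sbin/ufsdump 0cf - / | ssh -c blowfish tapehost 'cat > /dev/null'"
/usr/bin/time sh -c "/usr/sbin/ufsdump 0cf - / | gzip -1 -c | ssh -c blowfish tapehost 'cat > /dev/null'"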

Now, benchmark the speed of your tape drive by timing a local dump to the tape device. If you're using dd, you may also want to benchmark with different block sizes to optimize for your tape drive.
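
For example, using the non-rewinding tape device (the block sizes here are just starting points):

/usr/bin/time /usr/sbin/ufsdump 0cf /dev/rmt/0n /
/usr/sbin/ufsdump 0cf - / | /usr/bin/time dd of=/dev/rmt/0n obs=32k
/usr/sbin/ufsdump 0cf - / | /usr/bin/time dd of=/dev/rmt/0n obs=64k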

Measure your client overhead and tape benchmarks against the speed of your network. Where is your bottleneck? Are you saving anything by compressing on the fly? If so, when you put it all together you'll probably end up with something like this (where backup is the remote user on the tape host that has access to the tape drive):

/usr/sbin/ufsdump 0cf - / | gzip -1 -c | ssh -c blowfish -l backup tapehost 'dd of=/dev/rmt/0n obs=32k'
If you decide to use gzip compression, make sure that you have a copy of gzip handy for any restores that you might have to do. If you decide not to use gzip compression, you'd want to take out the gzip section of the pipeline and add in hardware compression by changing the tape device to /dev/rmt/0cn.
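
For the restore, you'd essentially reverse the pipeline. A rough sketch (run from the top of the filesystem you're restoring into, with the same backup user and tapehost as above) would be:

ssh -c blowfish -l backup tapehost 'dd if=/dev/rmt/0n bs=32k' | gzip -d -c | ufsrestore rf -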

For ssh authentication, you can use role-based keys, .shosts, or initiate the dump from the tape server instead of the client, among other options. The authentication method you choose will depend on your site security policy.

Q I have Solaris 8 machines in a production environment. When new recommended patches come out, we want to apply them to the machine as soon as possible. Is there any automated way to download all of the new recommended patches and apply them automatically? We do have a SunSolve account, if that matters.

A Let me first point out that you probably do not want to automate your patching to that degree, especially on production machines. What if a patch is bad, you don't have enough space to keep the backout data, the patch needs to be applied while the system is quiet, or you have software that conflicts with the patch? Automatic patching can be a very dangerous thing and can wind up costing you downtime.

That having been said, there is a Sun-supplied tool written in Perl that checks your system and automatically creates a list of patches you need based on a patch cross-reference file (patchdiag.xref) from Sun's Web site, which includes all of the current patch revisions. The old version of this tool was called patchdiag, and the newer version is called Patch Check. Patch Check and the patchdiag.xref file are available from:

http://sunsolve.sun.com/pub-cgi/show.pl?target=patchk
Patch Check can output its results in either text or HTML. The HTML page it generates has an option to create and download a tar file from sunsolve.sun.com if you want to do things interactively. Patch Check does not automatically download and install patches, however.

If you want to do automated downloads, you can wrap Patch Check with your own script that downloads the latest patchdiag.xref file, runs Patch Check, and then parses the output file and grabs the patches individually from sunsolve.sun.com.
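
As a very rough sketch of such a wrapper (the download URLs, the wget and Patch Check locations, and the patchk invocation are all assumptions; check the Patch Check documentation and your SunSolve access before relying on anything like this):

#!/bin/sh
# Hypothetical wrapper sketch: fetch patchdiag.xref, run Patch Check, then
# pull down each patch the report lists. All URLs and paths are assumptions.
WORKDIR=/var/tmp/patchcheck
mkdir -p $WORKDIR && cd $WORKDIR || exit 1

# 1. Grab the latest cross-reference file (URL is an assumption).
/usr/local/bin/wget -q http://sunsolve.sun.com/patchdiag.xref

# 2. Run Patch Check against it (assumed invocation -- check the real flags).
perl /usr/local/bin/patchk.pl > report.txt

# 3. Pull the patch IDs (nnnnnn-nn) out of the report and fetch each one
#    (the patch download URL is also an assumption).
nawk '{ for (i = 1; i <= NF; i++)
          if ($i ~ /^[0-9][0-9][0-9][0-9][0-9][0-9]-[0-9][0-9]$/) print $i }' report.txt |
    sort -u | while read p; do
        /usr/local/bin/wget -q "http://sunsolve.sun.com/patches/$p.zip"
    done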

A completely automated patch tool from someplace other than Sun is PatchReport, available at:

http://www.cs.duke.edu/~wjs/pr.html
PatchReport is also written in Perl and requires that you have the following Perl modules installed: libnet, Data-Dumper, MD5, libwww-perl, and IO. PatchReport operates in two main modes. The first mode makes use of the Recommended patch list available to the public from http://sunsolve.sun.com/pub-cgi/us/pubpatchpage.pl. PatchReport will retrieve the recommended patch list and compare it to the output of showrev -p. If invoked with the -r flag, PatchReport will retrieve the needed patches.

The second mode of PatchReport, available only to Sun customers with a support contract and a SunSolve account, will retrieve the patchdiag.xref file. In this mode, PatchReport will compare the installed patches to the current patch revisions and report the patches that can be added. If invoked with the -r switch, PatchReport will download the patches, checking the CHECKSUMS file to make sure that the MD5 checksums match. If invoked with the -i flag, PatchReport will install the patches, skipping any patches that failed the checksum test.

Amy Rich, president of the Boston-based Oceanwave Consulting, Inc. (http://www.oceanwave.com), has been a UNIX systems administrator for more than five years. She received a BSCS at Worcester Polytechnic Institute, and can be reached at: arr@oceanwave.com.