Questions and Answers
I have been placed in a position where I must manage security for our computer systems, but I do not really know where to begin. Do you have any suggestions?
First of all, stay calm :-). While it sounds like you have a large learning curve ahead of you, keeping your common sense is certainly a good place to start. Next, contact your vendor(s), and get a list of the security patches they recommend installing. This will create a baseline and give you some breathing room while you learn more about these topics. It will also close many old and well-known holes in your systems, without requiring you to have a lot of knowledge about security issues.
Second, get all the CERT advisories relevant to the types of systems used at your site, and read them. They will give you some idea of the types of problems you need to defend yourself against.
Third, get a good book on security. I like Practical UNIX & Internet Security by Simson Garfinkel and Gene Spafford from O'Reilly & Associates, and Linda McCarthy's Intranet Security (Sun Books). The O'Reilly book will provide an overall understanding of security, although the book is huge. Intranet Security is filled with case studies, which will provide insight not only into technical issues, but also into many people and policy issues. This book might also give you some good arguments to use when discussing security with your users or your managers.
Fourth, you can take some courses. If you are able to attend the SANS conference in April, in Monterey, California, you should look into taking one or more of the security courses given by Matt Bishop. Further information can be obtained from SANS directly.
The Computer Security Institute (www.gocsi.com) is another possible source of information.
Finally, check the Internet for software that can be used to check or increase security. I have mentioned many such packages over the years, as have other articles in Sys Admin.
The bottom line is that you are in a position where you may need to learn a lot, but I hope you will also find that computer security is not only a fascinating field, but also one where there is always new stuff to learn. In many ways, computer security is a continuously escalating process, where the black hats are finding new ways to compromise systems, and where we must learn to defend those systems against intrusion and denial of service attacks. Go for it!
I have tried to standardize the mail programs we are using to just mailtool and elm. However, some of my users are objecting to this and want to use other mail readers. It seems to me that supporting fewer programs will increase my productivity.
From your perspective, you are absolutely right; however, from the users' perspective, you are wrong. From a practical perspective, it is good to limit the number of mail readers (or any other type of program) to just a few, but different people have different work habits. Limiting the mail readers to just the two you mentioned may be too limiting. As a compromise, find out which mail readers are in common use and see if it will be realistic to standardize on those. There may still be users who insist on using obscure mail readers. You might want to let them know that they can use it, but that it will be unsupported, so they are on their own if a problem occurs.
The most important thing is to remember that you are working with people and must make your decisions accordingly. You cannot apply a technical solution to a people problem and expect it to work (nor will the opposite work). Technical and people considerations must go hand in hand.
I am not going to give a list of mail readers that should be supported, because it depends very much on the type of users working at the site. A site with a majority of C programmers may use emacs to read mail, while a site with an overwhelming number of PC users might use Eudora or Netscape as mail readers. Only by looking at the habits of your users can you determine the set of programs you must support.
It is also important to have a list of supported software, so you don't end up supporting nearly all the software packages in the known universe. If a user requests support of an odd package that only he or she is using, and if the package is not critical for the operation of the site, it might be proper to deny such support. Know your site, know your users, and know your management, and you will be able to make good decisions in these matters.
I have been told that I should always use the resolver configuration file, even if the host I am on is running a name server. It seems to me that I will get a faster response when going directly to the name server. What is your opinion?
When everything works as it should, you might be right. However, in my experience, one of the most important natural laws governing the use of computers seems to be Murphy's Law. If anything can go wrong, it will! If for some reason your local name server stops responding, then your host will no longer be able to resolve host names and addresses.
When designing a network structure, it is always important to include as much redundancy as you practically can. In terms of name servers, this means that you should have at least two, and preferably at least one on each subnet. You get two benefits from this. Most importantly, if one or more name servers fail, it will be more or less invisible to the user, because the resolver will simply go on to query the next name server in the list.
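As an illustration, a resolv.conf along these lines (the domain name and addresses are invented for this example) lists the local name server first, with a server on another subnet as backup:

```conf
# /etc/resolv.conf -- hypothetical example
domain example.com
# local name server, tried first
nameserver 192.168.1.10
# name server on another subnet, used if the first fails
nameserver 192.168.2.10
```

With this in place, a failure of the first server costs only the initial timeout before the resolver moves on to the second.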
For BIND, this also has the benefit of decreasing the timeouts in the resolver. The exact timeout values depend on the version of BIND you are using, but if you have only a single name server, the typical values are a timeout of 5 seconds for the first query, 10 seconds for the second, then 20 seconds and 40 seconds before giving up. If you have several name servers, each timeout will be around 5 seconds, until the list has been exhausted. In all cases, the total timeout for a name server query is about 75 to 80 seconds, but this limit will be reached only if all of the name servers are out of action. If a client sends an inquiry about a nonexistent host, it will get an error reply to that effect. It will not try another name server, because (in theory) all authoritative name servers will return the same result. Due to the time lag between the update of a primary and the zone transfer to a secondary, there is a small window where changes will show up in one name server and not in another. If only one of the name servers has problems and does not respond, the actual time to receive an answer will not be 75 seconds, but the sum of the timeouts of each failed lookup plus the time to receive the response from the working name server.
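The arithmetic above can be sketched in a few lines. This is a rough model of the classic BIND resolver retry schedule, not BIND's actual code, and the base timeout and doubling behavior vary by version:

```python
# Rough model of the classic BIND resolver retry timing: a 5-second
# base timeout, four tries, the per-try budget doubling each round and
# divided among the servers in the resolv.conf list.
def total_timeout(num_servers, base=5, retries=4):
    total = 0
    for attempt in range(retries):
        # each round's budget is (base << attempt), split per server,
        # and each server in the list gets charged once per round
        per_server = max((base << attempt) // num_servers, 1)
        total += per_server * num_servers
    return total

print(total_timeout(1))  # -> 75 (5 + 10 + 20 + 40 seconds)
print(total_timeout(3))  # -> 69 (close to the 75-second figure above)
```

The point to notice is that adding servers does not lengthen the worst case much; it just spreads the same overall budget across more chances to get an answer.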
Planned redundancy is an important part of good system administration and should be applied whenever possible. Whenever there is a service that can fail and effectively leave the network inoperable, it should be duplicated if possible. Name servers of any kind (e.g., NIS or NIS+) should have slaves configured across the network. When practical, fall-back network routes should automatically take over if the primary routes fail.
Is it really necessary to have an MX record for each host? Why not just use one wildcard record and save a lot of typing?
It may save you some typing, but it is also likely to cause a lot of grief. The use of wildcards in the name server configuration should be avoided, except in the rare cases where there is a special effect you want to achieve. Generally, the only place I would use a wildcard is in the reverse zones, where I put a wildcard at the end that returns the hostname "unknown" for any IP address that has not been configured. The effect is harmless and ensures that a result is always returned on a reverse inquiry (many system administrators forget to add the reverse entry when adding a new host to the name server). If you use a wildcard for MX records, then the mail delivery agent making the lookup will think it has found a legal and functioning address. However, the mail cannot be delivered on the target host, because that host will not accept mail for hosts other than the ones for which it has been configured. In other words, instead of the remote mail delivery agent returning the mail immediately due to the lack of a name server record for the target host, it will attempt to deliver the mail. Only when the mail gets to its destination will it be rejected.
It takes very little extra work to enter an MX record when you add the A record for a host; in fact, I like to keep these records together in one place. The time savings is, in my opinion, illusory. It is better to do the work once, so that it stays done, than to create problems for later.
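As a sketch (the host names, addresses, and mail hub are invented), keeping the A and MX records together in the forward zone, and putting the wildcard last in the reverse zone, might look like:

```dns
; forward zone: A and MX entered together for each host
gw       IN  A    192.168.1.1
         IN  MX   10 mailhub.example.com.
wombat   IN  A    192.168.1.2
         IN  MX   10 mailhub.example.com.

; reverse zone: explicit PTR records first, wildcard catch-all last
1        IN  PTR  gw.example.com.
2        IN  PTR  wombat.example.com.
*        IN  PTR  unknown.example.com.
```

Because each host has its own explicit MX, a lookup for a name with no records fails immediately, which is exactly the behavior the wildcard MX would have destroyed.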
Would you recommend Tcl or Perl as a scripting language?
I much prefer Perl over Tcl, simply because I am much more familiar with that language. I have written many thousands of lines of code in Perl, while my experience with Tcl has been more limited. Perl also seems a better choice to me because of its similarity to C, a real programming language, while Tcl reflects the style and capabilities of a shell interpreter. However, this is my choice, and you really need to take local conditions into account. Which language do you know best, and what environment are you most comfortable with? Is there a large body of utilities at your site written in either language? Are other people using one or the other? If questions such as these come out fairly even, then I would recommend using Perl.
Have you heard about a new and improved backup program called Pax?
Pax was developed several years ago in response to the backup war going on in the POSIX committee. The committee was split between standardizing on tar or cpio for backup. At one meeting, the tar faction would be in the majority and decide on the use of tar. At the next meeting, all the people in favor of cpio would show up and change the standard documents to have cpio as the standard. After this went on for a while, somebody came up with the idea of writing a program that could read both tar and cpio archives, but which would write new archives in a format of its own. The program was named Pax (Latin for peace). As far as I know, it has had limited use, because most people have continued to use the software they were already familiar with. Whether it is better than either tar or cpio, I cannot say. The original implementation was not very good, but I believe it was reimplemented later. For myself, I stick with tar for archiving and dump for backup.
We have a direct connection to one of our remote offices. Both the corporate office and the remote office have their own Internet connection. I would like to use this as an opportunity to have each office serve as a backup connection for the other, so mail can still get in and out in case the primary Internet connection goes down. However, I do not want to use it as a general fallback solution, as the Web traffic would saturate our link. How can I do this?
For mail, it is very simple. Define an MX record for each gateway, giving the local Internet connection a higher priority (lower number) than the remote connection. Then use this MX record when you configure sendmail to use these gateways to send out mail to remote locations.
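A sketch of the idea (the domain, gateway names, and preference values are hypothetical): the corporate office's zone lists its own gateway at the lower preference value, with the remote office's gateway as the higher-numbered backup, and the remote office does the mirror image.

```dns
; mail for the corporate office: local gateway preferred,
; remote office's gateway as backup over the private link
corp.example.com.    IN  MX  10 gw-corp.example.com.
                     IN  MX  20 gw-remote.example.com.
```

Remote mailers will try the preference-10 gateway first and fall back to the preference-20 gateway only when the first is unreachable, so the private link carries mail only during an outage.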
You will need to make sure that your sendmail is compiled with the capability to understand MX records. On some vendors' systems, the default sendmail does not understand MX records and will need to be replaced by a version that does. As you should get, compile, and use the latest version of sendmail anyway, this should not be a real issue.
About the Author
Bjorn Satdeva is the president of /sys/admin, inc., a consulting firm which specializes in large installation system administration. Bjorn is also co-founder and former president of Bay-LISA, a San Francisco Bay Area user's group for system administrators of large sites. Bjorn can be contacted at /sys/admin, inc., 2787 Moorpark Ave., San Jose, CA 95128; electronically at email@example.com; or by phone at (408) 241-3111.