Supporting Screened Hosts with BIND 9.x Views

Scott DeJong

Internally, many companies maintain their own DNS tree and keep the local DNS namespace separate from the Internet namespace. This separation is commonly known as split DNS and allows the owner of the namespace to manage internal addresses, and how those addresses are advertised to external entities, more effectively. Instead of being rigidly bound to an Internet-wide standard for internal addresses, administrators can use names like time.ikep.west and time.ikep.east internally, while externally using names like www.ikep.com and www.ikep.org. Furthermore, most companies use a proxy for all internally initiated Internet communication. This setup allows authorization, logging, and caching of things like Web traffic.

One problem with the split DNS model appears when the local resolver (the client-side piece that sends the query to a DNS server) queries for addresses. If a host asks an internal DNS server for information that is not defined internally, the DNS server will respond authoritatively with "Non-existent domain" (NXDOMAIN), and the host will not continue to look to DNS for resolution of that query. If you attempt to work around this issue by using standard forwarding at the DNS server and sending all queries for hosts in unknown domains to another DNS server, you can slow the overall performance of DNS by attempting to externally resolve every mistyped address. You also lose control over deciding who can resolve external information and who cannot. Additionally, if users can resolve an address internally but cannot ping or telnet to it, they are generally much more confused than if they had never been able to resolve it in the first place.

Views

A relatively new feature of BIND allows you to create Access Control Lists (ACLs) consisting of IP addresses and subnet masks, and to use these ACLs to determine which hosts can acquire what data from DNS. BIND 9.x performs this determination by associating an ACL with a view. In its simplest form, a view lets hosts with internal IP addresses (e.g., 192.168.0.0/16) resolve addresses that you designate as internal, while everyone else resolves only addresses you designate as external. The views feature provides the following benefits:

1. One DNS server handles resolution for both your Internet and intranet clients while maintaining a split DNS model.

2. The internal network information does not get published externally. The view blocks external clients from accessing internal addresses.

3. The internal network is forced to perform external resolution through the proxy. The view blocks external addresses from direct access. A sample of this type of configuration could look like this:

acl internals { 10.10.10.0/24; 192.168.15.0/24; };
acl externals { any; };

view "internal" {
   match-clients { internals; };
   recursion yes;
   zone "ikep.com" {
      type master;
      file "internal/db.ikep.com";
   };
};

view "external" {
   match-clients { externals; };
   recursion no;
   zone "ikep.com" {
      type master;
      file "external/db.ikep.com";
   };
};

In this example, the ACL statements define what is considered internal and what is considered external. The match-clients statement inside each view stanza then determines which view applies to which ACL. The same zone is defined inside each view statement, but each takes its data from a different source file. In the sample, you also may have noticed the recursion statement: hosts that match the internal ACL are allowed to use the DNS server recursively, but external hosts are not. Essentially, the two views act like two different DNS servers running under the same process and listening on the same port on the same box.

Real-World Configuration

If requirements are simple, the configuration in the preceding section works beautifully. Sometimes, however, real-world requirements can be more complex. Consider the following types of hosts.

Trusted hosts:

  • On the internal network
  • Separated from the Internet by a firewall
  • Require a proxy for communication with the Internet
  • Should not be able to resolve Internet addresses without a proxy
  • Should be able to resolve internal addresses

Screened hosts:

  • On a screened internal subnet
  • Separated from the internal network by a firewall
  • Separated from the Internet by a firewall
  • Do not require a proxy for communication with the Internet
  • Should be able to resolve Internet addresses without a proxy
  • Should be able to resolve internal addresses

External hosts:

  • On an external network
  • Should be able to resolve Internet addresses
  • Should not be able to resolve internal addresses

The screened hosts make this configuration more difficult. As stated before, in a simple split-DNS architecture, the screened hosts would be able to resolve either internal or external addresses, but not both. Hosts files, multiple zone files for each individual type of host, or non-split DNS could be used to meet the naming service requirements for all of the above host types; however, each of these solutions carries a lot of additional administrative overhead.

Smart Forwarding

To solve this problem, I have designed a multi-tier DNS structure consisting of multiple single-purpose BIND 9.x servers that, through the use of views, allow for decision-based forwarding of queries.

The server types are defined as follows; a minimal configuration sketch follows each definition (the complete sample files are in Listing 1).

External Authoritative -- A secondary DNS server that:

  • Holds authoritative information for external zones
  • Does not maintain a cache
  • Is not recursive
  • Only allows queries from external hosts and loopback
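
As a rough illustration, a trimmed-down named.conf for this server type might look like the sketch below. The directory path, the management server's address, and the zone name are assumptions for illustration only; "any" already covers loopback.

acl externals { any; };

options {
   directory "/var/named";            // assumed working directory
   recursion no;                      // authoritative answers only
   allow-query { externals; };        // "any" includes loopback
};

zone "ikep.com" {
   type slave;
   masters { 10.10.10.5; };           // hidden management server (assumed)
   file "external/db.ikep.com";
};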

Screened Caching -- A caching only DNS server that:

  • Does not hold authoritative information
  • Does maintain a cache
  • Is recursive
  • Only allows queries from screened hosts, trusted caching DNS, and loopback
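
A minimal sketch for this layer, assuming a screened subnet of 172.16.1.0/24 and the trusted caching servers on 192.168.15.0/24 (both placeholder networks), might look like this:

acl screened { 172.16.1.0/24; };      // assumed screened subnet
acl tcaching { 192.168.15.0/24; };    // assumed trusted caching network

options {
   directory "/var/named";
   recursion yes;                     // recurses on behalf of clients
   allow-query { screened; tcaching; 127.0.0.1; };
};

// standard Internet root hints so the server can resolve externally
zone "." {
   type hint;
   file "db.cache";
};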

Trusted Caching Decision -- A caching and forwarding DNS server with multiple views that:

  • Does not hold authoritative information
  • Does maintain a cache
  • Is recursive
  • Forwards queries to screened caching and trusted authoritative DNS
  • Only allows queries from screened hosts, internal hosts, and loopback
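
This is where the decision-based forwarding lives. A hedged sketch, reusing the placeholder networks above and assuming tns1 at 192.168.15.3 and scaching1 at 172.16.1.2, might implement the decision with forward zones inside each view:

acl internals { 10.10.10.0/24; 192.168.15.0/24; };
acl screened  { 172.16.1.0/24; };     // assumed screened subnet

view "internal" {
   match-clients { internals; };
   recursion yes;
   // trusted hosts resolve everything through the trusted
   // authoritative layer, which holds the internal root
   zone "." {
      type forward;
      forward only;
      forwarders { 192.168.15.3; };   // tns1 (assumed address)
   };
};

view "external" {
   match-clients { screened; };
   recursion yes;
   // known internal domains are answered by the trusted layer...
   zone "ikep.west" {
      type forward;
      forward only;
      forwarders { 192.168.15.3; };   // tns1 (assumed address)
   };
   // ...and everything else goes out via the screened caching layer
   zone "." {
      type forward;
      forward only;
      forwarders { 172.16.1.2; };     // scaching1 (assumed address)
   };
};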

Trusted Authoritative -- A secondary DNS server that:

  • Holds authoritative information for internal zones
  • Does not maintain a cache
  • Is not recursive
  • Holds a copy of the internal root zone
  • Only allows queries from internal hosts and loopback
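
A minimal sketch for this server, again with placeholder addresses; the key point is that it secondaries an internal root zone from the management server in place of the Internet root:

acl internals { 10.10.10.0/24; 192.168.15.0/24; };

options {
   directory "/var/named";
   recursion no;                      // authoritative answers only
   allow-query { internals; 127.0.0.1; };
};

// internal root zone, transferred from the hidden management server
zone "." {
   type slave;
   masters { 10.10.10.5; };           // management server (assumed)
   file "internal/db.root";
};

zone "ikep.west" {
   type slave;
   masters { 10.10.10.5; };
   file "internal/db.ikep.west";
};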

Management Server -- A stealth master DNS server with multiple views that:

  • Controls the named process on all other servers through rndc
  • Holds all master zones for both internal and external
  • Is not listed or advertised as a name server
  • Does not maintain a cache
  • Is not recursive
  • Only allows queries from valid name servers and loopback
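
A sketch of the stealth master follows; the key secret, server addresses, and zone list are placeholders, and the zone list is trimmed. Note that the rndc control relationship points the other way: each managed server admits the management station in its own controls statement, which is what lets rndc drive them remotely.

// each managed server's named.conf carries a controls statement
// admitting the management station, e.g.:
//    controls {
//       inet * port 953 allow { 10.10.10.5; } keys { "rndc-key"; };
//    };

key "rndc-key" {
   algorithm hmac-md5;
   secret "cGxhY2Vob2xkZXI=";         // placeholder; generate a real one
};

acl internal-ns { 192.168.15.3; };    // tns1 (assumed)
acl external-ns { 192.0.2.53; };      // external secondary (assumed)

view "internal" {
   match-clients { internal-ns; 127.0.0.1; };
   recursion no;
   zone "ikep.west" {
      type master;
      file "internal/db.ikep.west";
      allow-transfer { internal-ns; };
   };
};

view "external" {
   match-clients { external-ns; };
   recursion no;
   zone "ikep.com" {
      type master;
      file "external/db.ikep.com";
      allow-transfer { external-ns; };
   };
};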

This structure allows selected hosts to resolve addresses contained in the Internet DNS root along with a good portion of your internal network; it looks something like the layout shown in Figure 1. The authoritative servers are configured just like normal secondaries, and the screened caching-only servers are configured just like normal external caching DNS servers. The management server and the trusted caching decision servers are where views are used.

The management server has just internal and external views configured. Through the use of the defined ACLs, which restrict access down to the host level, only known name servers are allowed to query and transfer zones. Because of this, the management server should not be listed as a name server in any of your zone files. Hiding the master server further protects the zone files from malicious modification.

The sample named.conf file for the trusted caching decision servers also has only internal and external views defined, though more could be added. The ACLs used, however, restrict to the network level instead of the host level: only hosts on known networks are allowed to query. If a host is on a network that is considered internal, the server acts like an internal caching-only name server. If the querying client is on a screened network (and therefore matches the external view), the server acts like a forwarder: it forwards all queries for known domains internally and all queries for unknown domains to the external caching layer. The type of host is determined by the ACL, and the appropriate view of the DNS server is presented; the forwarder for each query is determined by the zone statements within the view.

The sample queries are as follows.

Query from a screened host for an external host (see Figure 2):

1. dmzhost requests the address of www.isc.org.

2. tcaching1 makes the decision to forward the request to scaching1.

3. scaching1 asks the Internet for the address.

4. The Internet responds with 204.152.184.101.

5. scaching1 caches the response and sends it to tcaching1.

6. tcaching1 caches the response and sends it to dmzhost. Both scaching1 and tcaching1 cache the response until the time to live for the record has been reached.

Query from a screened host for a trusted host (see Figure 3):

1. dmzhost requests the address of thost.ikep.west.

2. tcaching1 makes the decision to forward the request to tns1.

3. tns1 responds authoritatively with 192.168.10.10.

4. tcaching1 caches the response and sends it to dmzhost. Subsequent queries to tcaching1 for thost.ikep.west will be answered from cache until the time to live associated with the record is reached.

Query from a trusted host for an external host (see Figure 4):

1. thost requests the address of www.isc.org.

2. tcaching1 makes the decision to forward the request to tns1.

3. tns1 responds authoritatively with non-existent host/domain.

4. tcaching1 caches the response and sends it to thost. This example assumes that the tool used in the attempt to resolve www.isc.org was not configured to use a proxy. If a user on thost had used something like an appropriately configured browser, the request would have been forwarded to the proxy for resolution, and the site (assuming it existed and was functional) would have been viewable without issue.

Query from a trusted host for another trusted host (see Figure 5):

1. thost requests the address of tserver.ikep.west.

2. tcaching1 makes the decision to forward the request to tns1.

3. tns1 responds authoritatively with 192.168.1.30 to tcaching1.

4. tcaching1 caches the response and sends it to thost. Subsequent queries to tcaching1 for tserver.ikep.west will be answered from cache until the time to live associated with the record is reached.
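
Assuming dig (shipped with BIND 9) is available, the four walkthroughs can be spot-checked from the hosts involved. The answers shown below are the ones the walkthroughs above would predict, not captured output:

# from dmzhost: forwarded out through scaching1
dig @tcaching1 www.isc.org. A +short
204.152.184.101

# from thost: answered from the internal root, so NXDOMAIN (no output)
dig @tcaching1 www.isc.org. A +short

# from thost: answered authoritatively by tns1
dig @tcaching1 tserver.ikep.west. A +short
192.168.1.30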

Security

To add security to this design, BIND was compiled to install into its own directory (not /usr/local). Named runs as a non-root user and is chrooted to its own directory. Further, there is a complete separation of authoritative and caching servers, and the primary master is not accessible from an external source. Information on how to do this (along with other security features, such as DNSSEC and signed zones) can be found in the BIND documentation.
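
As a rough sketch of the startup line, assuming BIND was installed under /opt/bind9 (a placeholder path), named's -u and -t switches handle the user switch and the chroot; note that the config path is interpreted relative to the chroot:

# run named as the unprivileged user "named", chrooted to the
# install directory; /etc/named.conf lives inside the chroot
/opt/bind9/sbin/named -u named -t /opt/bind9 -c /etc/named.conf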

Management

This setup is not particularly difficult to manage. Using any file transfer program (e.g., rsync over ssh), you can keep master copies of everything on the management workstation; all changes take place there. Using shared keys and rndc, which comes with BIND 9.x, you can invoke a reload on any of the servers in this design from the management workstation. You can also create a script with a configuration file that pushes an updated named.conf to the appropriate servers whenever you change it on the management server. To add configuration management, I've placed all of the zone files and named.conf files in CVS.
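
A push-and-reload pass from the management workstation might look like the following sketch, assuming rsync over ssh and a shared rndc key in /etc/rndc.key on each server (both assumptions):

# push the updated internal zone files to tns1, then reload it
rsync -e ssh -av zones/internal/ tns1:/var/named/internal/
rndc -s tns1 -k /etc/rndc.key reload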

Summary

Views are a powerful feature of the new version of BIND, and this is just one way you can use them to help solve complex DNS problems. Furthermore, BIND is very lightweight, which means that the entire structure described here can be implemented in a secure, redundant, highly available, and load-balanced manner using ten very small workstations. To get the benefits without redundancy, you could implement this setup with as few as two small workstations after modifying the sample config files (see Listing 1).

Scott is an independent consultant and an Adjunct Faculty Member at the University of Phoenix. His motto is "Always be smarter than the tools you work with." He is well versed in various development languages, though recently his primary concentration has been systems integration, server consolidation, highly available systems, and enterprise storage. He has worked extensively with most UNIX derivatives along with Microsoft Windows NT and 2000. Scott can be contacted at: code8@primenet.com.