The Art of Attack and Penetration - Defending Your Site
Chris Prosise and George Kurtz
Last month, in the article "The Art of Attack and Penetration - Understanding Your Security Posture," we discussed how to perform an Attack and Penetration (A&P) review - a real-life test of an organization's exposure to security vulnerabilities. The review concentrated on examining an organization from the perspective of an Internet attacker. Specifically, what could an attacker with access to the Internet learn about an organization's networks, and what kind of access would that information allow the attacker to gain? This month, we examine security strategies to help an organization correct the vulnerabilities discovered during an A&P review. This article will focus on securing an organization's network from an Internet threat; however, many of the concepts and techniques can be applied to other access paths and technologies. We hope to provide a better understanding of security countermeasures and the resources necessary to help implement them.
Due to the complexity and evolving nature of today's networked environment, we also believe it is possible, even likely, that new attacks will provide techniques to circumvent today's security countermeasures. For this reason, our goal in introducing countermeasures is not only to prevent attacks, but to raise the level of difficulty and the length of time necessary to successfully penetrate a network. Raising the bar in this manner increases the likelihood of detecting and monitoring an intrusion. Our strategies are built around the security principle of "defense in depth" - having more than one countermeasure for each attack. We include both host and network countermeasures for individual attack techniques, and we include monitoring as a security strategy.
As we discovered during the A&P review, an attacker's success directly correlates to the amount of information that can be gathered about an organization. Accordingly, many of our countermeasures are designed to reduce the amount of information available to potential attackers on the Internet. We also cover ways to eradicate vulnerabilities on the systems that are available to attackers. Since no system is infallible, and new vulnerabilities are discovered daily, we strongly recommend host and network monitoring to augment the security of systems on your network.
Reducing the Internet Footprint
Last month's article described a "footprinting" process by which an organization's unique profile or "footprint" on the Internet could be determined. The footprinting process can provide a great deal of critical network information using techniques that are not traditionally considered intrusive. In fact, organizations often ignore or dismiss these footprinting activities as harmless. As we demonstrated last month, footprinting activities allow an attacker to determine the specific composition and topology of a target network - exactly the information an attacker needs to perform a strike. Here, we address some of the countermeasures to prevent an attacker from gaining an accurate and complete footprint.
Gathering network information from the Internic (whois.internic.net) and the American Registry for Internet Numbers (ARIN; http://whois.arin.net) is straightforward. From the Internic, you can gain information such as the names of administrative and billing contacts, registered net-blocks, and authoritative name servers. The network blocks enumerated there can then be expanded upon in the ARIN database. Since this information is publicly available, a few security considerations apply. First, make sure the information is accurate. In the information technology field especially, turnover is huge. The administrative, technical, and billing contacts who are registered with the Internic may have left the company years ago - they should not retain the ability to change the organization's Internic information. Furthermore, carefully consider which phone numbers and addresses are listed, as these can be used by an attacker to "social engineer" their way into a network. An attacker can use a listed phone number as the starting point for a dialing attack. Consider using a toll-free number or another number outside your main exchange range.
All name servers listed with the Internic should be secure and accurate. These name servers should list IP address and hostname information only for the systems connected directly to the Internet. Just as the contact information changes with time, so does nameserver information. Perhaps your organization once had an Internet presence without a firewall. Has the information on the nameservers been updated to reflect the changes in your network topology? When performing A&P reviews, we often find that the primary nameserver is secure, but the secondary or tertiary nameserver was never updated from an older, insecure configuration. This is especially true when the tertiary nameserver is run by an ISP rather than the organization. Details on DNS security are covered in the next section.
A potentially large security vulnerability with domain registration arises from the way the Internic allows updates. The Internic allows automated, online changes to the information associated with registered domains. To change the information, the Internic verifies the domain registrant's identity through one of three methods: the FROM field of the email header, a password, or the registrant's PGP key. The default is the FROM field of the email header. That's right - anyone who can spoof email can change the information associated with a domain name. If an attacker changes your nameserver entries, anyone trying to connect to your domain name will be directed to an IP address of the attacker's choosing. For example, AOL's records were changed October 16th of last year, causing an embarrassing denial of service: Net users attempting to reach AOL were instead redirected to autonet.net.
DNS queries can quickly reveal too much information about your network to unauthorized users. If "zone transfers" are allowed, an attacker can gain all IP address and hostname information in one fell swoop. Additionally, if host information (HINFO) records are included in the zone data, the attacker may learn the exact system type. To prevent attackers from gaining this valuable information, zone transfers should be prohibited or restricted to authorized hosts. Zone transfers can be restricted through network or host countermeasures. The firewall or packet filter can be configured to deny all TCP connections to port 53, the DNS port. Since name lookups are generally UDP, and zone transfers are always TCP, this will prevent zone transfers. A host-based restriction is available for UNIX name servers running BIND version 4.9.3 or later: the xfrnets directive in the named.boot file can be used to restrict which hosts or networks are allowed to perform zone transfers.
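As an illustrative sketch, a BIND 4.9.x boot file restricting zone transfers to a single secondary might look like the following (the domain, directory, and addresses are hypothetical; verify the exact syntax against your BIND documentation):

```
; /etc/named.boot (BIND 4.9.3 or later) -- hypothetical example
; permit zone transfers only to the secondary nameserver at 192.168.10.2
xfrnets 192.168.10.2&255.255.255.255
directory /var/named
primary   example.com   db.example.com
```

BIND 8 provides the equivalent restriction through the allow-transfer substatement in named.conf. Combined with the firewall rule denying TCP connections to port 53, this gives you two independent layers of protection, in keeping with defense in depth.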
Restricting zone transfers reduces the ease with which attackers can gain information from nameservers. However, since name lookups are still allowed, an attacker could still gain information by performing a name lookup against every IP address in an organization's net-block. For this reason, any nameserver available to the Internet should only provide information about machines available to the Internet. Additionally, these external nameservers should contain only IP addresses and hostnames, not host information records. A separate nameserver behind the firewall should provide all necessary information to trusted hosts on the internal network.
Preventing Host Identification
Whether or not an attacker gains DNS information, the attacker must still determine which hosts are alive and accessible. Despite your best efforts to conceal systems, an attacker will almost always be able to determine your mail relay system, Web server, nameserver, and net-block. Whether or not an attacker can find an access path to other systems depends on the organization's Internet architecture. Internal systems should be screened from the Internet by a firewall. For purposes of this article, a router with packet filters can be considered the firewall, albeit a weak one. The firewall should prevent any packets originating on the Internet from connecting to any internal system. If the firewall is successful, potential attackers will find no access paths to internal computers. Of course this is a simple configuration, and your services access policy will depend on your business needs.
The success of the firewall should not be taken for granted. The most common technique used for host identification is the ping "sweep", in which an attacker sends an ICMP echo request packet to every potential target host; any reply indicates a live host. Although almost any firewall can filter ICMP packets, organizational needs may dictate that the firewall pass ICMP traffic. If a true need exists to pass ICMP traffic to the internal network, carefully consider which types of ICMP traffic to pass. A minimalist approach may be to allow only echo-reply, destination-unreachable, time-exceeded, and administratively-prohibited packets into the internal network. It is also best to limit ICMP traffic with ACLs to the specific IP addresses of your ISP, if possible. Keep in mind that while ICMP is a powerful protocol for diagnosing network problems, it is also easily abused. Allowing unrestricted ICMP traffic into your border gateway may allow an attacker to mount a denial of service attack (e.g., smurf). Even worse, if an attacker actually manages to compromise one of your systems, they may be able to backdoor the OS and covertly tunnel data within ICMP echo-reply packets.
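A minimalist ICMP filter of this kind can be sketched with Cisco IOS access lists (the 192.168.1.0/24 internal network and the interface name are hypothetical; adapt both to your topology):

```
! Hypothetical IOS configuration: permit only a minimal set of
! inbound ICMP types, and deny and log everything else
access-list 101 permit icmp any 192.168.1.0 0.0.0.255 echo-reply
access-list 101 permit icmp any 192.168.1.0 0.0.0.255 unreachable
access-list 101 permit icmp any 192.168.1.0 0.0.0.255 time-exceeded
access-list 101 permit icmp any 192.168.1.0 0.0.0.255 administratively-prohibited
access-list 101 deny   icmp any any log
!
interface Serial0
 ip access-group 101 in
```

Note that an IOS access list ends with an implicit deny, so a production list must also permit the TCP and UDP services your access policy allows.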
The proliferation of advanced scanning techniques also tests the ability of the firewall to prohibit information-gathering attacks. When ICMP traffic is blocked, the attacker can revert to a variety of scanning techniques that attempt to elicit a response from open services. If a service responds, the attacker has found a live host, and a full port scan will certainly ensue. These advanced scans, which can be difficult to detect, do offer one advantage for the network defenders. These scans require large amounts of traffic to find live hosts, thereby increasing the likelihood that intrusion detection systems will identify an attack. The countermeasures for this type of scan are discussed below, as this type of scan is essentially a port scan conducted against suspected live hosts.
Preventing Port Scans
After identifying live hosts, the attacker's next step will be to determine which services (TCP/UDP) are present and listening. Identifying listening ports provides the attacker with possible avenues of attack. A plethora of scanning methods are designed to circumvent various intrusion detection and packet filtering implementations.
The same packet filtering that prevents host identification should also prevent port scanning. Depending upon the complexity of the firewall, some scanning techniques may be successful. For example, if your firewall allows established TCP connections, will it allow a TCP packet with the "ACK" bit set? As mentioned last month, the premier scanning tool, nmap (http://www.insecure.org/nmap) can be used to attempt a variety of advanced scanning techniques such as stealth scanning, scanning with fragmented packets, and ftp "bounce" scans. The ftp bounce scan is particularly dangerous. If one of the services you are providing to the Internet is ftp, make sure the version you are using is not "bounce-able" - that it does not allow the use of the "PORT" command to specify a third-party address and port for the data communication channel. If your ftp does allow this behavior, an attacker can perform TCP port scans that appear to originate from your ftp server rather than the attacker's address. In this case, you must treat the ftp server as an untrusted host and filter connections from the ftp server to other internal hosts. For a more detailed description of the vulnerability, check out:
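The "established" question above is worth making concrete. On a packet-filtering router, the established keyword matches any TCP segment with the ACK or RST bit set; it does not track actual connections, so a crafted lone-ACK probe slips through. A sketch, with hypothetical addresses:

```
! Hypothetical IOS filter intended to admit only return traffic for
! connections initiated from the inside.  "established" matches any
! segment with ACK or RST set -- including an attacker's crafted ACK
! probes -- so this is a weak substitute for a stateful firewall.
access-list 110 permit tcp any 192.168.1.0 0.0.0.255 established
access-list 110 deny   tcp any any log
```

A stateful firewall, which admits a packet only when it belongs to a connection the firewall has actually seen opened, closes this gap.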
Of course, some services must be enabled and available to hosts on the Internet. These services pose an additional risk with the advent of TCP fingerprinting (www.insecure.org/nmap and www.apostols.org/projectz/queso). TCP fingerprinting allows an attacker with access to a single listening port to accurately identify the operating system of the target computer. One of the few countermeasures to this type of information gathering is the use of a firewall that provides application proxies. Application proxies prohibit an attacker from reaching services directly and thereby identifying the operating system. Although this approach still allows TCP fingerprinting to identify the type of firewall, the firewall is presumably hardened to withstand attack.
The attacker's next step, information retrieval, involves extracting all possible information from available services. As demonstrated in last month's article, a tool such as netcat provides a simple mechanism to learn the specific version of server software. For services such as mail and Web, the specific version of the software provides the attacker with all the information necessary to perform an attack. Several strategies can be employed to make the attacker's job more difficult. Application proxies can reduce the information available to attackers. Alternatively, consider changing the banners displayed by various services. This is generally done through configuration files. For example, the Sendmail configuration file, sendmail.cf, controls the banner message through the SmtpGreetingMessage option. Edit this line to remove specific version and software information.
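For instance, the stock Sendmail 8.x greeting advertises the exact version; a sanitized banner might look like this (a sketch - verify the default line against your own sendmail.cf):

```
# /etc/sendmail.cf
# The default greeting reveals the Sendmail version ($v) and
# configuration version ($Z):
#   O SmtpGreetingMessage=$j Sendmail $v/$Z; $b
# Replace it with a generic banner that gives away nothing:
O SmtpGreetingMessage=$j ESMTP; $b
```

Remember that banner changes only slow an attacker down; the underlying software must still be patched.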
Information retrieval also includes the attacker exploiting "informational" services such as finger and rusers to gain user name and account information. In general, these services should be disabled and filtered, because they unnecessarily provide sensitive information that can aid an attacker. If these services are absolutely necessary, consider host-based access controls such as TCP Wrappers, which restrict access to specific hosts or networks.
Another approach to frustrating an attacker's attempts at information retrieval is the use of deception. Consider a machine that appears to be running many vulnerable services. An attacker using normal information retrieval and exploitation techniques attempts to identify and exploit these services. In reality, these vulnerable services are diversions that cannot be exploited and, in fact, log all exploitation attempts. This will increase the likelihood of detection and frustrate exploitation attempts. For details on this free software, check out:
Eradicating Vulnerabilities
Reducing the size of your Internet footprint certainly reduces the exposure and number of vulnerabilities available to outside attackers. However, the reduction of available information does not eradicate vulnerabilities; it makes them harder to detect. To eradicate vulnerabilities, we concentrate on three main areas: configuration control, security patches, and security awareness. New vulnerabilities will always be discovered, but there is little reason why any known vulnerability should exist on any well-managed network.
Configuration control encompasses the actions a system administrator can take to optimize security through operating system and application features. In the absence of Trojan code, the only way an attacker can gain access to a system is through an active service. Turn off unneeded services, and ensure that routing or IP forwarding is disabled on hosts that do not route traffic. While many services such as ftp, telnet, www, finger, and mail may be enabled by default on your systems, they are most likely unnecessary on most of these hosts. Disable those that are not essential to network operation. In UNIX, this means disabling or deleting them from the /etc/inetd.conf file or from the startup files usually located in the /etc/rc.d directory. The next step is to carefully configure the services that remain enabled, taking advantage of applicable security features. Since a complete discussion of secure configuration for operating systems and applications is beyond the scope of this article, we'll just mention a few sources to help you get started. For UNIX systems, try the old but still useful AUSCERT security checklist, available from:
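On a UNIX host, disabling inetd-managed services is simply a matter of commenting out their lines (paths and entries vary by platform; the lines below are typical examples):

```
# /etc/inetd.conf -- comment out services that are not required,
# then signal inetd to reread its configuration:  kill -HUP <inetd-pid>
#ftp     stream  tcp  nowait  root    /usr/sbin/in.ftpd     in.ftpd
#telnet  stream  tcp  nowait  root    /usr/sbin/in.telnetd  in.telnetd
#finger  stream  tcp  nowait  nobody  /usr/sbin/in.fingerd  in.fingerd
#shell   stream  tcp  nowait  root    /usr/sbin/in.rshd     in.rshd
```

Services started from the /etc/rc.d scripts must be disabled there as well; commenting out the inetd entry alone does not stop a standalone daemon.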
Of course, the bible for UNIX security is still Practical UNIX Security by Garfinkel and Spafford. For Windows NT, try:
Two software packages greatly enhance security and are worth mentioning. TCP Wrappers (for UNIX) was created by Wietse Venema to filter and log TCP and UDP services. It can even be used with Sendmail and secure shell. A pair of simple configuration files, /etc/hosts.allow and /etc/hosts.deny provide the system administrator a great deal of granularity in allowing specific hosts or networks to connect to specific services. Additionally, every connection to any "wrapped" service is logged with the time, service, and initiating host via syslog. Consult the documentation for more details:
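A common starting point is a default-deny policy (the trusted network and host below are hypothetical):

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then permit only what is needed:
# ftp from the internal network, ssh from a single admin host
in.ftpd: 192.168.1.
sshd:    192.168.1.25
```

Because hosts.allow is consulted before hosts.deny, the explicit permissions take precedence over the blanket deny, and every connection attempt - allowed or refused - is logged via syslog.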
Another must-have is secure shell, a replacement for the r-services, telnet, and ftp. Secure shell provides encrypted authentication and communication. Secure shell is extremely powerful and flexible and is capable of tunneling and encrypting almost any connection. Free versions of secure shell are available for Windows and UNIX environments. Check out:
http://www.ssh.fi/sshprotocols2/index.html for details.
Up-to-Date Software
Vulnerabilities are regularly discovered in virtually all operating systems and applications. You can assume that potential attackers are aware of these vulnerabilities as soon as they are made public, if not sooner. Check with your operating system and application vendors regularly to obtain the latest patches and security updates. Most vendors provide patches free of charge and are responsive to bug reports. In general, newer software releases are desirable because they fix most known problems; however, the latest and greatest release may also break something that has worked for months. Check with each vendor's home page, or try http://nasirc.nasa.gov/patches.html for a list of vendor patch sites. Additionally, many vendors provide mailing lists that announce bugs and fixes as soon as they become available. However, if you would like to find out the vulnerability and potential countermeasure before the vendor announces a fix (which could take months), we highly recommend subscribing to the Bugtraq mailing lists:
Security Awareness
A third key to eradicating vulnerabilities is administrator and user security awareness. The most secure service, with all relevant patches applied, may still be vulnerable if a user chooses a poor password. The primary concerns for user awareness involve password issues: composition, storage, sharing, etc. A security awareness program with a strong orientation session for new users, regular updates, and regular exercises is an aid to developing user awareness. We have found that system and network administrators face a different awareness challenge: many are not cognizant of the magnitude of security vulnerabilities. We are constantly amazed by the number of administrators who have never applied a security patch, who regularly telnet to the company router from their home ISP, or who have never visited an "underground" Web site. Once administrators read a few issues of Phrack (http://www.2600.com/phrack) or see the ease with which common cracking utilities can discover and exploit vulnerabilities, awareness is generally not an issue. To discover attacker capabilities and available countermeasures, visit this comprehensive list of security resources:
Maximizing Intrusion Detection Capabilities
Our strategy thus far has been to minimize the Internet footprint and eradicate vulnerabilities. These actions reduce exposure and the number of vulnerabilities open to attackers. However, new vulnerabilities will be discovered, and old vulnerabilities may be introduced. To counter attempts to exploit these vulnerabilities, we monitor traffic. Monitoring ensures that even if vulnerabilities exist and are exploited, we can detect and repel intruders.
While traditional logging and auditing is not geared toward detecting network mapping and information retrieval activity, these attacks can and should be detected. Out of necessity, exploitation attempts are almost always preceded by information gathering activity, which, by its very nature, is noisy. Many packets must be sent to the target network to determine live hosts, identify available services, and gather information about the available services. Attackers can go to great lengths to mask information-gathering activity, using techniques such as randomized scans, stealth port scanning, bounce attacks, increased time between scans, and multiple simultaneous scans from different source addresses. Despite these techniques, the sheer number of packets necessary to accurately map a network increases the ease with which these activities can be monitored.
Network-based monitoring is well suited to discovering network mapping and information retrieval activity. A monitoring device placed at an Internet gateway is in an ideal position to see all packets entering and leaving the network. Detecting information-gathering attacks is then simply a matter of configuring the device to monitor, record, and alert on certain patterns. Several excellent and free intrusion detection software packages are available, most notably Network Flight Recorder (http://www.nfr.com) and SHADOW (http://www.nswc.navy.mil/ISSEC/CID/part3.html). Whether you choose freeware, a Cisco router logging packets that meet access control list criteria, or a commercial intrusion detection product, it is fairly simple to configure the device to detect information gathering activity.
As an example, let's consider the case of detecting zone transfer attempts. If you are using a Cisco router for packet filtering, simply add the keyword "log" to the end of the access control list that denies TCP connections to port 53. Configure the router to send all log messages to a remote (and secure) syslog server using the "logging" command. Automated or manual review of the syslog will alert you to attempted zone transfers. Although this is not an optimal solution, it does show that common platforms already in use can be configured to enhance an organization's security posture. Most intrusion detection systems have more sophisticated configuration options for monitoring and recording network events. For example, to detect ping sweeps, configure the intrusion detection system to alert when X number of ICMP echo request packets come from a single host within Y number of minutes.
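The echo-request threshold described above amounts to a sliding-window count per source address. As a rough sketch of the logic in Python (illustrative only - a real intrusion detection system would read packets from the wire rather than take them as function calls):

```python
# Minimal ping-sweep detector sketch: flag a source address that sends
# more than max_echoes ICMP echo requests within a window of `window` seconds.
from collections import defaultdict, deque


class PingSweepDetector:
    def __init__(self, max_echoes=10, window=60.0):
        self.max_echoes = max_echoes          # X: echo requests allowed per window
        self.window = window                  # Y: window length in seconds
        self.seen = defaultdict(deque)        # source IP -> timestamps of requests

    def echo_request(self, src_ip, timestamp):
        """Record one ICMP echo request; return True if src_ip exceeds the threshold."""
        q = self.seen[src_ip]
        q.append(timestamp)
        # discard timestamps that have aged out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_echoes
```

The same sliding-window pattern applies to detecting port scans or repeated login failures; only the event being counted changes.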
Even with the advent of powerful network monitoring systems, traditional host-based monitoring should not be abandoned. Host-based monitoring provides redundancy and introduces monitoring at a level of greater detail than that available from network-based systems. Examples of host-based monitoring include httpd logs, sulog, utmp, wtmp, and syslog. TCP Wrappers, as mentioned earlier, can enhance logging capabilities, although some flavors of UNIX offer similar logging of inetd services (when inetd is run with the -s option). The auditing capabilities of Windows NT may add some value, but are disabled by default.
We have provided a general guide to securing an organization's network from common Internet attacks. Unfortunately, due to the complex nature of this task, it is impossible to provide more than a general guide. The security strategy presented in this article focuses on redundancy - defense in depth. Redundancy is important because of the constantly evolving nature of information technology. New vulnerabilities are discovered daily by vigilant computer users worldwide. We realize that no usable security solution is 100% secure; however, by providing several solutions for each security problem, we hope to raise the security bar. By following some of the countermeasures outlined above, we hope you can increase the security level of the systems you administer. In the end, a security solution is much like a safe or a burglar alarm, in that additional resources only buy you additional time. You must still patrol the safe and listen for the alarm.
About the Authors
Mr. Prosise is a Manager in Information Security Services with extensive experience in attack and penetration testing, incident response, and intrusion detection. Before joining E&Y, Mr. Prosise served as an officer in the U. S. Air Force, where he led computer security engineering teams on many computer attack and incident response missions. Mr. Prosise can be reached at: email@example.com or firstname.lastname@example.org.
George Kurtz is a Senior Manager in the Information Security Services practice of Ernst & Young and serves as the National Attack and Penetration leader within the Profiling service line. Additionally, he is one of the lead instructors for "Extreme Hacking: defending your site," a class designed to help others learn to profile their sites (www.ey.com/security). Mr. Kurtz can be contacted at: email@example.com.