Friday, December 4, 2009

Fault

The most widely accepted fault taxonomy was created by Taimur Aslam, Ivan Krsul, and Eugene H. Spafford at Purdue University's COAST Laboratory. The taxonomy was used to catalog the first vulnerabilities in the database COAST was constructing. The vulnerabilities, supplied by a number of private contributors, eventually evolved into the CERIAS project.

A fault is the logic behind the vulnerability: the actual cause of its existence. The number of possible causes is infinite, but fault, as described by this particular taxonomy, is an all-encompassing enough description to handle the cataloging of all four types of vulnerabilities. The primary difference between the description presented in this book and the concept of fault as presented by Aslam, Krsul, and Spafford is that they conceptualized fault as the highest level of classification, while this book considers it an attribute.

Faults are cataloged into two separate conditions: coding faults and emergent faults. These faults have numerous subcategories, expanding the whole logic into a large tree of possibilities. This chapter will break down three levels of the tree and describe how the taxonomy works.

Coding Faults

A coding fault is when the problem exists inside the code of the program: a logic error that was not anticipated, stemming from a mistake in the requirements of the program. Independent of outside influence, the problem exists entirely in the way the program was written. There are two basic forms of coding faults: the synchronization error and the condition validation error.

A synchronization error is a problem that exists in the timing or serialization of objects manipulated by the program. Basically, a window of opportunity opens up in which an outside influence may be able to substitute a fake object for an anticipated one, thereby allowing a security compromise.

A condition validation error is a high-level description of incorrect logic: the logic in a statement was wrong, missing, or incomplete.

Synchronization Errors

These errors always involve an element of time. Because the CPU is often far faster than the hardware connected to it, the delays between the completion of functions may open a vulnerability window that can be exploited.

According to the taxonomy, synchronization errors can be classified as:

• A fault that can be exploited because of a timing window between two operations.

• A fault that results from improper serialization of operations.

Race Condition Errors

A race condition can be thought of as a window of opportunity that one program has to perform an action against another running program, allowing a vulnerability to be exploited. For example, if a privileged account creates a new file and, for a small period of time, any other program can modify the contents of that file, the race condition exists in that window of opportunity to change it.

Temporary File Race Condition

Temporary files are created in the /tmp directory on UNIX flavors, as well as in /usr/tmp, /var/tmp, and a number of special "tmp" directories created by specific applications. The directories these temporary files are placed in are often world readable and writable, so anyone can tamper with the information and files in advance. In many cases, it's possible to modify, tamper with, or redirect these files to create a vulnerability.

Sample Vulnerability [ps race condition, Solaris 2.5, Administrator Access, Credit: Scott Chasin]

A race condition exists in /usr/bin/ps, which opens a temporary file when executed. After opening the file, /usr/bin/ps chown's the temporary file to root and renames it to /tmp/ps_data.

In this example, a temporary file called /tmp/ps_data is created, and it is possible to "race" the chown function. It may not be obvious from the vulnerability description, but consider what would happen to /tmp/ps_data if the permissions of the file were changed to make it setuid (chmod 4777 /tmp/ps_data) before the file was chowned to root. The file would then become a setuid root executable that can be overwritten with a shell program, and the exploiter would have "root" access! The only trick is to race the computer, and in UNIX it is easy to win these races by adjusting the "nice" priority of the executing program.
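The defensive side of this class of bug is small: create scratch files with O_EXCL so that a pre-planted file or symlink causes an error instead of a silent overwrite. A minimal sketch, in C (the function names and paths here are illustrative, not from any particular program):

```c
#include <fcntl.h>
#include <unistd.h>

/* Unsafe: if the path already exists, say as a symlink an attacker
 * planted, open() follows it and truncates whatever it points at. */
int open_scratch_unsafe(const char *path) {
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
}

/* Safer: O_EXCL makes open() fail when the path already exists, so a
 * pre-planted file or symlink produces an error instead of a compromise. */
int open_scratch_safe(const char *path) {
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
}
```

With O_EXCL, the existence check and the creation happen in one atomic step, so there is no window between "check" and "create" for an attacker to race.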

Serialization Errors

Oftentimes it's possible to interrupt the flow of logic by exploiting serialization, often in the form of "seizing control" of network connections. A number of problems can result from this, not the least of which is easy control of someone's network access.

Network Packet Sequence Attacks

Network packet data is serialized, with each packet containing information that tells the order in which it is supposed to be received. This helps in cases where packet data is split by network failure or unusual routing conditions. It is possible to take over an open network connection by predicting the next packet sequence number and communicating with the open session as if the exploiter were the original creator of the network session.

Sample Vulnerability [TCP Sequence Vulnerability, Digital Unix 4.x, Administrator Access, Credit: Jeremy Fischer]

Digital Unix 4.x has a predictable TCP sequence problem. Sequence attacks will work against unpatched hosts.

In this example, the sequence numbers are predictable. These numbers tell the other host the order in which information will be received, and if they can be guessed, another computer can seize the connection.

Condition Validation Errors

According to the taxonomy, condition validation errors occur when:

• A predicate in the condition expression is missing. This would evaluate the condition incorrectly and allow the alternate execution path to be chosen.

• A condition is missing. This allows an operation to proceed regardless of the outcome of the condition expression.

• A condition is incorrectly specified. Execution of the program would proceed along an alternate path, allowing an operation to proceed regardless of the outcome of the condition expression, completely invalidating the check.
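The third case, an incorrectly specified condition, can be as small as one wrong operator. A hypothetical sketch in C (the function names and the "guest" policy are invented for illustration):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical access check showing an incorrectly specified condition:
 * '||' where '&&' was intended lets any non-guest name through without
 * authentication, silently invalidating the whole check. */
bool allow_broken(const char *user, bool authenticated) {
    return strcmp(user, "guest") != 0 || authenticated;  /* BUG */
}

/* The intended logic: must be a named user AND authenticated. */
bool allow_fixed(const char *user, bool authenticated) {
    return strcmp(user, "guest") != 0 && authenticated;
}
```

The broken version passes every functional test performed with valid credentials, which is exactly why this class of fault survives into shipped code.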

Failure to Handle Exceptions

In this broad category, failure to handle exceptions means that a situation was never considered in the code, although it should have been. Many texts have been written on producing secure code, but the number of things that can be overlooked is infinite. Provided here are a number of examples of exception checks that should have existed in code but were completely overlooked.

Temporary Files and Symlinks

A very common example of this is a program that creates files without first checking whether the file already exists or is a symbolic link to another file. The /tmp directory is a storage location for files that exist only for a short period of time, and if these file names are predictable enough, they can be used to overwrite files.

Sample Vulnerability [Xfree 3.1.2, Denial of Service, General, Credit: Dave M.]

/tmp/.tX0-lock can be symlinked and used to overwrite any file.

In this particular case, the exploit refers to the ability to eliminate the contents of any file on the system. For example, to destroy the drive integrity of the host, the following could be done:

$ cd /tmp
$ rm -f /tmp/.tX0-lock
$ ln -s /dev/hd0 /tmp/.tX0-lock
$ startx

The information that was meant to be written to the file /tmp/.tX0-lock will now instead be written over the raw data on the hard drive. This example may be a bit extreme, but it shows that a minor problem can turn into a serious one with little effort.

Usage of the mktemp() System Call

Related very closely to the temporary file and symlink problem discussed earlier, the usage of the mktemp(3) function is a common mistake by UNIX programmers.

The mktemp() function creates a file in the /tmp directory as a scratch file that will be deleted after use. A random filename is picked for this operation. However, the filename it picks is not very random, and in fact it can be exploited by creating a number of symlinks to "cover the bases" of the few hundred possibilities it could be. If just one of these links is the proper guess, the mktemp() call happily overwrites the file targeted by the symlink.

Sample Vulnerability [/usr/sbin/in.pop3d, General, Read Restricted, Credit: Dave M.]

Usage of the mktemp() system call creates a predictable temp filename that can be used to overwrite other files on the system, or used to read pop users' mail.
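The standard fix for the mktemp() weakness is mkstemp(3), which both names and opens the file in one atomic step. A minimal sketch in C (the wrapper name and template path are illustrative):

```c
#include <stdlib.h>
#include <unistd.h>

/* mktemp(3) only generates a name; the file is opened later, leaving a
 * window in which an attacker can plant a symlink under that name.
 * mkstemp(3) generates the name and opens it with O_CREAT|O_EXCL in one
 * atomic step, so a pre-planted link makes it fail instead of following. */
int open_scratch(char *template_path) {   /* e.g. "/tmp/appXXXXXX" */
    return mkstemp(template_path);        /* open fd on success, -1 on error */
}
```

Note that mkstemp() modifies the template in place, replacing the trailing XXXXXX with the chosen suffix, so the caller must pass a writable buffer, not a string literal.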

Input Validation Error

An input validation error is a problem where the contents of input were not checked for accuracy, sanity, or valid size. In these cases, the system can be compromised fairly easily by providing input of a hostile nature.

Buffer Overflows

Warranting an entire chapter by themselves, buffer overflows were introduced to the public by the Morris Worm attack in 1988 and resurfaced in a highly refined state in the later part of 1995. The premise behind breaking into a computer via a buffer overflow is that a buffer may have a fixed length, but there may be no checking done to determine how much can be copied in. So one could easily let the computer try to overwrite a 128-byte buffer with 16 kilobytes of information. The data the extra bytes overwrite can be chosen to grant the user higher access.
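The core of the premise fits in a few lines of C. This sketch shows the unchecked pattern and a bounded alternative (the function names are illustrative):

```c
#include <string.h>

/* The overflow premise in miniature: a copy routine that never checks
 * the source length will write past a fixed-size destination buffer. */
void copy_unchecked(char *dst, const char *src) {
    strcpy(dst, src);              /* overruns dst whenever src is longer */
}

/* The bounded variant rejects input that does not fit. */
int copy_checked(char *dst, size_t dstlen, const char *src) {
    if (strlen(src) >= dstlen)
        return -1;                 /* refuse instead of overflowing */
    strcpy(dst, src);
    return 0;
}
```

When copy_unchecked() is called with a 128-byte stack buffer and attacker-supplied input, the excess bytes land on adjacent stack memory, including the saved return address, which is what makes the technique an access-granting attack rather than a mere crash.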

Origin Validation Error

An origin validation error is a situation where the origin of the request is not checked, and therefore it is erroneously assumed that the request is valid.

Sample Vulnerability [General, Apache Proxy Hole, Read Restricted, Credit: Valgamon]

When the proxy module is compiled into Apache's executable, and the access configuration file is set up for host-based denial, an attacker can still access the proxy and effectively appear to be coming from your host while browsing the web:

GET http://www.yahoo.com   <-- gives the user the page
GET http://www.yahoo.com/  <-- denies you, like it's supposed to

In this case, the logic error lies in expecting the user to follow exactly the standard the code was written for. If the user provided the exact URL, in the standard format, they would be denied. However, if they provided a slightly off but still valid version, the security check would not be triggered because an exact match couldn't be made.

Broken Logic / Failure To Catch In Regression Testing

Sometimes programmers know what they are trying to program, but get confused in their approach. This creates a basic logic flaw, which can be used to gain higher access in some conditions. It appears mostly in cases where it is clear that the security was simply written incorrectly.

Sample Vulnerability [Linux 1.2.11 process kill, Denial of Service]

The kernel does not do proper checking on who is killing whose task, thus anyone can kill anyone's tasks. A user can kill tasks not belonging to them, any task, including root's!

In this example, the user has the ability to kill any user's tasks, including root's. An administrator of such a box would probably be frustrated by the minor sabotage, and any user could disable any security program running on the host prior to a hack attempt. Killing select processes could render the host completely useless. Simply put, the failure of the author to write the security check correctly allowed the heightened access.
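The check the kernel example was missing is a one-line ownership test. A sketch in C (the function and uid parameters are illustrative, not the actual kernel API):

```c
#include <stdbool.h>

/* Sketch of the missing check: before a signal is delivered, verify
 * that the sender owns the target task or is the superuser (uid 0). */
bool may_kill(unsigned sender_uid, unsigned target_uid) {
    return sender_uid == 0 || sender_uid == target_uid;
}
```

The fault was not that the check was hard to write; it was that nothing in testing exercised the cross-user case, so its absence went unnoticed.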

Access Validation Error

An access validation error is a condition where a validation check takes place but, due to incorrect logic, inappropriate access is given. Like the logic error, this specifically pinpoints a flaw in an authentication process.

Sample Vulnerability [froot bug, AIX 3.2, Administrator Access]

The command:

$ rlogin victim.com -l -froot

allows root access remotely without validation because of a parsing error in the way the argument is handled, substituting "root" as the name of the person being validated. Likewise, the login is always successful regardless of the password due to missing condition logic.

This cute vulnerability caused no end of woe to Linux and AIX users. Ironically, it was odd in that it manifested itself in two separate and unrelated development efforts. Both code bases were reviewed, and both development teams independently made the exact same mistake.
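A hypothetical reconstruction of the parsing mistake in C (the function name and the exact flag handling are assumptions for illustration, not the actual login source):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: login treated any user-name argument beginning
 * with "-f" as the "pre-authenticated" flag, so a remote side could
 * smuggle "-froot" through rlogin and have it read as "-f root":
 * log in as root, skip the password check entirely. */
bool skips_password(const char *login_arg) {
    return strncmp(login_arg, "-f", 2) == 0;
}
```

The underlying mistake is trusting a remotely supplied string to be a user name when it can equally well be an option, a confusion that argument parsers still fall into today.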

Emergent Faults

Emergent faults are problems that exist outside of the code of the program and rest in the environment the code is executed within. The software's installation and configuration, the computer and environment it runs within, and the availability of the resources it draws on to run are all possible points of failure classified as emergent faults.

Configuration Errors

A configuration error is a problem with the way the software is installed and made operational on a computer. This fault is not limited to default configurations: if a program is configured in any way that allows for vulnerability, the fault is present. Some examples of configuration errors are:

• A program/utility is installed in the wrong place.

• A program/utility is installed with incorrect setup parameters.

• A secondary storage object or program is installed with incorrect permissions.
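The third item in the list above, incorrect permissions, is mechanically checkable. A minimal detector in C (the function name is illustrative):

```c
#include <stdbool.h>
#include <sys/stat.h>

/* Minimal detector for one configuration error: a file or directory
 * installed with world-writable permission bits. */
bool world_writable(const char *path) {
    struct stat st;
    if (stat(path, &st) != 0)
        return false;              /* unreadable path: report not-writable */
    return (st.st_mode & S_IWOTH) != 0;
}
```

Auditing tools of the era worked essentially this way: walk the installed file tree and flag any security-critical object whose mode bits grant more access than the install intended.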

Wrong Place

Sometimes a vulnerability exists because a program or file is installed in the wrong place. One example would be placing a file in an area where people have elevated access and can read and/or write to it. Because these problems tend to be mostly operator error, no example vulnerability will be presented directly from the database. However, consider that NFS (Network File System) doesn't have strong authentication, so altering documents served by NFS may be easy: installing any security-critical file in a read/write NFS-exported directory would be considered a "bad place".

Setup Parameters

Incorrect setup parameters often lead to faults in software. In many cases, software may install in a somewhat insecure state in order to avoid blocking other programs on the same or affected hosts. Initial setup parameters may not describe their impact well enough for the installer to know what is being enabled on the host.

Sample Vulnerability [Firewall-1 Default, Administrator(?)]

By default,  Firewall-1 lets DNS and ICMP traffic pass through the firewall without being blocked.

In this example, an excellent example of a weakness, the default configuration of Firewall-1 appears to defy the actual purpose of a firewall (which is to prevent arbitrary network traffic from passing). However, this configuration was created to simplify new installs for the less informed network administrator. If an outsider knows of a vulnerability that can be exploited through the firewall, they can gain considerably higher access.

Access Permissions

In many cases, access permissions are incorrectly judged or erroneously entered, so that too much access is given to all or part of the application. There is usually an ongoing battle about security standards: which users should exist, and which users should own which files. In other cases, the debate is about the permissions themselves. It may seem that common sense should prevail and security should be tight, but "tight" is actually more difficult to define than one would expect. The debate on access permission security will probably continue without abating for decades.
