MLS Introduction

Multilevel security (MLS) has posed a challenge to the computer security community since the 1960s. MLS sounds like a mundane problem in access control: allow information to flow freely between recipients in a computing system who have appropriate security clearances while preventing leaks to unauthorized recipients. However, MLS systems must meet two essential requirements: first, the system must enforce these restrictions regardless of the actions of system users or administrators, and second, it must enforce them with extremely high reliability. This has led developers to implement specialized security mechanisms and to apply sophisticated techniques to review, analyze, and test those mechanisms for correct and reliable behavior.

Despite this, MLS systems have rarely provided the degree of security desired by their most demanding customers in the military services, intelligence organizations, and related agencies. The high costs associated with developing MLS products, combined with the limited size of the user community, have also prevented MLS capabilities from appearing in commercial products.

Caveat: This description was developed in 2005. Since then the US government has made many changes to its implementation and assurance programs. Here are two examples:

  • The old A1-B2-C2 designations were obsolete but still common in 2005. They are no longer used.
  • The NIST Risk Management Framework has replaced the certification and accreditation process.

All references are listed alphabetically at the end of the article.

Portions of this article also appear as Chapter 205 of the Handbook of Information Security, Volume 3, Threats, Vulnerabilities, Prevention, Detection and Management, Hossein Bidgoli, ed., ISBN 0-471-64832-9, John Wiley, 2006.

The MLS Problem

Many businesses and organizations need to protect secret information, and most can tolerate some leakage. Organizations that use MLS systems tolerate no leakage at all. Businesses may face legal or financial risks if they fail to protect business secrets, but they can generally recover afterwards by paying to repair the damage. At worst, the business goes bankrupt. Managers who take risks with business secrets might lose their jobs if secrets are leaked, but they are more likely to lose their jobs to failed projects or overrun budgets. This places a limit on the amount of money a business will invest in data secrecy.

The defense community, which includes the military services, intelligence organizations, related government agencies, and their supporting enterprises, cannot easily recover from certain information leaks. Stealth systems aren’t stealthy if the targets know what to look for, and surveillance systems don’t see things if the targets know what camouflage to use. Such failures can’t always be corrected just by spending more money. Even worse, a system’s weakness might not be detected until after a diplomatic or military disaster reveals it. During the Cold War, the threat of nuclear annihilation led military and political leaders to take such risks very seriously. It was easy to argue that data leakage could threaten a country’s very existence. The defense community demanded levels of computer security far beyond what the business community needed.

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from “lower” levels to “higher” levels, and not vice versa.

Figure 1: The hierarchical security levels

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people “took turns” running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system’s hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time as others work on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

A Trojan horse is software that performs an invisible function that the user would not have chosen to perform. For example, consider a multiuser system in which users have stored numerous private files and have used the system’s access permissions to protect those files from prying eyes. Imagine that the author of a locally-developed word processing program has an unhealthy curiosity about others in the user community and wishes to read their protected files. The author can install a Trojan horse function in the word processing program to retrieve the protected files. The function copies a user’s private files into the author’s own directory whenever a user runs the word processing program.

Unfortunately, this is not a theoretical threat. The “macro” function in modern word processors like Microsoft Word allows users to create arbitrarily complicated software procedures and attach them to word processing documents. When another user opens a document containing the macro, the word processor executes the procedure defined by the macro. This was the basis of the “macro viruses” of the late 1990s, such as “Melissa.” A macro can perform all the functions required of a Trojan horse program, including copying files.

When a user runs the word processing program, the program inherits that user’s access permissions to the user’s own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic runs in the security context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then nothing in a conventional system prevents leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.

Multiuser Operating Modes

In the United States, the defense community usually describes a multiuser system as operating in a particular mode. For the purposes of this discussion, there are three important operating modes:

  • Dedicated mode – all users currently on the system have permission to access any of the data on the system. In dedicated mode, the computer itself does not need any built-in access control mechanisms if locked doors or other physical mechanisms prevent unauthorized users from accessing it.
  • System high mode – all users currently on the system have the right security clearance to access any data on the system, but not all users have a need to know all of it. If users don’t need to know some of the data, then the system must have mechanisms to restrict their access. This requires the file access controls found in typical multiuser systems.
  • Multilevel mode – not all users currently on the system are cleared for all data stored on the system. The system must have an access control mechanism that enforces MLS restrictions. It must also have mechanisms to enforce multiuser file access restrictions.

The phrase “need to know” refers to a commonly-enforced rule in organizations that handle classified information. In general, a security clearance does not grant blanket permission to look at all information classified at that level or below. The clearance is really only the first step: people are only allowed to look at classified information that they need to know as part of the work they do.

In other words, if we give Janet a Secret clearance to work on a cryptographic device, then she has a need to know the Secret information related to that device. She does not have permission to study Secret information about spy satellites, or Secret cryptographic information that doesn’t apply to her device. If she is using a multiuser system containing Secret information about her project, other cryptographic projects, and even spy satellites, then the system must prevent Janet from browsing information belonging to the other projects and activities. On the other hand, the system should be able to grant Janet permission to look at other materials if she really needs the information to do her job.

A computer’s operating mode determines what access control mechanisms it needs. Dedicated systems might not require any mechanisms beyond physical security. Computers running at system high must have user-based access restrictions like those typically provided in Unix and in “professional” versions of Microsoft Windows. In multilevel mode, the system must prevent data from higher security levels from leaking to users who have lower clearances: this requires a special mechanism.

Typically, an MLS mechanism works as follows: users, computers, and networks carry computer-readable labels to indicate security levels. Data may flow between items at the same level or from a lower level to a higher level (Figure 2). Thus, Top Secret users can share data with one another, and a Top Secret user can retrieve information from a Secret user. The mechanism does not allow data from Top Secret (a higher level) to flow into a file or other location visible to a Secret user (at a lower level). If data is not subject to classification rules, it belongs to the “Unclassified” security level. On a computer this would include most application programs and any computing resources shared by all users.

Figure 2: Data flows allowed by an MLS mechanism

A direct implementation of such a system allows the author of a Top Secret report to retrieve information entered by users operating at Secret or Confidential and merge it with Top Secret information. The user with the Secret clearance cannot “read up” to see the Top Secret result, since the data only flows in one direction between Secret and Top Secret. Unclassified data can be made visible to all users.

It isn’t enough to simply prevent users with lower clearances from reading data carrying higher classifications. What if a user with a Top Secret clearance stores some Top Secret data in a file readable by a Secret user? This causes the same problem as “reading up” since it makes the Top Secret data visible to the Secret user. Some may argue that Top Secret users should be trusted not to do such a thing. In fact, some would argue that they would never do it because it’s a violation of the Espionage Act. Unfortunately, this argument does not take into account the risk of a Trojan horse.

For example, Figure 3 shows what could happen if an attacker inserts a macro function with a Trojan horse capability into a word processing file. The attacker has stored the macro function in a Confidential file and has told a Top Secret user to examine the file. When the user opens the Confidential file, the macro function starts running, and it tries to copy files from the Top Secret user’s directory into Confidential files belonging to the attacker. This is called “writing down” the data from a higher security level to a lower one.

Figure 3: A macro function with a Trojan horse could leak data to a lower security level

In a system with typical access control mechanisms, the macro will succeed since the attacker can easily set up all of the permissions needed to allow the Top Secret user to write data into the other user’s files. Clearly, the system cannot enforce MLS reliably if Trojan horse programs can circumvent MLS protections. There is no way users can avoid Trojan horse programs with 100% reliability, as suggested by the success of e-mail viruses. An effective MLS mechanism needs to block “write down” attempts as well as “read up” attempts.

Bell-LaPadula Model

The most widely recognized approach to MLS is the Bell-LaPadula security model (Bell and La Padula, 1974). The model effectively captures the essentials of the access restrictions implied by conventional military security levels. Most MLS mechanisms implement Bell-LaPadula or a close variant of it. Although Bell-LaPadula has accurately defined an MLS capability that keeps data safe, it has not led to the widespread development of successful multilevel systems. In practice, developers have not been able to produce MLS mechanisms that work reliably with high confidence, and some important defense applications require a “write down” capability that renders Bell-LaPadula irrelevant (for example, see the later section “Sensor to Shooter”).

In the Bell-LaPadula model, programs and processes (called subjects) try to transfer information via files, messages, I/O devices, or other resources in the computer system (called objects). Each subject and object carries a label containing its security level, that is, the subject’s clearance level or the object’s classification level. In the simplest case, the security levels are arranged in a hierarchy as shown earlier in Figure 1. More elaborate cases involve compartments, as described in a later section.

The Bell-LaPadula model enforces MLS access restrictions by implementing two simple rules: the simple security property and the *-property. When a subject tries to read from or write to an object, the system compares the subject’s security label with the object’s label and applies these rules. Unlike typical access restrictions on multiuser computing systems, these restrictions are mandatory: no users on the system can turn them off or bypass them. Typical multiuser access restrictions are discretionary, that is, they can be enabled or disabled by system administrators and often by individual users. If users, or even administrative users, can modify the access rules, then a Trojan horse can modify or even disable those rules. To prevent leakage by browsing users and Trojan horses, Bell-LaPadula systems always enforce these two properties.

Simple Security Property:

A subject can read from an object as long as the subject’s security level is the same as, or higher than, the object’s security level. This is sometimes called the no read up property.

*-Property:

A subject can write to an object as long as the subject’s security level is the same as or lower than the object’s security level. This is sometimes called the no write down property.

The simple security property is obvious: it prevents people (or their processes) from reading data whose classification exceeds their security clearances. Users can’t “read up” relative to their security clearances. They can “read down,” which means that they can read data classified at or below the same level as their clearances.

The *-property prevents people with higher clearances from passing highly classified data to users who don’t share the appropriate clearance, either accidentally or intentionally. User programs can’t “write down” into files that carry a lower security level than the process running them. This prevents Trojan horse programs from secretly leaking highly classified data. Figure 4 illustrates these properties: the dashed arrows show data being read in compliance with the simple security property, and the lower solid arrow shows an attempted “write down” being blocked by the *-property.

Figure 4: If the program can read Top Secret data, it can’t write data to a lower level

A system enforcing the Bell-LaPadula model blocks the word processing macro in either of two ways, depending on the security level at which the user runs the word processing program. In one case, shown in Figure 4, the user runs the program at Top Secret. This allows the macro to read the user’s Top Secret files, but the *-property prevents the macro from writing to the attacker’s Confidential files. When the process tries to open a Confidential file for writing, the MLS access rules prevent it. In the other case, the Top Secret user runs the program at the Confidential level, since the file is classified Confidential. The program’s macro function can read and modify Confidential files, including files set up by the attacker, but the simple security property prevents the macro from reading any Secret or Top Secret files.
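
To make the two rules concrete, here is a minimal sketch, in Python, of how a reference monitor might apply them. The numeric ranks and the function names are illustrative assumptions, not taken from any real product.

    # Illustrative ranks for the hierarchy in Figure 1.
    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def can_read(subject, obj):
        # Simple security property: no read up.
        return LEVELS[subject] >= LEVELS[obj]

    def can_write(subject, obj):
        # *-property: no write down.
        return LEVELS[subject] <= LEVELS[obj]

    # The macro of Figure 3, run at Top Secret (the first case):
    assert can_read("Top Secret", "Top Secret")         # it reads the user's files
    assert not can_write("Top Secret", "Confidential")  # the write down is blocked

    # The same macro run at Confidential (the second case):
    assert can_write("Confidential", "Confidential")    # it writes the attacker's files
    assert not can_read("Confidential", "Top Secret")   # the read up is blocked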

Compartments

Although the hierarchical security levels like Top Secret are familiar to most people, they are not the only restrictions placed on information in the defense community. Organizations apply hierarchical security levels (Confidential, Secret, or Top Secret) to their data according to the damage that might be caused by leaking that data. Some organizations add other markings to classified material to further restrict its distribution. These markings go by many names: compartments, codewords, caveats, categories, and so on, and they serve many purposes. In some cases, the markings indicate whether or not the data may be shared with particular organizations, enterprises, or allied countries. In many cases these markings give the data’s creator or owner more control over the data’s distribution. Each marking indicates another restriction placed on the distribution of a particular classified data item. People can only receive the classified data if they comply with all restrictions placed on the data’s distribution.

The Bell-LaPadula model refers to all of these additional markings as compartments. A security level may include compartment identifiers in addition to a hierarchical security level. If a particular file’s security level includes one or more compartments, then the user’s security level must also include those compartments or the user won’t be allowed to read the file.

A system with compartments generally acquires a large number of distinct security levels: one for every legal combination of a hierarchical security level with zero or more compartments. The interrelationships between these levels form a directed graph called a lattice. Figure 5 shows the lattice for a system that contains Secret and Top Secret information with compartments Ace and Bar.

The arrows in the lattice show which security levels can read data labeled with other security levels. If the user Cathy has a Top Secret clearance with access to both compartments Ace and Bar, then she has permission to read any data on the system (assuming its owner has also given her “read” permission to that data). We determine the access rights associated with other security labels by following arrows in Figure 5.

Figure 5: Lattice for Secret and Top Secret including the compartments Ace and Bar

If Cathy runs a program with the label Secret Ace, then the program can read data labeled Unclassified, Secret, or Secret Ace. The program can’t read data labeled Secret Bar or Secret Ace Bar, since its security label doesn’t contain the Bar compartment. Figure 5 illustrates this: there is no path to Secret Ace that comes from a label containing the Bar compartment. Likewise, the program can’t read Top Secret data because it is running at the Secret level, and no Top Secret labels lead to Secret labels.
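
The compartment rules can also be made concrete. The following Python sketch implements the “dominates” comparison that underlies the lattice in Figure 5; the label encoding is an illustrative assumption.

    # A label pairs a hierarchical level with a set of compartments.
    LEVELS = {"Unclassified": 0, "Secret": 2, "Top Secret": 3}

    def dominates(subject, obj):
        # A subject can read an object only if its level is at least as
        # high and it holds every compartment in the object's label.
        (s_level, s_comps), (o_level, o_comps) = subject, obj
        return LEVELS[s_level] >= LEVELS[o_level] and o_comps <= s_comps

    cathy   = ("Top Secret", {"Ace", "Bar"})   # Cathy's clearance
    program = ("Secret", {"Ace"})              # the program she runs

    assert dominates(cathy, program)                        # Cathy can read anything here
    assert dominates(program, ("Secret", set()))            # Secret Ace reads plain Secret
    assert not dominates(program, ("Secret", {"Bar"}))      # it lacks the Bar compartment
    assert not dominates(program, ("Top Secret", {"Ace"}))  # and it can't read up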

MLS and System Privileges

A high security clearance like Cathy’s shows that a particular organization is willing to trust Cathy with certain types of classified information. It is not a blank check that grants access to every resource on a computer system. MLS access rules always work in conjunction with the system’s other access rules.

Systems that enforce MLS access rules always combine them with conventional, user-controlled access permissions. If a Secret or Confidential user blocks access to a file by other users, then a Top Secret user can’t read the file either. The “need to know” rule means that classified information should only be shared among individuals who genuinely need the information. Individual users are supposed to keep classified information protected from arbitrary browsing by other users. Higher security clearances do not grant permission to arbitrarily browse: access is still restricted by the need to know requirement.
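
As a sketch of how the two kinds of checks combine, the following Python fragment grants read access only when both the mandatory MLS comparison and a discretionary access control list (ACL) allow it. The ACL format and all names are invented for illustration.

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def dominates(subject, obj):
        (s_level, s_comps), (o_level, o_comps) = subject, obj
        return LEVELS[s_level] >= LEVELS[o_level] and o_comps <= s_comps

    def may_read(user, user_label, file_label, file_acl):
        # Access requires BOTH the mandatory MLS check and the
        # discretionary need-to-know check (the file's ACL).
        return dominates(user_label, file_label) and user in file_acl

    # Janet is cleared for Secret, but the satellite file's ACL omits her:
    janet = ("Secret", set())
    assert not may_read("janet", janet, ("Secret", set()), file_acl={"tom"})
    # Her own project's file lists her, so both checks pass:
    assert may_read("janet", janet, ("Secret", set()), file_acl={"janet"})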

Users who have Top Secret and higher clearances don’t automatically acquire administrative or “super user” status on multilevel computer systems, even if they are cleared for everything on the computer. In a way, the Top Secret security level actually restricts what the user can do: a program running at Top Secret can’t install an unclassified application program, for example. Many administrative tasks, like installing application programs and other shared resources, must take place at the unclassified level. If an administrator installs programs while running at the Top Secret level, then the programs will be installed with Top Secret labels. Users with lower clearances wouldn’t be authorized to see the programs.

Limitations in MLS Mechanisms

Despite strong support from the military community and a strong effort by computing vendors and computer security researchers, MLS mechanisms failed to provide the security and functionality required by the defense community. First, security researchers and MLS system developers found it to be extremely difficult, and perhaps impossible, to completely prevent information flow between different security levels in an MLS system. We will explore this problem further in the next section on “Assurance.” A second problem was the virus threat: MLS information flow restrictions do nothing to prevent a virus introduced at a lower clearance level from propagating into higher clearance levels. Finally, the end user community found a number of cases in which the Bell-LaPadula model of information flow did not entirely satisfy its operational and security needs.

Self-replicating software like computer viruses became a minor phenomenon among the earliest home computer users in the late 1970s. While viruses weren’t enough of a phenomenon to interest MLS researchers and developers, MLS systems caught the interest of the pioneering virus researcher Fred Cohen (1990, 1994). In 1984 he demonstrated that a virus inserted at the unclassified level of a system that implemented the Bell-LaPadula model could rapidly spread throughout all security levels of the system. This particular infestation did not reflect a bug in the MLS implementation. Instead, it indicated a flaw in the Bell-LaPadula model, which strives to allow information flows from low to high while preventing flows from high to low. Viruses exploit exactly such a flow from low to high, so MLS protection based on Bell-LaPadula provides no defense against them.

Viruses represented one case in which the Bell-LaPadula model did not meet the end users’ operational and security needs. Additional cases emerged as end users gained experience with MLS systems. One problem was that the systems tended to collect a lot of “overclassified” information. Whenever a user created a document at a high security level, the document would have to retain that security level even if the user removed all sensitive information in order to create a less-classified or even unclassified document. In essence, end users often needed a mechanism to “downgrade” information so its label reflected its lowered sensitivity.

The downgrading problem became especially important as end users sought to develop “sensor to shooter” systems. These systems would use highly classified intelligence data to produce tactical commands to be sent to combat units whose radios received information at the Secret level or lower (see the later section on “Sensor to Shooter”). In practice, systems would address the downgrading problem by installing privileged programs that bypassed the MLS mechanism to downgrade information. While this served as a convenient patch to correct the problem, it also showed that practical systems did not entirely rely on the Bell-LaPadula mechanisms that had cost so much to build and validate. This further eroded the defense community’s interest in MLS based on Bell-LaPadula products.

The MLS Assurance Problem

Members of the defense community identified the need for MLS-capable systems in the 1960s, and a few vendors implemented the basic features (Weissman 1969, Hoffman 1973, Karger and Schell 1974). However, government studies of the MLS problem emphasized the danger of relying on large, opaque operating systems to protect really valuable secrets (Ware 1970, Anderson 1972). Operating systems were already notorious for unreliability, and these reports highlighted the threat of a software bug allowing leaks of highly sensitive information. The recommended solution was to achieve high assurance through extensive analysis, review, and testing.

High assurance would clearly increase vendors’ development costs and lead to higher product costs. This did not deter the US defense community, which foresaw long-term cost savings. Karger and Schell (1974) repeated an assertion that MLS capabilities could save the US Air Force alone $100,000,000 a year, based on computing costs at that time.

Every MLS device poses a fundamental question: does it really enforce MLS, or does it leak information somehow? The first MLS challenge is to develop a way to answer that question. We can decompose the problem into two more questions:

  • What does it really mean to enforce MLS?
  • How can we evaluate a system to verify that it enforces MLS?

The first question was answered by the development of security models, like the Bell-LaPadula model summarized earlier. A true security model provides a formal, mathematical representation of MLS information flow restrictions. The formal model makes the enforcement problem clear to non-programmers. It also makes the operating requirement clear to the programmers who implement the MLS mechanisms.

To address the evaluation question, designers needed a way to prove that the system’s MLS controls indeed work correctly. By the late 1960s, this had become a serious challenge. Software systems had become much too large for anyone to review and validate: in The Mythical Man-Month, Brooks (1975) described the difficulties of building a large-scale software system and reported that IBM had over a thousand people working on its groundbreaking OS/360. Project size wasn’t the only challenge in building reliable and secure software: smaller teams, like the team responsible for the Multics security mechanisms, could not detect and close every vulnerability (Karger and Schell, 1974).

The security community developed two sets of strategies for evaluating MLS systems: strategies for designing a reliable MLS system and strategies for proving that the MLS system works correctly. The design strategies emphasized a special structure, called the reference monitor, to ensure uniform enforcement of data access rules. The design strategies further required that the designers explicitly identify all system components that played a role in enforcing MLS; those components were defined as being part of the trusted computing base, which included all components that required high assurance.

The strategies for proving correctness relied heavily on formal design specifications and on techniques to analyze those designs. Some of these strategies were a reaction to ongoing quality control problems in the software industry, but others were developed as an attempt to detect covert channels, a largely unresolved weakness in MLS systems.

System Design Strategies

During the early 1970s, the US Air Force commissioned a study to develop feasible strategies for constructing and verifying MLS systems. The study pulled together significant findings by security researchers at that time into a report, called the Anderson report (1972), which heavily influenced subsequent US government support of MLS systems. A later study (Nibaldi 1979) identified the most promising strategies for trusted system development and proposed a set of criteria for evaluating such systems.

These proposals led to published criteria for developing and evaluating MLS systems called the Trusted Computer System Evaluation Criteria (TCSEC), or “Orange Book” (Department of Defense, 1985a). The US government established a process by which computer system vendors could submit their products for security evaluation. A government organization, the National Computer Security Center (NCSC), evaluated products against the TCSEC and rated the products according to their capabilities and trustworthiness. For a product to achieve the highest rating for trustworthiness, the NCSC needed to verify the correctness of the product’s design.

To make design verification feasible, the Anderson report recommended (and the TCSEC required) that MLS systems enforce security through a “reference validation mechanism” that today we call the reference monitor. The reference monitor is the central point that enforces all access permissions. Specifically, a reference monitor must have three features:

  • It must be tamperproof – there must be no way for attackers or others on the system to intentionally or accidentally disable it or otherwise interfere with its operation.
  • It must be nonbypassable – all accesses to system resources must be mediated by the reference monitor. There must be no way to gain access to system resources except through mechanisms that use the reference monitor to make access control decisions.
  • It must be verifiable – there must be a way to convince third-party evaluators (like the NCSC) that the system will always enforce MLS correctly. The reference monitor should be small and simple enough in design and implementation to make verification practical.

Operating system designers had by that time recognized the concept of an operating system kernel: a portion of the system that made unrestricted accesses to the computer’s resources so that other components didn’t need unrestricted access. Many designers believed that a good kernel should be small for the same reason as a reference monitor: it’s easier to build confidence in a small software component than in a large one. This led to the concept of a security kernel: an operating system kernel that incorporated a reference monitor. Layered atop the security kernel would be supporting processes and utility programs to serve the system’s users and the administrators. Some non-kernel software would require privileged access to system resources, but none would bypass the security kernel. The combination of the computer hardware, the security kernel, and its privileged components made up the trusted computing base (TCB) – the system components responsible for enforcing MLS restrictions. The TCB was the focus of assurance efforts: if it worked correctly, then the system would correctly enforce the MLS restrictions.

Verifying System Correctness

Research culminating in the TCSEC had identified three essential elements to ensure that an MLS system operates correctly:

  • Security policy – an explicit statement of what the system must do. This was based on a security model that described how the system enforced MLS.
  • Security mechanisms – features of the reference monitor and the TCB that enforce the policy.
  • Assurances – evidence that the mechanisms actually enforce the policy.

The computer industry has always relied primarily on system testing for quality assurance. However, the Anderson report recognized the shortcomings of testing by repeating Dijkstra’s observation that tests can only prove the presence of bugs, not their absence. To improve assurance, the report made specific recommendations about how MLS systems should be designed, built, and tested. These recommendations became requirements in the TCSEC, particularly for products intended for the most critical applications:

  • Top-down design – the product design should have a top-level design specification and a detailed design specification.
  • Formal policy specification – there must be a formal specification of the product’s security policy. This may be a restatement of Bell-LaPadula that identifies the elements of the product that correspond to elements in the Bell-LaPadula model.
  • Formal top-level specification – there must be a formal specification of the product’s externally visible behavior.
  • Design correctness proof – there must be a mathematical proof showing that the top-level design is consistent with the security policy.
  • Specification to code correspondence – there must be a way to show that the mechanisms appearing in the formal top-level specification are implemented in the source code.

These activities did not replace conventional product development techniques. Instead, these additional tasks were combined with the accepted “best practices” used in conventional computer system development. These practices tended to follow a “waterfall” process (Boehm, 1981; Department of Defense, 1985b): first, the builders develop a requirements specification, from that they develop the top-down design, then they implement the product, and finally they test the product against the requirements. In the idealized process for developing an MLS product, the requirements specification focuses on testable functions and measurable performance capabilities while the policy model captures security requirements that can’t be tested directly. Figure 6 shows how these elements worked together to validate an MLS product’s correct operation.

Product development has always been expensive. Many development organizations, especially smaller ones, try to save time and money by skipping the planning and design steps of the waterfall process. The TCSEC did not demand the waterfall process, but its requirements for highly assured systems imposed significant costs on development organizations. Both the Nibaldi study and the TCSEC recognized that not all product developers could afford to achieve the highest levels of assurance. Instead, the evaluation process identified a range of assurance levels that a product could achieve. Products intended for less-critical activities could spend less money on their development process and achieve a lower standard of assurance. Products intended for the most critical applications, however, were expected to meet the highest practical assurance standard.

Figure 6: Interrelation of policy, specifications, implementation, and assurances

Covert Channels

Shortly after the Anderson report appeared, Lampson (1973) published a note which examined the general problem of keeping information in one program secret from another, a problem at the root of MLS enforcement. Lampson noted that computer systems contain a variety of channels by which two processes might exchange data. In addition to explicit channels like the file system or interprocess communications services, there are covert channels that can also carry data between processes. These channels typically exploit operating system resources shared among all processes. For example, one process may take exclusive control of a file, which other processes can detect when they try to open it; or one process may use up all the free space on the hard drive, an activity the other processes can “see.”

Since MLS systems could not achieve their fundamental objective (to protect secrets) if covert channels were present, defense security experts developed techniques to detect such channels. The TCSEC required a covert channel analysis of all MLS systems except those achieving the lowest assurance levels.

In general, there are two categories of covert channels: storage channels and timing channels. A storage channel transmits data from a “high” process to a “low” one by writing data to a storage location visible to the “low” one. For example, if a Secret process can see how much memory is left after a Top Secret process allocates some memory, the Top Secret process can send a numeric message by allocating or freeing an amount of memory equal to the message’s numeric value. The channel works because the “high” process can set the contents of a storage location (the amount of free memory) to a value that the “low” process can read.
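
A toy Python simulation of this storage channel appears below; the pool size, the API, and the single-allocation encoding are invented purely to illustrate the idea.

    TOTAL_MEMORY = 256              # illustrative pool size

    class SharedPool:
        def __init__(self):
            self.allocated = 0
        def allocate(self, n):      # any process may allocate
            self.allocated += n
        def free_space(self):       # any process may observe free space
            return TOTAL_MEMORY - self.allocated

    pool = SharedPool()

    secret_value = 42               # what the Top Secret process wants to leak
    pool.allocate(secret_value)     # "high" encodes it in its allocation

    received = TOTAL_MEMORY - pool.free_space()   # "low" decodes it
    assert received == secret_value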

A timing channel is one in which the “high” process communicates to the “low” one by varying the timing of some detectable event. For example, the Top Secret process might instruct the hard drive to visit particular disk blocks. When the Secret process tries to read data from the hard drive, the disk activity generated by the Top Secret process imposes varying delays on the Secret program’s own disk accesses. The Top Secret program can systematically impose delays on the Secret program’s disk activities, and thus transmit information through the pattern of those delays. Wray (1991) describes a covert channel based on hard drive access speed, and also uses the example to show how ambiguous the two covert channel categories can be.
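
The timing idea can be simulated the same way. In the Python sketch below, the fixed delay and threshold stand in for genuine device contention, so they are purely illustrative.

    import time

    DELAY = 0.02        # delay "high" imposes by keeping the disk busy
    THRESHOLD = 0.01    # how "low" distinguishes a 1 from a 0

    def timed_access(high_sends_one):
        # "Low" times its own access, which "high" can choose to slow down.
        start = time.monotonic()
        if high_sends_one:
            time.sleep(DELAY)       # contention imposed by the "high" process
        return (time.monotonic() - start) > THRESHOLD

    bits_sent     = [1, 0, 1, 1]
    bits_received = [timed_access(b == 1) for b in bits_sent]
    assert bits_received == [True, False, True, True]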

The fundamental strategy for seeking covert channels is to inspect all shared resources in the system, decide whether any could yield an effective covert channel, and measure the bandwidth of whatever covert channels are uncovered. While a casual inspection by a trained analyst may often uncover covert channels, there is no guarantee that a casual inspection will find all such channels. Systematic techniques help increase confidence that the search has been comprehensive. An early technique, the shared resource matrix (Kemmerer, 1983; 2002), can analyze a system from either a formal or informal specification. While the technique can detect covert storage channels, it cannot detect covert timing channels. An alternative approach, noninterference, requires formal policy and design specifications (Haigh and Young, 1987). This technique locates both timing and storage channels by proving theorems to show that processes in the system, as described in the design specification, can’t perform detectable (“interfering”) actions that are visible to other processes in violation of MLS restrictions.

To be effective at locating covert channels, the design specification must accurately model all resource sharing that is visible to user processes in the system. Typically, the specification focuses its attention on the system functions made available to user processes: system calls to manipulate files, allocate memory, communicate with other processes, and so on. The development program for the LOCK system (Saydjari, Beckman, and Leaman, 1989; Saydjari, 2002), for example, included the development of a formal design specification to support a covert channel analysis. The LOCK design specification identified all system calls, described all inputs and outputs produced by these calls, including error results, and represented the internal mechanisms necessary to support those capabilities. The LOCK team used a form of noninterference to develop proofs that the system enforced MLS correctly (Fine, 1994).

As with any flaw detection technique, there is no way to confirm that all flaws have been found. Techniques that analyze the formal specification will detect all flaws in that specification, but there is no way to conclusively prove that the actual system implements the specification perfectly. Techniques based on less-formal design descriptions are also limited by the quality of those descriptions: if the description omits a feature, there’s no way to know if that feature opens a covert channel. At some point there must be a trade-off between the effort spent on searching for covert channels and the effort spent searching for other system flaws.

In practice, system developers have found it almost impossible to eliminate all covert channels. While evaluation criteria encourage developers to eliminate as many covert channels as possible, the criteria also recognize that practical systems will probably include some channels. Instead of eliminating the channels, developers must identify them, measure their possible bandwidth, and provide strategies to reduce their potential for damage. While not all security experts agree that covert channels are inevitable (Proctor and Neumann, 1992), typical MLS products contain covert channels. Thus, even the approved MLS products contain known weaknesses.

Evaluation, Certification, and Accreditation

How does assurance fit into the process of actually deploying a system? In theory, one can plug a computer in and throw the switch without knowing anything about its reliability. In the defense community, however, a responsible officer must approve all critical systems before they can go into operation, especially if they handle classified information. Approval rarely occurs unless the officer receives appropriate assurance that the system will operate correctly. There are three major elements to this approval process in the US defense community:

  • Evaluation – validates a set of security properties in a particular product or device. Product evaluation is not strictly required.
  • Certification – verifies that a specific installation and configuration of a system meets the requirements for that site. While not absolutely required, systems are rarely approved for operation without at least an attempt at certification.
  • Accreditation – formal approval for operation by a senior military commander. The approval is almost always based on the results of the certification.

In military environments, a highly-ranked officer, typically an admiral or general, must formally grant approval (accreditation) before a critical system goes into operation. Accreditation shows that the officer believes the system is safe to operate, or at least that the system’s risks are outweighed by its benefits. The decision is based on the results of the system’s certification: a process in which technical experts analyze and test the system to verify that it meets its security and safety requirements. The certification and accreditation process must meet certain standards (Department of Defense, 1997). Under rare, emergency conditions an officer could accredit a system even if there are problems with the certification.

Certification can be very expensive, especially for MLS systems. Tests and analyses must show that the system is not going to fail in a way that will leak classified information or interfere with the organization’s mission. Tests must also show that all security mechanisms and procedures work as specified in the requirements. Certification of a custom-built system often involves design reviews and source code inspections. This work requires a lot of effort and special skills, leading to very high costs.

The product evaluation process heralded by the TCSEC was intended to provide off-the-shelf computing equipment that reliably enforced MLS restrictions. Although organizations could implement, certify, and accredit custom systems enforcing MLS, the certification costs were hard to predict and could overwhelm the project budget. If system developers could use off-the-shelf MLS products, their certification costs and project risks would be far lower. Certifiers could rely on the security features verified during evaluation, instead of having to verify a product’s implementation themselves.

Product evaluations assess two major aspects: functionality and assurance. A successful evaluation indicates that the product contains the appropriate functional features and meets the specified level of assurance. The TCSEC defined a range of evaluation levels to reflect increasing levels of compliance with both functional and assurance requirements. Each higher evaluation level either incorporated the requirements of the next lower level, or superseded particular requirements with a stronger requirement. Alphanumeric codes indicated each level, with D being lowest and A1 being highest:

  • D – the lowest level, assigned to evaluated products that achieve no higher level; this level was rarely used.
  • C1 – a single-user system; this level was eventually discarded.
  • C2 – a multi-user system, like a Unix time sharing system.
  • B1 – the lowest level of evaluation for a system with MLS support.
  • B2 – an MLS system that incorporates basic architectural assurance requirements and a covert channel analysis.
  • B3 – an MLS system that incorporates more significant assurance requirements, including formal specifications.
  • A1 – an MLS system whose design has been proven correct mathematically.

Although the TCSEC defined a whole range of evaluation levels, the government wanted to encourage vendors to develop systems that met the highest levels. In fact, one of the pioneering evaluated products was SCOMP, an A1 system constructed by Honeywell (Fraim, 1983). Very few other vendors pursued an A1 evaluation. High assurance caused high product development costs; one project estimated that the high assurance tasks added 26% to the development effort’s labor hours (Smith, 2001). In the fast-paced world of computer product development, that extra effort can cause delays that make the difference between a product’s success or failure.

(back to top)

The Empty Shelf

To date, no commercial computer vendor has offered a genuine “off the shelf” MLS product. A handful of vendors have implemented MLS operating systems, but none of these were standard product offerings. All MLS products were expensive, special purpose systems marketed almost exclusively to military and government customers. Almost all MLS products were evaluated to the B1 level, meeting minimum assurance standards. Thus, the TCSEC program failed on two levels: it failed to persuade vendors to incorporate MLS features into their standard products, and it failed to persuade more than a handful of vendors to produce products that met the “A1” requirements for high assurance.

A survey of security product evaluations completed by the end of 1999 (Smith, 2000) noted that only a fraction of security products ever pursued evaluation. Most products pursued “medium assurance” evaluations, which could be sufficient for a minimal (B1) MLS implementation.

TCSEC evaluations were discontinued in 2000. The handful of modern MLS products are evaluated under the Common Criteria (Common Criteria Project Sponsoring Organizations, 1999), evaluation criteria designed to address a broader range of security products.

The most visible failure of MLS technology is its absence from typical desktops. As Microsoft’s Windows operating systems came to dominate the desktop in the 1990s, Microsoft made no significant move to implement MLS technology. Versions of Windows have earned a TCSEC C2 evaluation and a more-stringent EAL-4 evaluation under the Common Criteria, but Windows has never incorporated MLS. The closest Microsoft has come to offering MLS technology has been its “Palladium” effort, announced in 2002. The technology focused on the problem of digital rights management – restricting the distribution of copyrighted music and video – but the underlying mechanisms caught the interest of many in the MLS community because of potential MLS applications. The technology was slated for incorporation in a future Windows release codenamed “Longhorn,” but was dropped from Microsoft’s plans in 2004 (Orlowski, 2004).

Arguably, several factors contributed to the failure of the MLS product space. Microsoft demonstrated clearly that there was a giant market for products that omit MLS. Falling computer prices also played a role: sites where users typically work at a couple of different security levels find it cheaper to put two computers on every desktop than to try to deploy MLS products. Finally, the sheer cost and uncertainty of MLS product development undoubtedly discouraged many vendors. It is hard to justify the effort to develop a “highly secure” system when it’s likely that the system will still have identifiable weaknesses, like covert channels, after all the costly, specialized work is done.

Multilevel Networking

As computer costs fell and performance soared during the 1980s and 1990s, computer networks became essential for sharing work and resources. Long before computers were routinely wired to the Internet, sites were building local area networks to share printers and files. In the defense community, multilevel data sharing had to be addressed in a networking environment. Initially, the community embraced networks of cheap computers as a way to temporarily sidestep the MLS problem. Instead of tackling the problem of data sharing, many organizations simply deployed separate networks to operate at different security levels, each running in system high mode.

This approach did not help the intelligence community. Many projects and departments needed to process information carrying a variety of compartments and code words. It simply wasn’t practical to provide individual networks for every possible combination of compartments and code words, since there were so many to handle. Furthermore, intelligence analysts often spent their time combining information from different compartments to produce a document with a different classification. In practice, this work demanded an MLS desktop and often required communications over an MLS network.

Thus, MLS networking took two different paths in the 1990s. Organizations in the intelligence community, reflecting the needs of their analysts, continued to pursue MLS products. In networking, this called for labeled networks, that is, networks that carried classification labels on their traffic to ensure that MLS restrictions were enforced.

Many other military organizations, however, took a different path. Computers in most military organizations tended to cluster into networks handling data up to a specified security level, operating in system high mode. This choice was not driven by an architectural vision; it was more likely the effect of the desktop networking architecture emerging in the commercial marketplace combined with existing military computer security policies. Ultimately, this strategy was named multiple single levels (MSL) or multiple independent levels of security (MILS).

Labeled Networks

The fundamental objective of a labeled network is to prevent leakage of classified information. The leakage could occur through eavesdropping on the network infrastructure or by delivering data to an uncleared destination. This yielded two different approaches to labeled networking. The more complex approach used cryptography to keep different security levels separate and to prevent eavesdropping. The simpler approach inserted security labels into network traffic and relied on a reference monitor mechanism installed in network interfaces to restrict message delivery.

In practice, the cryptographic hardware and key management processes have often been too expensive to use in certain large scale MLS network applications. Instead, sites have relied on physical security to protect their MLS networks from eavesdropping. This has been particularly true in the intelligence community, where the proliferation of compartments and codewords has made it impractical to use cryptography to keep security levels separate.

Within such sites, the network infrastructure is physically isolated from any contact except by people with Top Secret clearances supported by special background investigations. Network wires are protected from casual tampering, though not necessarily from the sophisticated attacks an uncleared outsider might be tempted to try. MLS access restrictions rely on security labels embedded in network messages. If cryptography is used at all, its primary purpose is to protect the integrity of security labels.

Standard traffic using the Internet Protocol (IP) does not include security labels, but the Internet community developed standards for such labels, beginning with the IP Security Option (IPSO) (St. Johns, 1988). The US Defense Intelligence Agency developed this further when implementing a protocol for the DOD Intelligence Information System (DODIIS). The protocol, called DODIIS Network Security for Information Exchange (DNSIX), specified both the labeling and the checking process to be used when passing traffic through a DNSIX network interface (LaPadula, LeMoine, Vukelich, and Woodward, 1990). To increase the assurance of the resulting system, the specification included a design description for the checking process; the design had been verified against a security model that described the required MLS enforcement.
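
For illustration, the following Python sketch builds a basic IPSO label using the classification codes assigned by RFC 1108, the revised IPSO definition. A real DNSIX interface would also handle protection authority flags and check each label against the interface’s accreditation range, all of which is omitted here.

    IPSO_TYPE = 130                 # IP Basic Security Option type code

    # Classification level codes from RFC 1108.
    CLASSIFICATION = {
        "Top Secret":   0x3D,
        "Secret":       0x5A,
        "Confidential": 0x96,
        "Unclassified": 0xAB,
    }

    def build_ipso(level):
        # Option bytes: type, total length, classification level.
        # (Protection authority flags omitted for brevity.)
        return bytes([IPSO_TYPE, 3, CLASSIFICATION[level]])

    assert build_ipso("Secret") == b"\x82\x03\x5a"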

In the US, MLS cryptographic techniques were exclusively the domain of the National Security Agency (NSA), since it set the standards for encrypting classified information. Traditional NSA protocols encrypted traffic at the link level, carrying the traffic without security labels. During the 1980s and 1990s the NSA started a series of programs to develop cryptographic protocols for handling labeled, multilevel data, including the Secure Data Network System (SDNS), the Multilevel Network System Security Program (MNSSP), and the Multilevel Information System Security Initiative (MISSI).

These programs yielded Security Protocol 3 (SP3) and the Message Security Protocol (MSP). SP3 protects messages at the network protocol layer (layer 3) and has been used in gateway encryption devices for the DOD’s Secret IP Router Network (SIPRNET), which shares classified information at the Secret level among approved military and defense organizations. However, SP3 is a relatively old protocol and will probably be superseded by a variant of the IP Security Protocol (IPSEC) adapted for the defense community under the High Assurance Internet Protocol Interoperability Specification (HAIPIS). MSP protects messages at the application level and was originally designed to encrypt e-mail. The Defense Message System, the DOD’s evolving secure e-mail system, uses MSP.

Obviously an MLS network uses encryption to protect traffic against eavesdropping. In addition, MLS protocols can use cryptography to enforce MLS access restrictions. The FIREFLY protocol illustrates this. Developed by the NSA for multilevel telephone and networking applications, FIREFLY uses public-key cryptography to negotiate encryption keys to protect traffic between two entities. Each FIREFLY certificate contains the clearance level of its owner, and the protocol compares the levels when negotiating keys. If the two entities are using secure telephones, then the protocol yields a key whose clearance level matches the lower of the two entities’ levels. If the two entities are establishing a computer network connection, then the negotiation succeeds only if the clearance levels match.
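
The following Python sketch captures only the level-comparison logic just described; the certificate handling and everything else about the real protocol are omitted, and the rules shown are simplified assumptions based on the description above.

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def negotiate_level(cert_a, cert_b, mode):
        # Returns the level of the negotiated key, or None on failure.
        if mode == "telephone":
            # Secure phones: the call runs at the lower of the two levels.
            return min(cert_a, cert_b, key=lambda lvl: LEVELS[lvl])
        if mode == "network":
            # Network connections: the levels must match exactly.
            return cert_a if cert_a == cert_b else None

    assert negotiate_level("Top Secret", "Secret", "telephone") == "Secret"
    assert negotiate_level("Top Secret", "Secret", "network") is None
    assert negotiate_level("Secret", "Secret", "network") == "Secret"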

Multiple Independent Levels of Security (MILS)

Despite the shortage of MLS products, the defense and intelligence communities dramatically expanded their use of computer systems during the 1980s and 1990s. Instead of implementing MLS systems, most organizations chose to deploy multiple computer networks, each dedicated to a single security level. This eliminated the risks of multiuser data sharing across security levels by eliminating the sharing itself. When necessary, less-classified data was copied one way onto servers on a higher-classified network from a removable disk or tape volume.

To simplify data sharing in the MILS environment, many organizations have implemented devices to transfer data from networks at one security level to networks at other levels. These devices generally fall into three categories:

  • Multilevel servers – network servers implemented on an MLS system, with individual network interfaces connecting to system-high networks running at different security levels.
  • One-way guards – devices that transfer data from a network at a “low” security level to a network at a “high” level.
  • Downgrading guards – devices that transfer data in either direction between networks at different security levels.

In a multilevel server, computers on a network at a lower security level can store information on the server, and computers on networks at higher levels can retrieve that information from the same server. Vendors provide a variety of multilevel servers, including web servers, database servers, and file servers. While such systems are popular with some defense organizations, others avoid them: most server products achieve relatively low levels of assurance, which suggests that attackers might find ways to leak information through them from a higher network to a lower one.
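
The access rule such a server enforces is the classic MLS flow rule. Here is a minimal sketch, assuming a simple linear hierarchy; real deployments add compartments and codewords, which turn the comparison into a partial ordering.

    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def may_read(subject_level: str, object_level: str) -> bool:
        # A client network may retrieve only data at or below its own level.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def may_write(subject_level: str, object_level: str) -> bool:
        # A client network may store data at or above its own level, so
        # information never flows downward through the server.
        return LEVELS[subject_level] <= LEVELS[object_level]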

One-way guards implement a one-way data transfer from a network at a lower security level to a network at a higher level. The simplest implementations rely on hardware restrictions to guarantee that traffic flows in only one direction. For example, conventional fiber optic network equipment supports bidirectional traffic, but it’s not difficult to construct fiber optic hardware that only contains a transmitter on one end and a receiver on the other. Such a device can transfer data in one direction with no risk of leaking data in the other. An obvious shortcoming is that there is no efficient way to prevent congestion since the low side has no way of knowing when or if its messages have been received. More sophisticated devices like the NRL Pump (Kang, Moskowitz, and Lee, 1996) avoid this problem by implementing acknowledgements using trusted software. However, devices like the Pump can suffer from the same shortcoming as MLS servers: there are very few trustworthy operating systems on which to implement trusted MLS software, and most achieve relatively low assurance. The trustworthiness of the Pump will often be limited by the assurance of the underlying operating system.
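
The Pump’s essential trick lies in how it times those acknowledgements: the low side’s ACK is delayed by a random amount derived from a moving average of the high side’s recent service rate, so a subverted high-side process cannot precisely modulate ACK timing to signal data downward. The sketch below illustrates the idea only; the class name, window size, and Gaussian delay model are assumptions for illustration, not details of the NRL design.

    import random
    from collections import deque

    class PumpSketch:
        def __init__(self, history: int = 20):
            self.buffer = deque()                    # messages awaiting the high side
            self.service_times = deque(maxlen=history)

        def low_send(self, message: bytes) -> float:
            """Accept a message; return the delay before ACKing the low side."""
            self.buffer.append(message)
            avg = (sum(self.service_times) / len(self.service_times)
                   if self.service_times else 0.01)
            # Randomize around the average so ACK timing leaks little information.
            return max(0.0, random.gauss(avg, avg / 4))

        def high_receive(self, service_time: float) -> bytes:
            """High side removes a message; its service time feeds the average."""
            self.service_times.append(service_time)
            return self.buffer.popleft()             # raises IndexError if empty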

Downgrading guards are important because they address a troublesome side effect of MILS computing: users often end up with overclassified information. A user on a Secret system may work with both Confidential and Secret files, and it is simple to share those files with other users on Secret systems. However, he faces a problem if he needs to pass the Confidential file to a user on a Confidential system: how does he guarantee that no Secret information leaks into the copy he releases? There is no simple, reliable way to do this, especially when using commercial desktop computers.

The same problem often occurs in e-mail systems: a different user on a Top Secret network may wish to send an innocuous but important announcement to her colleagues on a Secret network. She knows that the recipients are authorized to receive the message’s contents, but how does she ensure that no Top Secret information leaks out along with the e-mail message? The problem also appears in military databases: the database may contain information at a variety of security levels, and not all user communities will be cleared to handle data at the level of the whole database. To be useful, the data must be sanitized and then released to users at lower classification levels. When a downgrading guard releases information from a higher security level to a lower one, the downgrading generally falls into one of three categories:

  • Manual review and release – a trained and trusted operator carefully examines all files submitted for downgrading. If the file appears clean of unreleasable information, then it is released.
  • Automated release – the system looks for indicators to show that an authorized person has reviewed the file and approved it for release. The indicators usually include cryptographic authentication, like a digital signature.
  • Automated review – the system uses a set of carefully constructed filters to examine the file to be released. The filters are designed to check for unreleasable data.

The traditional technique was manual review and release. A site would train an operator to identify classified information that should not be released, and the operator would manually review all data passing through the guard. This strategy proved impractical because it is very difficult to reliably scan files for sensitive information. Word processors like Microsoft Word tend to retain sensitive information even after the user has tried to remove it from a file (Byers, 2004). Another problem is steganography: a subverted user on the high side of the guard, or a sophisticated piece of subverted software, can easily embed large data items in graphic images or other digital files so that a visual review won’t detect their presence. Beyond the problem of reliable scanning, there is a human factors problem: few operators remain effective in this job for very long. Military security officers tell of review operators falling into a mode in which they automatically approve everything without review, partly to maintain message throughput and partly out of boredom.

The automated release approach was used by the Standard Mail Guard (SMG) (Smith, 1994). The SMG accepted text e-mail messages that had been reviewed by the message’s author, explicitly labeled for release, and digitally signed using the DMS protocol. The SMG would verify the digital signature and check the signer’s identity against the list of users authorized to send e-mail through the guard. The SMG would also search the message for words associated with classified information that should not be released and block messages containing any such words. Authorized users could also transmit files through the guard by attaching them to e-mails. The attached files had to be reviewed and then “sealed” using a special application program: the SMG would verify the presence of the seal before releasing an attached file.
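
In outline, the SMG’s decision combined three checks: a valid signature, an authorized signer, and a clean word scan. The sketch below shows that outline only; verify_signature() is a hypothetical placeholder for DMS signature verification, and the signer list and word list are invented.

    from typing import Optional

    AUTHORIZED_SIGNERS = {"analyst@high.example.mil"}   # invented identity
    DIRTY_WORDS = {"codeword-alpha", "codeword-bravo"}  # stand-ins for real watch words

    def verify_signature(message: dict) -> Optional[str]:
        """Hypothetical stand-in for DMS signature verification; returns the
        verified signer identity, or None if the signature does not check."""
        return message.get("signer") if message.get("signature_ok") else None

    def release_ok(message: dict) -> bool:
        signer = verify_signature(message)
        if signer is None or signer not in AUTHORIZED_SIGNERS:
            return False                    # unsigned or unauthorized sender
        body = message["body"].lower()
        return not any(word in body for word in DIRTY_WORDS)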

The automated review approach has been used by several guards since the 1980s, primarily to release database records from highly classified databases to networks at lower security levels. Many of these guards were designed to automatically review highly formatted force deployment data. The guards were configured with detailed rules on how to check the fields of the database records so that data was released at the correct security level. In some cases the guards were given instructions on how to sanitize certain database fields to remove highly classified data before releasing the records.
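
A toy version of such field-by-field rules appears below; the field names and the pass/sanitize/block rules are invented for illustration, since real guards were configured from detailed, site-specific release policies.

    from typing import Optional

    # Each rule returns the releasable value, or None to block the record.
    RULES = {
        "unit_name": lambda v: v,                             # releasable as-is
        "location":  lambda v: "REDACTED",                    # always sanitized
        "strength":  lambda v: v if int(v) < 1000 else None,  # block large units
    }

    def review_record(record: dict) -> Optional[dict]:
        """Return a sanitized record, or None if it must not be released."""
        out = {}
        for field, value in record.items():
            rule = RULES.get(field)
            if rule is None:
                return None        # unknown fields block release outright
            result = rule(value)
            if result is None:
                return None
            out[field] = result
        return out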

While guards and multilevel servers provide a clear benefit by enabling data sharing between system-high networks running at different clearance levels, they also pose problems. The most obvious problem is that they put all of the MLS eggs in one basket: the guard centralizes MLS protection in a single device that inevitably draws the interest of attackers. Downgrading guards pose a particular concern, since there are many ways that a Trojan horse on the “high” side of a guard may disguise sensitive information so that it passes successfully through the guard’s downgrading filters. For example, the Trojan could embed classified information in an apparently unclassified image using steganography. Another problem is that the safest place to attach a label to data is at the data’s point of origin: guards are less likely to label data correctly because they are removed from that point of origin (Saydjari, 2004). Guards that use an automated release mechanism may be somewhat less prone to this problem if the guard bases its decision on a cryptographically protected label provided at the data’s point of origin. However, this benefit can be offset by other risks if the guard or the labeling process is hosted on a low-assurance operating system.

Sensor-to-Shooter

During the 1991 Gulf War, the defense community came to appreciate the value of classified satellite images in planning attacks on enemy targets. The only complaint was that the imagery couldn’t be delivered as fast as the tactical systems could take advantage of it (Federation of American Scientists, 1997). In the idealized, state-of-the-art “sensor-to-shooter” system, analysts and mission commanders select targets electronically from satellite images displayed on workstations, and they send the targeting information electronically to tactical units (see Figure 7). Clearly this involves at least one downgrading step, since tactical units probably won’t be cleared to handle satellite intelligence. So far, no general-purpose strategy has emerged for handling automatic downgrading of this kind; in practice, downgrading mechanisms are approved for operation on a case-by-case basis.

Figure 7: In sensor-to-shooter, data flows from higher levels to lower levels.

Nonmilitary Applications Similar to MLS

The following is a list of nonmilitary applications that bear some similarity to the MLS problem. While the list may suggest that a commercial market for MLS technology could someday emerge, a closer look suggests this is unlikely. As noted earlier, MLS systems address a level of risk that doesn’t exist in business environments, and buyers of commercial systems do not want to spend the money required to assure correct MLS enforcement. The following MLS-like business applications illustrate this.

  • Protecting sensitive information – businesses are legally obligated to protect certain types of information from disclosure. However, as noted earlier, the consequences of leaking secrets in the defense community are far graver than those of leaking secrets in the business world. Private companies have occasionally tried using MLS to protect sensitive corporate data, but the attempts have usually been abandoned as too expensive.
  • Digital rights management – To some extent, MLS protection resembles the security problem of digital rights management (DRM). Typically, an organization purchases computer equipment to benefit its business activities, and employees and associates use the computers to support those activities. In both MLS and DRM, an absentee third party has a strong interest in restricting information disclosure, and this interest may in some cases conflict with the objectives of the computer’s owners and users. Here too, however, the trade-off in the business world does not justify the type of investment that MLS protection demands in the defense community.
  • Secure server platforms – As more and more companies deploy e-commerce servers on the Internet, they face increased risks of losing assets to Internet-based crime. In theory, a hard-to-penetrate server built to military specifications would give a company’s customers and investors more confidence of safety. In practice, however, neither the developers of these servers nor their customers have shown much interest in military-grade server security. Several vendors have offered such servers, notably Hewlett-Packard with its Virtual Vault product line, but they have found very little market interest in such products. Arguably this is for the same reason that MLS fails to excite interest in other applications: the risk is not perceived as high enough to justify the expense of military-grade protection. Moreover, attacks on servers arrive from a variety of directions, many of which are not addressed by MLS protections.
  • Firewall platforms – In the 1990s, many companies working on MLS systems developed and sold commercial firewalls that incorporated MLS-type protections, notably the Cyberguard and Sidewinder products. While these products achieved some success in the defense community, neither competed effectively against products hosted on common commercial operating systems.

Conclusion

Despite the failures and frustrations that have dogged MLS product development for the past quarter century, end users still call for MLS capabilities. This is because the problem remains: the defense community needs to share information at multiple security levels. Most of the community solves the problem by working on multilevel data in a system high environment and dealing with downgrading problems on a piecemeal basis. While this solves the problem in some situations, it isn’t practical in others, like sensor-to-shooter applications.

The classic strategies intended to yield MLS products failed in several ways. First, the government’s promotion of product evaluations failed when vendors found that MLS capabilities did not significantly increase product sales. The concept of deploying a provably secure system failed twice: first, when vendors found how expensive and uncertain evaluations could be, especially at the highest levels, and second, when security experts discovered how intractable the covert channel problem could be. Finally, the few MLS products that did make their way to market languished when end users realized how narrowly the products solved their security and sharing problems. The principal successes in MLS today are based on guard and trusted server products.

Glossary

accreditation – approval granted to a computer system to perform a critical, defense-related application. The accreditation is usually granted by a senior military commander.

assurance – a set of processes, tests, and analyses performed on a computing system to ensure that it fulfills its most critical operating and security requirements.

Bell-LaPadula model – a security model that reflects the information flow restrictions inherent in the access restrictions applied to classified information.

certification – the process of analyzing a system being deployed in a particular site to verify that it meets its operational and security requirements.

covert channel – in general, an unplanned communications channel within a computer system that allows violations of its security policy. In an MLS system, this is an information flow that violates MLS restrictions.

cross domain security/solution/system (CDS) – in general, a term applied to multilevel security problems in a networking environment.

evaluation – the process of analyzing the security functions and assurance evidence of a product by an independent organization to verify that the functions operate as required and that sufficient assurance evidence has been provided to have confidence in those functions.

labeled network – a computer network on which all messages or data packets carry labels to indicate the classification level of the information being carried.

multilevel security (MLS) – an operating mode in which the users who share a computing system and/or network do not all hold clearances to view all information on the system.

multiple independent levels of security (MILS) – a networking and desktop computing environment which assigns dedicated, system-high resources for processing classified information at different security levels. Users in a MILS environment may have two or more desktop computers, each dedicated to work at a particular security level.

reference monitor – the component of an operating system that mediates all access attempts by subjects (processes) on the system to objects (files and other system resources).

security model – an unambiguous, often formal, statement of the system’s rules for achieving its security objectives, such as protecting the confidentiality of classified information from access by uncleared or insufficiently cleared users.

system high – an operating mode in which the users who share a computing system and/or network all hold clearances that could allow them to view any information on the system.

trusted computing base – the specific hardware and software components upon which a computing system relies when enforcing its security policy.

References

Anderson, J.P. (1972). Computer Security Technology Planning Study Volume II, ESD-TR-73-51, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at: http://csrc.nist.gov/publications/history/ande72.pdf (Date of access: August 1, 2004).

Bell, D.E. and L.J. La Padula (1974). Secure Computer System: Unified Exposition and Multics Interpretation, ESD-TR-75-306. Bedford, MA: ESD/AFSC, Hanscom AFB. Available at: http://csrc.nist.gov/publications/history/bell76.pdf (Date of access: August 1, 2004).

Boehm, B.W. (1981). Software Engineering Economics. Englewood Cliffs, NJ: Prentice Hall.

Brooks, F.P. (1975). The Mythical Man-Month. Reading, MA: Addison-Wesley.

Byers, S. (2004). Information leakage caused by hidden data in published documents. IEEE Security and Privacy 2 (2), 23-27. Available at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1281241&filter%3DAND%28p_IS_Number%3A28622%29 (Date of access: August 31, 2015).

Cohen, F.C. (1994). Short Course on Computer Viruses, Second Edition. New York: John Wiley & Sons, pp. 35-36.

Cohen, F.C. (1990). Computer Viruses. Computer Security Encyclopedia. Available at: http://www.all.net/books/integ/encyclopedia.html (Date of access: February 20, 2005).

Common Criteria Project Sponsoring Organizations (1999). Common criteria for information technology security evaluation, version 2.1. Available at: http://csrc.nist.gov/cc/ (Date of access: October 1, 2004).

Department of Defense (1997). DOD Information Technology Security Certification and Accreditation, DOD Instruction 5200.40. Washington, DC: Department of Defense. Available at: http://www.dtic.mil/whs/directives/corres/pdf/i520040_123097/i520040p.pdf (Date of access: October 1, 2004).

Department of Defense (1985a). Trusted Computer System Evaluation Criteria (Orange Book), DOD 5200.28-STD. Washington, DC: Department of Defense. Available at: http://www.radium.ncsc.mil/tpep/library/rainbow/index.html#STD520028 (Date of access: October 1, 2004).

Department of Defense (1985b). “Specification Practices,” MIL-STD 490A, 4 June 1985. Washington, DC: Department of Defense.

Federation of American Scientists (1997). Imagery Intelligence: FAS Space Policy Project – Desert Star. Available at: http://www.fas.org/spp/military/docops/operate/ds/images.htm (Date of access: August 1, 2004).

Fine, T. (1996). Defining noninterference in the temporal logic of actions. Proceedings of the 1996 IEEE Symposium on Security and Privacy, 12-21.

Fraim, L.J. (1983). SCOMP: a solution to the multilevel security problem. IEEE Computer 16 (7), 26-34.

Haigh, J.T., and Young, W.D. (1987). Extending the noninterference version of MLS for SAT. IEEE Transactions on Software Engineering SE-13 (2).

Hoffman, L.J. (1973). IBM’s Resource Security System (RSS). In L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 379-401). Los Angeles: Melville Publishing Company.

Kang, M.H., Moskowitz, I.S., and Lee, D.C. (1996). A network pump. IEEE Transactions on Software Engineering 22 (5), 329-338.

Karger, P.A. and R.R. Schell (1974). MULTICS Security Evaluation, Volume II: Vulnerability Analysis, ESD-TR-74-193, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at: http://csrc.nist.gov/publications/history/karg74.pdf (Date of access: August 1, 2004).

Kemmerer, R.A. (2002). A practical approach to identifying storage and timing channels: twenty years later. Proceedings of the 18th Annual Computer Security Applications Conference.

Kemmerer, R.A. (1983). Shared resource matrix methodology: an approach to identifying storage and timing channels. ACM Transactions on Computer Systems 1, 109-118.

Lampson, B. (1973). A note on the confinement problem. Communications of the ACM 16 (10), 613-615.

LaPadula, L.J., LeMoine, J.E., Vukelich, D.F. and Woodward, J.P.L. (1990). DNSIX Detailed Design Specifications, Version 2. Bedford, MA: MITRE Corporation.

Nibaldi, G.H. (1979). Proposed Technical Evaluation Criteria for Trusted Computer Systems, M79-225. Bedford, MA: The MITRE Corporation. Available at: http://csrc.nist.gov/publications/history/niba79.pdf (Date of access: August 1, 2004).

Orlowski, A. (2004). MS Trusted Computing back to drawing board. The Register, May 6, 2004. Available at: http://www.theregister.co.uk/2004/05/06/microsoft_managed_code_rethink/ (Date of access: August 1, 2004).

Proctor, N.E., and Neumann, P.G. (1992). Architectural implications of covert channels. Proceedings of the Fifteenth National Computer Security Conference, 28-43. Available at: http://www.csl.sri.com/users/neumann/ncs92.html (Date of access: November 15, 2004).

St. Johns, M. (1988). Draft Revised IP Security Option, RFC 1038. Available at: http://www.ietf.org/rfc/rfc1038.txt (Date of access: October 1, 2004).

Saydjari, O.S. (2004). Multilevel security: reprise. IEEE Security and Privacy 2 (5), 64-67.

Saydjari, O.S. (2002). LOCK: an historical perspective. Proceedings of the 2002 Annual Computer Security Applications Conference. Available at: http://www.acsac.org/2002/papers/classic-lock.pdf (Date of access: November 15, 2004).

Saydjari, O.S., Beckman, J.K., and Leaman, J.R. (1989). LOCK Trek: navigating uncharted space. Proceedings of the 1989 IEEE Symposium on Security and Privacy, 167-175.

Smith, R.E. (2005). Observations on multi-level security. Web pages available at https://cryptosmith.com/mls/notes/ (Date of access: August 31, 2015).

Smith, R.E. (2001). Cost profile of a highly assured, secure operating system. ACM Transactions on Information and System Security 4, 72-101. A draft version is available at https://cryptosmith.com/wp-content/uploads/2014/10/lock-eff-acm.pdf (Date of access: August 31, 2015).

Smith, R.E. (2000). Trends in government endorsed security product evaluations. Proceedings of the 23rd National Information Systems Security Conference. Available at: https://cryptosmith.com/wp-content/uploads/2014/10/evaltrends.pdf (Date of access: August 31, 2015).

Smith, R.E. (1994). Constructing a high assurance mail guard. Proceedings of the 17th National Computer Security Conference, 247-253. Available at: https://cryptosmith.com/wp-content/uploads/2014/10/mailguard.pdf (Date of access: August 31, 2015).

Ware, W.H. (1970). Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. Santa Monica, CA: The RAND Corporation. Available at: http://csrc.nist.gov/publications/history/ware70.pdf (Date of access: August 1, 2004).

Weissman, C. (1969). Security controls in the ADEPT-50 time-sharing system. Proceedings of the 1969 Fall Joint Computer Conference. Reprinted in L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 216-243). Los Angeles: Melville Publishing Company, 1973.

Wray, J.C. (1991). An analysis of covert timing channels. Proceedings of the 1991 IEEE Symposium on Security and Privacy, 2-7.