Multilevel security (MLS) is a technology for protecting secrets from leaking between computer users when some are allowed to see those secrets and others are not. It is generally used in defense applications (the military and intelligence communities), since nobody else is nearly as paranoid about data leaking. A modern wrinkle on this is called cross domain systems (CDS), in which we speak of domains instead of levels and usually share data on computer networks instead of individual computers.
Personally, I was introduced to MLS through my work on the LOCK trusted computing system in the early 1990s.
Note that some people like to spell it "multi-level security." I think the term is old enough that we can omit the hyphen.
Several years ago I was at a workshop sponsored by the Air Force to develop some new directions for information systems improvements. The workshop included both "end user" representatives from the Air Force and "R&D" representatives from laboratories and government contractors.
Discussions on MLS capabilities became rather heated. One vendor representative from the security working group declared the following in a plenary session:
"Don't ask for MLS. We've tried to give you MLS, but in fact you've never really wanted it or used it. But please, tell us what you do want!"
A voice in the back shouted, "MLS!"
That little incident reflects an important fact about MLS: it's an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less. In her well-known paper on software safety, Nancy Leveson criticizes this type of labeling:
Labeling a technique, e.g., "software diversity" or "expert system," with the property we hope to achieve by it (and need to prove about it) is misleading and unscientific.
Unfortunately, we're stuck with the established terminology, so now we must focus on distinguishing between the two meanings.
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
Multilevel security (MLS) has posed a challenge to the computer security community since the 1960s. MLS sounds like a mundane problem in access control: allow information to flow freely between recipients in a computing system who have appropriate security clearances while preventing leaks to unauthorized recipients. However, MLS systems incorporate two essential features: first, the system must enforce these restrictions regardless of the actions of system users or administrators, and second, MLS systems strive to enforce these restrictions with incredibly high reliability. This has led developers to implement specialized security mechanisms and to apply sophisticated techniques to review, analyze, and test those mechanisms for correct and reliable behavior.
Despite this, MLS systems have rarely provided the degree of security desired by their most demanding customers in the military services, intelligence organizations, and related agencies. The high costs associated with developing MLS products, combined with the limited size of the user community, have also prevented MLS capabilities from appearing in commercial products.
Portions of this article also appear as Chapter 205 of the Handbook of Information Security, Volume 3, Threats, Vulnerabilities, Prevention, Detection and Management, Hossein Bidgoli, ed., ISBN 0-471-64832-9, John Wiley, 2006.
Many businesses and organizations need to protect secret information, and most can tolerate some leakage. Organizations that use MLS systems tolerate no leakage at all. Businesses may face legal or financial risks if they fail to protect business secrets, but they can generally recover afterwards by paying to repair the damage. At worst, the business goes bankrupt. Managers who take risks with business secrets might lose their jobs if secrets are leaked, but they are more likely to lose their jobs to failed projects or overrun budgets. This places a limit on the amount of money a business will invest in data secrecy.
The defense community, which includes the military services, intelligence organizations, related government agencies, and their supporting enterprises, cannot easily recover from certain information leaks. Stealth systems aren't stealthy if the targets know what to look for, and surveillance systems don't see things if the targets know what camouflage to use. Such failures can't always be corrected just by spending more money. Even worse, a system's weakness might not be detected until after a diplomatic or military disaster reveals it. During the Cold War, the threat of nuclear annihilation led military and political leaders to take such risks very seriously. It was easy to argue that data leakage could threaten a country's very existence. The defense community demanded levels of computer security far beyond what the business community needed.
We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.
The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system's hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.
Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time as others working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.
A Trojan horse is software that performs an invisible function that the user would not have chosen to perform. For example, consider a multiuser system in which users have stored numerous private files and have used the system's access permissions to protect those files from prying eyes. Imagine that the author of a locally-developed word processing program has an unhealthy curiosity about others in the user community and wishes to read their protected files. The author can install a Trojan horse function in the word processing program to retrieve the protected files. The function copies a user's private files into the author's own directory whenever a user runs the word processing program.
Unfortunately, this is not a theoretical threat. The "macro" function in modern word processors like Microsoft Word allows users to create arbitrarily complicated software procedures and attach them to word processing documents. When another user opens a document containing the macro, the word processor executes the procedure defined by the macro. This was the basis of "macro viruses" like the Melissa virus of the late 1990s. A macro can perform all the functions required of a Trojan horse program, including copying files.
When a user runs the word processing program, the program inherits that user's access permissions to the user's own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there's nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
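The mechanics are easy to sketch. The following toy Python sketch (all names hypothetical, not drawn from any real system) shows how a hidden function inherits the invoking user's permissions under purely discretionary access control:

```python
# Toy model of discretionary access control: each file lists who may read it.
files = {
    "janet/designs.txt": {"readers": {"janet"}},             # Janet's private file
    "public/readme.txt": {"readers": {"janet", "attacker"}},
}
attacker_directory = {}  # a drop point the attacker has made world-writable

def word_processor(user):
    """Looks like an ordinary editor, but carries a Trojan horse."""
    # ... normal word processing functions would go here ...
    # Hidden function: it runs with the *invoking* user's permissions,
    # so it can read anything that user can read and copy it out.
    for name, f in files.items():
        if user in f["readers"]:
            attacker_directory[name] = "stolen copy"

word_processor("janet")  # Janet innocently runs the program
assert "janet/designs.txt" in attacker_directory  # her protected file leaked
```

Nothing in the discretionary permissions is violated here: every read and write is performed with permissions the invoking user legitimately holds.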
The phrase "need to know" refers to a commonly-enforced rule in organizations that handle classified information. In general, a security clearance does not grant blanket permission to look at all information classified at that level or below. The clearance is really only the first step: people are only allowed to look at classified information that they need to know as part of the work they do.
In other words, if we give Janet a Secret clearance to work on a cryptographic device, then she has a need to know the Secret information related to that device. She does not have permission to study Secret information about spy satellites, or Secret cryptographic information that doesn't apply to her device. If she is using a multiuser system containing Secret information about her project, other cryptographic projects, and even spy satellites, then the system must prevent Janet from browsing information belonging to the other projects and activities. On the other hand, the system should be able to grant Janet permission to look at other materials if she really needs the information to do her job.
A computer's operating mode determines what access control mechanisms it needs. Dedicated systems, in which every user is cleared for all of the data and needs to know all of it, might not require any mechanisms beyond physical security. Computers running at system high, in which every user is cleared for all of the data but does not necessarily need to know all of it, must have user-based access restrictions like those typically provided in Unix and in "professional" versions of Microsoft Windows. In multilevel mode, some users lack clearances for some of the data, so the system must prevent data from higher security levels from leaking to users who have lower clearances: this requires a special mechanism.
Typically, an MLS mechanism works as follows: Users, computers, and networks carry computer-readable labels to indicate security levels. Data may flow from "same level" to "same level" or from "lower level" to "higher level" (Figure 2). Thus, Top Secret users can share data with one another, and a Top Secret user can retrieve information from a Secret user. It does not allow data from Top Secret (a higher level) to flow into a file or other location visible to a Secret user (at a lower level). If data is not subject to classification rules, it belongs to the "Unclassified" security level. On a computer this would include most application programs and any computing resources shared by all users.
A direct implementation of such a system allows the author of a Top Secret report to retrieve information entered by users operating at Secret or Confidential and merge it with Top Secret information. The user with the Secret clearance cannot "read up" to see the Top Secret result, since the data only flows in one direction between Secret and Top Secret. Unclassified data can be made visible to all users.
It isn't enough to simply prevent users with lower clearances from reading data carrying higher classifications. What if a user with a Top Secret clearance stores some Top Secret data in a file readable by a Secret user? This causes the same problem as "reading up" since it makes the Top Secret data visible to the Secret user. Some may argue that Top Secret users should be trusted not to do such a thing. In fact, some would argue that they would never do it because it's a violation of the Espionage Act. Unfortunately, this argument does not take into account the risk of a Trojan horse.
For example, Figure 3 shows what could happen if an attacker inserts a macro function with a Trojan horse capability into a word processing file. The attacker has stored the macro function in a Confidential file and has told a Top Secret user to examine the file. When the user opens the Confidential file, the macro function starts running, and it tries to copy files from the Top Secret user's directory into Confidential files belonging to the attacker. This is called "writing down" the data from a higher security level to a lower one.
In a system with typical access control mechanisms, the macro will succeed since the attacker can easily set up all of the permissions needed to allow the Top Secret user to write data into the other user's files. Clearly, the system cannot enforce MLS reliably if Trojan horse programs can circumvent MLS protections. There is no way users can avoid Trojan horse programs with 100% reliability, as suggested by the success of e-mail viruses. An effective MLS mechanism needs to block "write down" attempts as well as "read up" attempts.
The most widely recognized approach to MLS is the Bell-LaPadula security model (Bell and La Padula, 1974). The model effectively captures the essentials of the access restrictions implied by conventional military security levels. Most MLS mechanisms implement Bell-LaPadula or a close variant of it. Although Bell-LaPadula has accurately defined an MLS capability that keeps data safe, it has not led to the widespread development of successful multilevel systems. In practice, developers have not been able to produce MLS mechanisms that work reliably with high confidence, and some important defense applications require a "write down" capability that renders Bell-LaPadula irrelevant (for example, see the later section "Sensor to Shooter").
In the Bell-LaPadula model, programs and processes (called subjects) try to transfer information via files, messages, I/O devices, or other resources in the computer system (called objects). Each subject and object carries a label containing its security level, that is, the subject's clearance level or the object's classification level. In the simplest case, the security levels are arranged in a hierarchy as shown earlier in Figure 1. More elaborate cases involve compartments, as described in a later section.
The Bell-LaPadula model enforces MLS access restrictions by implementing two simple rules: the simple security property and the *-property. When a subject tries to read from or write to an object, the system compares the subject's security label with the object's label and applies these rules. Unlike typical access restrictions on multiuser computing systems, these restrictions are mandatory: no users on the system can turn them off or bypass them. Typical multiuser access restrictions are discretionary, that is, they can be enabled or disabled by system administrators and often by individual users. If users, or even administrative users, can modify the access rules, then a Trojan horse can modify or even disable those rules. To prevent leakage by browsing users and Trojan horses, Bell-LaPadula systems always enforce the two properties.
The simple security property is obvious: it prevents people (or their processes) from reading data whose classification exceeds their security clearances. Users can't "read up" relative to their security clearances. They can "read down," which means that they can read data classified at or below the same level as their clearances.
The *-property prevents people with higher clearances from passing highly classified data to users who don't share the appropriate clearance, either accidentally or intentionally. User programs can't "write down" into files that carry a lower security level than the process they are currently running. This prevents Trojan horse programs from secretly leaking highly classified data. Figure 4 illustrates these properties: the dashed arrows show data being read in compliance with the simple security property, and the lower solid arrow shows an attempted "write down" being blocked by the *-property.
A system enforcing the Bell-LaPadula model blocks the word processing macro in either of two ways, depending on the security level at which the user runs the word processing program. In one case, shown in Figure 4, the user runs the program at Top Secret. This allows the macro to read the user's Top Secret files, but the *-property prevents the macro from writing to the attacker's Confidential files. When the process tries to open a Confidential file for writing, the MLS access rules prevent it. In the other case, the Top Secret user runs the program at the Confidential level, since the file is classified Confidential. The program's macro function can read and modify Confidential files, including files set up by the attacker, but the simple security property prevents the macro from reading any Secret or Top Secret files.
Although the hierarchical security levels like Top Secret are familiar to most people, they are not the only restrictions placed on information in the defense community. Organizations apply hierarchical security levels (Confidential, Secret, or Top Secret) to their data according to the damage that might be caused by leaking that data. Some organizations add other markings to classified material to further restrict its distribution. These markings go by many names: compartments, codewords, caveats, categories, and so on, and they serve many purposes. In some cases, the markings indicate whether or not the data may be shared with particular organizations, enterprises, or allied countries. In many cases these markings give the data's creator or owner more control over the data's distribution. Each marking indicates another restriction placed on the distribution of a particular classified data item. People can only receive the classified data if they comply with all restrictions placed on the data's distribution.
The Bell-LaPadula model refers to all of these additional markings as compartments. A security level may include compartment identifiers in addition to a hierarchical security level. If a particular file's security level includes one or more compartments, then the user's security level must also include those compartments or the user won't be allowed to read the file.
A system with compartments generally acquires a large number of distinct security levels: one for every legal combination of a hierarchical security level with zero or more compartments. The interrelationships between these levels form a directed graph called a lattice. Figure 5 shows the lattice for a system that contains Secret and Top Secret information with compartments Ace and Bar.
The arrows in the lattice show which security levels can read data labeled with other security levels. If the user Cathy has a Top Secret clearance with access to both compartments Ace and Bar, then she has permission to read any data on the system (assuming its owner has also given her "read" permission to that data). We determine the access rights associated with other security labels by following arrows in Figure 5.
If Cathy runs a program with the label Secret Ace, then the program can read data labeled Unclassified, Secret, or Secret Ace. The program can't read data labeled Secret Bar or Secret Ace Bar, since its security label doesn't contain the Bar compartment. Figure 5 illustrates this: there is no path to Secret Ace that comes from a label containing the Bar compartment. Likewise, the program can't read Top Secret data because it is running at the Secret level, and no Top Secret labels lead to Secret labels.
A high security clearance like Cathy's shows that a particular organization is willing to trust Cathy with certain types of classified information. It is not a blank check that grants access to every resource on a computer system. MLS access rules always work in conjunction with the system's other access rules.
Systems that enforce MLS access rules always combine them with conventional, user-controlled access permissions. If a Secret or Confidential user blocks access to a file by other users, then a Top Secret user can't read the file either. The "need to know" rule means that classified information should only be shared among individuals who genuinely need the information. Individual users are supposed to keep classified information protected from arbitrary browsing by other users. Higher security clearances do not grant permission to arbitrarily browse: access is still restricted by the need to know requirement.
Users who have Top Secret and higher clearances don't automatically acquire administrative or "super user" status on multilevel computer systems, even if they are cleared for everything on the computer. In a way, the Top Secret security level actually restricts what the user can do: a program running at Top Secret can't install an unclassified application program, for example. Many administrative tasks, like installing application programs and other shared resources, must take place at the unclassified level. If an administrator installs programs while running at the Top Secret level, then the programs will be installed with Top Secret labels. Users with lower clearances wouldn't be authorized to see the programs.
Despite strong support from the military community and a strong effort by computing vendors and computer security researchers, MLS mechanisms failed to provide the security and functionality required by the defense community. First, security researchers and MLS system developers found it to be extremely difficult, and perhaps impossible, to completely prevent information flow between different security levels in an MLS system. We will explore this problem further in the next section on "Assurance." A second problem was the virus threat: enforcing MLS information flow does nothing to prevent a virus introduced at a lower clearance level from propagating into higher clearance levels. Finally, the end user community found a number of cases in which the Bell-LaPadula model of information flow did not entirely satisfy their operational and security needs.
Self-replicating software like computer viruses became a minor phenomenon among the earliest home computer users in the late 1970s. While viruses weren't yet widespread enough to interest MLS researchers and developers, MLS systems caught the interest of the pioneering virus researcher Fred Cohen (1990, 1994). In 1984 he demonstrated that a virus inserted at the unclassified level of a system that implemented the Bell-LaPadula model could rapidly spread throughout all security levels of a system. This particular infestation did not reflect a bug in the MLS implementation. Instead, it indicated a flaw in the Bell-LaPadula model, which strives to allow information flows from low to high while preventing flows from high to low. Viruses represent a security threat that exploits an information flow from low to high, so MLS protection based on Bell-LaPadula gives no protection against it.
Viruses represented one case in which the Bell-LaPadula model did not meet the end users' operational and security needs. Additional cases emerged as end users gained experience with MLS systems. One problem was that the systems tended to collect a lot of "overclassified" information. Whenever a user created a document at a high security level, the document would have to retain that security level even if the user removed all sensitive information in order to create a less-classified or even unclassified document. In essence, end users often needed a mechanism to "downgrade" information so its label reflected its lowered sensitivity.
The downgrading problem became especially important as end users sought to develop "sensor to shooter" systems. These systems would use highly classified intelligence data to produce tactical commands to be sent to combat units whose radios received information at the Secret level or lower (see the later section on "Sensor to Shooter"). In practice, systems would address the downgrading problem by installing privileged programs that bypassed the MLS mechanism to downgrade information. While this served as a convenient patch to correct the problem, it also showed that practical systems did not entirely rely on the Bell-LaPadula mechanisms that had cost so much to build and validate. This further eroded the defense community's interest in MLS based on Bell-LaPadula products.
Members of the defense community identified the need for MLS-capable systems in the 1960s, and a few vendors implemented the basic features (Weissman 1969, Hoffman 1973, Karger and Schell 1974). However, government studies of the MLS problem emphasized the danger of relying on large, opaque operating systems to protect really valuable secrets (Ware 1970, Anderson 1972). Operating systems were already notorious for unreliability, and these reports highlighted the threat of a software bug allowing leaks of highly sensitive information. The recommended solution was to achieve high assurance through extensive analysis, review, and testing.
High assurance would clearly increase vendors' development costs and lead to higher product costs. This did not deter the US defense community, which foresaw long-term cost savings. Karger and Schell (1974) repeated an assertion that MLS capabilities could save the US Air Force alone $100,000,000 a year, based on computing costs at that time.
Every MLS device poses a fundamental question: does it really enforce MLS, or does it leak information somehow? The first MLS challenge is to develop a way to answer that question. We can decompose the problem into two more questions: first, how do we specify precisely what the MLS mechanism must and must not do? Second, how do we evaluate a particular system to verify that it really behaves that way?
The first question was answered by the development of security models, like the Bell-LaPadula model summarized earlier. A true security model provides a formal, mathematical representation of MLS information flow restrictions. The formal model makes the enforcement problem clear to non-programmers. It also makes the operating requirement clear to the programmers who implemented the MLS mechanisms.
To address the evaluation question, designers needed a way to prove that the system's MLS controls indeed work correctly. By the late 1960s, this had become a serious challenge. Software systems had become much too large for anyone to review and validate: in The Mythical Man-Month, his account of the difficulties of building a large-scale software system, Brooks (1975) reported that IBM had over a thousand people working on its groundbreaking system, OS/360. Project size wasn't the only challenge in building reliable and secure software: smaller teams, like the team responsible for the Multics security mechanisms, could not detect and close every vulnerability (Karger and Schell, 1974).
The security community developed two sets of strategies for evaluating MLS systems: strategies for designing a reliable MLS system and strategies to prove the MLS system works correctly. The design strategies emphasized a special structure to ensure uniform enforcement of data access rules, called the reference monitor. The design strategies further required that the designers explicitly identify all system components that played a role in enforcing MLS; those components were defined as being part of the trusted computing base, which included all components that required high assurance.
The strategies for proving correctness relied heavily on formal design specifications and on techniques to analyze those designs. Some of these strategies were a reaction to ongoing quality control problems in the software industry, but others were developed as an attempt to detect covert channels, a largely unresolved weakness in MLS systems.
During the early 1970s, the US Air Force commissioned a study to develop feasible strategies for constructing and verifying MLS systems. The study pulled together significant findings by security researchers at that time into a report, called the Anderson report (1972), which heavily influenced subsequent US government support of MLS systems. A later study (Nibaldi 1979) identified the most promising strategies for trusted system development and proposed a set of criteria for evaluating such systems.
These proposals led to published criteria for developing and evaluating MLS systems called the Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book" (Department of Defense, 1985a). The US government established a process by which computer system vendors could submit their products for security evaluation. A government organization, the National Computer Security Center (NCSC), evaluated products against the TCSEC and rated the products according to their capabilities and trustworthiness. For a product to achieve the highest rating for trustworthiness, the NCSC needed to verify the correctness of the product's design.
To make design verification feasible, the Anderson report recommended (and the TCSEC required) that MLS systems enforce security through a "reference validation mechanism" that today we call the reference monitor. The reference monitor is the central point that enforces all access permissions. Specifically, a reference monitor must have three features: it must always be invoked, so that no access can bypass it; it must be tamperproof; and it must be small and simple enough to be thoroughly analyzed and tested.
Operating system designers had by that time recognized the concept of an operating system kernel: a portion of the system that made unrestricted accesses to the computer's resources so that other components didn't need unrestricted access. Many designers believed that a good kernel should be small for the same reason as a reference monitor: it's easier to build confidence in a small software component than in a large one. This led to the concept of a security kernel: an operating system kernel that incorporated a reference monitor. Layered atop the security kernel would be supporting processes and utility programs to serve the system's users and the administrators. Some non-kernel software would require privileged access to system resources, but none would bypass the security kernel. The combination of the computer hardware, the security kernel, and its privileged components made up the trusted computing base (TCB) - the system components responsible for enforcing MLS restrictions. The TCB was the focus of assurance efforts: if it worked correctly, then the system would correctly enforce the MLS restrictions.
The computer industry has always relied primarily on system testing for quality assurance. However, the Anderson report recognized the shortcomings of testing by repeating Dijkstra's observation that tests can only prove the presence of bugs, not their absence. To improve assurance, the report made specific recommendations about how MLS systems should be designed, built, and tested, and these recommendations became requirements in the TCSEC, particularly for products intended for the most critical applications.
These activities did not substitute for conventional product development techniques. Instead, these additional tasks were combined with the accepted "best practices" used in conventional computer system development. These practices tended to follow a "waterfall" process (Boehm, 1981; Department of Defense, 1985b): first, the builders develop a requirements specification, from that they develop the top-down design, then they implement the product, and finally they test the product against the requirements. In the idealized process for developing an MLS product, the requirements specification focuses on testable functions and measurable performance capabilities while the policy model captures security requirements that can't be tested directly. Figure 6 shows how these elements work together to validate an MLS product's correct operation.
Product development has always been expensive. Many development organizations, especially smaller ones, try to save time and money by skipping the planning and design steps of the waterfall process. The TCSEC did not demand the waterfall process, but its requirements for highly assured systems imposed significant costs on development organizations. Both the Nibaldi study and the TCSEC recognized that not all product developers could afford to achieve the highest levels of assurance. Instead, the evaluation process identified a range of assurance levels that a product could achieve. Products intended for less-critical activities could spend less money on their development process and achieve a lower standard of assurance. Products intended for the most critical applications, however, were expected to meet the highest practical assurance standard.
Shortly after the Anderson report appeared, Lampson (1973) published a note which examined the general problem of keeping information in one program secret from another, a problem at the root of MLS enforcement. Lampson noted that computer systems contain a variety of channels by which two processes might exchange data. In addition to explicit channels like the file system or interprocess communications services, there are covert channels that can also carry data between processes. These channels typically exploit operating system resources shared among all processes. For example, when one process can take exclusive control of a file, it prevents other processes from accessing the file, or when one process uses up all the free space on the hard drive, other processes will "see" this activity.
Since MLS systems could not achieve their fundamental objective (to protect secrets) if covert channels were present, defense security experts developed techniques to detect such channels. The TCSEC required a covert channel analysis of all MLS systems except those achieving the lowest assurance levels.
In general, there are two categories of covert channels: storage channels and timing channels. A storage channel transmits data from a "high" process to a "low" one by writing data to a storage location visible to the "low" one. For example, if a Secret process can see how much memory is left after a Top Secret process allocates some memory, the Top Secret process can send a numeric message by allocating or freeing an amount of memory equal to the message's numeric value. The covert channel works because the "high" process can set the contents of a storage location (the amount of free memory) to a value that the "low" process can read.
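The memory-allocation example can be sketched in a few lines of Python. This is an illustrative simulation only (the toy kernel and its accounting API are invented), but it shows how a single shared counter suffices to carry a message between processes that share no explicit channel:

```python
# Simulated storage covert channel. The Kernel class is hypothetical;
# the only thing shared between "high" and "low" is the free-memory count.

class Kernel:
    """Toy kernel exposing one shared resource: the free-memory counter."""
    def __init__(self, total=1_000_000):
        self.free = total

    def alloc(self, nbytes):
        self.free -= nbytes          # visible side effect on a shared counter

    def free_memory(self):
        return self.free             # readable by any process, high or low

def high_send(kernel, secret_value):
    # The Top Secret process "writes" by allocating exactly secret_value bytes.
    kernel.alloc(secret_value)

def low_receive(kernel, baseline):
    # The Secret process "reads" by differencing the free-memory counter.
    return baseline - kernel.free_memory()

kernel = Kernel()
baseline = kernel.free_memory()
high_send(kernel, 31337)             # no file or IPC channel is ever used
leaked = low_receive(kernel, baseline)
print(leaked)                        # recovers 31337
```

The point of the sketch is that neither function touches any interface the MLS policy regulates; the "message" travels entirely through resource accounting.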
A timing channel is one in which the "high" process communicates to the "low" one by varying the timing of some detectable event. For example, the Top Secret process might instruct the hard drive to visit particular disk blocks. That disk activity causes varying delays in the Secret process whenever it tries to use the hard drive itself. By systematically imposing delays on the Secret program's disk activities, the Top Secret program can transmit information through the pattern of those delays. Wray (1991) describes a covert channel based on hard drive access speed, and also uses the example to show how ambiguous the two covert channel categories can be.
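The disk-delay example can be simulated deterministically. Real timing channels measure wall-clock delays; in this invented sketch a toy disk simply reports a delay count, which the "high" process modulates to signal bits:

```python
# Deterministic simulation of a disk-contention timing channel.
# All names are invented for illustration.

class Disk:
    def __init__(self):
        self.queued_high_io = 0      # pending Top Secret seek requests

    def read(self):
        # Low's read is slowed by whatever high-side activity is queued.
        delay = 1 + self.queued_high_io
        self.queued_high_io = 0
        return delay

def high_send_bit(disk, bit):
    # Signal a 1 by queueing extra disk activity, a 0 by staying idle.
    if bit:
        disk.queued_high_io += 5

def low_receive_bit(disk):
    # A slow read (delay > 1) means the high process was busy: bit = 1.
    return 1 if disk.read() > 1 else 0

disk = Disk()
message = [1, 0, 1, 1, 0]
received = [0] * 0
for bit in message:
    high_send_bit(disk, bit)
    received.append(low_receive_bit(disk))
print(received)   # [1, 0, 1, 1, 0]
```

In a real system the "delay" would be noisy, which reduces the channel's bandwidth but rarely eliminates it.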
The fundamental strategy for seeking covert channels is to inspect all shared resources in the system, decide whether any could yield an effective covert channel, and measure the bandwidth of whatever covert channels are uncovered. While a casual inspection by a trained analyst may often uncover covert channels, there is no guarantee that such an inspection will find all of them. Systematic techniques help increase confidence that the search has been comprehensive. An early technique, the shared resource matrix (Kemmerer, 1983; 2002), can analyze a system from either a formal or informal specification. While the technique can detect covert storage channels, it cannot detect covert timing channels. An alternative approach, noninterference, requires formal policy and design specifications (Haigh and Young, 1987). This technique locates both timing and storage channels by proving theorems to show that processes in the system, as described in the design specification, can't perform detectable ("interfering") actions that are visible to other processes in violation of MLS restrictions.
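The core idea of the shared resource matrix can be illustrated with a toy example. The attributes and operations below are invented; the real method works from a complete system specification, but the flagging rule is the same: any attribute that one operation can modify and another can reference is a candidate storage channel:

```python
# Toy shared-resource-matrix analysis (attributes and operations invented).
# matrix[attribute][operation] is "R" (referenced), "M" (modified), or "".
matrix = {
    "file_lock_state": {"open": "M", "stat": "R", "write": ""},
    "free_disk_space": {"open": "",  "stat": "R", "write": "M"},
    "file_contents":   {"open": "",  "stat": "",  "write": "M"},
}

def channel_candidates(matrix):
    # An attribute that some operation modifies and another operation
    # references is a potential covert storage channel needing review.
    candidates = []
    for attr, ops in matrix.items():
        if "M" in ops.values() and "R" in ops.values():
            candidates.append(attr)
    return candidates

print(channel_candidates(matrix))
# ['file_lock_state', 'free_disk_space'] -- in this toy matrix
# file_contents has no referencing operation listed, so it is not flagged.
```

A human analyst must then judge each flagged attribute: some candidates are closed by the MLS rules themselves, others become measured, documented channels.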
To be effective at locating covert channels, the design specification must accurately model all resource sharing that is visible to user processes in the system. Typically, the specification focuses its attention on the system functions made available to user processes: system calls to manipulate files, allocate memory, communicate with other processes, and so on. The development program for the LOCK system (Saydjari, Beckman, and Leaman, 1989; Saydjari, 2002), for example, included the development of a formal design specification to support a covert channel analysis. The LOCK design specification identified all system calls, described all inputs and outputs produced by these calls, including error results, and represented the internal mechanisms necessary to support those capabilities. The LOCK team used a form of noninterference to develop proofs that the system enforced MLS correctly (Fine, 1994).
As with any flaw detection technique, there is no way to confirm that all flaws have been found. Techniques that analyze the formal specification will detect all flaws in that specification, but there is no way to conclusively prove that the actual system implements the specification perfectly. Techniques based on less-formal design descriptions are also limited by the quality of those descriptions: if the description omits a feature, there's no way to know if that feature opens a covert channel. At some point there must be a trade-off between the effort spent on searching for covert channels and the effort spent searching for other system flaws.
In practice, system developers have found it almost impossible to eliminate all covert channels. While evaluation criteria encourage developers to eliminate as many covert channels as possible, the criteria also recognize that practical systems will probably include some channels. Instead of eliminating the channels, developers must identify them, measure their possible bandwidth, and provide strategies to reduce their potential for damage. While not all security experts agree that covert channels are inevitable (Proctor and Neumann, 1992), typical MLS products contain covert channels. Thus, even the approved MLS products contain known weaknesses.
How does assurance fit into the process of actually deploying a system? In theory, one can plug a computer in and throw the switch without knowing anything about its reliability. In the defense community, however, a responsible officer must approve all critical systems before they can go into operation, especially if they handle classified information. Approval rarely occurs unless the officer receives appropriate assurance that the system will operate correctly. There are three major elements to this approval process in the US defense community:
In military environments, a highly-ranked officer, typically an admiral or general, must formally grant approval (accreditation) before a critical system goes into operation. Accreditation shows that the officer believes the system is safe to operate, or at least that the system's risks are outweighed by its benefits. The decision is based on the results of the system's certification: a process in which technical experts analyze and test the system to verify that it meets its security and safety requirements. The certification and accreditation process must meet certain standards (Department of Defense, 1997). Under rare, emergency conditions an officer could accredit a system even if there are problems with the certification.
Certification can be very expensive, especially for MLS systems. Tests and analyses must show that the system is not going to fail in a way that will leak classified information or interfere with the organization's mission. Tests must also show that all security mechanisms and procedures work as specified in the requirements. Certification of a custom-built system often involves design reviews and source code inspections. This work requires a lot of effort and special skills, leading to very high costs.
The product evaluation process heralded by the TCSEC was intended to provide off-the-shelf computing equipment that reliably enforced MLS restrictions. Although organizations could implement, certify, and accredit custom systems enforcing MLS, the certification costs were hard to predict and could overwhelm the project budget. If system developers could use off-the-shelf MLS products, their certification costs and project risks would be far lower. Certifiers could rely on the security features verified during evaluation, instead of having to verify a product's implementation themselves.
Product evaluations assess two major aspects: functionality and assurance. A successful evaluation indicates that the product contains the appropriate functional features and meets the specified level of assurance. The TCSEC defined a range of evaluation levels to reflect increasing levels of compliance with both functional and assurance requirements. Each higher evaluation level either incorporated the requirements of the next lower level, or superseded particular requirements with a stronger requirement. Alphanumeric codes indicated each level, with D being lowest and A1 being highest:
Although the TCSEC defined a whole range of evaluation levels, the government wanted to encourage vendors to develop systems that met the highest levels. In fact, one of the pioneering evaluated products was SCOMP, an A1 system constructed by Honeywell (Fraim, 1983). Very few other vendors pursued an A1 evaluation. High assurance drove up product development costs; one project estimated that the high assurance tasks added 26% to the development effort's labor hours (Smith, 2001). In the fast-paced world of computer product development, that extra effort can cause delays that make the difference between a product's success and failure.
To date, no commercial computer vendors have offered a genuine "off the shelf" MLS product. A handful of vendors implemented MLS operating systems, but none of these were standard product offerings. All MLS products were expensive, special purpose systems marketed almost exclusively to military and government customers. Almost all MLS products were evaluated to the B1 level, meeting minimum assurance standards. Thus, the TCSEC program failed on two levels: it failed to persuade vendors to incorporate MLS features into their standard products, and it failed to persuade any vendors to produce products that met the "A1" requirements for high assurance.
A survey of security product evaluations completed by the end of 1999 (Smith, 2000) noted that only a fraction of security products ever pursued evaluation. Most products pursued "medium assurance" evaluations, which could be sufficient for a minimal (B1) MLS implementation.
TCSEC evaluations were discontinued in 2000. The handful of modern MLS products are evaluated under the Common Criteria (Common Criteria Project Sponsoring Organizations, 1999), a set of evaluation criteria designed to address a broader range of security products.
The most visible failure of MLS technology is its absence from typical desktops. As Microsoft's Windows operating systems came to dominate the desktop in the 1990s, Microsoft made no significant move to implement MLS technology. Versions of Windows have earned a TCSEC C2 evaluation and a more-stringent EAL-4 evaluation under the Common Criteria, but no version has incorporated MLS. The closest Microsoft has come to offering MLS technology has been its "Palladium" effort announced in 2002. The technology focused on the problem of digital rights management - restricting the distribution of copyrighted music and video - but the underlying mechanisms caught the interest of many in the MLS community because of potential MLS applications. The technology was slated for incorporation in a future Windows release codenamed "Longhorn," but was dropped from Microsoft's plans in 2004 (Orlowski, 2004).
Arguably several factors have contributed to the failure of the MLS product space. Microsoft demonstrated clearly that there was a giant market for products that omit MLS. Falling computer prices also played a role: sites where users typically work at a couple of different security levels find it cheaper to put two computers on every desktop than to try to deploy MLS products. Finally, the sheer cost and uncertainty of MLS product development undoubtedly discourage many vendors. It is hard to justify the effort to develop a "highly secure" system when it's likely that the system will still have identifiable weaknesses, like covert channels, after all the costly, specialized work is done.
As computer costs fell and performance soared during the 1980s and 1990s, computer networks became essential for sharing work and resources. Long before computers were routinely wired to the Internet, sites were building local area networks to share printers and files. In the defense community, multilevel data sharing had to be addressed in a networking environment. Initially, the community embraced networks of cheap computers as a way to temporarily sidestep the MLS problem. Instead of tackling the problem of data sharing, many organizations simply deployed separate networks to operate at different security levels, each running in system high mode.
This approach did not help the intelligence community. Many projects and departments needed to process information carrying a variety of compartments and code words. It simply wasn't practical to provide individual networks for every possible combination of compartments and code words, since there were so many to handle. Furthermore, intelligence analysts often spent their time combining information from different compartments to produce a document with a different classification. In practice, this work demanded an MLS desktop and often required communications over an MLS network.
Thus, MLS networking took two different paths in the 1990s. Organizations in the intelligence community continued to pursue MLS products. This reflected the needs of intelligence analysts. In networking, this called for labeled networks, that is, networks that carried classification labels on their traffic to ensure that MLS restrictions were enforced.
Many other military organizations, however, took a different path. Computers in most military organizations tended to cluster into networks handling data up to a specified security level, operating in system high mode. This choice was not driven by an architectural vision; it was more likely the effect of the desktop networking architecture emerging in the commercial marketplace combined with existing military computer security policies. Ultimately, this strategy was named multiple single levels (MSL) or multiple independent levels of security (MILS).
The fundamental objective of a labeled network is to prevent leakage of classified information. The leakage could occur through eavesdropping on the network infrastructure or by delivering data to an uncleared destination. This yielded two different approaches to labeled networking. The more complex approach used cryptography to keep different security levels separate and to prevent eavesdropping. The simpler approach inserted security labels into network traffic and relied on a reference monitor mechanism installed in network interfaces to restrict message delivery.
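The simpler, label-based approach reduces to a small check at each network interface. Here is a minimal sketch assuming a strict linear ordering of levels (an assumption for illustration; real systems must also handle compartments and protect the labels themselves):

```python
# Reference-monitor check for a labeled network interface. The level
# names and ordering are the familiar linear hierarchy; the message
# format is left abstract.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_deliver(message_label, interface_level):
    # Deliver only if the destination interface is cleared for the label:
    # traffic may flow up or sideways, never down to a lower level.
    return LEVELS[message_label] <= LEVELS[interface_level]

assert may_deliver("SECRET", "TOP SECRET")        # flow up: allowed
assert not may_deliver("SECRET", "CONFIDENTIAL")  # flow down: blocked
```

The whole MLS guarantee then rests on every interface performing this check and on the integrity of the labels in transit, which is why DNSIX-style designs were verified against a security model.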
In practice, the cryptographic hardware and key management processes have often been too expensive to use in certain large scale MLS network applications. Instead, sites have relied on physical security to protect their MLS networks from eavesdropping. This has been particularly true in the intelligence community, where the proliferation of compartments and codewords has made it impractical to use cryptography to keep security levels separate.
Within such sites, the network infrastructure is physically isolated from any contact except by people with Top Secret clearances supported by special background investigations. Network wires are protected from tampering, though not from sophisticated attacks that might tempt uncleared outsiders. MLS access restrictions rely on security labels embedded in network messages. If cryptography is used at all, its primary purpose is to protect the integrity of security labels.
Standard traffic using the Internet Protocol (IP) does not include security labels, but the Internet community developed standards for such labels, beginning with the IP Security Option (IPSO) (St. Johns, 1988). The US Defense Intelligence Agency developed this further when implementing a protocol for the DOD Intelligence Information System (DODIIS). The protocol, called DODIIS Network Security for Information Exchange (DNSIX), specified both the labeling and the checking process to be used when passing traffic through a DNSIX network interface (LaPadula, LeMoine, Vukelich, and Woodward, 1990). To increase the assurance of the resulting system, the specification included a design description for the checking process; the design had been verified against a security model that described the required MLS enforcement.
In the US, MLS cryptographic techniques were exclusively the domain of the National Security Agency (NSA), since it set the standards for encrypting classified information. Traditional NSA protocols encrypted traffic at the link level, carrying the traffic without security labels. During the 1980s and 1990s the NSA started a series of programs to develop cryptographic protocols for handling labeled, multilevel data, including the Secure Data Network System (SDNS), the Multilevel Network System Security Program (MNSSP), and the Multilevel Information System Security Initiative (MISSI).
These programs yielded Security Protocol 3 (SP3) and the Message Security Protocol (MSP). SP3 protects messages at the network protocol layer (layer 3) and has been used in gateway encryption devices for the DOD's Secret IP Router Network (SIPRNET), which shares classified information at the Secret level among approved military and defense organizations. However, SP3 is a relatively old protocol and will probably be superseded by a variant of the IP Security Protocol (IPSEC) that has been adapted for the defense community, called the High Assurance IP Interface Specification (HAIPIS). MSP protects messages at the application level and was originally designed to encrypt e-mail. The Defense Message System, the DOD's evolving secure e-mail system, uses MSP.
Obviously an MLS network uses encryption to protect traffic against eavesdropping. In addition, MLS protocols can use cryptography to enforce MLS access restrictions. The FIREFLY protocol illustrates this. Developed by the NSA for multilevel telephone and networking protocols, FIREFLY uses public-key cryptography to negotiate encryption keys to protect traffic between two entities. Each FIREFLY certificate contains the clearance level of its owner, and the protocol compares the levels when negotiating keys. If the two entities are using secure telephones, then the protocol yields a key whose clearance level matches the lower of the two entities. If the two entities are establishing a computer network connection, then the negotiation succeeds only if the clearance levels match.
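The clearance-comparison rule just described can be sketched as follows. Only the comparison logic from the text is modeled; the FIREFLY certificate format and the actual public-key negotiation are not public and are not represented here:

```python
# Sketch of FIREFLY-style clearance comparison (logic only, per the text).

LEVELS = {"CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def negotiate(level_a, level_b, mode):
    """Return the negotiated key's level, or None if negotiation fails."""
    if mode == "telephone":
        # Secure phones get a key at the lower of the two clearances.
        return min(level_a, level_b, key=LEVELS.get)
    if mode == "network":
        # Network connections require an exact clearance match.
        return level_a if level_a == level_b else None
    raise ValueError(mode)

print(negotiate("SECRET", "TOP SECRET", "telephone"))  # SECRET
print(negotiate("SECRET", "TOP SECRET", "network"))    # None
```

The asymmetry makes sense: a phone call is moderated by cleared humans who can speak at the lower level, while a network connection joins machines that would otherwise mix levels silently.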
Despite the shortage of MLS products, the defense and intelligence communities dramatically expanded their use of computer systems during the 1980s and 1990s. Instead of implementing MLS systems, most organizations chose to deploy multiple computer networks, each dedicated to a security level they needed. This eliminated the risks of multiuser data sharing at multiple security levels by eliminating such sharing entirely. When necessary, less-classified data was copied one-way onto servers on a higher-classified network from a removable disk or tape volume.
To simplify data sharing in the MILS environment, many organizations have implemented devices to transfer data from networks at one security level to networks at other levels. These devices generally fall into three categories:
In a multilevel server, computers on a network at a lower security level can store information on a server, and computers on networks at higher levels can visit the same server and retrieve that information. Vendors provide a variety of multilevel servers, including web servers, database servers, and file servers. While such systems are popular with some defense organizations, others avoid them. Most server products achieve relatively low levels of assurance, which suggests that attackers might find ways to leak information through them from a higher network to a lower one.
One-way guards implement a one-way data transfer from a network at a lower security level to a network at a higher level. The simplest implementations rely on hardware restrictions to guarantee that traffic flows in only one direction. For example, conventional fiber optic network equipment supports bidirectional traffic, but it's not difficult to construct fiber optic hardware that only contains a transmitter on one end and a receiver on the other. Such a device can transfer data in one direction with no risk of leaking data in the other. An obvious shortcoming is that there is no efficient way to prevent congestion since the low side has no way of knowing when or if its messages have been received. More sophisticated devices like the NRL Pump (Kang, Moskowitz, and Lee, 1996) avoid this problem by implementing acknowledgements using trusted software. However, devices like the Pump can suffer from the same shortcoming as MLS servers: there are very few trustworthy operating systems on which to implement trusted MLS software, and most achieve relatively low assurance. The trustworthiness of the Pump will often be limited by the assurance of the underlying operating system.
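The congestion shortcoming of a pure hardware diode shows up clearly in a toy simulation (all names invented): when the high side falls behind, messages are silently lost, and the low side has no way to find out, because any acknowledgement would itself be a downward information flow:

```python
# Toy one-way "data diode" with a bounded buffer. A Pump-style device
# would add trusted software to acknowledge receipt without opening a
# high-bandwidth downward channel; that part is not modeled here.

from collections import deque

class DataDiode:
    """Hardware one-way link: low can send, but learns nothing back."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = deque()

    def send_up(self, msg):
        if len(self.buffer) >= self.capacity:
            return                  # dropped -- the low side cannot tell
        self.buffer.append(msg)

    def high_receive(self):
        return self.buffer.popleft() if self.buffer else None

diode = DataDiode(capacity=2)
for i in range(5):
    diode.send_up(i)                # messages 2..4 are silently lost
received = []
while (m := diode.high_receive()) is not None:
    received.append(m)
print(received)   # [0, 1]
```

The Pump's contribution was to provide acknowledgements through trusted software while statistically smoothing their timing, limiting the covert channel that naive ACKs would create.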
Downgrading guards are important because they address a troublesome side-effect of MILS computing: users often end up with overclassified information. A user on a Secret system may be working with both Confidential and Secret files, and it is simple to share those files with other users on Secret systems. However, he faces a problem if he needs to provide the Confidential file to a user on a Confidential system: how does he prevent Secret information from leaking when he tries to provide a clean copy of the Confidential file? There is no simple, reliable, and foolproof way to do this, especially when using commercial desktop computers.
The same problem often occurs in e-mail systems: a different user on a Top Secret network may wish to send an innocuous but important announcement to her colleagues on a Secret network. She knows that the recipients are authorized to receive the message's contents, but how does she ensure that no Top Secret information leaks out along with the e-mail message? The problem also appears in military databases: the database may contain information at a variety of security levels, but not all user communities will be able to handle data at the same level as the whole database. To be useful, the data must be sanitized and then released to users at lower classification levels. When a downgrading guard releases information from a higher security level to a lower one, the downgrading generally falls into one of three categories:
The traditional technique was manual review and release. A site would train an operator to identify classified information that should not be released, and the operator would manually review all data passing through the guard. This strategy has proven impractical because it is very difficult to reliably scan files for sensitive information. Word processors like Microsoft Word tend to retain sensitive information even after the user has attempted to remove it from a file (Byers, 2004). Another problem is steganography: a subverted user on the high side of the guard, or a sophisticated piece of subverted software, can easily embed large data items in graphic images or other digital files so that a visual review won't detect their presence. In addition to the problem of reliable scanning, there is a human factors problem: few operators remain effective in this job for very long. Military security officers tell of review operators falling into a mode in which they automatically approve everything without review, partly to maintain message throughput and partly out of boredom.
The automated release approach was used by the Standard Mail Guard (SMG) (Smith, 1994). The SMG accepted text e-mail messages that had been reviewed by the message's author, explicitly labeled for release, and digitally signed using the DMS protocol. The SMG would verify the digital signature and check the signer's identity against the list of users authorized to send e-mail through the guard. The SMG would also search the message for words associated with classified information that should not be released and block messages containing any such words. Authorized users could also transmit files through the guard by attaching them to e-mails. The attached files had to be reviewed and then "sealed" using a special application program: the SMG would verify the presence of the seal before releasing an attached file.
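The "dirty word" scan at the heart of such guards reduces to a simple text check. Here is a minimal sketch with a hypothetical watch list; the real SMG also verified digital signatures and sender authorization, which are not modeled:

```python
# Sketch of a guard's dirty-word filter. BLOCKED_WORDS is an invented
# example; operational watch lists are themselves sensitive.

import re

BLOCKED_WORDS = {"codeword", "umbra"}   # hypothetical watch list

def release_ok(message_text):
    # Tokenize case-insensitively and block on any watch-list hit.
    words = set(re.findall(r"[a-z]+", message_text.lower()))
    return words.isdisjoint(BLOCKED_WORDS)

assert release_ok("Meeting moved to 1400 in building 3.")
assert not release_ok("Draft report on UMBRA tasking attached.")
```

A filter like this catches only the words it knows about, which is why the author's review, the explicit release label, and the signature check carried most of the assurance weight.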
The automated review approach has been used by several guards since the 1980s, primarily to release database records from highly classified databases to networks at lower security levels. Many of these guards were designed to automatically review highly formatted force deployment data. The guards were configured with detailed rules on how to check the fields of the database records so that data was released at the correct security level. In some cases the guards were given instructions on how to sanitize certain database fields to remove highly classified data before releasing the records.
While guards and multilevel servers provide a clear benefit by enabling data sharing between system-high networks running at different clearance levels, they also pose problems. The most obvious problem is that they put all of the MLS eggs in one basket: the guard centralizes MLS protection in a single device that undoubtedly draws the interest of attackers. Downgrading guards pose a particular concern since there are many ways that a Trojan horse on the "high" side of a guard may disguise sensitive information so that it passes successfully through the guard's downgrading filters. For example, the Trojan could embed classified information in an obviously unclassified image using steganography. Another problem is that the safest place to attach a label to data is at the data's point of origin: guards are less likely to label data correctly because they are removed from that point of origin (Saydjari, 2004). Guards that use an automated release mechanism may be somewhat less prone to this problem if the guard bases its decision on a cryptographically protected label provided at the data's point of origin. However, this benefit can be offset by other risks if the guard or the labeling process is hosted on a low-assurance operating system.
During the 1991 Gulf War, the defense community came to appreciate the value of classified satellite images in planning attacks on enemy targets. The only complaint was that the imagery couldn't be delivered as fast as the tactical systems could take advantage of it (Federation of American Scientists, 1997). In the idealized state of the art "sensor to shooter" system, analysts and mission commanders select targets electronically from satellite images displayed on workstations, and they send the targeting information electronically to tactical units (see Figure 7). Clearly this involves at least one downgrading step, since tactical units probably won't be cleared to handle satellite intelligence. So far, no general-purpose strategy has emerged for handling automatic downgrading of this kind. In practice, downgrading mechanisms are approved for operation on a case-by-case basis.
The following is a list of nonmilitary applications that bear some similarities to the MLS problem. While they might suggest that there will someday be a commercial market for MLS technology, a closer look suggests this is unlikely. As noted earlier, MLS systems address a level of risk that doesn't exist in business environments. Buyers of commercial systems do not want to spend the money required to assure correct MLS enforcement. The following MLS-like business applications illustrate this.
Despite the failures and frustrations that have dogged MLS product development for the past quarter century, end users still call for MLS capabilities. This is because the problem remains: the defense community needs to share information at multiple security levels. Most of the community solves the problem by working on multilevel data in a system high environment and dealing with downgrading problems on a piecemeal basis. While this solves the problem in some situations, it isn't practical in others, like sensor to shooter applications.
The classic strategies intended to yield MLS products failed in several ways. First, the government's promotion of product evaluations failed when vendors found that MLS capabilities did not significantly increase product sales. The concept of deploying a provably secure system failed twice: first, when vendors found how expensive and uncertain evaluations could be, especially at the highest levels, and second, when security experts discovered how intractable the covert channel problem could be. Finally, the few MLS products that did make their way to market languished when end users realized how narrowly the products solved their security and sharing problems. The principal successes in MLS today are based on guard and trusted server products.
covert channel - in general, an unplanned communications channel within a computer system that allows violations of its security policy. In an MLS system, this is an information flow that violates MLS restrictions.
evaluation - the process of analyzing the security functions and assurance evidence of a product by an independent organization to verify that the functions operate as required and that sufficient assurance evidence has been provided to have confidence in those functions.
multiple independent levels of security (MILS) - a networking and desktop computing environment which assigns dedicated, system-high resources for processing classified information at different security levels. Users in a MILS environment may have two or more desktop computers, each dedicated to work at a particular security level.
security model - an unambiguous, often formal, statement of the system's rules for achieving its security objectives, such as protecting the confidentiality of classified information from access by uncleared or insufficiently cleared users.
Anderson, J.P. (1972). Computer Security Technology Planning Study Volume II, ESD-TR-73-51, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at: http://csrc.nist.gov/publications/history/ande72.pdf (Date of access: August 1, 2004).
Bell, D.D. and L.J. La Padula (1974). Secure Computer System: Unified Exposition and Multics Interpretation, ESD-TR-75-306. Bedford, MA: ESD/AFSC, Hanscom AFB. Available at: http://csrc.nist.gov/publications/history/bell76.pdf (Date of access: August 1, 2004).
Byers, S. (2004). Information leakage caused by hidden data in published documents. IEEE Security and Privacy 2 (2) 23-27. Available at: http://www.computer.org/security/v2n2/byers.htm (Date of access: October 1, 2004).
Cohen, F.C. (1990). Computer Viruses. Computer Security Encyclopedia. Available at: http://www.all.net/books/integ/encyclopedia.html (Date of access: February 20, 2005).
Common Criteria Project Sponsoring Organizations (1999). Common criteria for information technology security evaluation, version 2.1. Available at: http://csrc.nist.gov/cc/Documents/CC%20v2.1%20-%20HTML/CCCOVER.HTM (Date of access: October 1, 2004).
Department of Defense (1997). DOD Information Technology Security Certification and Accreditation, DOD Instruction 5200.40. Washington, DC: Department of Defense. Available at: http://www.dtic.mil/whs/directives/corres/pdf/i520040_123097/i520040p.pdf (Date of access: October 1, 2004).
Department of Defense (1985a). Trusted Computer System Evaluation Criteria (Orange Book), DOD 5200.28-STD. Washington, DC: Department of Defense. Available at: http://www.radium.ncsc.mil/tpep/library/rainbow/index.html#STD520028 (Date of access: October 1, 2004).
Federation of American Scientists (1997). Imagery Intelligence: FAS Space Policy Project - Desert Star. Available at: http://www.fas.org/spp/military/docops/operate/ds/images.htm (Date of access: August 1, 2004).
Karger, P.A. and R.R. Schell (1974). MULTICS Security Evaluation, Volume II: Vulnerability Analysis, ESD-TR-74-193, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at http://csrc.nist.gov/publications/history/karg74.pdf (Date of access: August 1, 2004).
Nibaldi, G.H., (1979). Proposed Technical Evaluation Criteria for Trusted Computer Systems, M79-225. Bedford, MA: The Mitre Corporation. Available at: http://csrc.nist.gov/publications/history/niba79.pdf (Date of access: August 1, 2004).
Orlowski, A. (2004). MS Trusted Computing back to drawing board. The Register, May 6, 2004. Available at: http://www.theregister.co.uk/2004/05/06/microsoft_managed_code_rethink/ (Date of access: August 1, 2004).
Proctor, N.E., and Neumann, P.G. (1992). Architectural implications of covert channels. Proceedings of the Fifteenth National Computer Security Conference pp. 28-43. Available at: http://www.csl.sri.com/users/neumann/ncs92.html (Date of access: November 15, 2004).
St. Johns, M. (1988). Draft Revised IP Security Option, RFC 1038. Available at: http://www.ietf.org/rfc/rfc1038.txt (Date of access: October 1, 2004).
Saydjari, O.S. (2002). LOCK: an historical perspective. Proceedings of the 2002 Annual Computer Security Applications Conference. Available at: http://www.acsac.org/2002/papers/classic-lock.pdf (Date of access: November 15, 2004).
Smith, R.E. (2005). Observations on multi-level security. Web pages available at http://www.smat.us/crypto/mls/index.html (Date of access: October 31, 2005).
Smith, R.E. (2001). Cost profile of a highly assured, secure operating system. ACM Transactions on Information System Security 4 pp. 72-101. A draft version is available at http://www.smat.us/crypto/docs/Lock-eff-acm.pdf (Date of access: February 20, 2005).
Smith, R.E. (2000). Trends in government endorsed security product evaluations, Proceedings of the 23rd National Information Systems Security Conference. Available at: http://www.smat.us/crypto/evalhist/evaltrends.pdf. (Date of access: February 20, 2005).
Smith, R.E. (1994). Constructing a high assurance mail guard. Proceedings of the 17th National Computer Security Conference 247-253. Available at: http://www.smat.us/crypto/docs/mailguard.pdf (Date of access: February 20, 2005).
Ware, W.H. (1970). Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. Santa Monica, CA: The RAND Corporation. Available at: http://csrc.nist.gov/publications/history/ware70.pdf (Date of access: August 1, 2004).
Weissman, C. (1969). Security controls in the ADEPT-50 time-sharing system. Proceedings of the 1969 Fall Joint Computer Conference. Reprinted in L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 216-243). Los Angeles: Melville Publishing Company, 1973.
Multilevel security (MLS) is an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less.
Click here for a general introduction to MLS.
A system or device achieves the objective of being "multilevel secure" if it can handle information at a variety of sensitivity levels without disclosing information to an unauthorized person. In a perfect world, this yields two more specific objectives:
The information flow problem is very hard to solve through automated access restrictions. The sanitization problem is almost impossible to solve through automated access restrictions.
A device enforces MLS information flow if it can't be induced (accidentally or intentionally) to release information to the wrong person. The U.S. government has a body of laws and regulations that make it illegal to share classified information with people who do not possess the appropriate security clearance. In theory, an "MLS device" will automatically enforce those restrictions. Some defense officials refer to MLS devices as felony boxes since they can automatically commit felonies if they malfunction.
While it might seem easy to implement such a thing just by setting up the right access restrictions on files, this approach isn't reliable enough for large-scale use. For one thing, it's hard to reliably establish and review the permission settings for hundreds or even thousands of files and expect to have them all set correctly at all times. Another problem is that a malicious user could leak enormous amounts of classified information with a few simple keystrokes. Moreover, an innocent user could be tricked into releasing comparable amounts of classified information if subjected to a virus or other malicious software.
The sanitization problem is made difficult by two things. First, it assumes that software applications don't hide data from their users, and that simply isn't true. A Microsoft Word file carries all sorts of information, including reams of text that its owner may have tried to remove. There are even circumstances where cut-and-paste may copy deleted information from one Word file to another. The second problem is that it requires a certain level of intellect to distinguish more-sensitive information from less-sensitive information. While classifying authorities may sometimes attempt to make it obvious which data is Top Secret, which is Secret, and which is unclassified, it is often impossible in practice to build an automated tool to identify sensitivity simply on the basis of the unmarked material.
In practice, the devices we generally recognize as implementing "MLS" are based on a hierarchical, lattice-based access control model developed for the Multics operating system in 1975. In this model, the system assigns security classification labels to processes it runs and to files and other system-level objects, particularly those containing data. The mechanism enforces the following rules:
The mechanism is also referred to as mandatory access control because it is always enforced and users cannot disable or bypass it. One of the reasons why you can't reliably enforce MLS (the objective) with conventional access control mechanisms is that such mechanisms are usually under the complete control of a file's owner. Thus the owner can accidentally or intentionally violate the access control rules simply by changing the permissions on a sensitive file. The MLS mechanism is mandatory, which means that individual users can't really control it themselves. Once a file is labeled "top secret," the MLS mechanism won't permit the user to share it with merely "secret" users or to mingle its contents with that of "secret" files. Once "top secret" information is mixed with "secret" information, the aggregate becomes "top secret."
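A minimal sketch of what such a mandatory label check might look like, assuming a plain linear ordering of levels in the Bell-LaPadula style ("no read up, no write down"). Real systems also track compartments and categories, which are omitted here; all names are illustrative.

```python
# A toy mandatory access control check over a linear hierarchy of levels.
# Compartments/categories (the lattice part of the model) are omitted.

LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

def may_read(subject_level, object_level):
    # Simple security property: a process may read only objects at or
    # below its own level ("no read up").
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property: a process may write only objects at or above its own
    # level ("no write down"), so data can't leak to lower levels.
    return LEVELS[subject_level] <= LEVELS[object_level]

def label_of_mixture(level_a, level_b):
    # Mixing data yields the higher of the two labels: "top secret"
    # combined with "secret" produces "top secret".
    return level_a if LEVELS[level_a] >= LEVELS[level_b] else level_b

assert may_read("top secret", "secret")       # read down: allowed
assert not may_read("secret", "top secret")   # read up: denied
assert not may_write("top secret", "secret")  # write down: denied
assert label_of_mixture("secret", "top secret") == "top secret"
```

Note how the check is made by the system for every access, not by the file's owner; that is what makes the control "mandatory."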
The original motivation for building MLS mechanisms was the perception that it would make applications development safe from serious security risks. The notion was that applications programmers would be saved from the bother of having to worry about security labels. Existing applications would be able to handle labeled data safely and correctly without reprogramming. Even more important, programmers would not be able to sneak data between security levels by writing clever programs: the MLS protection mechanism would prevent any attempt to violate the most important security constraints on the system.
Although this basic MLS mechanism may seem simple to implement, it proved very difficult to implement effectively. Several systems were developed that enforced the security rules, but experimenters quickly found ways to bypass the mechanism and leak data from high classification levels to low ones. The techniques were often called covert channels. These channels used operating system resources to transmit data between higher classified processes and lower classified processes. For example, the higher process might systematically modify a file name or other file system resource (like the free space count for the disk or for main memory) that is visible to the lower process in order to transmit data.
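The free-space example above can be simulated in a few lines. This toy sketch (all names invented for illustration) shows a "high" process signaling bits to a "low" process purely by modulating a shared free-block count that both processes may legally touch or observe.

```python
# A toy storage covert channel: the sender never "writes" to the receiver,
# it only performs legal allocations, yet information still flows.

class SharedDisk:
    """Simulated system resource visible to all processes."""
    def __init__(self, free_blocks=1000):
        self.free_blocks = free_blocks
    def allocate(self):              # legal operation for the high process
        self.free_blocks -= 1
    def release(self):
        self.free_blocks += 1
    def stat_free(self):             # legal observation for the low process
        return self.free_blocks

def run_channel(bits):
    disk = SharedDisk()
    received = []
    prev = disk.stat_free()
    for bit in bits:
        # High process's turn: allocate a block to signal 1, release for 0.
        disk.allocate() if bit else disk.release()
        # Low process's turn: sample the count and decode the change.
        now = disk.stat_free()
        received.append(1 if now < prev else 0)
        prev = now
    return received

assert run_channel([1, 0, 1, 1, 0]) == [1, 0, 1, 1, 0]
```

Blocking this kind of channel means denying the low process even indirect visibility into resources the high process can perturb, which is exactly what made the engineering so expensive.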
Attempts to block covert channels led to very expensive operating system development efforts. The resulting systems were hard to use in practice and often had performance problems. While it's never been conclusively proven that you can't build a high performance system that avoids or blocks covert channels, there are no working examples.
The high cost of MLS systems was also driven by the high cost of security product evaluations performed by the US government.
Although MLS (the objective) remains a requirement for many military and government systems, the objective is not met by MLS (the mechanism). The fundamental problem is that the mechanism only allows data to flow "upwards" in terms of sensitivity. So, automatic software can easily take unclassified data, mix it with secret data to make more secret data, and then with top secret data to produce more top secret data. There are a few applications in the intelligence community where this is a very useful property.
However, this upward flow works against modern notions of "information dominance" and "sensor to shooter" information flow. The modern concept is that technical intelligence assets should identify targets, pass the information to mission planners, who assemble a mission, and pass the mission details to tactical assets, who in turn share details with support and maintenance assets. The problem is that technical intelligence, mission planning, tactical assets, and support assets tend to operate at decreasing security levels. The flow of information goes the exact opposite of what the MLS mechanism allows.
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
The LOCK project (short for LOgical Coprocessing Kernel) developed a "trusted computing system" that implemented multilevel security. LOCK was intended to exceed the requirements for an "A1" system as defined by the old Trusted Computing System Evaluation Criteria (a.k.a. the TCSEC or "Orange Book").
The project was modestly successful in that we actually deployed a couple dozen systems in military command centers. A major design feature of LOCK was type enforcement, a fine-grained access control mechanism that could tie access restrictions to the programs being run.
The work was performed at Secure Computing Corporation in Minnesota.
LOCK technology, particularly type enforcement, lives on in two 'children' that still exist:
Here are links to papers about LOCK and the Standard Mail Guard (the deployed version of LOCK).
I wrote the following message in 1996 as part of a discussion on the old Firewalls mailing list about using MLS technology to protect Internet servers from attack. The basic concepts still apply in some ways, though the threats have evolved in many other ways.
The message opens with a summary of my background in MLS (through 1996, anyway) motivated by some questions raised in the discussion. Then it explains why, based on classical MLS reasoning, you can't use MLS to enforce separation between different Internet servers that share a common network interface. The article ends by explaining how an MLS-based Internet server can enforce the separation once we change our assumptions about how MLS works. An unspoken assumption of this discussion is that a "firewall" might be a large scale device that hosts a variety of "secure" services, like e-mail and DNS. Only a handful of firewalls (notably Secure Computing's Sidewinder) do anything like this today.
Someday I'd like to reconstruct the entire discussion from my own archives and from copies on the Internet, since it was a particularly satisfying one. It led to a paper I presented at ACSAC later that year. Click here to see the paper (PDF).
FROM: Rick Smith
DATE: 01/31/1996 15:09:50
SUBJECT: Re: Mandatory protection (was: product selection)
I think we've covered most of the issues so far in the Type Enforcement (TE) versus Multilevel Security (MLS) discussion pretty well, but there are two remaining issues that need clearing up.
I don't think the unresolved topics arise from ignorance or a simple failure to communicate; we have a genuine and fully unintended culture clash.
The first is a matter of credibility. Since the relevance of anything else I say probably hinges on this, I'll start here.
"Does Rick Smith have a clue regarding MLS?"
There are several people at the National Computer Security Center and the MISSI Program Office who would be astonished by this question. Before moving to firewalls I was a key designer and the lead systems engineer on the SNS Mail Guard, one of the few MLS systems that comes close to being a turnkey device (I bring this up as evidence and not as a topic of Firewalls discussion - comment privately if you must). I've also done a variety of other MLS-related analysis, design, and implementation tasks. So I do have some credentials.
But my background is entirely in high assurance MLS systems. Those are systems where MLS has only one meaning: obsessive protection of confidentiality in accordance with the Bell-LaPadula access control rules. Labels define barriers to information disclosure, and nothing in the platform architecture or services is permitted to compromise confidentiality. My statements on what MLS systems can and can't do are based on the implications of highly assured confidentiality, not on some "strawman" MLS notion nor on "misconfigured" MLS systems.
That's where the culture clash comes in. My colleagues in this discussion are using B1 MLS systems. These are systems where confidentiality protection is not pursued to such an extreme. This is *not* intended as a put-down, especially in the firewalls environment. Firewalls don't need obsessively strong confidentiality. They need integrity protection. That's why we put TE in Sidewinder and left out MLS -- we see MLS as a confidentiality mechanism and that's not what we needed. But if you're using MLS for mandatory protection and don't have an obsessively strong confidentiality objective, then the picture changes a bit.
Here's how this relates to the last open technical issue:
"Can MLS systems protect Internet servers from one another?"
I've always recognized that MLS systems can impose mandatory protection barriers between processes by using levels, categories, and compartments, but I still concluded "No." This is based on my view of high assurance MLS obsessed with confidentiality. The argument goes as follows:
I suspect our misunderstandings are tied to statement 3) above. On Sidewinder we can associate TCP/IP port numbers with separately labeled domains in the TE system. The only way you can get a similar result in an MLS system is to associate TCP/IP port numbers with MLS confidentiality labels. For example, the B1 system might define a category or compartment label for "Mail" and restrict Port 25 traffic to processes with the Mail label. If so, this changes how statement 3) is phrased, and completely changes the conclusion.
The problem is, you can't assign MLS labels that way if you're obsessed with confidentiality. I can think of three reasons immediately as to why not:
1) Port numbers aren't confidentiality labels and aren't intended to be. There's no reason to believe that any other system you're communicating with is going to keep traffic separate according to port numbers. This means you can't depend on confidentiality, the prime objective. This leads to the next reason:
2) Treating a port number like a label establishes an uncontrolled data channel between processes with different labels, independent of the label enforcement rules. This is because there's nothing in the TCP/IP operating concept to prevent one such process from opening a connection directly or indirectly to a process on the same system associated with a different port number, bypassing the MLS barriers between differently labeled processes.
3) You can almost certainly construct a nifty timing channel between two processes that have different labels/ports and share the same TCP/IP stack. So even if the rest of the network behaves, there are ways to circumvent the labels.
But the bottom line answer to the question, in the context of *firewalls* and the irrelevance to them of a high assurance obsession with confidentiality, appears to be "Yes, If."
IF the vendor puts in the trusted code to associate different port numbers with different MLS process labels, THEN their firewall *can* enforce mandatory MLS protection between Internet servers. It's not clear that a firewall is "misconfigured" if this degree of protection is omitted, but a thorough implementation really should include it. So, if you're buying an MLS based firewall, look for this feature.
smith at secure computing corporation
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
I first encountered the term cross domain security (or cross domain systems, or cross domain solutions, or just CDS) at a workshop in the late 1990s. We were discussing the problem of how to share information with coalition forces even though different countries had different, treaty-based access to US defense information. Even worse, there were coalitions that contained countries who were not on the best of terms (like Japan and Korea).
These days the term has largely replaced MLS in the defense community. Some argue that the term was changed in hopes that the community could lower the assurance requirements, thus putting information at risk. Time will tell if this is indeed true.
Starting in the 1980s, the US government established a program to evaluate the security of computer operating systems, and other governments later established similar programs. In the late 1990s, major governments agreed to recognize a Common Criteria for security product evaluations. Since then, the number of evaluations has skyrocketed.
The following figure summarizes the number of government endorsed security evaluations completed every year since the first was completed in 1984. The different colors represent different evaluation criteria, with "CC" representing today's Common Criteria.
Starting in 1999, I have occasionally run projects where I have tracked down every report I could find of a security product evaluation. My first project led to some preliminary results.
At the last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations. I also produced a summary page of those results.
In 2006, I ran another survey that yielded the chart above and a paper (PDF) reviewing current trends. This was published in Information Systems Security (v. 16, n. 4) in 2007. The work was done with the help of several undergraduates at the University of St. Thomas.
For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with the way the process works in practice; while the process may be somewhat more stringent in the US, the US process simply produces different failure modes.
In the US, cryptographic products are certified under the FIPS 140 process, administered by the National Institute of Standards and Technology (NIST). Evaluation experts are quick to point out that the process and intentions are different between FIPS 140 and the Common Criteria. For the end user, they may appear to yield a similar result: a third-party assessment of a security device or product. In practice, different communities have different requirements. In the US and Canada, there is no substitute for an up-to-date FIPS 140 certification when we look at cryptographic products. Other countries or communities may acknowledge FIPS 140 certifications or they may require Common Criteria certifications.
In any case, security evaluations and certifications simply illustrate a form of due diligence. They do not guarantee the safety of a device or system. In 2009, for example, researchers found that many self-encrypting USB drives contained an identical, fatal security flaw. All of the drives had completed a FIPS 140 evaluation that did not highlight the failure.
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
For a more recent view of this topic, visit my Security Evaluations page.
In 1999, I tracked down every report I could find of a product that had completed a published, formal security evaluation in accordance with trusted systems evaluation criteria. This led to some preliminary results. At the last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations.
I collected all of my data in an Excel 97/98 spreadsheet that contained an entry for every evaluation I could find through the end of 1999. At the moment the spreadsheet includes the reported evaluations by the United States (TCSEC/NCSC and Common Criteria), United Kingdom (ITSEC and Common Criteria), Australia, and whatever evaluations were reported from Canada, France, and Germany by the US, UK, and Australian sites. I am not convinced that this is every published evaluation that took place, but it's every report I could find.
I would be thrilled if anyone interested in a weird research project would use my spreadsheet as a starting point to further analyze the phenomenon of security evaluations. There are probably other facts to be gleaned from the existing data, or other information to be collected. As noted, I stopped collecting data at the end of the last century.
This is a collection of notes intended to introduce the fundamentals of file systems. This section summarizes the challenges of using hard drives and the general objectives of file systems. Subsections introduce simple file systems that are for the most part obsolete today.
When faced with a large expanse of hard drive space, one has three problems:
Over the decades, different file systems have produced different solutions to these problems. Usually the differences can be traced back to the following, sometimes mutually exclusive, objectives:
As hard drives have grown in capacity, file systems have grown in complexity. Still, the systems' weird features usually trace their origins back to the problems being solved or the particular objectives being pursued.
If we look back into ancient history, when semi-trailer-sized behemoths were being out-evolved by refrigerator-sized creatures in university computer labs, we find many comprehensible file systems.
(circa 1970-85, maybe later)
The Forth programming system was developed in the late 1960s by Chuck Moore. It provided a very powerful, text-based mechanism for controlling a computer and writing programs when RAM and hard drive space were extremely tight. Early implementations were routinely restricted to 8KB of RAM. Some early implementations relied exclusively on diskette drives that stored less than half a megabyte of data.
Starting in the 1970s, typical Forth systems treated hard drives as consisting of a linear set of numbered blocks, each 1KB in size. The first block on the drive (block 0) contained the bootstrap program to get Forth started, and a small number of subsequent blocks might also contain binary executable code that was loaded into RAM when Forth started.
Following the blocks of executable code, the remaining hard drive blocks generally contained ASCII text and were referred to by number. If a programmer needed to modify part of a Forth program, he would edit the hard drive block that contained that program, and refer to the block by its number.
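Block access of this kind needs no directory at all, just arithmetic on block numbers. Here is a rough sketch, using an in-memory byte array to stand in for the drive; the drive size and sample contents are invented for illustration.

```python
# Forth-style block access: the "drive" is a flat array of numbered 1 KB
# blocks, addressed by block number alone. No file names, no directory.

BLOCK_SIZE = 1024
disk = bytearray(BLOCK_SIZE * 64)          # a small simulated drive image

def read_block(n):
    """Fetch block n: pure arithmetic on the block number."""
    return bytes(disk[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

def write_block(n, text):
    """Store ASCII text in block n, blank-padded to the fixed block size."""
    data = text.encode("ascii")
    assert len(data) <= BLOCK_SIZE
    disk[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b" ")

# "Editing block 12" is just a slice update; there is no file to open.
write_block(12, ": GREET .\" HELLO\" ;")
assert read_block(12).rstrip(b" ") == b': GREET ." HELLO" ;'
```

The simplicity is the point: the programmer, not the file system, keeps track of which block number holds which program text.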
Here is an assessment of Forth's file system in the context of the eight concepts noted above:
Boston University (BU) developed its own timesharing system in the 1970s for its IBM 360 and 370 mainframes. The system was based on the batch-oriented Remote Access Computing System (RAX) developed by IBM. McGill University also participated in RAX development, but their version was renamed "McGill University System for Interactive Computing" (MUSIC). Although many of the details are lost in the mists of time, both systems used some text processing tools developed at BU.
At the time, IBM had developed a few timesharing systems, but they were generally expensive and slow. IBM's standard operating systems for the 360 series had a file system; files were referred to as data sets. To put matters as charitably as possible, IBM's data set support was not suited to the dynamic nature of file access in timesharing environments. Frankly, it was a beast. So RAX really needed its own file system.
In accordance with the traditions of IBM data processing, a RAX file looked more-or-less like a deck of punched cards. Files consisted of "records" that carried individual lines of text. Unlike punched cards, trailing blanks were omitted and the individual records (lines) could vary in length. More significantly, files were either read sequentially in a single pass, or written sequentially in a single pass. There wasn't any notion of random access or of modifying the middle of a file without rewriting the whole thing. While RAX did support random access to hard drive files, the function was limited to specially allocated files (standard IBM data sets, actually) and used special operations that were only available to assembly language programmers.
Each file had a unique name and was 'owned' by the user that created it. Users could modify the permissions on files to share them with other users.
The RAX system's timesharing hours were generally limited to daytime and evenings. Overnight, the CPU was rebooted with IBM's OS/360 or OS/VS1 to run batch jobs. Thus, the RAX hard drives had to be compatible with IBM's native file system, such as it was. The RAX library was implemented inside a collection of IBM data sets, each data set serving as a pool of disk blocks to use in library files. These disk blocks were called space sets and contained 512 bytes each.
A complete RAX library file name contained two parts: an 8-character index name and an 8-character file name. While this gave the illusion of there being a hierarchical file system, there was no true 'root' directory. All files not used by the RAX system programming staff resided in the "userlib" index; if no index name was given, RAX searched in userlib. The directory arrangement apparently worked as follows:
There were a small number of IBM data sets that served as library directories (indexes). A file's index name selected the appropriate data set to search for that file's directory entry. These index files were apparently set up using IBM's Indexed Sequential Access Method (ISAM). Such files were specially formatted to use a feature of the IBM disk hardware. Each data block in the file contained a key field along with space for a library file's directory entry. The "key" part contained the file name. The IBM disk hardware could be told to scan the data set until it found the record whose key contained that name, and then it would retrieve the corresponding data. This put the burden of directory searching on the hard drive, and freed up the CPU to work on other tasks.
The directory entry contained the usual timestamps (date created, accessed, modified, etc.), ownership information, access permissions, size, and a pointer to the first space set in the file.
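The keyed lookup described above can be sketched as a simple scan over (key, entry) records. On RAX the scan was performed by the IBM disk hardware rather than the CPU, but the logic is the same; all names and values below are invented for illustration.

```python
# Keyed directory lookup in the ISAM style: each record in an index data
# set pairs a key (the file name) with that file's directory entry.

def lookup(index_records, file_name):
    """Scan for the record whose key matches the file name."""
    for key, entry in index_records:
        if key == file_name:      # on RAX, the disk hardware did this match
            return entry          # timestamps, owner, first space set, etc.
    return None                   # no such file in this index

# Invented sample data: two files in the "userlib" index.
userlib = [
    ("REPORT", {"owner": "SMITH", "first_space_set": 0x00030007}),
    ("NOTES",  {"owner": "JONES", "first_space_set": 0x00010002}),
]
assert lookup(userlib, "NOTES")["owner"] == "JONES"
assert lookup(userlib, "MISSING") is None
```

Offloading this scan to the drive is what freed the CPU for other timesharing work.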
Once the system knew the location of the file's first space set, it could retrieve the file's contents sequentially. A space set address was a 32-bit number formatted in 2 fields:
Remember that the library consisted of numerous data sets that served as pools of data blocks. These pools were called lib files, and were numbered sequentially. The data blocks, or space sets, were numbered sequentially inside each lib file.
Files within the RAX library were implemented as a list of linked space sets. The first four bytes of each space set carried the pointer to the next one in the file. The pointer bytes were managed automatically by the system's read and write operations; they were invisible to user programs. The net result was that user programs perceived space sets as containing only 508 bytes, since 4 bytes were used for the link pointer.
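Reading a library file then amounts to following the 4-byte link from one space set to the next. A sketch under assumed details (big-endian pointers and 0 as the end-of-chain marker, neither of which the description above specifies):

```python
import struct

SPACE_SET_SIZE = 512
DATA_BYTES = SPACE_SET_SIZE - 4        # the 508 bytes user programs see

def read_library_file(space_sets, first_addr):
    """Follow the hidden 4-byte link in each space set until the chain ends.

    space_sets maps a 32-bit space-set address to its 512-byte block;
    an address of 0 is assumed here to mark the end of the chain.
    """
    data = b""
    addr = first_addr
    while addr != 0:
        block = space_sets[addr]
        addr, = struct.unpack(">I", block[:4])  # next pointer, invisible to users
        data += block[4:]                       # the 508 visible bytes
    return data
```

Because each link can point into any lib file, the chain can wander across the disk, which is why files built from a single lib file retrieved fastest.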
A single library file could contain space sets from many different lib files. Since each lib file tended to represent a contiguous set of disk space, file retrieval was most efficient when all space sets came from the same lib file. In practice, however, a file would incorporate space sets from whichever lib file had the most available.
Free space was managed within individual lib files. Each lib file kept a linked list of free space sets. Space sets from deleted files were added back to the free list in the appropriate lib file.
Here is a review of the eight issues listed above:
The PDP-11 computer, built by Digital Equipment Corporation (DEC) in the late 20th century, was a classic machine of the minicomputer era. At the time of the -11's introduction, DEC really had no idea what to do about software for its machines, and wasn't even sure what was appropriate in the way of operating systems. Over the next several years, DEC (and others) introduced a flotilla of operating systems for the PDP-11. Here are DEC's contributions:
The RT-11 operating system came in several flavors, but all shared the same, simple file structure. The file system consisted of a single directory configured at a fixed location at the start of the disk volume. The directory could consist of multiple non-contiguous "segments" as defined in the directory's header.
Searching for files was very simple: when the system booted, the boot code would search the RT-11 directory for the name "MONITR.SYS" which indicated the system image to load and run.
An RT-11 directory entry consisted of the following:
When creating a directory, the operator could configure it to contain extra space for application-specific file data. It would be the application's responsibility to update the additional data in a file's entry.
To create a new file, even temporarily, RT-11 had to allocate space by updating the directory. If the program specified a particular file size, RT-11 would try to fit that size. If the program did not specify a file size, RT-11 would allocate half of the largest block of free space available on the hard drive.
Once RT-11 had determined the amount of space for the new file, and its intended location on the hard drive, it would usually allocate the hard drive space by creating two adjacent directory entries. The first entry would be for the new file, and would contain the number of blocks desired for the new file. The second entry would be for any free space left over after allocating the desired amount of space for the file.
When first opened, a file was generally marked as "tentative" in the directory, and updated to "permanent" when and if the creating program actually closed the file.
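The allocation scheme described above can be sketched in a few lines of Python. This is an illustrative model only: the entry fields, names, and statuses are invented for the sketch and do not reflect the actual RT-11 on-disk directory format.

```python
# Sketch of RT-11-style contiguous allocation. Entry fields and
# status names are illustrative, not the real on-disk format.

def create_file(directory, name, size=None):
    """Allocate contiguous space for a new file by splitting a free entry."""
    # Find the largest free-space directory entry.
    free = [e for e in directory if e["status"] == "EMPTY"]
    if not free:
        raise OSError("no free space entries in directory")
    biggest = max(free, key=lambda e: e["length"])

    # If the program gave no size, take half of the largest free block.
    if size is None:
        size = biggest["length"] // 2
    if size > biggest["length"]:
        raise OSError("not enough contiguous space")

    # Replace the free entry with two adjacent entries: the new
    # (tentative) file, then whatever free space is left over.
    i = directory.index(biggest)
    leftover = biggest["length"] - size
    new_entry = {"status": "TENTATIVE", "name": name, "length": size}
    entries = [new_entry]
    if leftover:
        entries.append({"status": "EMPTY", "name": None, "length": leftover})
    directory[i:i + 1] = entries
    return new_entry

def close_file(entry):
    """Closing the file converts its tentative entry to permanent."""
    entry["status"] = "PERMANENT"

# A fresh 1000-block volume; creating a file with no size takes half.
disk = [{"status": "EMPTY", "name": None, "length": 1000}]
f = create_file(disk, "DATA.DAT")   # allocates 500 blocks
close_file(f)
```

Note how the half-the-largest-block heuristic leaves room for other files to grow, at the cost of fragmenting free space into smaller pieces over time.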
Here is an assessment of RT-11's file system in the context of the eight concepts (Challenges and Objectives) noted earlier:
The RT-11 file system's simplicity led to its being used in various other applications. For example, the microcode disk used by the original VAX hardware used an RT-11 file system. The LOCK trusted computing base used the RT-11 file system for accessing files on its embedded SIDEARM processor.
These articles are for instructors or curriculum developers who want to have their courseware certified by the National Security Agency's Information Assurance Courseware Evaluation (IACE) program. These articles focus on certifying compliance with NSTISSI 4011, the national training standard for information security (INFOSEC) professionals.
NOTE: This information is based on the previous CNSS standards. The government has replaced those standards. They have also changed the process, rendering this discussion irrelevant.
I recommend that instructors adopt Elementary Information Security for one or more courses in their curriculum and take advantage of the textbook's NSTISSI 4011 mapping information. Existing courses may already cover many of the training standard's required topics, and the mapping information makes it easy to develop course notes and to identify assigned readings that fill the remaining gaps.
NSTISSI 4011 was issued in 1994. This predates the widespread use of technologies like firewalls and SSL. To ensure up-to-date coverage of essential topics, the book also incorporates information from curriculum recommendations published jointly by the ACM and the IEEE Computer Society. Specifically, the book covers the topics and core learning outcomes listed in the Information Security and Assurance knowledge area of the Information Technology 2008 curriculum recommendations.
To best ensure up-to-date coverage, the book also reflects a review of recent malware implementations and techniques, and of web server vulnerabilities. Additional coverage of cryptographic techniques serves as a sort of update to a different, aging book I wrote, Internet Cryptography.
Here are some resources to support the mapping effort:
The following articles further outline the certification process.
The U.S. government certifies courses of study in information security under the Information Assurance Courseware Evaluation (IACE) program. If a course is certified under one of the approved standards, then students are eligible to receive a certificate that carries the seal of the U.S. Committee on National Security Systems (CNSS, left) to indicate they have completed an approved course of study.
It can be challenging for an institution to get its course of study certified. Many of the topics are obvious ones for information security training, but others are relatively obscure. Several topics, like TEMPEST, COMSEC, and transmission security, have lurked in the domain of classified documents for decades.
This new text provides a comprehensive and widely available source for all topics required for NSTISSI 4011 certification. An institution can use the textbook along with the details of its NSTISSI 4011 topic mapping to establish its own certified course of study.
The U.S. National Security Agency (NSA) operates a program to evaluate programs of study for compliance against published U.S. government training standards. Institutions may apply to have their courseware certified. Numerous two- and four-year colleges, universities, and private training academies have earned this certification under the NSA's Information Assurance Courseware Evaluation (IACE) program. Certified courses of study may issue certificates to their students that carry the seal of the U.S. Committee on National Security Systems (CNSS) and indicate they have completed an approved course of study.
In 2012, the IACE program certified a textbook for the first time: Elementary Information Security was certified to conform fully to the CNSS training standard NSTISSI 4011. Institutions may use this textbook to efficiently develop a course of study eligible for government certification under this same standard. Consult the posted topic mapping to NSTISSI 4011 for further details.
IACE uses a mapping process to show that a particular curriculum or set of courseware complies with a standard. While IACE will certify compliance with several different standards, this discussion focuses on NSTISSI 4011. The mapping process typically goes through the following steps:
Both the public IACE web site and the mapping web site have help and instructions to guide and simplify the process. However, it will still require several hours of work to enter all of the details.
The mapping instructions sometimes refer to “Entry,” “Intermediate,” and “Advanced” coverage of topics. These do not apply to NSTISSI 4011. To comply with this standard, the curriculum must make the students aware of all topics covered by the standard.
Note that the mapping process is not available at certain times of the year. At present, it is not available between January 15 and March 1. That is the time period during which IACE evaluations take place. Here is the current calendar for IACE certification:
While Elementary Information Security should make courseware mapping clearer and more accessible for institutions, the process described here is not guaranteed to work. Moreover, the process as described could be changed by IACE without notice. CNSS only certifies that the textbook's contents fully conform to NSTISSI 4011.
Elementary Information Security has been certified to conform fully to the Committee on National Security System’s national training standard for information security professionals (NSTISSI 4011). To do this, I had to map each topic required by the standard to the information as it appears in the textbook. Instructors who map their courses to the standard must map the topics to lectures, readings, or other materials used in those courses.
I have exported the textbook's mapping to an Excel spreadsheet file. Curriculum developers may use this information to develop a course of study that complies with NSTISSI 4011 and is eligible for certification. I'm describing the courseware mapping process in another post. Read that post first.
The topic mapping for Elementary Information Security relates a single “course,” the textbook itself, to the required topics in the NSTISSI 4011 standard. The mapping is made available in a spreadsheet. The first column contains row numbers. The next three columns, Subsection, Element, and Topic, contain topic identifiers from the standard. The Chapters column lists the chapters by number that cover a particular topic. The column may also contain the letter “B” to point to Appendix B.
The Notes column points directly to each topic by section number and page number. These were placed in the “Additional Comments” field of the mapping. This is because the textbook focuses primarily on readability and an appropriate progression of topics. Detailed comments were provided to ensure that appropriate information about every topic appeared in the mapping.
Most mappings should not require additional comments, or at least should not require this level of detail. If the course and topic mappings link to files in which the material is easily found, then additional comments shouldn't be required.
The phrase “summary justification” is used when an earlier or higher-level topic covers several subtopics as well. The mapping web site allows summary justification for selected topics. When used, the mapping only needs to specify the earlier or higher-level topic. The subsequent, related topics are then filled in automatically and locked from editing.
The Elementary Information Security mapping may be downloaded in spreadsheet form, and used as a starting point for mapping the institution's curriculum. However, this particular spreadsheet maps all topics to a single “course,” the Elementary Information Security textbook. Mapping is by chapter numbers in the Chapters column.
In a curriculum containing several courses, the spreadsheet should be modified to allow mapping of multiple courses and the topics they contain. One approach is to include a column with two-part entries: one part indicates a course and the other selects a particular topic within that course. Another approach is to provide a separate column for each course. If a particular course covers a particular required topic, then the course's column identifies the corresponding course topic.
I received an email this morning announcing that Elementary Information Security has been certified by the NSA's Information Assurance Courseware Evaluation program as covering all topics required for training information security professionals. Here is the certification letter.
This is the first time they have certified textbooks. In the past they've only certified training programs and degree programs.
The evaluation is based on the national training standard NSTISSI 4011. The book also covers the core learning outcomes for Information Assurance and Security listed in the Information Technology 2008 Curriculum Recommendations from the ACM and IEEE Computer Society.
The textbook is currently available from the publisher, Jones and Bartlett, and I notice that they offer a PDF-ish version as well as the 890-page hardcopy edition.
It was a bit of a challenge trying to fit all of that information into a single textbook, and to target it at college sophomores and two-year college programs. The book contains a lot of tutorial material to try to bridge the gap between the knowledge of introductory students and the required topics.
Visit the textbook site eisec.us for the latest supporting materials and study guides.
Draft materials are provided below:
Jones & Bartlett Learning, November, 2011.
The only textbook verified by the US Government to conform fully to the Committee on National
Security Systems' national training standard for information security professionals (NSTISSI 4011)
This comprehensive, accessible Information Security text is ideal for the one-term, undergraduate college course. It integrates risk assessment and security policy throughout, since security systems work best at achieving the goals they are designed to meet, and security policy ties real-world goals to security mechanisms. Early chapters discuss individual computers and small LANs, while later chapters deal with distributed site security and the Internet. Cryptographic topics follow the same progression, starting on a single computer and evolving to Internet-level connectivity. Mathematical concepts throughout the text are defined, and tutorials with mathematical tools are provided to ensure students grasp the information at hand.
See below for sample contents. Sample chapters are also available from the publisher's site.
1.1. The Security Landscape
1.2. Process Example: Bob’s Computer
1.4. Identifying Risks
1.5. Prioritizing Risks
1.6. Ethical Issues in Security Analysis
1.7. Security Example: Aircraft Hijacking
2.1. Computers and Programs
2.2. Programs and Processes
2.3. Buffer Overflow and The Morris Worm
2.4. Access Control Strategies
2.5. Keeping Processes Separate
2.6. Security Policy and Implementation
2.7. Security Plan: Process Protection
3.1. The File System
3.2. Executable Files
3.3. Sharing and Protecting Files
3.4. Security Controls for Files
3.5. File Security Controls
3.6. Patching Security Flaws
3.7. Process Example: The Horse
3.8. Chapter Resources
4.1. Controlled Sharing
4.2. File Permission Flags
4.3. Access Control Lists
4.4. Microsoft Windows ACLs
4.5. A Different Trojan Horse
4.6. Phase Five: Monitoring The System
4.7. Chapter Resources
5.1. Phase Six: Recovery
5.2. Digital Evidence
5.3. Storing Data on a Hard Drive
5.4. FAT: An Example File System
5.5. Modern File Systems
5.6. Input/Output and File System Software
6.1. Unlocking a Door
6.2. Evolution of Password Systems
6.3. Password Guessing
6.4. Attacks on Password Bias
6.5. Authentication Tokens
6.6. Biometric Authentication
6.7. Authentication Policy
7.1. Protecting the Accessible
7.2. Encryption and Cryptanalysis
7.3. Computer-Based Encryption
7.4. File Encryption Software
7.5. Digital Rights Management
8.1. The Key Management Challenge
8.2. The Reused Key Stream Problem
8.3. Public-key Cryptography
8.4. RSA: Rivest-Shamir-Adleman
8.5. Data Integrity and Digital Signatures
8.6. Publishing Public Keys
9.1. Securing a Volume
9.2. Block Ciphers
9.3. Block Cipher Modes
9.4. Encrypting a Volume
9.5. Encryption in Hardware
9.6. Managing Encryption Keys
10.1. The Network Security Problem
10.2. Transmitting Information
10.3. Putting Bits on a Wire
10.4. Ethernet: A Modern LAN
10.5. The Protocol Stack
10.6. Network Applications
11.1. Building Information Networks
11.2. Combining Computer Networks
11.3. Talking Between Hosts
11.4. Internet Addresses in Practice
11.5. Network Inspection Tools
12.1. “Smart” Versus “Dumb” Networks
12.2. Internet Transport Protocols
12.3. Names on the Internet
12.4. Internet Gateways and Firewalls
12.5. Long Distance Networking
13.1. The Challenge of Community
13.2. Management Processes
13.3. Enterprise Issues
13.4. Enterprise Network Authentication
13.5. Contingency Planning
14.1. Communications Security
14.2. Crypto Keys on a Network
14.3. Crypto Atop the Protocol Stack
14.4. Network Layer Cryptography
14.5. Link Encryption on 802.11 Wireless
14.6. Encryption Policy Summary
15.1. Internet Services
15.2. Internet Email
15.3. Email Security Problems
15.4. Enterprise Firewalls
15.5. Enterprise Point of Presence
16.1. Hypertext Fundamentals
16.2. Basic Web Security
16.3. Dynamic Web Sites
16.4. Content Management Systems
16.5. Ensuring Web Security Properties
17.1. Secrecy In Government
17.2. Classifications and Clearances
17.3. National Policy Issues
17.4. Communications Security
17.5. Data Protection
17.6. Trustworthy Systems
The goal of this textbook is to introduce college students to information security. Security often involves social and organizational skills as well as technical understanding. To solve practical security problems, we must balance real-world risks and rewards against the cost and bother of available security techniques. The text uses continuous process improvement to integrate these elements.
Security is a broad field. Some students may excel in the technical aspects, while others may shine in the more social or process-oriented aspects. Many successful students fall between these poles. The text offers opportunities for all types of students to excel.
If we want a solid understanding of security technology, we must look closely at the strengths and weaknesses of underlying information technology itself. This requires a background in computer architecture, operating systems, and computer networking. It’s hard for a typical college student to achieve breadth and depth in these subjects and still have time to really study security.
Instead of leaving a gap in students’ understanding, this book provides introductions to essential technical topics. Chapter 2 explains the basics of computer operation and instruction execution. This prepares students for a description of process separation and protection, which illustrates the essential role of operating systems in enforcing security.
Chapter 5 introduces file systems and input/output in modern operating systems. This lays a foundation for forensic file system analysis. It also shows students how a modern operating system organizes a complex service. This sets the stage for Chapter 10’s introduction to computer networking and protocol software.
Introducing Continuous Process Improvement
The text organizes security problem-solving around a six-phase security process. Chapter 1 introduces the process as a way of structuring information about a security event, and presents a simple approach to risk analysis. Chapter 2 introduces security policies as a way to state security objectives, and security controls as a way to implement a policy. Subsequent chapters introduce system monitoring and incident response as ways to assess system security and improve it.
Each step in the process builds on earlier steps. Each step also provides a chance to assess how well our work addresses our security needs. This is the essence of continuous process improvement.
In order to give students an accurate view of process improvement, the text introduces document structures that provide cross references between different steps of the process. We use elements of each earlier phase to construct information in the following phase, and we often provide a link back to earlier data to ensure complete coverage. While this may seem like nit-picking in some cases, it allows mastery of essential forms of communication in the technical and professional world.
When used as a textbook, the material is intended for lower division undergraduates, or for students in a two-year community college program. Students should have completed high school mathematics. Typical students should have completed an introductory computing or programming course.
Instructors may want to use this book for either a one or two semester course. A one semester course would usually cover one chapter a week; the instructor may want to combine a couple of earlier chapters or skip the final chapter. Some institutions may find it more effective to teach the material over a full year. This gives the students more time to work with the concepts and to cover all topics in depth.
Following the style of my earlier books, this text focuses on diagrams and practical explanations to present fundamental concepts. This makes the material clearer to all readers and makes it more accessible to the math-phobic reader. Many concepts, particularly in cryptography, can be clearly presented in either a diagram or in mathematical notation. This text uses both, with a bias towards diagrams.
Many fundamental computing concepts are wildly abstract. This is also true in security, where we sometimes react to illusions perceived as risks. To combat this, the text incorporates a series of concrete examples played out by characters with names familiar to those who read cryptographic papers: Bob, Alice, and Eve. They are joined by additional classmates named Tina and Kevin, since different people have different security concerns.
The material in this text fulfills curriculum requirements published by the US government and the Association for Computing Machinery (ACM). In particular, the text covers all required topics for training information systems security professionals under the Information Assurance Courseware Evaluation Program (NSTISSI #4011) established by the US National Security Agency (NSA). The text also provides substantial coverage of the required topics for training senior system managers (CNSSI #4012) and for system administrators (CNSSI #4013).
The text also covers the core learning outcomes for information security education published in the ACM’s “IT 2008” curricular recommendations for Information Technology education. As a reviewer and contributor to the published recommendations, the author is familiar with its guidelines.
Students who are interested in becoming a Certified Information System Security Professional (CISSP) may use this book as a study aid for the examination. All key areas of the CISSP Common Body of Knowledge are covered in this text. Certification requires four or five years of professional experience in addition to passing the exam.
Information security is a fascinating but abstract subject. This text introduces students to real-world security problem solving, and incorporates security technology into the problem-solving process. There are lots of “how to” books that explain how to harden a computer system.
Many readers and most students need more than a “how to” book. They need to decide which security measures are most important, or how to trade off between alternatives. Such decisions often depend on what assets are at stake and what risks the computer’s owner is willing to take.
The practitioner’s most important task is the planning and analysis that helps choose and justify a set of security measures. In my own experience, and in that of my most capable colleagues, security systems succeed when we anticipate the principal risks and design the system to address them. In fact, once we have identified what we want for security (our policy requirements), we don’t need security gurus to figure out what we get (our implementation). We just need capable programmers, testers, and administrators.
As the chapter unfolds, we encounter certain key terms indicated in bold italics. These highlight essential terms that students may encounter in the information security community. Successful students will recognize these terms.
The Resources section at the end of each chapter lists the key terms and provides review and exercises. The review questions help students confirm that they have absorbed the essential concepts. Some instructors may want to use these as recitation or quiz questions. The problems and exercises give students the opportunity to solve problems based on techniques presented in the text.
Here is an incomplete collection of reading materials associated with the textbook. Visit the Elementary Infosec site for a complete set of reading and study materials.
Some links lead to on-line articles published by professional societies like the ACM (Association for Computing Machinery) or IEEE (Institute of Electrical and Electronics Engineers). Serious computing experts often join one or both of these, and sign up for electronic library subscriptions. Many college and university libraries may also provide free access to these for students and faculty.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
The following blogs provide readable reports and commentary on information security.
In Japan, the term kaizen embodies the continuous improvement process.
During World War II, military enterprises poured vast resources into various technical projects, notably radar, codebreaking, and the atomic bomb. Those successes encouraged the peacetime military to pursue other large-scale technical projects. A typical project would start with a very large budget and a vague completion date a few years in the future. But in practice, many of these projects vastly exceeded both the budget and the predicted timetable.
A handful of projects, notably the Polaris Missile project, achieved success while adhering closely to their initial budget and schedule estimates. Pressure on defense budgets led the US DOD to identify features of the successful projects so they might be applied to future work. This was the genesis of systems engineering. The DOD's Defense Acquisition University has produced an introduction to systems engineering (PDF format) in the defense community.
The International Council on Systems Engineering (INCOSE) provides a more general view of systems engineering. NASA also provides on-line training materials on their Space Systems Engineering site.
Chapter 1 introduces the first three of eight basic principles of information security.
Different authorities present different lists of principles. International standards bodies, including NIST in the US, tend to produce very general lists of principles, reflecting notions such as "be safe," "keep records," and other generalizations (for example, see NIST's SP 800-14: "Generally Accepted Principles and Practices for Securing Information Technology Systems"). These principles represent basic truths about security, but few are stated in a way that helps one make security decisions.
Saltzer and Schroeder produced a now-classic list based on experience with the Multics time sharing system in the 1970s: "The Protection of Information in Computer Systems," Proceedings of the IEEE 63, 9 (September, 1975). Some of these principles reflect features of the Multics system while others reflect some well-known shortcomings with most systems of that time. Copies exist online at Saltzer's own web site and at the University of Virginia.
There is also a Cryptosmith blog post that compares the textbook's list of principles with those in Saltzer and Schroeder.
A high-level security analysis provides a brief summary of a security situation at a given point of time. The next section provides a checklist for writing a high-level security analysis.
The risk assessment processes noted in the textbook are all available online:
The fundamental reference for everything related to the events of September 11, 2001, is the Final Report of the National Commission on Terrorist Attacks Upon the United States, a.k.a. "The 9/11 Commission Report," published in 2004.
Following 9/11, the BBC published a brief history of airline hijackings. The 2003 Centennial of Flight web site provides a more general summary of violent incidents in aviation security. In 2007, New York Magazine published a more detailed hijacking time-line in conjunction with breaking news on the D. B. Cooper hijacking case. The US FBI web site contains a lot of information about the D. B. Cooper case, including a 2007 update.
The textbook presents a high-level security analysis as a short writing exercise that summarizes a security situation. The analysis generally describes a situation at a particular point in time. For example, the 9/11 discussion in the textbook describes air travel security before 9/11. The analysis describes the six phases of the security process:
Here is a checklist of the basic properties of a high-level analysis:
Note that a complete security plan will also cover the six phases, but it is not limited to this length. A complete plan covers each phase thoroughly.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
Here is a 26-minute video of middle school students visiting a "walk through" computer to learn the basics of computer operation. Taken from the PBS TV series "Newton's Apple," 1990. Although the technology is over 20 years old, the fundamental components remain the same, except for speed, size, and capacity.
There are, of course, countless images and videos available through on-line searching that show specific elements of computer systems.
Students who have not yet studied these topics in detail will want to visit web sites that provide an introduction to binary and hex. YouTube user Ryan of Aberdeen has created a video tutorial (9 minutes). There are also written tutorials:
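For a quick self-check alongside those tutorials, Python's built-in conversion functions make it easy to experiment with binary and hex notation interactively:

```python
# Converting between decimal, binary, and hexadecimal with Python built-ins.
n = 202

print(bin(n))   # binary string, with a '0b' prefix
print(hex(n))   # hexadecimal string, with a '0x' prefix

# int() converts back, given the base of the digit string.
assert int("11001010", 2) == 202
assert int("ca", 16) == 202

# Each hex digit corresponds to exactly four binary digits (a nibble):
#   1100 1010  ->  c a
```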
A faculty member at NC State University maintains a site that provides an overview of the Morris worm.
Eugene Spafford (aka spaf) wrote a report describing the worm, its operations, and its effects (PDF), shortly after the incident.
Eichin and Rochlis of MIT published a report of the worm incident from the MIT perspective (PDF). This was presented at the IEEE Symposium on Security and Privacy the following year.
In 1990, Peter Denning published a book that brought together several papers on the Morris worm and other security issues emerging at that time, titled Computers Under Attack: Intruders, Worms and Viruses.
Spafford also maintains an archive of worm-related information at Purdue University.
Auguste Kerckhoffs' original paper on cryptographic system design recommended that cryptographic systems be published and that secrecy should reside entirely in a secret key. The paper was published in French in 1883. Portions of Kerckhoffs' paper are available on-line including partial English translations.
Claude Shannon's "Communication Theory of Secrecy Systems" (Bell System Technical Journal, vol. 28, no. 4, 1949) contains his assumption "the enemy knows the system being used" (italics his). Bell Labs provides general information about Shannon's work and publications.
Eric Raymond published a famous essay on the benefits of open design and of sharing program source code in general, called "The Cathedral and The Bazaar." This essay has inspired many members of the Open Source community.
Butler Lampson introduced the access matrix in his 1971 paper, "Protection." Lampson has posted a copy of his paper on-line in several formats.
This is, unfortunately, a lot harder than it seems. As RAM has grown smaller and I/O grown more complex, motherboard components have changed dramatically in size and appearance. Here are suggestions on identifying key features in older and newer motherboards.
The most reliable way to identify a motherboard's contents is to locate a copy of its installation manual. These are usually posted on the web by the motherboard's manufacturer. Most boards clearly include the manufacturer's name and the board's model number.
If the manufacturer and model number aren't obvious, it may be possible to identify the motherboard using Google Images. Enter the word "motherboard" as a search term along with other textual items on the board. Compare the images displayed with the color and layout of the motherboard in question. Keep in mind that everything should match when you find the correct board. Missing or misplaced features indicate that the boards don't match. A popular motherboard may appear many times, but most images will lead to pages that indicate the manufacturer and model. In some cases, the image may lead directly to the manufacturer's own pages.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
Security consultant Fred Cohen performed much of the pioneering analysis of computer viruses. His web site contains several useful articles on virus technology. Even though some of the material is 30 years old, the basic technical truths remain unchanged.
Some anti-virus vendors provide summaries of current anti-virus and malware activities:
Here is a list of malware briefly described in the textbook, plus links to in-depth reports on each one. Check recent news: security experts occasionally make progress in eradicating one or another of these, but the botnets sometimes recover. Many of these are PDFs.
Videos: Ralph Langer, a German expert in control systems security, gave a TED talk describing Stuxnet (~11 minutes). Bruce Dang of Microsoft also gave a detailed presentation about Stuxnet (75 minutes) at a conference.
Butler Lampson introduced the access matrix in his 1971 paper, "Protection" (PDF). Lampson has posted a copy on-line in several formats.
Although most modern systems use resource-oriented permissions to control access rights, there are a few cases that use capabilities, which associate rights with active entities like programs and users. Jack Dennis and Earl Van Horn of MIT introduced the notion of capabilities in their 1965 report "Programming Semantics for Multiprogrammed Computers," which was published in Communications of the ACM in 1966.
Marc Stiegler has posted an interesting introduction to capability based security that ties it to other important security concepts. The EROS OS project has also posted an essay that explains capability-based security. For a thorough coverage of capability based architecture circa 1984, see Henry Levy's book Capability-Based Computer Systems. He has posted it on-line.
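The relationship between these two approaches can be pictured with a toy access matrix (the subjects, resources, and rights below are invented for illustration). A resource-oriented system stores a column of the matrix with each resource (an access control list), while a capability system stores a row with each subject (its capability list):

```python
# A toy access matrix: rows are subjects, columns are resources.
# All names and rights here are invented for illustration.
matrix = {
    ("bob",   "report.txt"): {"read", "write"},
    ("alice", "report.txt"): {"read"},
    ("alice", "payroll.db"): {"read", "write"},
}

def acl(resource):
    """ACL view: the matrix column stored with the resource."""
    return {subj: rights for (subj, res), rights in matrix.items()
            if res == resource}

def capabilities(subject):
    """Capability view: the matrix row carried by the subject."""
    return {res: rights for (subj, res), rights in matrix.items()
            if subj == subject}
```

Both functions read out the same matrix, which is the point: ACLs and capabilities encode identical access rights, but they differ in who holds the information, and therefore in how rights are checked, delegated, and revoked.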
Microsoft has posted an article that describes access control on Windows files and on the Windows "registry," a special database of system-specific information.
Electrical engineers have relied on state diagrams for decades to help design complicated circuits. The technique is also popular with some software engineers, though it rarely finds its way into courses on computer programming. Any properly-constructed state diagram may be translated into a state table that provides the same information in a tabular form. Tony Kuphaldt's free on-line textbook Lessons in Electric Circuits explains state machines in the context of electric circuits in Volume IV, Chapter 11: Sequential Circuits-Counters.
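The diagram-to-table translation is straightforward to see in code. The classic example is a two-state turnstile: each (current state, input) pair in the table maps to a next state, carrying exactly the information a state diagram shows with circles and arrows. A minimal Python sketch (the machine itself is my own illustration):

```python
# State table for a two-state turnstile: (current state, input) -> next state.
state_table = {
    ("locked",   "coin"): "unlocked",  # paying unlocks the turnstile
    ("locked",   "push"): "locked",    # pushing while locked does nothing
    ("unlocked", "coin"): "unlocked",  # extra coins are wasted
    ("unlocked", "push"): "locked",    # passing through re-locks it
}

def run(start, inputs):
    """Step through the state table for each input symbol in turn."""
    state = start
    for symbol in inputs:
        state = state_table[(state, symbol)]
    return state

print(run("locked", ["coin", "push", "push"]))  # locked
```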
Upper-level computer science students may encounter state diagrams in a course on automata theory, in which they use such diagrams to represent deterministic finite automata. Such mechanisms can handle the simplest type of formal language, a regular grammar. Most people encounter regular grammars as regular expressions, an arcane syntax used to match text patterns when performing complicated search-and-replace operations in text editors.
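A small taste of that arcane syntax, using Python's re module for a search-and-replace (the pattern and sample text are my own illustration): the pattern matches dates like 3/14/2015 and the replacement reorders the captured groups.

```python
import re

text = "Released 3/14/2015, patched 10/2/2016."

# \d{1,2} matches one or two digits; parentheses capture groups
# that the replacement string references as \1, \2, \3.
result = re.sub(r"(\d{1,2})/(\d{1,2})/(\d{4})", r"\3-\1-\2", text)

print(result)  # Released 2015-3-14, patched 2016-10-2.
```

Under the hood, the regular expression engine is effectively running a finite automaton of the kind those state diagrams describe.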
Students introduced to modern structured design techniques using the Unified Modeling Language (UML) often use state machine diagrams, also called statecharts. On-line tutorials about UML state machines appear at Kennesaw State University and the Agile Modeling web site.
In the US, there are several organizations that track and report on information security vulnerabilities. Many of these organizations provide email alerts and other data feeds to keep subscribers up to date on emerging vulnerabilities. Some organizations provide their services to particular communities (e.g. government or military organizations, or customers of a vendor's products) while others provide reports to the public at large.
The SANS Internet Storm Center also provides a variety of on-line news feeds and reports, as well as a continuously-updated "Infocon Status" to indicate unusual changes in the degree of malicious activity on the Internet. Click on the image below to visit the Internet Storm Center for further information on current vulnerabilities and malicious Internet activity.
In 2000, Arbaugh, Fithen, and McHugh wrote an article describing a life-cycle model of information security vulnerabilities titled "Windows of Vulnerability: A Case Study Analysis" (IEEE Computer 33, December 2000). The authors have posted a copy of the article online (PDF).
The library at Stanford posted a brief history of the Trojan War. Although Homer's Iliad tells the story of the Trojan War, it says very little about the Greek trickery that led to the city's fall. The story is more the province of Virgil's Aeneid.
In the 1970s, Guy Steele at MIT started collecting bits of jargon used in the computer community. This yielded "The Jargon File," which Steele maintained for several years until it was passed on to Eric Raymond. According to the Jargon File, the term Trojan horse entered the computing lexicon via Dan Edwards of MIT and the NSA.
US-CERT has published a two-page guide on how to deal with a Trojan horse or virus infection on a computer (PDF).
There are numerous on-line tutorials on Unix and/or Linux file permissions, including ones provided by:
ACLs first appeared in the Multics timesharing system, as described in the paper "A General-Purpose File System For Secondary Storage," by R. Daley and Peter Neumann (Proc. 1965 Fall Joint Computer Conference) and on the Multicians web site.
Since ACLs could provide very specific access restrictions, they became recommended features of high-security systems. When the US DOD developed the "Trusted Computer System Evaluation Criteria," (PDF) (a.k.a. the TCSEC or Orange Book) ACLs were an essential feature of higher security systems. Modern security products are evaluated against the Common Criteria.
While traditional Unix systems did not have ACLs, more advanced versions of Unix incorporated them, partly to meet high security requirements like those in the Orange Book. This led to the development of POSIX ACLs as part of a proposed POSIX 1003.1e standard. The standards effort was abandoned, but several Unix-based systems did incorporate POSIX ACLs. Here are examples:
The ACL user interface on Mac OS-X is very simple. In fact, the OS-X ACLs are based on POSIX ACLs and may incorporate more sophisticated settings and inheritance rules than we see in the Finder's "Information" display. These features are available through special ACL options of the chmod shell command. One developer has produced an application called Sandbox that provides a more extensive GUI for managing the ACLs.
It can be challenging to find accurate online information about Windows ACLs, because the computer-based access controls are often confused with network-based access controls. The MS Developer Network provides general information about ACLs.
Researchers at CMU evaluated the Windows XP version of ACLs in a series of experiments documented in "Improving user-interface dependability through mitigation of human error," Intl J. Human-Computer Studies 63 (2005) 25-50, by Maxion and Reeder.
Here is a summary of memory size names and their corresponding address sizes. Many people memorize this type of information naturally through working with computer technology over time or during a professional career.
If you want to memorize these values, visit the Quizlet page. The page tests your knowledge of the smaller sizes (K, M, G, T), how these sizes are related (e.g. a terabyte is a thousand billion bytes), and how they relate to memory sizes (a TB needs an address approximately 40 bits long).
Here is a simple shortcut for estimating the number of bits required to address storage of a given size.
10³ ≈ 2¹⁰
To put this into practice, we do the following:
Let's work out an example with a terabyte: a trillion-byte memory.
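Under the 10³ ≈ 2¹⁰ approximation, a trillion bytes is 10¹² = (10³)⁴ ≈ (2¹⁰)⁴ = 2⁴⁰, so a terabyte needs roughly a 40-bit address. A quick Python sketch of the shortcut, checked against the exact calculation:

```python
import math

def estimated_bits(size):
    """Shortcut: each factor of 1,000 in the size costs about 10 address bits."""
    bits = 0
    while size >= 1000:
        size //= 1000   # apply 10^3 ~ 2^10 for each group of three zeroes
        bits += 10
    if size > 1:
        bits += math.ceil(math.log2(size))  # cover any leftover factor
    return bits

print(estimated_bits(10**12))        # 40 bits to address a terabyte
print(math.ceil(math.log2(10**12)))  # the exact answer is also 40
```

The shortcut slightly overestimates for very large sizes (2¹⁰ = 1,024, a bit more than 1,000), but for back-of-the-envelope address sizing that error is harmless.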
Cryptographers develop new hash functions every few years because cryptanalysts and mathematicians find weaknesses in the older ones. Valerie Aurora provides a graphic illustration of this.
This section of the textbook provides details not otherwise addressed by the main text.
Two educational standards in information system security refer to closely-related models of information system security. First, we have a US government training standard:
Second, we have an academic curriculum standard:
1.1 The Security Landscape
Not Just Computers Any More
1.1.1 Making Security Decisions
1.1.2 The Security Process
1.1.3 Continuous Improvement: A Basic Principle
The Roots of Continuous Improvement
1.2 Process Example: Bob’s Computer
1.3 Assets and Risk Assessment
Fine Points of Terminology
1.3.1 What Are We Protecting?
1.3.2 Security Boundaries
Least Privilege: A Second Basic Principle
Example: Boundaries in a Dorm
Analyzing the Boundary
The Insider Threat
1.3.3 Security Architecture
Defense In Depth: A Third Basic Principle
1.3.4 Risk Assessment Overview
1.4 Identifying Risks
1.4.1 Threat Agents
1.4.2 Security Properties, Services, and Attacks
1.5 Prioritizing Risks
1.5.1 Example: Risks to Alice’s Laptop
Step 1: Identify Computing Assets
Step 2: Identify Threat Agents and Potential Attacks
Step 3: Estimate the Likelihood of Individual Attacks
Step 4: Estimate the Impact of Attacks over Time
Step 5: Calculate the Impact of Each Attack
1.5.2 Other Risk Assessment Processes
1.6 Ethical Issues in Security Analysis
Laws, Regulations, and Codes of Conduct
1.6.1 Searching for Vulnerabilities
1.6.2 Sharing or Publishing Vulnerabilities
1.7 Security Example: Aircraft Hijacking
1.7.1 Hijacking: A High-Level Analysis
1.7.2 September 11, 2001
1.8.1 Review Questions
1.8.2 Exercises and Problems
2.1 Computers and Programs
Parallel Versus Serial Wiring
2.1.2 Program Execution
Separating Data and Control
2.2 Programs and Processes
2.2.1 Switching Between Processes
Observing Active Processes
2.2.2 The Operating System
2.3 Buffer Overflow and The Morris Worm
The ‘finger’ program
2.3.1 The ‘finger’ Overflow
The Worm Released
2.3.2 Security Alerts
2.4 Access Control Strategies
2.4.1 Puzzles and Patterns
Open Design: A Basic Principle
Cryptography and Open Design
Pattern-based Access Control
2.4.2 Chain of Control: Another Basic Principle
Controlling the BIOS
Subverting the Chain of Control
2.5 Keeping Processes Separate
Evolution of Personal Computers
Security on Personal Computers
Operating System Security Features
2.5.1 Sharing a Program
2.5.2 Sharing Data
2.6 Security Policy and Implementation
Constructing Alice’s Security Plan
Writing a Security Policy
2.6.1 Analyzing Alice’s Risks
2.6.2 Constructing Alice’s Policy
2.6.3 Alice’s Security Controls
Alice’s Backup Procedure
2.7 Security Plan: Process Protection
Policy for Process Protection
Functional Security Controls
The Dispatcher’s Design Description
The Design Features
The Dispatching Procedure
Security Controls for Process Protection
2.8.1 Review Questions
2.8.2 Problems and Exercises
3.1 The File System
File and Directory Path Names
3.1.1 File Ownership and Access Rights
File Access Rights
Initial File Protection
3.1.2 Directory Access Rights
3.2 Executable Files
3.2.1 Execution Access Rights
Types of Executable Files
3.2.2 Computer Viruses
3.2.3 Macro Viruses
3.2.4 Modern Malware: A Rogue’s Gallery
Conficker, also called Downadup
3.3 Sharing and Protecting Files
3.3.1 Policies for Sharing and Protection
Underlying System Policy
User Isolation Policy
User File Sharing Policy
3.4 Security Controls for Files
3.4.1 Deny by Default: A Basic Principle
The opposite of Deny by Default
3.4.2 Managing Access Rights
Capabilities in Practice
3.5 File Security Controls
3.5.1 File Permission Flags
System and Owner Access Rights in Practice
3.5.2 Security Controls to Enforce Bob’s Policy
3.5.3 States and State Diagrams
3.6 Patching Security Flaws
The Patching Process
Security Flaws and Exploits
Windows of Vulnerability
3.7 Process Example: The Horse
3.7.1 Troy: A High-Level Analysis
3.7.2 Analyzing the Security Failure
3.8 Chapter Resources
3.8.1 Review Questions
4.1 Controlled Sharing
Tailored File Security Policies
Bob’s Sharing Dilemma
4.1.1 Basic File Sharing on Windows
4.1.2 User Groups
4.1.3 Least Privilege and Administrative Users
Administration by Regular Users
User Account Control on Windows
4.2 File Permission Flags
4.2.1 Permission Flags and Ambiguities
4.2.2 Permission Flag Examples
Security Controls for the File Sharing Policy
4.3 Access Control Lists
Modern ACL Implementations
4.3.1 POSIX ACLs
4.3.2 Macintosh OS-X ACLs
4.4 Microsoft Windows ACLs
4.4.1 Denying Access
Determining Access Rights
Building Effective ACLs
4.4.2 Default File Protection
Moving and Copying Files
4.5 A Different Trojan Horse
A Trojan Horse Program
Transitive Trust: A Basic Principle
4.6 Phase Five: Monitoring The System
Catching an intruder
4.6.1 Logging Events
A Log Entry
The Event Logging Mechanism
Detecting Attacks by Reviewing the Logs
4.6.2 External Security Requirements
Laws, Regulations, and Industry Rules
External Requirements and the Security Process
4.7 Chapter Resources
4.7.1 Review Questions
5.1 Phase Six: Recovery
Incidents and Damage
5.1.1 The Aftermath of an Incident
Fault and Due Diligence
5.1.2 Legal Disputes
Resolving a Legal Dispute
5.2 Digital Evidence
The Fourth Amendment
5.2.1 Collecting Legal Evidence
Collecting Evidence at The Scene
Securing the Scene
Documenting the Scene
5.2.2 Digital Evidence Procedures
Authenticating a Hard Drive
5.3 Storing Data on a Hard Drive
Magnetic Recording and Tapes
Hard Drive Fundamentals
5.3.1 Hard Drive Controller
5.3.2 Hard Drive Formatting
High Level Format
5.3.3 Error Detection and Correction
Cyclic Redundancy Checks
Error Correcting Codes
5.3.4 Hard Drive Partitions
Partitioning to Support Older Drive Formats
Partitioning in Modern Systems
Partitioning and Fragmentation
Hiding Data with Partitions
5.3.5 Memory Sizes and Address Variables
Address, Index, and Pointer Variables
Memory Size Names and Acronyms
Estimating the Number of Bits
5.4 FAT: An Example File System
5.4.1 Boot Blocks
5.4.2 Building Files from Clusters
An Example FAT File
FAT Format Alternatives
5.4.3 FAT Directories
Long File Names
Undeleting a File
5.5 Modern File Systems
File System Design Goals
Conflicting File System Objectives
5.5.1 Unix File System
5.5.2 Apple’s HFS Plus
5.5.3 Microsoft’s NTFS
5.6 Input/Output and File System Software
File System Software
5.6.1 Software Layering
5.6.2 A Typical I/O Operation
Part A: Call the operating system
Part B: OS constructs the I/O operation
Part C: The driver starts the actual I/O device
Part D: The I/O operation ends
5.6.3 Security and I/O
Restricting the devices themselves
Restricting Parameters in I/O Operations
File Access Restrictions
5.7.1 Review Questions
6.1 Unlocking a Door
6.1.1 Authentication Factors
6.1.2 Threats and Risks
Attack Strategy: Low Hanging Fruit
6.2 Evolution of Password Systems
Password Hashing in Practice
6.2.1 One-way Hash Functions
Modern Hash Functions
A Cryptographic Building Block
6.2.2 Sniffing Credentials
6.3 Password Guessing
DOD Password Guideline
Off-line Password Cracking
6.3.1 Password Search Space
6.3.2 Truly Random Password Selection
6.3.3 Cracking Speeds
6.4 Attacks on Password Bias
Bias and Entropy
6.4.1 Biased Choices and Average Attack Space
Average Attack Space
Biased Password Selection
Measuring Likelihood, not Certainty
Making Independent Guesses
Example: 4-digit luggage lock
6.4.2 Estimating Language-Based Password Bias
Klein’s Password Study
6.5 Authentication Tokens
Passive Authentication Tokens
6.5.1 Challenge-Response Authentication
Another Cryptographic Building Block
Direct Connect Tokens
6.5.2 One-time Password Tokens
A Token’s Search Space
Average Attack Space
Attacking One-Time Password Tokens
Guessing a Credential
6.5.3 Token Vulnerabilities
6.6 Biometric Authentication
6.6.1 Biometric Accuracy
6.6.2 Biometric Vulnerabilities
6.7 Authentication Policy
6.7.1 Weak and Strong Threats
Effect of Location
6.7.2 Policies for Weak Threat Environments
A Household Policy
A Workplace Policy: Passwords Only
A Workplace Policy: Passwords and Tokens
6.7.3 Policies for Strong and Extreme Threats
Passwords Alone for Strong Threats
Passwords Plus Biometrics
Passwords Plus Tokens
Constructing the Policy
6.7.4 Password Selection and Handling
Strong But Memorable Passwords
The Strongest Passwords
6.8.1 Review Questions
7.1 Protecting the Accessible
7.1.1 Process Example: The Encrypted Diary
7.1.2 Encryption Basics
Categories of Encryption
A Process View of Encryption
Shared Secret Keys
7.1.3 Encryption and Information States
Illustrating Policy with a State Diagram
Proof of Security
7.2 Encryption and Cryptanalysis
7.2.1 The Vigenère Cipher
7.2.2 Electromechanical Encryption
7.3 Computer-Based Encryption
The Data Encryption Standard
The Advanced Encryption Standard
Predicting Cracking Speeds
7.3.1 Exclusive Or: A Crypto Building Block
7.3.2 Stream Ciphers: Another Building Block
Generating a Key Stream
An Improved Key Stream
7.3.3 Key Stream Security
Pseudo-Random Number Generators
The Effects of Ciphertext Errors
7.3.4 The One-time Pad
Soviet Espionage and One-time pads
Practical One-time pads
7.4 File Encryption Software
7.4.1 Built-in File Encryption
7.4.2 Encryption Application Programs
Ensuring Secure File Encryption
Protecting the Secret Key
7.4.3 Erasing a Plaintext File
Risks That Demand Overwriting
Preventing Low Level Data Recovery
Erasing Optical Media
7.4.4 Choosing a File Encryption Program
Software Security Checklist
File Encryption Security Checklist
Cryptographic Product Evaluation
7.5 Digital Rights Management
The DVD Content Scrambling System
7.6.1 Review Questions
8.1 The Key Management Challenge
Levels of Risk
Key Sharing Procedures
Distributing New Keys
8.1.2 Using Text for Encryption Keys
Taking Advantage of Longer Passphrases
Software Checklist for Key Handling
8.1.3 Key Strength
8.2 The Reused Key Stream Problem
8.2.1 Avoiding Reused Keys
Changing the Internal Key
Combining the Key with a Nonce
Software Checklist for Internal Keys Using Nonces
8.2.2 Key Wrapping: Another Building Block
Key Wrapping and Cryptoperiods
Software Checklist for Wrapped Keys
8.2.3 Separation of Duty: A Basic Principle
Separation of Duty with Encryption
8.2.4 DVD Key Handling
8.3 Public-key Cryptography
Attacking Public Keys
8.3.1 Sharing a Secret: Diffie-Hellman
Perfect Forward Secrecy
Variations of Diffie-Hellman
8.3.2 Diffie-Hellman: The Basics of the Math
8.3.3 Elliptic Curve Cryptography
8.4 RSA: Rivest-Shamir-Adleman
8.4.1 Encapsulating Keys with RSA
8.4.2 An Overview of RSA Mathematics
Brute Force Attacks on RSA
The Original Challenge
The Factoring Problem
Selecting a Key Size
Other Attacks on RSA
8.5 Data Integrity and Digital Signatures
8.5.1 Detecting Malicious Changes
One-way Hash Functions
8.5.2 Detecting a Changed Hash Value
8.5.3 Digital Signatures
8.6 Publishing Public Keys
8.6.1 Public-Key Certificates
8.6.2 Chains of Certificates
Web of Trust
Trickery with Certificates
8.6.3 Authenticated Software Updates
8.7.1 Review Questions
9.1 Securing a Volume
9.1.1 Risks To Volumes
Discarded Hard Drives
9.1.2 Risks and Policy Trade-offs
Identifying Critical Data
Policy for Unencrypted Volumes
Policy for Encrypted Volumes
9.2 Block Ciphers
Building a Block Cipher
The Effect of Ciphertext Errors
9.2.1 Evolution of DES and AES
DES and Lucifer
The Development of AES
9.2.2 The RC4 Story
RC4 Leaking, Then Cracking
9.2.3 Qualities of Good Encryption Algorithms
Explicitly designed for encryption
Security does not rely on its secrecy
Available for analysis
Subjected to analysis
No practical weaknesses
Choosing an Encryption Algorithm
9.3 Block Cipher Modes
9.3.1 Stream Cipher Modes
9.3.2 Cipher Feedback Mode
9.3.3 Cipher Block Chaining
9.4 Encrypting a Volume
Choosing a Cipher Mode
Hardware Versus Software
9.4.1 Volume Encryption in Software
Files as Encrypted Volumes
9.4.2 Adapting an Existing Mode
Drive Encryption with Counter Mode
Constructing the Counter
An Integrity Risk
Drive Encryption with CBC Mode
Integrity Issues with CBC Encryption
9.4.3 A “Tweakable” Encryption Mode
9.4.4 Residual Risks
Looking for plaintext
9.5 Encryption in Hardware
Recycling the Drive
9.5.1 The Drive Controller
9.5.2 Drive Locking and Unlocking
9.6 Managing Encryption Keys
9.6.1 Key Storage
Working key storage in hardware
Persistent Key Storage
Managing removable keys
9.6.2 Booting an Encrypted Drive
9.6.3 Residual Risks to Keys
Eavesdrop on the encryption process
Sniffing keys from swap files
Cold boot attack
Recycled Password Attack
The “Master Key” Risk
9.7.1 Review Questions
10.1 The Network Security Problem
10.1.1 Basic Network Attacks and Defenses
Example: Sharing Eve’s Printer
10.1.2 Physical Network Protection
Protecting External Wires
10.1.3 Host and Network Integrity
Botnets in Operation
The Insider Threat
10.2 Transmitting Information
10.2.1 Message Switching
10.2.2 Circuit Switching
10.2.3 Packet Switching
Mix and Match Network Switching
10.3 Putting Bits on a Wire
Synchronous versus Asynchronous Links
10.3.1 Wireless Transmission
Frequency, Wavelength, and Bandwidth
AM and FM Radio
Radio Propagation and Security
10.3.2 Transmitting Packets
Network Efficiency and Overhead
10.3.3 Recovering a Lost Packet
10.4 Ethernet: A Modern LAN
10.4.1 Wiring a Small Network
10.4.2 Ethernet Frame Format
MAC Addresses and Security
10.4.3 Finding Host Addresses
Addresses from Keyboard Commands
Addresses from Mac OS
Addresses from Microsoft Windows
10.4.4 Handling Collisions
10.5 The Protocol Stack
10.5.1 Relationships Between Layers
10.5.2 The OSI Protocol Model
The Orphaned Layers
10.6 Network Applications
Network Applications and Information States
10.6.1 Resource Sharing
10.6.2 Data and File Sharing
Delegation: A Security Problem
10.7.1 Review Questions
11.1 Building Information Networks
Network Topology: Evolution of the Phone Network
11.1.1 Point-to-Point Network
11.1.2 Star Network
11.1.3 Bus Network
11.1.4 Tree Network
11.2 Combining Computer Networks
Traversing Computer Networks
The Internet Emerges
11.2.1 Hopping Between Networks
Routing Internet Packets
11.2.2 Evolution of Internet Security
Protecting the ARPANET
Early Internet Attacks
Early Internet Defenses
11.2.3 Internet Structure
Starting an ISP
11.3 Talking Between Hosts
Socket API capabilities
11.3.1 IP Addresses
IP Version 6
11.3.2 IP Packet Format
11.3.3 Address Resolution Protocol
The ARP Cache
11.4 Internet Addresses in Practice
IPv4 Address Classes
11.4.1 Addresses, Scope, and Reachability
11.4.2 Private IP Addresses
Assigning Private IP Addresses
Dynamic Host Configuration Protocol
11.5 Network Inspection Tools
11.5.1 Wireshark Examples
Address Resolution Protocol
11.5.2 Mapping a LAN with nmap
The Nmap Network Mapper Utility
Use Nmap with Caution
11.6.1 Review Questions
12.1 “Smart” Versus “Dumb” Networks
The End-to-End Principle
12.2 Internet Transport Protocols
User Datagram Protocol
End-to-End Transport Protocols
12.2.1 Transmission Control Protocol
Sequence and Acknowledgement Numbers
12.2.2 Attacks on Protocols
Internet Control Message Protocol
12.3 Names on the Internet
The Name Space
12.3.1 Domain Names in Practice
Using a Domain Name
12.3.2 Looking Up Names
12.3.3 DNS Protocol
Resolving a Domain Name Via Redirection
12.3.4 Investigating Domain Names
12.3.5 Attacking DNS
DOS Attacks on DNS Servers
DOS Attacks and DNS Resolvers
DNS Security Improvements
12.4 Internet Gateways and Firewalls
12.4.1 Network Address Translation (NAT)
Configuring DHCP and NAT
12.4.2 Filtering and Connectivity
12.4.3 Software-based Firewalls
12.5 Long Distance Networking
12.5.1 Older technologies
Analog broadcast networks
Circuit-switched telephone systems
Analog-based digital networks
Analog two-way radios
12.5.2 Mature technologies
Dedicated digital network links
12.5.3 Evolving technologies
Optical fiber networks
Bidirectional satellite communications
12.6.1 Review Questions
13.1 The Challenge of Community
13.1.1 Companies and Information Control
Reputation: Speaking with One Voice
Companies and Secrecy
Need To Know
13.1.2 Enterprise Risks
Insiders and Outsiders
13.1.3 Social Engineering
Thwarting Social Engineering
13.2 Management Processes
13.2.1 Security Management Standards
Evolution of Management Standards
13.2.2 Deployment Policy Directives
13.2.3 Management Hierarchies and Delegation
Profit Centers and Cost Centers
Implications for Information Security
13.2.4 Managing Information Resources
Managing Information Security
13.2.5 Security Audits
13.2.6 Information Security Professionals
Information Security Training
Information Security Certification
13.3 Enterprise Issues
Education, Training, and Awareness
13.3.1 Personnel Security
Employee Life Cycle
Administrators and Separation of Duty
13.3.2 Physical Security
Information System Protection
13.3.3 Software Security
Software Development Security
Repeatability and Traceability
Formalized Coding Activities
Avoiding Risky Practices
Software-based access controls
13.4 Enterprise Network Authentication
13.4.1 Direct Authentication
13.4.2 Indirect Authentication
Properties of Indirect Authentication
13.4.3 Off-Line Authentication
13.5 Contingency Planning
13.5.1 Data Backup and Restoration
Full Versus Partial Backups
File-Oriented Synchronized Backups
File-Oriented Incremental Backups
RAID as Backup
13.5.2 Handling Serious Incidents
Examining a Serious Attack
13.5.3 Disaster Preparation and Recovery
Business Impact Analysis
Contingency Planning Process
13.6.1 Review Questions
14.1 Communications Security
14.1.1 Crypto By Layers
Link Layer Encryption and 802.11 Wireless
Network Layer Encryption and IPsec
Socket Layer Encryption with SSL/TLS
Application Layer Encryption with S/MIME or PGP
14.1.2 Administrative and Policy Issues
Internet Site Access
14.2 Crypto Keys on a Network
The Default Keying Risk
Key Distribution Objectives
Key Distribution Strategies
Key Distribution Techniques
14.2.1 Manual Keying: A Building Block
14.2.2 Simple Rekeying
New Keys Encrypted With Old
14.2.3 Secret-Key Building Blocks
Key Distribution Center (KDC)
Shared Secret Hashing
14.2.4 Public-Key Building Blocks
Secret Sharing with Diffie-Hellman
Wrapping a Secret with RSA
14.2.5 Public-Key versus Secret-Key Exchanges
Choosing Secret-Key Techniques
Choosing Public-Key Techniques
14.3 Crypto Atop the Protocol Stack
Privacy Enhanced Mail
Pretty Good Privacy
Adoption of Secure Email and Application Security
14.3.1 Transport Layer Security - SSL and TLS
The World Wide Web
Secure Sockets Layer/Transport Layer Security
14.3.2 SSL Handshake Protocol
14.3.3 SSL Record Transmission
Message Authentication Code
Application Transparency and End-to-End Crypto
14.4 Network Layer Cryptography
Components of IPsec
14.4.1 The Encapsulating Security Payload (ESP)
ESP Packet Format
14.4.2 Implementing a VPN
Private IP Addressing
Bundling Security Associations
14.4.3 Internet Key Exchange (IKE) Protocol
14.5 Link Encryption on 802.11 Wireless
Wi-Fi Protected Access: WPA and WPA2
14.5.1 Wireless Packet Protection
Decryption and Validation
14.5.2 Security Associations
Establishing the Association
Establishing the Keys
14.6 Encryption Policy Summary
Apply Encryption Automatically
14.7.1 Review Questions
15.1 Internet Services
Traditional Internet Applications
15.2 Internet Email
Message Formatting Standards
The To: Field
The From: Field
15.2.1 Email Protocol Standards
POP3: An Example
Port Number Summary
15.2.2 Tracking an Email
#1: From UC123 to USM01
#2: From USM01 to USM02
#3: From USM02 to MMS01
#4: From MMS01 to MMS02
15.2.3 Forging an Email Message
15.3 Email Security Problems
Classic Financial Fraud
Evolution of Spam Prevention
MTA Access Restriction
Filtering on Spam Patterns
Tracking a Phishing Attack
15.3.3 Email Viruses and Hoaxes
Email Chain Letters
Virus Hoax Chain Letters
15.4 Enterprise Firewalls
Evolution of Internet Access Policies
A Simple Internet Access Policy
15.4.1 Controlling Internet Traffic
15.4.2 Traffic Filtering Mechanisms
15.4.3 Implementing Firewall Rules
Example of Firewall Security Controls
Additional Firewall Mechanisms
Firewall Rule Proliferation
15.5 Enterprise Point of Presence
Internet Service Providers
Intrusion Prevention Systems
Data Loss Prevention Systems
15.5.1 POP Topology
Single Firewall Topology
Bastion Host Topology
15.5.2 Attacking an Enterprise Site
15.5.3 The Challenge of Real-Time Media
16.1 Hypertext Fundamentals
Formatting: Hypertext Markup Language
Cascading Style Sheets
Hypertext Transfer Protocol
Retrieving data from other files or sites
16.1.1 Addressing Web Pages
Hosts and Authorities
Default Web Pages
16.1.2 Retrieving a Static Web Page
Building a Page from Multiple Files
Web Servers and Statelessness
Web Directories and Search Engines
Crime via Search Engine
16.2 Basic Web Security
Client Policy Issues
Policy Motivations and Objectives
Internet Policy Directives
Strategies to Manage Web Use
The Tunneling Dilemma
Firewalling HTTP Tunnels
16.2.1 Security for Static Web Sites
16.2.2 Server Authentication
Mismatched Domain Name: May be Legitimate
Untrusted Certificate Authority: Difficult to Verify
Expired Certificate: Possibly Bogus, Probably Not
Revoked Certificate: Always Bogus
Invalid Digital Signature: Always Bogus
16.2.3 Server Masquerades
Bogus Certificate Authority
Misleading Domain Name
Stolen Private Key
Tricked certificate authority
16.3 Dynamic Web Sites
Web Forms and POST
16.3.1 Scripts on the Web
Client Side Scripts
Client Scripting Risks
“Same Origin” Policy
16.3.2 States and HTTP
16.4 Content Management Systems
16.4.1 Database Management Systems
Structured Query Language
16.4.2 Password Checking: A CMS Example
Logging In to a Web Site
An Example Login Process
16.4.3 Command Injection Attacks
A Password-Oriented Injection Attack
Inside the Injection Attack
Resisting Web Site Command Injection
16.5 Ensuring Web Security Properties
Serve confidential data
Collect confidential data
16.5.1 Web Availability
16.5.2 Web Privacy
17.1 Secrecy In Government
Hostile Intelligence Services
17.1.1 The Challenge of Secrecy
The Discipline of Secrecy
Secrecy and Information Systems
Exposure and Quarantine
17.1.2 Information Security and Operations
Intelligence and Counterintelligence
17.2 Classifications and Clearances
Legal Basis for Classification
Minimizing the Amount of Classified Information
17.2.1 Security Labeling
Sensitive But Unclassified
17.2.2 Security Clearances
17.2.3 Classification Levels in Practice
Working with classified information
Higher levels have greater restrictions
17.2.4 Compartments and Other Special Controls
Sensitive Compartmented Information
Example of SCI processing
Special Access Programs
Special Intelligence Channels
Enforcing Access to Levels and Compartments
17.3 National Policy Issues
Federal Information Security Management Act
NIST Standards and Guidance for FISMA
Personnel roles and responsibilities
Threats, vulnerabilities, and countermeasures
17.3.1 Facets of National System Security
Life Cycle Procedures
17.3.2 Security Planning
System Life Cycle Management
Security System Training
17.3.3 Certification and Accreditation
NIST Risk Management Framework
17.4 Communications Security
Key Leakage Through Spying
17.4.1 Cryptographic Technology
Classic Type 1 Crypto Technology
17.4.2 Crypto Security Procedures
Controlled Cryptographic Items
Key Management Processes
Data Transfer Device
Electronic Key Management System
17.4.3 Transmission Security
17.5 Data Protection
Media Sanitization and Destruction
17.5.1 Protected Wiring
17.6 Trustworthy Systems
Trusted Systems Today
17.6.1 Integrity of Operations
Achieving Nuclear High Assurance
17.6.2 Multilevel Security
Rule- and Identity-Based Access Control
Other Multilevel Security Problems
17.6.3 Computer Modes of Operation
System High Mode
Compartmented or Partitioned Mode
17.7.1 Review Questions
This provides brief articles on certain topics relevant to CNSS training standards.
This is a collection of pages describing experiences I've had with Drupal.
After reading and re-reading several descriptions of how to do bulk changes to a Drupal site, the Views Bulk Operation (VBO) mechanism sounded most promising. It did, however, take another couple of hours of poking at explanations to figure out how THAT works.
The Views mechanism allows the site manager to create a page or a block that displays information retrieved from the Drupal database. It's mostly intended to pull data out of nodes and display the data as a table.
The VBO mechanism uses Views to perform database operations on nodes in the database. To use it, we create a page that, instead of simply retrieving data from the database, applies a "bulk operation" to the data.
Here is how I used it, step-by-step:
1. Start up Views and add a new view
Give the view a name, and select "Create a page." You'll need the page in order to execute your bulk operations. Click "Continue and Edit."
2. Allow entry of fields
By default, a view will display a block of "content" text. We need to change it to display fields of data. To do this we click on Format: Settings in the leftmost column under Page details.
Click on Force using fields, and apply.
3. Add a field to do bulk operations.
Click on the Add button to the right of the term Fields. This presents a list of fields, including two marked "Bulk Operations." Select a bulk operation.
4. Set Access: Permission so that non-privileged users can't run it.
Click on the Permission link and make sure no non-privileged users can run this view.
5. Save the whole thing to execute later
Views will display a "preview" of the operation just below the settings area. I misunderstood the preview and thought it would allow me to run the bulk operation right there. You have to save the thing, probably with a menu entry, and execute it from the menu.
6. Bring up a page on the site, and select the menu entry to run the operation.
The operation may demand details from you about the settings to change and the nodes to affect. Choose the appropriate inputs and execute the thing.
In context, I see why it works this way. From a usability standpoint, it looks like nonsense.
After spending a few years with WordPress, I decided to migrate to Drupal. I installed WordPress in December, 2007, and replaced it in February, 2011.
Existing users had to recover their passwords, since there was no clean way to use WordPress-encrypted passwords on a Drupal site. I also had the site off-line for part of the day while I replaced the WordPress software with the Drupal software.
My main reason for the migration is Drupal's "book" feature. Drupal makes it easy to structure short articles into a long, hierarchically-structured narrative. This is essentially how I write books anyway. This makes it easier to present complex topics built from a series of short articles.
For example, the site contains this long presentation on multilevel security. The presentation consists of several sections and subsections. The text began as a long article, and I broke it into pieces that I hooked together manually using links. Each subsection should have been a separate article; instead, I kept each section as a single article, and I had to hand-craft the navigation between sections.
The site also contains a detailed discussion of stream ciphers and the risk of reused key streams. At present the discussion consists of several separate articles that are interconnected via hyperlinks. This makes it hard to add details to the discussion.
For example, I need to post a discussion of the folly of using statistical tests to prove the effectiveness of a stream cipher's key stream. Believe it or not, there are graduate students who seriously believe this is a valid technique. Where do I put this explanation, and how do I hook it in with the existing articles? I have to fiddle with the navigation by hand in WordPress.
Drupal also supports a more flexible authentication and access control discipline. I'm skeptical that this will provide better security - it's harder to mark everything correctly when you have finer-grained permissions, especially when still learning about the system.
As before, visitors need to set up an account to post comments, but don't need an account to read the site's public contents.
Jones and Bartlett, the publisher of the upcoming Elementary Information Security, will be providing a protected area to distribute materials to course instructors who use the textbook. This would include lecture notes, figures, and answers to problem sets. So I doubt I'll need to set up such a mechanism on this site. However, I would like to encourage discussion of the text and the topics, and how best to present them to students. This may involve a mailing list or an on-line forum. I'll see how things develop.
The migration had a few hitches, but it went as smoothly as could be expected for such a thing.
My inexperience with Apache's .htaccess files caused an unnecessary delay and a lot of "500 Internal Server Error" messages.
The process began with a Drupal clone of my WordPress site. I created it on a separate domain hosted by GoDaddy, my ISP. Once the clone reached a sufficient degree of done-ness, I removed the WordPress files and installed a fresh copy of Drupal atop it. The installation was actually performed by GoDaddy's "Host Connection" feature. Then I added the custom files I needed (graphics, themes, and plug-in modules). Once all those parts were in place, I copied over the clone's Drupal database.
As I approached the migration, I realized that there were certain things I wanted to achieve:
The migration involved the following steps which I'll discuss in other postings:
Note that these steps took place before the actual migration - I did all this to the cloned Drupal site while the WordPress Cryptosmith site remained on-line. The actual site migration involved these activities:
I expect there will be a few more topics to cover as the site evolves.
This process looks deceptively simple. WordPress happily exports all entries into a nicely formatted XML file. Drupal has a "WordPress Import" module that appears to do a comfortable import. What could possibly go wrong?
Well, in Drupal, everything comes down to surprising default choices. At least, if you are expecting ease of use, the defaults seem surprising.
I performed about a half-dozen imports before I was finally satisfied with the results. No doubt a Drupal expert would have nailed it the first time. But that's the problem: someone who imports another site may be the least likely to be an expert. In my case, the import was my first significant experience with Drupal. And it went badly.
My specific problem was that the default import format discarded most of my paragraph breaks.
Drupal has a configuration feature called "input formats." Each format consists of three parts: a set of filters to select, configurations for those filters, and the order of those filters. One filter discards all HTML tags that aren't on an approved list. Another filter converts newlines into line or paragraph breaks. The default input formats did not automatically convert the newlines.
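A rough sketch of what that newline-conversion filter does, in Python (my own illustration, not Drupal's actual PHP filter code): blank lines separate paragraphs, and lone newlines inside a paragraph become `<br />` tags.

```python
import re

def line_break_filter(text):
    """Mimic the behavior of a line-break conversion filter
    (illustrative only): blank lines separate paragraphs;
    single newlines become <br /> tags."""
    paragraphs = re.split(r'\n{2,}', text.strip())
    html = []
    for p in paragraphs:
        # Single newlines inside a paragraph become explicit line breaks.
        html.append('<p>' + p.replace('\n', '<br />\n') + '</p>')
    return '\n'.join(html)

print(line_break_filter("First paragraph.\n\nSecond line one\nline two."))
```

With a filter like this disabled, the newlines survive untouched, and the browser collapses them into a wall of text.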
After a couple of attempts, I realized that there were these things called input filters, and that they should have been performing the conversion. Eventually I even realized that there was a special WordPress input format. However, the WordPress format proved to be more trouble than aid.
The last thing I wanted to do was to break things at the start of my Drupal experience, so I was cautious about changing default configurations. In fact, I think I would have saved myself a world of effort if I had simply enabled the Line Break filter.
Instead, I used a text editor to manually restore the paragraph breaks in the input text. This took about a half hour, which was a bit quicker than reinstalling a clean Drupal system and repeating the import.
I actually ended up performing this conversion twice. The first time, I used a global change to add paragraph tags, but that ran into trouble with breaks that fell inside paragraphs. A global change to "<br />" would have been a smarter choice.
If I had it to do over again, I would have enabled the Line Break Converter in the WordPress input format. I have no idea why the thing isn't activated by default. The documentation suggests that it should do the right thing.
WordPress is well designed for blogging. I got used to the TinyMCE editor and easy-to-reach features to import graphics when using WordPress. I also got used to less sophisticated things like paragraph breaks and section subheadings. And I like the email alert when there's something to moderate.
I was appalled to discover that these things are omitted by default in Drupal.
Notice how I stuck an HTML header on the previous line. I like to use these things to break up a lengthy missive to make it easier to read. Drupal omits such things from formatted text by default.
Notice how I just started a second paragraph to address a second issue: and paragraphs, too, are considered unnecessary by Drupal's default configuration. When I talk about the "default" here I speak of "Filtered HTML." Web managers don't want regular users to enter "unfiltered" HTML because it opens the site to such things as "cross site scripting" attacks. On the other hand, we end up with hard-to-read text if our list of permitted HTML is too short.
Drupal's problems aren't limited to the defaults allowed for HTML, but they provide a good start to the list:
To install modules in Drupal, you generally use FTP. I'm on a Macintosh and I've decided I like CyberDuck. It supports WYSIWYG uploading, downloading, and synchronization. I used to rely on DreamWeaver for synchronization, which is like killing a gnat with a hammer. Even with a nice donation, CyberDuck is much cheaper and easier to live with than DreamWeaver.
The WYSIWYG editor module does not, by itself, include editors. It provides the framework within which you can install editors.
Off the shelf, TinyMCE provides just about no functions. So you have to burrow into the "Wysiwyg profiles" under the Drupal Site Configuration.
To enable the editor, you first assign it to an input format. Then, the input format gives you a way to configure the editor.
Click on the format's "Configure" link. Then select "Buttons and plugins." This displays all of the features that may appear on the editor's GUI. The features appear as checkboxes.
Some checkboxes appear as plain text and some appear as underlined links. I generally check everything that's in plain text (the first half of the list, about 20 boxes), plus "context menu" and "HTML block format." I also check "Teaser Break," which appears in plain text at the bottom of the list.
Now, if you want to use the same editor with any other input formats - trust me, you do - then you must repeat the 20-checkbox procedure for each input format. (At least, this is the default behavior. As with most annoyances in Drupal, there may be a module to fix it, assuming you have the patience to look for one and use it.)
In keeping with Drupal's design philosophy, everything is optional. By itself, the WYSIWYG framework and TinyMCE allow you to create text that includes the usual, desirable elements. You can even include graphics with your text. Well, you can as long as you have a URL that points to your image.
If you want to download a particular image to go with your text, that's another story. All together now:
For that, you need a different module.
I installed the IMCE module for this purpose. This provides a convenient little button on the image insertion dialog so that you can upload the image and retrieve its new URL.
Again, however, we must go back to the WYSIWYG configuration and enable IMCE for each input format.
The only really restrictive input format is Filtered HTML, and it's generally too restrictive by default. I revised it to allow paragraph breaks, line breaks, and headings. To do this, I simply added the tags I wanted: <p>, <br>, <h2>, <h3>, and <h4>.
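Just to illustrate the idea (this is my own Python sketch, not Drupal's actual filter code, and far too simplistic to serve as a real cross-site-scripting defense), a tag-whitelist filter works roughly like this: any tag not on the approved list is stripped, while approved tags pass through.

```python
import re

# Illustrative approximation of a tag-whitelist filter. The tag list
# matches the one described above plus a few common formatting tags;
# the matching here is deliberately simplistic (it keeps the text
# inside stripped tags and ignores attributes entirely).
ALLOWED = {'p', 'br', 'h2', 'h3', 'h4', 'a', 'em', 'strong'}

def filter_html(text):
    def keep_or_strip(match):
        tag = match.group(1).lower()
        return match.group(0) if tag in ALLOWED else ''
    # Matches opening, closing, and self-closing tags.
    return re.sub(r'</?\s*([a-zA-Z][a-zA-Z0-9]*)[^>]*>', keep_or_strip, text)

print(filter_html('<p>ok</p><script>alert(1)</script>'))  # → <p>ok</p>alert(1)
```

The trade-off described above falls out directly: the shorter the ALLOWED set, the safer the output, and the uglier the surviving text.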
Oddly enough, the AntiSpam module seems to be the only thing that knows how to contact me when it's time to moderate new content. There are various other notifier modules, but they're primarily intended to tell subscribers when new content has been published. If input is awaiting moderation, it's unpublished and these modules do nothing.
Unfortunately, the anti-spam module doesn't do a perfect job of notifying me: I get an email for every spam posting as well as every legitimate one.
Why isn't more of this done correctly by default?
I'm always annoyed when I register for a web site only to have my user ID mysteriously disappear. The "scouting.org" web site has recreated itself about four times in the past decade. Each time has led to re-registration by the entire user community.
Therefore I decided to make a strong effort to retain my user community while migrating my site. The easy part was to contact those who provided email addresses and tell them what was happening. The hard part was to deal with passwords.
The first step was to export the user database from WordPress. For this I installed the AmR Users plug-in produced by one AnMari, a WordPress developer. This plug-in provides an interface to construct tailored lists of registered users. I configured a list to display user IDs, "readable" names, email addresses, URLs, and comment counts. Unfortunately I could not display OpenID credentials, so I didn't have a clean way to incorporate them into the Drupal user database.
Another problem: I couldn't transfer user passwords. The passwords were safely stored in a hashed format. In a perfect world, both Drupal and WordPress would have used an identical hashing mechanism. Perhaps the mechanism could have been a standard PHP function used by both packages. In fact, each rolled their own. There were hacks that might have allowed using WordPress hashes on Drupal, but that would have required PHP coding.
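To illustrate the problem, here is a toy Python sketch. The md5 and sha512 calls below are stand-ins, not the actual WordPress or Drupal schemes (the real ones are salted and iterated, which makes conversion even more hopeless than this suggests). The point is that two systems hashing the same password differently store values that can't be converted into each other without the plaintext.

```python
import hashlib

password = b'correct horse battery staple'

# Stand-ins for two incompatible hashing schemes. A hash function is
# one-way, so neither site can recover the password from its stored value,
# and there's no way to compute one hash from the other.
wordpress_style = hashlib.md5(password).hexdigest()
drupal_style = hashlib.sha512(password).hexdigest()

print(wordpress_style == drupal_style)  # False
```

Since only the plaintext could bridge the two schemes, and no sane site stores plaintext, the practical options are a PHP-level compatibility hack or a password reset for every user.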
A perfect world would have also allowed me to transfer the OpenID credentials. All I should have needed to do is store the appropriate information in the Drupal user database, and everything would have worked.
But that's not how things are. Maybe things will move more smoothly in a few more years.
Akismet killed thousands of bogus comments on Cryptosmith. To post those comments, the spammers had to create user accounts. So I had a lot of bogus accounts. I sorted the wheat from the chaff by retaining only those users whose comments had actually been kept.
I also had to delete a bunch of escaped quotations that the exported AmR User list included in its CSV format.
And I deleted users who hadn't provided email addresses. After all, existing users couldn't retrieve lost passwords without a working email address.
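The cleanup above can be sketched in Python. The column names ('email', 'comment_count') are hypothetical stand-ins for whatever the AmR export actually produces; the csv module handles the quoting that I cleaned up by hand.

```python
import csv

def clean_user_list(in_path, out_path):
    """Filter an exported user CSV: drop users with no email address
    and users who never had a comment kept. The column names here are
    hypothetical; adjust them to match the actual export."""
    with open(in_path, newline='') as src, \
         open(out_path, 'w', newline='') as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            has_email = bool(row.get('email', '').strip())
            has_comments = int(row.get('comment_count') or 0) > 0
            if has_email and has_comments:
                writer.writerow(row)
```

This keeps exactly the wheat: users with a working email address and at least one kept comment.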
I installed the "User Import" module to do this. Now, this may simply reflect my ignorance of Drupal, since there seem to be some very general-purpose import mechanisms. It may be that users are a special case of "nodes" or "taxonomies" or some other basic Drupal thing, and that I could import a list of users as a special case of one of those.
In any case, the User Import module accepted the CSV from AmR User, after I massaged it with Excel and a text editor.
Drupal may - or may not - automatically send email to new users as soon as they are activated. I chose not to send such emails, since I imported the user list into my Drupal clone system while the main Cryptosmith site was still running WordPress. The automated message would have arrived too soon. Instead, I had a chance to review the imports and make sure that they looked correct. I also tested a few imported users.
Shortly before I shut down the WordPress site, I sent an email to everyone who provided an email address to the site. The message announced the change, and assured everyone that the site contents would still be available without logging in first. I also explained that I would retain login names for users who had posted comments.
Then I performed the transition. I'll talk about that experience elsewhere.
Once the transition was finished, I sent an email to all registered users. Since the updated site was based on Drupal, I needed to install a module to send the email. I chose the "Mass Contact" module. Basic Drupal includes a "Contact" module that provides a web page for contacting the site administrator or other users (depending on how it's configured). The Mass Contact module provides a similar interface to contact specific classes of site users. I configured it to send email to all site users.
Those of you who received the email were subjected to some of the, well, surprising implications of emailing from Drupal.
My bias in favor of raw-text email produced a jumble of text and HTML tags in my first attempt. The Mass Contact module also uses a throttling discipline so that it doesn't send too much email at once, which means you may have to wait a while to see the results of a mailing; messages aren't always transmitted immediately.
If you visited Cryptosmith during the afternoon of February 5, you may have seen this:
This appeared while I was removing WordPress files from the site and inserting Drupal files. The "Site Down" display was controlled by the ".htaccess" file stored in the site's root directory. As soon as Drupal stored a new .htaccess file, links were redirected to Drupal's scripts.
I constructed the Site Down page from raw HTML and a JPEG of the Cryptosmith logo. Such pages are easy to build.
To begin with, here is the HTML text for such a page:
<p align="middle"><img src="cslogo.JPG" /></p>
<h1 align="middle">SITE DOWN</h1>
<p align="middle">We are making major changes to the hosting software. The site
should return to service in a few hours. For more information, contact the
The working files for this display consist of the "index.html" file containing the above contents, a logo image referred to in the "<img src=" tag, and the .htaccess file.
The .htaccess file gives the Apache web server some specific instructions on how to respond to URLs. Those of us who rent server space from others must use .htaccess files to adjust the server's behavior. Those who actually run their own server may put this information directly in the Apache configuration file (it's more efficient that way).
If we put the following into the .htaccess file and store it in the web site's root directory, it will direct everything to the index.html page:
# Apache .htaccess setting for the 1-page web site
# Don't show directory listings for URLs which map to a directory.
Options -Indexes
# point Apache at the directory/site's home page.
DirectoryIndex index.html
# If the URL points elsewhere in the site, take it back to the home page.
ErrorDocument 404 /index.html
In the above text, the Apache server ignores the lines starting with a "#" since they are comments. Each comment explains the line following it. The "Options -Indexes" line disables directory listings: no matter how complex the site might be, visitors can't retrieve anything unless the explicit URL points to an actual file. The "DirectoryIndex" line directs Apache to the "index.html" file, which contains the contents listed earlier.
In most CMS-based web sites, including those run by WordPress, a URL path either leads to a PHP script (a file with the ".php" suffix) or it's interpreted by a PHP script at the root level. These systems use a .htaccess file that leads to an "index.php" file instead of a .html file.
When we replace the CMS .htaccess file with our own, we redirect paths to lead to "index.html." However, we've only provided one such file at the root level. The third Apache statement above, "ErrorDocument," redirects all nonexistent pages (404 errors) to our solitary home page.
Treat all .htaccess files with respect. A single syntax error may yield a "500 Internal Server Error" when you try to visit the web site from a browser. While adjusting the Drupal site configuration, I managed to stick an error in my .htaccess file, and that yielded a couple of hours of misdirected work. The mistake probably came down to an extra space in a URL specification somewhere.
The easiest way to eliminate a 500 Server Error is to delete your .htaccess file. That should allow a complete URL (one that includes the file name) to work on your site. Then go back and try to fix the .htaccess file. See if the 500 error reappears.
I created the new Drupal-based site in parallel with the existing WordPress site. Once the Drupal site seemed solid and provided the necessary content, I installed it over the old WordPress site.
I moved my hosting from a local provider to GoDaddy about five years ago. The local provider found it easier to give customers virtual hosts and let them do their own host management, while I prefer shared hosting since there's less management to do. The local provider handled its shared-hosting support via email - I'd ask for something via email and it would happen in the next few hours.
GoDaddy offered the whole package: domain registration, SSL certificates, shared hosting, and even some package installation, via a web interface. So I no longer had to change domain aliasing or host configuration via email.
I'm currently evaluating Drupal performance under GoDaddy. Page handling seems excessively slow, especially when logged in. The customer service people suggest trying their new "4GH" service. This apparently hosts the site on scalable multiprocessors.
The transition process moved the Drupal prototype site onto my main Cryptosmith domain. This was the cheapest and easiest way for me to do it.
Like true love, the course of migration rarely runs smooth.
I used Cyberduck to copy my web site's files down to my desktop, and Cyberduck choked when it hit access-restricted files. It's annoying to perform a large-scale file copy only to have it die tragically partway through the process. I had to restart the process and carve around the access-restricted files. I'm not exactly sure why GoDaddy has stuck a few files in my site directory over which I don't have control, but that's what I found.
Once I had copied the needed files over to the Cryptosmith domain, I was greeted with a series of "500 Internal Server Error" messages. Instead of immediately searching help fora for obvious causes of such a message, I repeated my migration steps a few times, hoping to eliminate the problem through brute force.
The typical cause of "500 Internal Server Error" is a flaw in a .htaccess file. Once I went back to an earlier .htaccess file and took more care with my editing, the problem went away and the new site went live.