Miscellaneous INFOSEC

"Design Patterns" for Identity Systems

These are design patterns in the Christopher Alexander sense rather than the object-oriented design sense: they address the physical and network environment rather than focusing on software abstractions. The patterns were introduced in my book Authentication.

There are four patterns: local, direct, indirect, and off-line.

Here is a brief description of each authentication pattern:

  • Local: All checking takes place within the physical confines of the device. For example, a password-protected home computer follows this pattern because it typically stores the user name and password inside the physical confines of the computer. A door lock with a key pad usually follows this pattern since the unlock code is stored inside the lock.
  • Direct: A pattern used in remote access: the client contacts a server, and the server performs the checking itself based on information it maintains itself. Blogging software typically does this, since each blog maintains its own database of user names and corresponding passwords.
  • Indirect: Another remote access pattern: the client contacts the server, but the server relies on a separate authentication server to verify the client user's identity. There are many examples of this: OpenID, RADIUS, Kerberos, TACACS, Microsoft domains, etc.
  • Off-line: Another remote access pattern: the client contacts the server, and the server relies on cryptographic information to authenticate the client's user without contacting a third party. Browsers use this pattern to validate a server's certificate: they rely on the built-in list of certificate authorities.
These patterns are useful because they capture key distinctions between different authentication systems:
  • Local versus the others: Local systems rely on physical security to protect the integrity of the authentication operation. The other patterns involve remote access and must be designed to deal with sniffing, replay, and other network-based risks.
  • Local/Direct versus Indirect/Off-line: This reflects the trade-off of relying on third parties to manage authentication. In the Indirect pattern we rely on a central authentication server; in the off-line pattern we rely on a third party to issue cryptographic certificates.
  • Direct versus indirect: This reflects the server trade-off between local and third-party management of authentication. Neither is ideal for all cases.
  • Off-line versus the other remote patterns: This reflects the benefits of generally autonomous operation versus the problems of autonomy, in particular, revocation.
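To make the distinctions concrete, here is a minimal Python sketch of where the checking happens in each pattern. Everything in it is hypothetical and illustrative - the stores, the auth_server.verify() call, and the ca.signed() helper are stand-ins rather than any real product's API, and real systems would store password hashes rather than plaintext:

    # Hypothetical sketches: each function shows *where* the checking happens.

    def local_check(device_store, username, password):
        # Local: the credential database lives inside the device itself.
        return device_store.get(username) == password

    def direct_check(server_db, username, password):
        # Direct: the remote server consults its own user database.
        return server_db.get(username) == password

    def indirect_check(auth_server, username, password):
        # Indirect: the server defers to a separate authentication server
        # (RADIUS, Kerberos, OpenID, and so on).
        return auth_server.verify(username, password)

    def offline_check(ca_certs, certificate):
        # Off-line: validate a credential cryptographically against
        # built-in authorities, contacting no third party at use time.
        return any(ca.signed(certificate) for ca in ca_certs)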
Multilevel Security

Multilevel security (MLS) is a technology to protect secrets from leaking between computer users, when some are allowed to see those secrets and others are not. This is generally used in defense applications (the military and intelligence communities) since nobody else is nearly as paranoid about data leaking. A modern wrinkle on this is called cross domain systems (CDS), in which we speak of domains instead of levels and usually share data on computer networks instead of individual computers.

Personally, I was introduced to MLS through my work on the LOCK trusted computing system in the early 1990s.

Here are some MLS materials available on this site:

  • A general introduction to MLS
  • A commentary about MLS based on meetings with defense security experts in the late 1990s, when we tried to map a new way forward.
  • An e-mail about MLS and Internet server protection written for the old Firewalls mailing list in the 1990s.
  • MLS versus Type Enforcement versus chroot() - a paper (PDF) comparing the use of MLS, Type Enforcement, and Unix chroot() for protecting Internet servers. The paper was based on an e-mail discussion from the old "Firewalls" mailing list. The e-mail noted above was part of that discussion.
  • Building a high assurance system - a paper (PDF) on the cost of building a high assurance MLS system.
  • The Standard Mail Guard - a paper (PDF) on an almost off-the-shelf MLS product.

Note that some people like to spell it "multi-level security." I think the term is old enough that we can omit the hyphen.

Ambiguous terminology

Several years ago I was at a workshop sponsored by the Air Force to develop some new directions for information systems improvements. The workshop included both "end user" representatives from the Air Force and "R&D" representatives from laboratories and government contractors.

Discussions on MLS capabilities became rather heated. One vendor representative from the security working group declared the following in a plenary session:

"Don't ask for MLS. We've tried to give you MLS, but in fact you've never really wanted it or used it. But please, tell us what you do want!"

A voice in the back shouted, "MLS!"

That little incident reflects an important fact about MLS: it's an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less. In her well-known paper on software safety, Nancy Leveson criticizes this type of labeling:

Labeling a technique, e.g., "software diversity" or "expert system," with the property we hope to achieve by it (and need to prove about it) is misleading and unscientific.

Unfortunately, we're stuck with the established terminology, so now we must focus on distinguishing between the two meanings.

 

This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.

MLS Introduction

Multilevel security (MLS) has posed a challenge to the computer security community since the 1960s. MLS sounds like a mundane problem in access control: allow information to flow freely between recipients in a computing system who have appropriate security clearances while preventing leaks to unauthorized recipients. However, MLS systems incorporate two essential features: first, the system must enforce these restrictions regardless of the actions of system users or administrators, and second, MLS systems strive to enforce these restrictions with incredibly high reliability. This has led developers to implement specialized security mechanisms and to apply sophisticated techniques to review, analyze, and test those mechanisms for correct and reliable behavior.

Despite this, MLS systems have rarely provided the degree of security desired by their most demanding customers in the military services, intelligence organizations, and related agencies. The high costs associated with developing MLS products, combined with the limited size of the user community, have also prevented MLS capabilities from appearing in commercial products.

Portions of this article also appear as Chapter 205 of the Handbook of Information Security, Volume 3, Threats, Vulnerabilities, Prevention, Detection and Management, Hossein Bidgoli, ed., ISBN 0-471-64832-9, John Wiley, 2006.

The MLS Problem

Many businesses and organizations need to protect secret information, and most can tolerate some leakage. Organizations who use MLS systems tolerate no leakage at all. Businesses may face legal or financial risks if they fail to protect business secrets, but they can generally recover afterwards by paying to repair the damage. At worst, the business goes bankrupt. Managers who take risks with business secrets might lose their jobs if secrets are leaked, but they are more likely to lose their jobs to failed projects or overrun budgets. This places a limit on the amount of money a business will invest in data secrecy.

 

The defense community, which includes the military services, intelligence organizations, related government agencies, and their supporting enterprises, cannot easily recover from certain information leaks. Stealth systems aren't stealthy if the targets know what to look for, and surveillance systems don't see things if the targets know what camouflage to use. Such failures can't always be corrected just by spending more money. Even worse, a system's weakness might not be detected until after a diplomatic or military disaster reveals it. During the Cold War, the threat of nuclear annihilation led military and political leaders to take such risks very seriously. It was easy to argue that data leakage could threaten a country's very existence. The defense community demanded levels of computer security far beyond what the business community needed.

We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.

Figure 1: The hierarchical security levels

When speaking about these levels, we use three different terms:

  • Clearance level indicates the level of trust given to a person with a security clearance, or a computer that processes classified information, or an area that has been physically secured for storing classified information. The level indicates the highest level of classified information to be stored or handled by the person, device, or location.
  • Classification level indicates the level of sensitivity associated with some information, like that in a document or a computer file. The level is supposed to indicate the degree of damage the country could suffer if the information is disclosed to an enemy.
  • Security level is a generic term for either a clearance level or a classification level.

The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system's hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.

Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time others were working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.

A Trojan horse is software that performs an invisible function that the user would not have chosen to perform. For example, consider a multiuser system in which users have stored numerous private files and have used the system's access permissions to protect those files from prying eyes. Imagine that the author of a locally-developed word processing program has an unhealthy curiosity about others in the user community and wishes to read their protected files. The author can install a Trojan horse function in the word processing program to retrieve the protected files. The function copies a user's private files into the author's own directory whenever a user runs the word processing program.

Unfortunately, this is not a theoretical threat. The "macro" feature in modern word processors like Microsoft Word allows users to create arbitrarily complicated software procedures and attach them to word processing documents. When another user opens a document containing the macro, the word processor executes the procedure defined by the macro. This was the basis of "macro viruses" like the "Melissa" virus of 1999. A macro can perform all the functions required of a Trojan horse program, including copying files.

When a user runs the word processing program, the program inherits that user's access permissions to the user's own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there's nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.

Multiuser Operating Modes

In the United States, the defense community usually describes a multiuser system as operating in a particular mode. For the purposes of this discussion, there are three important operating modes:

  • Dedicated mode - all users currently on the system have permission to access any of the data on the system. In dedicated mode, the computer itself does not need any built-in access control mechanisms if locked doors or other physical mechanisms prevent unauthorized users from accessing it.
  • System high mode - all users currently on the system have the right security clearance to access any data on the system, but not all users have a need to know all data. If users don't need to know some of the data, then the system must have mechanisms to restrict their access. This requires the file access mechanisms typical of multiuser systems.
  • Multilevel mode - not all users currently on the system are cleared for all data stored on the system. The system must have an access control mechanism that enforces MLS restrictions. It must also have mechanisms to enforce multiuser file access restrictions.

The phrase "need to know" refers to a commonly-enforced rule in organizations that handle classified information. In general, a security clearance does not grant blanket permission to look at all information classified at that level or below. The clearance is really only the first step: people are only allowed to look at classified information that they need to know as part of the work they do.

In other words, if we give Janet a Secret clearance to work on a cryptographic device, then she has a need to know the Secret information related to that device. She does not have permission to study Secret information about spy satellites, or Secret cryptographic information that doesn't apply to her device. If she is using a multiuser system containing Secret information about her project, other cryptographic projects, and even spy satellites, then the system must prevent Janet from browsing information belonging to the other projects and activities. On the other hand, the system should be able to grant Janet permission to look at other materials if she really needs the information to do her job.

A computer's operating mode determines what access control mechanisms it needs. Dedicated systems might not require any mechanisms beyond physical security. Computers running at system high must have user-based access restrictions like those typically provided in Unix and in "professional" versions of Microsoft Windows. In multilevel mode, the system must prevent data from higher security levels from leaking to users who have lower clearances: this requires a special mechanism.

Typically, an MLS mechanism works as follows: Users, computers, and networks carry computer-readable labels to indicate security levels. Data may flow from "same level" to "same level" or from "lower level" to "higher level" (Figure 2). Thus, Top Secret users can share data with one another, and a Top Secret user can retrieve information from a Secret user. It does not allow data from Top Secret (a higher level) to flow into a file or other location visible to a Secret user (at a lower level). If data is not subject to classification rules, it belongs to the "Unclassified" security level. On a computer this would include most application programs and any computing resources shared by all users.

Figure 2: Data flows allowed by an MLS mechanism

A direct implementation of such a system allows the author of a Top Secret report to retrieve information entered by users operating at Secret or Confidential and merge it with Top Secret information. The user with the Secret clearance cannot "read up" to see the Top Secret result, since the data only flows in one direction between Secret and Top Secret. Unclassified data can be made visible to all users.

It isn't enough to simply prevent users with lower clearances from reading data carrying higher classifications. What if a user with a Top Secret clearance stores some Top Secret data in a file readable by a Secret user? This causes the same problem as "reading up" since it makes the Top Secret data visible to the Secret user. Some may argue that Top Secret users should be trusted not to do such a thing. In fact, some would argue that they would never do it because it's a violation of the Espionage Act. Unfortunately, this argument does not take into account the risk of a Trojan horse.

For example, Figure 3 shows what could happen if an attacker inserts a macro function with a Trojan horse capability into a word processing file. The attacker has stored the macro function in a Confidential file and has told a Top Secret user to examine the file. When the user opens the Confidential file, the macro function starts running, and it tries to copy files from the Top Secret user's directory into Confidential files belonging to the attacker. This is called "writing down" the data from a higher security level to a lower one.

Figure 3: A macro function with a Trojan horse could leak data to a lower security level

In a system with typical access control mechanisms, the macro will succeed since the attacker can easily set up all of the permissions needed to allow the Top Secret user to write data into the other user's files. Clearly, the system cannot enforce MLS reliably if Trojan horse programs can circumvent MLS protections. There is no way users can avoid Trojan horse programs with 100% reliability, as suggested by the success of e-mail viruses. An effective MLS mechanism needs to block "write down" attempts as well as "read up" attempts.

Bell-LaPadula Model

The most widely recognized approach to MLS is the Bell-LaPadula security model (Bell and La Padula, 1974). The model effectively captures the essentials of the access restrictions implied by conventional military security levels. Most MLS mechanisms implement Bell-LaPadula or a close variant of it. Although Bell-LaPadula accurately defines an MLS capability that keeps data safe, it has not led to the widespread development of successful multilevel systems. In practice, developers have not been able to produce MLS mechanisms that work reliably with high confidence, and some important defense applications require a "write down" capability that renders Bell-LaPadula irrelevant (for example, see the later section "Sensor to Shooter").

In the Bell-LaPadula model, programs and processes (called subjects) try to transfer information via files, messages, I/O devices, or other resources in the computer system (called objects). Each subject and object carries a label containing its security level, that is, the subject's clearance level or the object's classification level. In the simplest case, the security levels are arranged in a hierarchy as shown earlier in Figure 1. More elaborate cases involve compartments, as described in a later section.

The Bell-LaPadula model enforces MLS access restrictions by implementing two simple rules: the simple security property and the *-property. When a subject tries to read from or write to an object, the system compares the subject's security label with the object's label and applies these rules. Unlike typical access restrictions on multiuser computing systems, these restrictions are mandatory: no users on the system can turn them off or bypass them. Typical multiuser access restrictions are discretionary, that is, they can be enabled or disabled by system administrators and often by individual users. If users, or even administrative users, can modify the access rules, then a Trojan horse can modify or even disable those rules. To prevent leakage by browsing users and Trojan horses, Bell-LaPadula systems always enforce the two properties.

Simple Security Property:

A subject can read from an object as long as the subject's security level is the same as, or higher than, the object's security level. This is sometimes called the no read up property.

*-Property:

A subject can write to an object as long as the subject's security level is the same as or lower than the object's security level. This is sometimes called the no write down property.

The simple security property is obvious: it prevents people (or their processes) from reading data whose classification exceeds their security clearances. Users can't "read up" relative to their security clearances. They can "read down," which means that they can read data classified at or below the same level as their clearances.

The *-property prevents people with higher clearances from passing highly classified data to users who don't share the appropriate clearance, either accidentally or intentionally. User programs can't "write down" into files that carry a lower security level than the process they are currently running. This prevents Trojan horse programs from secretly leaking highly classified data. Figure 4 illustrates these properties: the dashed arrows show data being read in compliance with the simple security property, and the lower solid arrow shows an attempted "write down" being blocked by the *-property.

Figure 4: If the program can read Top Secret data, it can't write data to a lower level

A system enforcing the Bell-LaPadula model blocks the word processing macro in either of two ways, depending on the security level at which the user runs the word processing program. In one case, shown in Figure 4, the user runs the program at Top Secret. This allows the macro to read the user's Top Secret files, but the *-property prevents the macro from writing to the attacker's Confidential files. When the process tries to open a Confidential file for writing, the MLS access rules prevent it. In the other case, the Top Secret user runs the program at the Confidential level, since the file is classified Confidential. The program's macro function can read and modify Confidential files, including files set up by the attacker, but the simple security property prevents the macro from reading any Secret or Top Secret files.
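In miniature, the two properties reduce to a pair of comparisons on the label hierarchy. Here is a sketch in Python covering hierarchical levels only (compartments follow in the next section); the level numbering and function names are illustrative, not any particular product's interface:

    # Hierarchical levels, ordered from low to high.
    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def may_read(subject_level, object_level):
        # Simple security property: no read up.
        return LEVELS[subject_level] >= LEVELS[object_level]

    def may_write(subject_level, object_level):
        # *-property: no write down.
        return LEVELS[subject_level] <= LEVELS[object_level]

    # The word processing macro of Figure 4, running at Top Secret:
    assert may_read("Top Secret", "Top Secret")         # reads the user's files
    assert not may_write("Top Secret", "Confidential")  # write down is blocked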

Compartments

Although the hierarchical security levels like Top Secret are familiar to most people, they are not the only restrictions placed on information in the defense community. Organizations apply hierarchical security levels (Confidential, Secret, or Top Secret) to their data according to the damage that might be caused by leaking that data. Some organizations add other markings to classified material to further restrict its distribution. These markings go by many names: compartments, codewords, caveats, categories, and so on, and they serve many purposes. In some cases, the markings indicate whether or not the data may be shared with particular organizations, enterprises, or allied countries. In many cases these markings give the data's creator or owner more control over the data's distribution. Each marking indicates another restriction placed on the distribution of a particular classified data item. People can only receive the classified data if they comply with all restrictions placed on the data's distribution.

The Bell-LaPadula model refers to all of these additional markings as compartments. A security level may include compartment identifiers in addition to a hierarchical security level. If a particular file's security level includes one or more compartments, then the user's security level must also include those compartments or the user won't be allowed to read the file.

A system with compartments generally acquires a large number of distinct security levels: one for every legal combination of a hierarchical security level with zero or more compartments. The interrelationships between these levels form a directed graph called a lattice. Figure 5 shows the lattice for a system that contains Secret and Top Secret information with compartments Ace and Bar.

The arrows in the lattice show which security levels can read data labeled with other security levels. If the user Cathy has a Top Secret clearance with access to both compartments Ace and Bar, then she has permission to read any data on the system (assuming its owner has also given her "read" permission to that data). We determine the access rights associated with other security labels by following arrows in Figure 5.

Figure 5: Lattice for Secret and Top Secret including the compartments Ace and Bar

If Cathy runs a program with the label Secret Ace, then the program can read data labeled Unclassified, Secret, or Secret Ace. The program can't read data labeled Secret Bar or Secret Ace Bar, since its security label doesn't contain the Bar compartment. Figure 5 illustrates this: there is no path to Secret Ace that comes from a label containing the Bar compartment. Likewise, the program can't read Top Secret data because it is running at the Secret level, and no Top Secret labels lead to Secret labels.
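In code, the lattice ordering is just a "dominates" test: one label may read data under another label only if its hierarchical level is at least as high and its compartment set is a superset. A sketch reusing the LEVELS table from the earlier sketch and the Cathy example above:

    def dominates(level_a, comps_a, level_b, comps_b):
        # True when label A dominates label B, i.e., A may read B's data.
        return LEVELS[level_a] >= LEVELS[level_b] and comps_a >= comps_b

    # Cathy's program runs at Secret Ace:
    assert dominates("Secret", {"Ace"}, "Secret", set())            # plain Secret
    assert not dominates("Secret", {"Ace"}, "Secret", {"Bar"})      # Secret Bar
    assert not dominates("Secret", {"Ace"}, "Top Secret", {"Ace"})  # no read up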

MLS and System Privileges

A high security clearance like Cathy's shows that a particular organization is willing to trust Cathy with certain types of classified information. It is not a blank check that grants access to every resource on a computer system. MLS access rules always work in conjunction with the system's other access rules.

Systems that enforce MLS access rules always combine them with conventional, user-controlled access permissions. If a Secret or Confidential user blocks access to a file by other users, then a Top Secret user can't read the file either. The "need to know" rule means that classified information should only be shared among individuals who genuinely need the information. Individual users are supposed to keep classified information protected from arbitrary browsing by other users. Higher security clearances do not grant permission to arbitrarily browse: access is still restricted by the need to know requirement.

Users who have Top Secret and higher clearances don't automatically acquire administrative or "super user" status on multilevel computer systems, even if they are cleared for everything on the computer. In a way, the Top Secret security level actually restricts what the user can do: a program running at Top Secret can't install an unclassified application program, for example. Many administrative tasks, like installing application programs and other shared resources, must take place at the unclassified level. If an administrator installs programs while running at the Top Secret level, then the programs will be installed with Top Secret labels. Users with lower clearances wouldn't be authorized to see the programs.
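Putting the two kinds of rules together, an access must pass both checks. A minimal sketch, building on the dominates() function above; the Label class and the ACL representation are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Label:
        level: str
        comps: frozenset = frozenset()

    def access_allowed(subj_label, subj_user, obj_label, obj_acl, mode):
        # Mandatory MLS check: a read requires the subject's label to
        # dominate the object's; a write requires the reverse (no write down).
        if mode == "read" and not dominates(subj_label.level, subj_label.comps,
                                            obj_label.level, obj_label.comps):
            return False
        if mode == "write" and not dominates(obj_label.level, obj_label.comps,
                                             subj_label.level, subj_label.comps):
            return False
        # Discretionary "need to know" check: the owner-controlled ACL.
        return (subj_user, mode) in obj_acl

    acl = {("janet", "read")}
    assert access_allowed(Label("Secret"), "janet", Label("Secret"), acl, "read")
    assert not access_allowed(Label("Top Secret"), "janet",
                              Label("Secret"), acl, "write")  # write down blocked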

Limitations in MLS Mechanisms

Despite strong support from the military community and a strong effort by computing vendors and computer security researchers, MLS mechanisms failed to provide the security and functionality required by the defense community. First, security researchers and MLS system developers found it to be extremely difficult, and perhaps impossible, to completely prevent information flow between different security levels in an MLS system. We will explore this problem further in the next section on "Assurance." A second problem was the virus threat: enforcing MLS information flow does nothing to prevent a virus introduced at a lower clearance level from propagating into higher clearance levels. Finally, the end user community found a number of cases where the Bell-LaPadula model of information flow did not entirely satisfy their operational and security needs.

Self-replicating software like computer viruses became a minor phenomenon among the earliest home computer users in the late 1970s. While viruses weren't enough of a phenomenon to interest MLS researchers and developers, MLS systems caught the interest of the pioneering virus researcher Fred Cohen (1990, 1994). In 1984 he demonstrated that a virus inserted at the unclassified level of a system that implemented the Bell-LaPadula model could rapidly spread throughout all security levels of the system. This particular infestation did not reflect a bug in the MLS implementation. Instead, it indicated a flaw in the Bell-LaPadula model, which strives to allow information flows from low to high while preventing flows from high to low. Viruses exploit an information flow from low to high, so MLS protection based on Bell-LaPadula provides no protection against them.

Viruses represented one case in which the Bell-LaPadula model did not meet the end users' operational and security needs. Additional cases emerged as end users gained experience with MLS systems. One problem was that the systems tended to collect a lot of "overclassified" information. Whenever a user created a document at a high security level, the document would have to retain that security level even if the user removed all sensitive information in order to create a less-classified or even unclassified document. In essence, end users often needed a mechanism to "downgrade" information so its label reflected its lowered sensitivity.

The downgrading problem became especially important as end users sought to develop "sensor to shooter" systems. These systems would use highly classified intelligence data to produce tactical commands to be sent to combat units whose radios received information at the Secret level or lower (see the later section on "Sensor to Shooter"). In practice, systems would address the downgrading problem by installing privileged programs that bypassed the MLS mechanism to downgrade information. While this served as a convenient patch to correct the problem, it also showed that practical systems did not entirely rely on the Bell-LaPadula mechanisms that had cost so much to build and validate. This further eroded the defense community's interest in MLS products based on Bell-LaPadula.

The MLS Assurance Problem

Members of the defense community identified the need for MLS-capable systems in the 1960s, and a few vendors implemented the basic features (Weissman 1969, Hoffman 1973, Karger and Schell 1974). However, government studies of the MLS problem emphasized the danger of relying on large, opaque operating systems to protect really valuable secrets (Ware 1970, Anderson 1972). Operating systems were already notorious for unreliability, and these reports highlighted the threat of a software bug allowing leaks of highly sensitive information. The recommended solution was to achieve high assurance through extensive analysis, review, and testing.

High assurance would clearly increase vendors' development costs and lead to higher product costs. This did not deter the US defense community, which foresaw long-term cost savings. Karger and Schell (1974) repeated an assertion that MLS capabilities could save the US Air Force alone $100,000,000 a year, based on computing costs at that time.

Every MLS device poses a fundamental question: does it really enforce MLS, or does it leak information somehow? The first MLS challenge is to develop a way to answer that question. We can decompose the problem into two more questions:

  • What does it really mean to enforce MLS?
  • How can we evaluate a system to verify that it enforces MLS?

The first question was answered by the development of security models, like the Bell-LaPadula model summarized earlier. A true security model provides a formal, mathematical representation of MLS information flow restrictions. The formal model makes the enforcement problem clear to non-programmers. It also makes the operating requirement clear to the programmers who implement the MLS mechanisms.

To address the evaluation question, designers needed a way to prove that the system's MLS controls indeed work correctly. By the late 1960s, this had become a really serious challenge. Software systems had become much too large for anyone to review and validate: Brooks (1975) reported that IBM had over a thousand people working on its groundbreaking system, OS/360; in The Mythical Man-Month he described the difficulties of building a large-scale software system. Project size wasn't the only challenge in building reliable and secure software: smaller teams, like the team responsible for the Multics security mechanisms, could not detect and close every vulnerability (Karger and Schell, 1974).

The security community developed two sets of strategies for evaluating MLS systems: strategies for designing a reliable MLS system and strategies to prove the MLS system works correctly. The design strategies emphasized a special structure to ensure uniform enforcement of data access rules, called the reference monitor. The design strategies further required that the designers explicitly identify all system components that played a role in enforcing MLS; those components were defined as being part of the trusted computing base, which included all components that required high assurance.

The strategies for proving correctness relied heavily on formal design specifications and on techniques to analyze those designs. Some of these strategies were a reaction to ongoing quality control problems in the software industry, but others were developed as an attempt to detect covert channels, a largely unresolved weakness in MLS systems.

System Design Strategies

During the early 1970s, the US Air Force commissioned a study to develop feasible strategies for constructing and verifying MLS systems. The study pulled together significant findings by security researchers at that time into a report, called the Anderson report (1972), which heavily influenced subsequent US government support of MLS systems. A later study (Nibaldi 1979) identified the most promising strategies for trusted system development and proposed a set of criteria for evaluating such systems.

These proposals led to published criteria for developing and evaluating MLS systems called the Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book" (Department of Defense, 1985a). The US government established a process by which computer system vendors could submit their products for security evaluation. A government organization, the National Computer Security Center (NCSC), evaluated products against the TCSEC and rated the products according to their capabilities and trustworthiness. For a product to achieve the highest rating for trustworthiness, the NCSC needed to verify the correctness of the product's design.

To make design verification feasible, the Anderson report recommended (and the TCSEC required) that MLS systems enforce security through a "reference validation mechanism" that today we call the reference monitor. The reference monitor is the central point that enforces all access permissions. Specifically, a reference monitor must have three features:

  • It must be tamperproof - there must be no way for attackers or others on the system to intentionally or accidentally disable it or otherwise interfere with its operation.
  • It must be nonbypassable - all accesses to system resources must be mediated by the reference monitor. There must be no way to gain access to system resources except through mechanisms that use the reference monitor to make access control decisions.
  • It must be verifiable - there must be a way to convince third-party evaluators (like the NCSC) that the system will always enforce MLS correctly. The reference monitor should be small and simple enough in design and implementation to make verification practical.

Operating system designers had by that time recognized the concept of an operating system kernel: a portion of the system that made unrestricted accesses to the computer's resources so that other components didn't need unrestricted access. Many designers believed that a good kernel should be small for the same reason as a reference monitor: it's easier to build confidence in a small software component than in a large one. This led to the concept of a security kernel: an operating system kernel that incorporated a reference monitor. Layered atop the security kernel would be supporting processes and utility programs to serve the system's users and the administrators. Some non-kernel software would require privileged access to system resources, but none would bypass the security kernel. The combination of the computer hardware, the security kernel, and its privileged components made up the trusted computing base (TCB) - the system components responsible for enforcing MLS restrictions. The TCB was the focus of assurance efforts: if it worked correctly, then the system would correctly enforce the MLS restrictions.
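In software terms, the reference monitor is a single choke point through which every request must pass. A toy sketch of the structure (not any real kernel's interface), reusing the hypothetical access_allowed() check from the earlier sketch; process, file, and raw_open() are likewise illustrative stand-ins:

    class ReferenceMonitor:
        # Tamperproof: it lives inside the TCB, beyond the reach of user code.
        # Nonbypassable: every kernel entry point consults it before acting.
        # Verifiable: small and simple enough to review and prove correct.
        def check(self, subj_label, subj_user, obj_label, obj_acl, mode):
            return access_allowed(subj_label, subj_user, obj_label, obj_acl, mode)

    MONITOR = ReferenceMonitor()

    def sys_open(process, file, mode):
        # A kernel entry point never touches the resource directly.
        if not MONITOR.check(process.label, process.user,
                             file.label, file.acl, mode):
            raise PermissionError("denied by reference monitor")
        return raw_open(file, mode)  # hypothetical low-level resource access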

Verifying System Correctness

Research culminating in the TCSEC had identified three essential elements to ensure that an MLS system operates correctly:

  • Security policy - an explicit statement of what the system must do. This was based on a security model that described how the system enforced MLS.
  • Security mechanisms - features of the reference monitor and the TCB that enforce the policy.
  • Assurances - evidence that the mechanisms actually enforce the policy.

The computer industry has always relied primarily on system testing for quality assurance. However, the Anderson report recognized the shortcomings of testing by repeating Dijkstra's observation that tests can only prove the presence of bugs, not their absence. To improve assurance, the report made specific recommendations about how MLS systems should be designed, built, and tested. These recommendations became requirements in the TCSEC, particularly for products intended for the most critical applications:

  • Top-down design - the product design should have a top-level design specification and a detailed design specification.
  • Formal policy specification - there must be a formal specification of the product's security policy. This may be a restatement of Bell-LaPadula that identifies the elements of the product that correspond to elements in the Bell-LaPadula model.
  • Formal top-level specification - there must be a formal specification of the product's externally visible behavior.
  • Design correctness proof - there must be a mathematical proof showing that the top-level design is consistent with the security policy.
  • Specification to code correspondence - there must be a way to show that the mechanisms appearing in the formal top-level specification are implemented in the source code.

These activities were not substitutes for conventional product development techniques. Instead, these additional tasks were combined with the accepted "best practices" used in conventional computer system development. These practices tended to follow a "waterfall" process (Boehm, 1981; Department of Defense, 1985b): first, the builders develop a requirements specification, from that they develop the top-down design, then they implement the product, and finally they test the product against the requirements. In the idealized process for developing an MLS product, the requirements specification focuses on testable functions and measurable performance capabilities, while the policy model captures security requirements that can't be tested directly. Figure 6 shows how these elements work together to validate an MLS product's correct operation.
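For a taste of what a formal policy specification looks like, the two Bell-LaPadula conditions can be stated in a few lines of notation (a simplified rendering, not the full 1974 formalism). With S the subjects, O the objects, B the set of accesses currently in progress, L the labeling function, and the ordering being label dominance:

    \forall s \in S,\ \forall o \in O:
        (s, o, \mathrm{read})  \in B \;\Rightarrow\; L(s) \succeq L(o)
            \quad \text{(simple security property)}
        (s, o, \mathrm{write}) \in B \;\Rightarrow\; L(o) \succeq L(s)
            \quad \text{(*-property)}

A design correctness proof then shows that every state transition the system permits preserves both conditions - the form of Bell and LaPadula's "basic security theorem."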

Product development has always been expensive. Many development organizations, especially smaller ones, try to save time and money by skipping the planning and design steps of the waterfall process. The TCSEC did not demand the waterfall process, but its requirements for highly assured systems imposed significant costs on development organizations. Both the Nibaldi study and the TCSEC recognized that not all product developers could afford to achieve the highest levels of assurance. Instead, the evaluation process identified a range of assurance levels that a product could achieve. Products intended for less-critical activities could spend less money on their development process and achieve a lower standard of assurance. Products intended for the most critical applications, however, were expected to meet the highest practical assurance standard.

Figure 6: Interrelation of policy, specifications, implementation, and assurances

Covert Channels

Shortly after the Anderson report appeared, Lampson (1973) published a note which examined the general problem of keeping information in one program secret from another, a problem at the root of MLS enforcement. Lampson noted that computer systems contain a variety of channels by which two processes might exchange data. In addition to explicit channels like the file system or interprocess communications services, there are covert channels that can also carry data between processes. These channels typically exploit operating system resources shared among all processes. For example, when one process takes exclusive control of a file, other processes can observe that the file is locked, and when one process uses up all the free space on the hard drive, other processes will "see" this activity.

Since MLS systems could not achieve their fundamental objective (to protect secrets) if covert channels were present, defense security experts developed techniques to detect such channels. The TCSEC required a covert channel analysis of all MLS systems except those achieving the lowest assurance levels.

  • In general, there are two categories of covert channels: storage channels and timing channels. A storage channel transmits data from a "high" process to a "low" one by writing data to a storage location visible to the "low" one. For example, if a Secret process can see how much memory is left after a Top Secret process allocates some memory, the Top Secret process can send a numeric message by allocating or freeing the amount of memory equal to the message's numeric value. The covert channel consists of the "high" process setting the contents of a storage location (the size of free memory) to a value that the "low" process can read.
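Here is a toy simulation of that memory-allocation channel in Python, with a shared counter standing in for the operating system's free-memory statistic (a real channel would use actual allocation calls and would have to contend with noise from other processes):

    class SharedSystem:
        # The free-memory statistic is visible to every process.
        def __init__(self, total=1000):
            self.free = total
        def alloc(self, n):
            self.free -= n
        def release(self, n):
            self.free += n

    state = SharedSystem()

    def high_send(value):
        # "High" encodes a number by allocating exactly that much memory.
        state.alloc(value)

    def low_receive(baseline):
        # "Low" reads the number back out of the free-memory statistic.
        return baseline - state.free

    baseline = state.free
    high_send(42)                       # the Top Secret side signals 42
    assert low_receive(baseline) == 42  # the Secret side receives it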

A timing channel is one in which the "high" process communicates to the "low" one by varying the timing of some detectable event. For example, the Top Secret process might instruct the hard drive to visit particular disk blocks. When the Secret process goes to read data from the hard drive, the Top Secret process's activity will cause varying delays in the Secret program's own disk requests. The Top Secret program can systematically impose delays on the Secret program's disk activities, and thus transmit information through the pattern of those delays. Wray (1991) describes a covert channel based on hard drive access speed, and also uses the example to show how ambiguous the two covert channel categories can be.
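The timing variant can be simulated the same way: the "high" side encodes each bit as the presence or absence of a delay, and the "low" side recovers the bits with a clock. A toy illustration (a real channel would modulate contention for the disk arm rather than call sleep):

    import time

    SLOT = 0.05  # seconds per bit

    def high_send_bits(bits):
        # "High" side: a delay during the slot means 1, no delay means 0.
        for b in bits:
            time.sleep(SLOT if b else 0)
            yield  # hand the time slot over to the receiver

    def low_receive_bits(sender, n):
        # "Low" side: recover each bit by timing the sender's slot.
        received = []
        for _ in range(n):
            start = time.monotonic()
            next(sender)
            received.append(1 if time.monotonic() - start > SLOT / 2 else 0)
        return received

    assert low_receive_bits(high_send_bits([1, 0, 1]), 3) == [1, 0, 1]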

The fundamental strategy for seeking covert channels is to inspect all shared resources in the system, decide whether any could yield an effective covert channel, and measure the bandwidth of whatever covert channels are uncovered. While a casual inspection by a trained analyst may often uncover covert channels, there is no guarantee that a casual inspection will find all such channels. Systematic techniques help increase confidence that the search has been comprehensive. An early technique, the shared resource matrix (Kemmerer, 1983; 2002), can analyze a system from either a formal or informal specification. While the technique can detect covert storage channels, it cannot detect covert timing channels. An alternative approach, noninterference, requires formal policy and design specifications (Haigh and Young, 1987). This technique locates both timing and storage channels by proving theorems to show that processes in the system, as described in the design specification, can't perform detectable ("interfering") actions that are visible to other processes in violation of MLS restrictions.

To be effective at locating covert channels, the design specification must accurately model all resource sharing that is visible to user processes in the system. Typically, the specification focuses its attention on the system functions made available to user processes: system calls to manipulate files, allocate memory, communicate with other processes, and so on. The development program for the LOCK system (Saydjari, Beckman, and Leaman, 1989; Saydjari, 2002), for example, included the development of a formal design specification to support a covert channel analysis. The LOCK design specification identified all system calls, described all inputs and outputs produced by these calls, including error results, and represented the internal mechanisms necessary to support those capabilities. The LOCK team used a form of noninterference to develop proofs that the system enforced MLS correctly (Fine, 1994).

As with any flaw detection technique, there is no way to confirm that all flaws have been found. Techniques that analyze the formal specification will detect all flaws in that specification, but there is no way to conclusively prove that the actual system implements the specification perfectly. Techniques based on less-formal design descriptions are also limited by the quality of those descriptions: if the description omits a feature, there's no way to know if that feature opens a covert channel. At some point there must be a trade-off between the effort spent on searching for covert channels and the effort spent searching for other system flaws.

In practice, system developers have found it almost impossible to eliminate all covert channels. While evaluation criteria encourage developers to eliminate as many covert channels as possible, the criteria also recognize that practical systems will probably include some channels. Instead of eliminating the channels, developers must identify them, measure their possible bandwidth, and provide strategies to reduce their potential for damage. While not all security experts agree that covert channels are inevitable (Proctor and Neumann, 1992), typical MLS products contain covert channels. Thus, even the approved MLS products contain known weaknesses.

Evaluation, Certification, and Accreditation

How does assurance fit into the process of actually deploying a system? In theory, one can plug a computer in and throw the switch without knowing anything about its reliability. In the defense community, however, a responsible officer must approve all critical systems before they can go into operation, especially if they handle classified information. Approval rarely occurs unless the officer receives appropriate assurance that the system will operate correctly. There are three major elements to this approval process in the US defense community:

  • Evaluation - validates a set of security properties in a particular product or device. Product evaluation is not strictly required.
  • Certification - verifies that a specific installation and configuration of a system meets the requirements for that site. While not absolutely required, systems are rarely approved for operation without at least an attempt at certification.
  • Accreditation - formal approval for operation by a senior military commander. The approval is almost always based on the results of the certification.

In military environments, a highly-ranked officer, typically an admiral or general, must formally grant approval (accreditation) before a critical system goes into operation. Accreditation shows that the officer believes the system is safe to operate, or at least that the system's risks are outweighed by its benefits. The decision is based on the results of the system's certification: a process in which technical experts analyze and test the system to verify that it meets its security and safety requirements. The certification and accreditation process must meet certain standards (Department of Defense, 1997). Under rare, emergency conditions an officer could accredit a system even if there are problems with the certification.

Certification can be very expensive, especially for MLS systems. Tests and analyses must show that the system is not going to fail in a way that will leak classified information or interfere with the organization's mission. Tests must also show that all security mechanisms and procedures work as specified in the requirements. Certification of a custom-built system often involves design reviews and source code inspections. This work requires a lot of effort and special skills, leading to very high costs.

The product evaluation process heralded by the TCSEC was intended to provide off-the-shelf computing equipment that reliably enforced MLS restrictions. Although organizations could implement, certify, and accredit custom systems enforcing MLS, the certification costs were hard to predict and could overwhelm the project budget. If system developers could use off-the-shelf MLS products, their certification costs and project risks would be far lower. Certifiers could rely on the security features verified during evaluation, instead of having to verify a product's implementation themselves.

Product evaluations assess two major aspects: functionality and assurance. A successful evaluation indicates that the product contains the appropriate functional features and meets the specified level of assurance. The TCSEC defined a range of evaluation levels to reflect increasing levels of compliance with both functional and assurance requirements. Each higher evaluation level either incorporated the requirements of the next lower level, or superseded particular requirements with a stronger requirement. Alphanumeric codes indicated each level, with D being lowest and A1 being highest:

  • D - the lowest level, assigned to evaluated products that achieve no higher level; this level was rarely used.
  • C1 - a single-user system; this level was eventually discarded.
  • C2 - a multi-user system, like a Unix time sharing system.
  • C3 - an enhanced multi-user system; this level was eventually discarded.
  • B1 - the lowest level of evaluation for a system with MLS support.
  • B2 - an MLS system that incorporates basic architectural assurance requirements and a covert channel analysis.
  • B3 - an MLS system that incorporates more significant assurance requirements, including formal specifications.
  • A1 - an MLS system whose design has been proven correct mathematically.

Although the TCSEC defined a whole range of evaluation levels, the government wanted to encourage vendors to develop systems that met the highest levels. In fact, one of the pioneering evaluated products was SCOMP, an A1 system constructed by Honeywell (Fraim, 1983). Very few other vendors pursued an A1 evaluation. High assurance caused high product development costs; one project estimated that the high assurance tasks added 26% to the development effort's labor hours (Smith, 2001). In the fast-paced world of computer product development, that extra effort can cause delays that make the difference between a product's success or failure.

The Empty Shelf

To date, no commercial computer vendor has offered a genuine "off the shelf" MLS product. A handful of vendors have implemented MLS operating systems, but none of these were standard product offerings. All MLS products were expensive, special purpose systems marketed almost exclusively to military and government customers. Almost all MLS products were evaluated to the B1 level, meeting minimum assurance standards. Thus, the TCSEC program failed on two levels: it failed to persuade vendors to incorporate MLS features into their standard products, and it persuaded only a tiny handful of vendors to produce products that met the "A1" requirements for high assurance.

A survey of security product evaluations completed by the end of 1999 (Smith, 2000) noted that only a fraction of security products ever pursued evaluation. Most products pursued "medium assurance" evaluations, which could be sufficient for a minimal (B1) MLS implementation.

TCSEC evaluations were discontinued in 2000. The handful of modern MLS products are evaluated under the Common Criteria (Common Criteria Project Sponsoring Organizations, 1999), a set of evaluation criteria designed to address a broader range of security products.

The most visible failure of MLS technology is its absence from typical desktops. As Microsoft's Windows operating systems came to dominate the desktop in the 1990s, Microsoft made no significant move to implement MLS technology. Versions of Windows have earned a TCSEC C2 evaluation and a more-stringent EAL-4 evaluation under the Common Criteria, but Windows has never incorporated MLS. The closest Microsoft has come to offering MLS technology has been its "Palladium" effort announced in 2002. The technology focused on the problem of digital rights management - restricting the distribution of copyrighted music and video - but the underlying mechanisms caught the interest of many in the MLS community because of potential MLS applications. The technology was slated for incorporation in a future Windows release codenamed "Longhorn," but was dropped from Microsoft's plans in 2004 (Orlowski, 2004).

Arguably several factors have contributed to the failure of the MLS product space. Microsoft demonstrated clearly that there was a giant market for products that omit MLS. Falling computer prices also played a role: sites where users typically work at a couple of different security levels find it cheaper to put two computers on every desktop than to try to deploy MLS products. Finally, the sheer cost and uncertainty of MLS product development undoubtedly discourage many vendors. It is hard to justify the effort to develop a "highly secure" system when it's likely that the system will still have identifiable weaknesses, like covert channels, after all the costly, specialized work is done.

Multilevel Networking

As computer costs fell and performance soared during the 1980s and 1990s, computer networks became essential for sharing work and resources. Long before computers were routinely wired to the Internet, sites were building local area networks to share printers and files. In the defense community, multilevel data sharing had to be addressed in a networking environment. Initially, the community embraced networks of cheap computers as a way to temporarily sidestep the MLS problem. Instead of tackling the problem of data sharing, many organizations simply deployed separate networks to operate at different security levels, each running in system high mode.

This approach did not help the intelligence community. Many projects and departments needed to process information carrying a variety of compartments and code words. It simply wasn't practical to provide an individual network for every possible combination of compartments and code words. Furthermore, intelligence analysts often spent their time combining information from different compartments to produce a document with a different classification. In practice, this work demanded an MLS desktop and often required communications over an MLS network.

Thus, MLS networking took two different paths in the 1990s. Organizations in the intelligence community continued to pursue MLS products. This reflected the needs of intelligence analysts. In networking, this called for labeled networks, that is, networks that carried classification labels on their traffic to ensure that MLS restrictions were enforced.

Many other military organizations, however, took a different path. Computers in most military organizations tended to cluster into networks handling data up to a specified security level, operating in system high mode. This choice was not driven by an architectural vision; it was more likely the effect of the desktop networking architecture emerging in the commercial marketplace combined with existing military computer security policies. Ultimately, this strategy was named multiple single levels (MSL) or multiple independent levels of security (MILS).


Labeled Networks

The fundamental objective of a labeled network is to prevent leakage of classified information. The leakage could occur through eavesdropping on the network infrastructure or by delivering data to an uncleared destination. This yielded two different approaches to labeled networking. The more complex approach used cryptography to keep different security levels separate and to prevent eavesdropping. The simpler approach inserted security labels into network traffic and relied on a reference monitor mechanism installed in network interfaces to restrict message delivery.

In practice, cryptographic hardware and key management processes have often been too expensive for large-scale MLS network applications. Instead, sites have relied on physical security to protect their MLS networks from eavesdropping. This has been particularly true in the intelligence community, where the proliferation of compartments and codewords has made it impractical to use cryptography to keep security levels separate.

Within such sites, the network infrastructure is physically isolated from any contact except by people with Top Secret clearances supported by special background investigations. Network wires are protected from tampering, though not from sophisticated attacks that might tempt uncleared outsiders. MLS access restrictions rely on security labels embedded in network messages. If cryptography is used at all, its primary purpose is to protect the integrity of security labels.

Standard traffic using the Internet Protocol (IP) does not include security labels, but the Internet community developed standards for such labels, beginning with the IP Security Option (IPSO) (St. Johns, 1988). The US Defense Intelligence Agency developed this further when implementing a protocol for the DOD Intelligence Information System (DODIIS). The protocol, called DODIIS Network Security for Information Exchange (DNSIX), specified both the labeling and the checking process to be used when passing traffic through a DNSIX network interface (LaPadula, LeMoine, Vukelich, and Woodward, 1990). To increase the assurance of the resulting system, the specification included a design description for the checking process; the design had been verified against a security model that described the required MLS enforcement.
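The checking performed at such a network interface amounts to a dominance test on labels. Here is a minimal Python sketch of that idea, not the DNSIX design itself: it assumes a simple numeric encoding of classification levels and a label already parsed out of each message, and it ignores the actual IPSO wire format.

```python
# Minimal sketch of reference-monitor checking at a labeled network
# interface. Illustrative only: real IPSO/DNSIX labels have a specific
# wire encoding that is not modeled here.

# Hypothetical numeric encoding: higher number = more sensitive.
UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

class LabeledInterface:
    def __init__(self, interface_level):
        # Every interface on the labeled network is assigned a level.
        self.interface_level = interface_level

    def deliver(self, message_label, payload):
        """Deliver a message only if this interface's level dominates
        the label carried in the message."""
        if message_label > self.interface_level:
            # Delivery would hand data to an insufficiently cleared
            # destination, so the reference monitor drops the message.
            raise PermissionError("label exceeds interface level; dropped")
        return payload

secret_host = LabeledInterface(SECRET)
secret_host.deliver(CONFIDENTIAL, b"releasable report")   # delivered
# secret_host.deliver(TOP_SECRET, b"codeword data")       # PermissionError
```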

In the US, MLS cryptographic techniques were exclusively the domain of the National Security Agency (NSA), since it set the standards for encrypting classified information. Traditional NSA protocols encrypted traffic at the link level, carrying the traffic without security labels. During the 1980s and 1990s the NSA started a series of programs to develop cryptographic protocols for handling labeled, multilevel data, including the Secure Data Network System (SDNS), the Multilevel Network System Security Program (MNSSP), and the Multilevel Information System Security Initiative (MISSI).

These programs yielded Security Protocol 3 (SP3) and the Message Security Protocol (MSP). SP3 protects messages at the network protocol layer (layer 3) and has been used in gateway encryption devices for the DOD's Secret IP Router Network (SIPRNET), which shares classified information at the Secret level among approved military and defense organizations. However, SP3 is a relatively old protocol and will probably be superseded by a variant of the IP Security Protocol (IPSEC) that has been adapted for the defense community, called the High Assurance Internet Protocol Interoperability Specification (HAIPIS). MSP protects messages at the application level and was originally designed to encrypt e-mail. The Defense Message System, the DOD's evolving secure e-mail system, uses MSP.

The most obvious use of encryption in an MLS network is to protect traffic against eavesdropping. In addition, MLS protocols can use cryptography to enforce MLS access restrictions. The FIREFLY protocol illustrates this. Developed by the NSA for multilevel telephone and networking protocols, FIREFLY uses public-key cryptography to negotiate encryption keys to protect traffic between two entities. Each FIREFLY certificate contains the clearance level of its owner, and the protocol compares the levels when negotiating keys. If the two entities are using secure telephones, then the protocol yields a key whose clearance level matches the lower of the two entities. If the two entities are establishing a computer network connection, then the negotiation succeeds only if the clearance levels match.
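The comparison rule at the heart of this negotiation is easy to state in code. The Python sketch below models only the level comparison described above, with invented names and numeric levels; FIREFLY's actual certificates and public-key exchange are omitted.

```python
# Sketch of FIREFLY's level-comparison rule during key negotiation.
# The cryptography is omitted; only the clearance comparison described
# in the text is modeled, and the encoding of levels is invented.

UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

def negotiate_key_level(level_a, level_b, secure_telephone):
    """Return the clearance level of the negotiated traffic key,
    or raise if the negotiation must fail."""
    if secure_telephone:
        # Telephone case: the key takes the lower of the two
        # clearances, so both parties may lawfully join the call.
        return min(level_a, level_b)
    # Network case: the connection is established only if the
    # certificate levels match exactly.
    if level_a != level_b:
        raise PermissionError("clearance mismatch; no key negotiated")
    return level_a

print(negotiate_key_level(TOP_SECRET, SECRET, secure_telephone=True))  # 2 (SECRET)
print(negotiate_key_level(SECRET, SECRET, secure_telephone=False))     # 2 (SECRET)
```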


Multiple Independent Levels of Security (MILS)

Despite the shortage of MLS products, the defense and intelligence communities dramatically expanded their use of computer systems during the 1980s and 1990s. Instead of implementing MLS systems, most organizations chose to deploy multiple computer networks, each dedicated to a single security level. This eliminated the risks of multiuser data sharing at multiple security levels by simply eliminating the sharing. When necessary, less-classified data was copied one-way from a removable disk or tape volume onto servers on a higher-classified network.

To simplify data sharing in the MILS environment, many organizations have implemented devices to transfer data from networks at one security level to networks at other levels. These devices generally fall into three categories:

  • Multilevel servers - network servers implemented on an MLS system with individual network interfaces to connect to system high networks running at different security levels.
  • One-way guards - devices that transfer data from a network at a "low" security level to a network at a "high" level.
  • Downgrading guards - devices that transfer data in either direction between networks at different security levels.

In a multilevel server, computers on a network at a lower security level can store information on a server, and computers on networks at higher levels can connect to the same server and retrieve that information. Vendors provide a variety of multilevel servers, including web servers, database servers, and file servers. While such systems are popular with some defense organizations, others avoid them. Most server products achieve relatively low levels of assurance, which suggests that attackers might find ways to leak information through them from a higher network to a lower one.

One-way guards implement a one-way data transfer from a network at a lower security level to a network at a higher level. The simplest implementations rely on hardware restrictions to guarantee that traffic flows in only one direction. For example, conventional fiber optic network equipment supports bidirectional traffic, but it's not difficult to construct fiber optic hardware that only contains a transmitter on one end and a receiver on the other. Such a device can transfer data in one direction with no risk of leaking data in the other. An obvious shortcoming is that there is no efficient way to prevent congestion since the low side has no way of knowing when or if its messages have been received. More sophisticated devices like the NRL Pump (Kang, Moskowitz, and Lee, 1996) avoid this problem by implementing acknowledgements using trusted software. However, devices like the Pump can suffer from the same shortcoming as MLS servers: there are very few trustworthy operating systems on which to implement trusted MLS software, and most achieve relatively low assurance. The trustworthiness of the Pump will often be limited by the assurance of the underlying operating system.
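A toy model may make the trade-off concrete. The sketch below is not the NRL Pump's actual design: it merely shows trusted software buffering low-to-high traffic and choosing the acknowledgement timing itself, so that the high side cannot directly modulate what the low side observes.

```python
import random
from collections import deque

# Toy one-way guard with Pump-style acknowledgements. The real Pump
# (Kang et al., 1996) modulates ack timing statistically; here the
# trusted device simply randomizes the delay to blunt timing channels.

class ToyPump:
    def __init__(self):
        self.buffer = deque()

    def send_from_low(self, message):
        """Low side submits a message. The pump buffers it and picks
        the ack delay itself, so the high side's read behavior cannot
        directly signal back to the low side."""
        self.buffer.append(message)
        return random.uniform(0.01, 0.05)  # seconds until ack

    def read_from_high(self):
        """High side drains messages; nothing flows back to low."""
        return self.buffer.popleft() if self.buffer else None

pump = ToyPump()
print(f"ack after {pump.send_from_low(b'sensor reading'):.3f}s")
print(pump.read_from_high())
```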

Downgrading guards are important because they address a troublesome side-effect of MILS computing: users often end up with overclassified information. A user on a Secret system may be working with both Confidential and Secret files, and it is simple to share those files with other users on Secret systems. However, he faces a problem if he needs to provide the Confidential file to a user on a Confidential system: how does he prevent Secret information from leaking when he tries to provide a clean copy of the Confidential file? There is no simple, reliable, and foolproof way to do this, especially when using commercial desktop computers.

The same problem often occurs in e-mail systems: a different user on a Top Secret network may wish to send an innocuous but important announcement to her colleagues on a Secret network. She knows that the recipients are authorized to receive the message's contents, but how does she ensure that no Top Secret information leaks out along with the e-mail message? The problem also appears in military databases: the database may contain information at a variety of security levels, but not all user communities will be able to handle data at the same level as the whole database. To be useful, the data must be sanitized and then released to users at lower classification levels. When a downgrading guard releases information from a higher security level to a lower one, the downgrading generally falls into one of three categories:

  • Manual review and release - a trained and trusted operator carefully examines all files submitted for downgrading. If the file appears clean of unreleasable information, then it is released.
  • Automated release - the system looks for indicators to show that an authorized person has reviewed the file and approved it for release. The indicators usually include cryptographic authentication, like a digital signature.
  • Automated review - the system uses a set of carefully constructed filters to examine the file to be released. The filters are designed to check for unreleasable data.

The traditional technique was manual review and release. A site would train an operator to identify classified information that should not be released, and the operator would manually review all data passing through the guard. This strategy proved impractical because it has become very difficult to reliably scan files for sensitive information. Word processors like Microsoft Word tend to retain sensitive information even after the user has attempted to remove it from a file (Byers, 2004). Another problem is steganography: a subverted user on the high side of the guard, or a sophisticated piece of subverted software, can easily embed large data items in graphic images or other digital files so that a visual review won't detect their presence. In addition to the problem of reliable scanning, there is a human factors problem: few operators would remain effective in this job for very long. Military security officers tell of review operators falling into a mode in which they automatically approve everything without review, partly to maintain message throughput and partly out of boredom.

The automated release approach was used by the Standard Mail Guard (SMG) (Smith, 1994). The SMG accepted text e-mail messages that had been reviewed by the message's author, explicitly labeled for release, and digitally signed using the DMS protocol. The SMG would verify the digital signature and check the signer's identity against the list of users authorized to send e-mail through the guard. The SMG would also search the message for words associated with classified information that should not be released and block messages containing any such words. Authorized users could also transmit files through the guard by attaching them to e-mails. The attached files had to be reviewed and then "sealed" using a special application program: the SMG would verify the presence of the seal before releasing an attached file.
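The SMG's release decision can be summarized in a short sketch. The Python below is schematic only: the sender list, the blocked-word list, and the use of an HMAC as a stand-in for the DMS digital signature are all invented for the example.

```python
import hashlib
import hmac

# Schematic of an automated-release decision: verify a signature,
# check the signer against the authorization list, then run a word
# search. An HMAC with a shared key stands in for the DMS digital
# signature purely to keep the example self-contained.

AUTHORIZED_SENDERS = {"alice@high.example"}            # invented
BLOCKED_WORDS = {"codeword", "umbra"}                  # invented list
KEYS = {"alice@high.example": b"shared-secret"}        # stand-in for PKI

def release(sender, message, tag):
    key = KEYS.get(sender)
    expected = hmac.new(key, message, hashlib.sha256).digest() if key else b""
    if not key or not hmac.compare_digest(tag, expected):
        return False, "signature check failed"
    if sender not in AUTHORIZED_SENDERS:
        return False, "sender not authorized to release"
    if any(w in message.decode(errors="ignore").lower() for w in BLOCKED_WORDS):
        return False, "blocked word found"
    return True, "released"

msg = b"Unclassified: picnic moved to Friday."
tag = hmac.new(KEYS["alice@high.example"], msg, hashlib.sha256).digest()
print(release("alice@high.example", msg, tag))  # (True, 'released')
```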

The automated review approach has been used by several guards since the 1980s, primarily to release database records from highly classified databases to networks at lower security levels. Many of these guards were designed to automatically review highly formatted force deployment data. The guards were configured with detailed rules on how to check the fields of the database records so that data was released at the correct security level. In some cases the guards were given instructions on how to sanitize certain database fields to remove highly classified data before releasing the records.
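A minimal sketch of such field-by-field review, with invented field names and rules, might look like this:

```python
# Minimal sketch of automated review of formatted records. The field
# names and rules are invented; real guards were configured with
# detailed, site-specific rule sets for force deployment data.

RULES = {
    "unit_name": lambda v: v,           # releasable as-is
    "location":  lambda v: v,           # releasable as-is
    "source":    lambda v: "REDACTED",  # sanitize before release
}

def review_record(record):
    """Return a sanitized copy of the record, or block it entirely
    if it carries any field the guard has no rule for."""
    sanitized = {}
    for field, value in record.items():
        if field not in RULES:
            raise ValueError(f"no rule for field {field!r}; record blocked")
        sanitized[field] = RULES[field](value)
    return sanitized

rec = {"unit_name": "3rd Battalion", "location": "Grid 1234", "source": "HUMINT-7"}
print(review_record(rec))  # the source field comes out REDACTED
```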

While guards and multilevel servers provide a clear benefit by enabling data sharing between system-high networks running at different clearance levels, they also pose problems. The most obvious problem is that they put all of the MLS eggs in one basket: the guard centralizes MLS protection in a single device that undoubtedly draws the interest of attackers. Downgrading guards pose a particular concern since there are many ways that a Trojan Horse on the "high" side of a guard may disguise sensitive information so that it passes successfully through the guard's downgrading filters. For example, the Trojan could embed classified information in an obviously unclassified image using steganography. Another problem is that the safest place to attach a label to data is at the data's point of origin: guards are less likely to label data correctly because they are removed from the data's point of origin (Saydjari, 2004). Guards that use an automated release mechanism may be somewhat less prone to this problem if the guard bases its decision on a cryptographically protected label provided at the data's point of origin. However, this benefit can be offset by other risks if the guard or the labeling process is hosted on a low-assurance operating system.


Sensor-to-Shooter

During the 1991 Gulf War, the defense community came to appreciate the value of classified satellite images in planning attacks on enemy targets. The only complaint was that the imagery couldn't be delivered as fast as the tactical systems could take advantage of it (Federation of American Scientists, 1997). In the idealized state of the art "sensor to shooter" system, analysts and mission commanders select targets electronically from satellite images displayed on workstations, and they send the targeting information electronically to tactical units (see Figure 7). Clearly this involves at least one downgrading step, since tactical units probably won't be cleared to handle satellite intelligence. So far, no general-purpose strategy has emerged for handling automatic downgrading of this kind. In practice, downgrading mechanisms are approved for operation on a case-by-case basis.

Figure 7: In sensor-to-shooter, data flows from higher levels to lower levels.


Nonmilitary Applications Similar to MLS

The following is a list of nonmilitary applications that bear some similarities to the MLS problem. While this may suggest that a commercial market for MLS technology could someday emerge, a closer look suggests this is unlikely. As noted earlier, MLS systems address a level of risk that doesn't exist in business environments. Buyers of commercial systems do not want to spend the money required to assure correct MLS enforcement. This is illustrated by examining the following MLS-like business applications.

  • Protecting sensitive information - businesses are legally obligated to protect certain types of information from disclosure. However, as noted earlier, the effects of leaking secrets in the defense community are far greater than the results of leaking secrets in the business world. Private companies have occasionally tried using MLS to protect sensitive corporate data, but the attempt has usually been abandoned as too expensive.
  • Digital rights management - To some extent, MLS protection is similar to the security problem of digital rights management (DRM). Typically, an organization purchases computer equipment to benefit its business activities, and its employees and associates use the computers to support those activities. In both MLS and DRM, an absentee third party has a strong interest in restricting information disclosure, and this interest may in some cases conflict with the objectives of the computer's owners and users. Here too, however, the trade-off in the business world will not justify the type of investment that MLS protection demands in the defense community.
  • Secure server platforms - As more and more companies deploy e-commerce servers on the Internet, they face increased risks of losing assets to Internet-based crime. In theory, a hard-to-penetrate server built to military specifications would provide more confidence of safety to a company's customers and investors. In practice, however, neither the developers of these servers nor their customers have shown much interest in military-grade server security. Several vendors have offered such servers, notably Hewlett-Packard with its Virtual Vault product line, but vendors have found very little market interest in such products. Arguably it's for the same reason that MLS fails to excite interest in other applications: the risk is not perceived as being high enough to justify the expense of military-grade protection. Moreover, attacks on servers arrive from a variety of directions, many of which are not addressed by MLS protections.
  • Firewall platforms - In the 1990s, many companies working on MLS systems developed and sold commercial firewalls that incorporated MLS-type protections, notably the Cyberguard and Sidewinder products. While these products achieved some success in the defense community, neither competed effectively against products hosted on common commercial operating systems.

Conclusion

Despite the failures and frustrations that have dogged MLS product developments for the past quarter century, end users still call for MLS capabilities. This is because the problem remains: the defense community needs to share information at multiple security levels. Most of the community solves the problem by working on multilevel data in a system high environment and dealing with downgrading problems on a piecemeal basis. While this solves the problem in some situations, it isn't practical in others, like sensor-to-shooter applications.

The classic strategies intended to yield MLS products failed in several ways. First, the government's promotion of product evaluations failed when vendors found that MLS capabilities did not significantly increase product sales. The concept of deploying a provably secure system failed twice: first, when vendors found how expensive and uncertain evaluations could be, especially at the highest levels, and second, when security experts discovered how intractable the covert channel problem could be. Finally, the few MLS products that did make their way to market languished when end users realized how narrowly the products solved their security and sharing problems. The principal successes in MLS today are based on guard and trusted server products.


Glossary

accreditation - approval granted to a computer system to perform a critical, defense-related application. The accreditation is usually granted by a senior military commander.

assurance - a set of processes, tests, and analyses performed on a computing system to ensure that it fulfills its most critical operating and security requirements.

Bell-LaPadula model - a security model that reflects the information flow restrictions inherent in the access restrictions applied to classified information.

certification - the process of analyzing a system being deployed in a particular site to verify that it meets its operational and security requirements.

covert channel - in general, an unplanned communications channel within a computer system that allows violations of its security policy. In an MLS system, this is an information flow that violates MLS restrictions.

cross domain security/solution/system (CDS) - in general, a term applied to multilevel security problems in a networking environment.

evaluation - the process of analyzing the security functions and assurance evidence of a product by an independent organization to verify that the functions operate as required and that sufficient assurance evidence has been provided to have confidence in those functions.

labeled network - a computer network on which all messages or data packets carry labels to indicate the classification level of the information being carried.

multilevel security (MLS) - an operating mode in which the users who share a computing system and/or network do not all hold clearances to view all information on the system.

multiple independent levels of security (MILS) - a networking and desktop computing environment which assigns dedicated, system-high resources for processing classified information at different security levels. Users in a MILS environment may have two or more desktop computers, each dedicated to work at a particular security level.

reference monitor - the component of an operating system that mediates all attempts by subjects (processes) to access objects (files and other system resources).

security model - an unambiguous, often formal, statement of the system's rules for achieving its security objectives, such as protecting the confidentiality of classified information from access by uncleared or insufficiently cleared users.

system high - an operating mode in which the users who share a computing system and/or network all hold clearances that could allow them to view any information on the system.

trusted computing base - the specific hardware and software components upon which a computing system relies when enforcing its security policy.


References

Anderson, J.P. (1972). Computer Security Technology Planning Study Volume II, ESD-TR-73-51, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at: http://csrc.nist.gov/publications/history/ande72.pdf (Date of access: August 1, 2004).

Bell, D.E. and L.J. La Padula (1974). Secure Computer System: Unified Exposition and Multics Interpretation, ESD-TR-75-306. Bedford, MA: ESD/AFSC, Hanscom AFB. Available at: http://csrc.nist.gov/publications/history/bell76.pdf (Date of access: August 1, 2004).

Boehm, B.W. (1981). Software Engineering Economics. Englewood Cliffs, NJ: Prentice Hall.

Brooks, F.P. (1975). The Mythical Man-Month. Reading, MA: Addison-Wesley.

Byers, S. (2004). Information leakage caused by hidden data in published documents. IEEE Security and Privacy 2 (2) 23-27. Available at: http://www.computer.org/security/v2n2/byers.htm (Date of access: October 1, 2004).

Cohen, F.C. (1994) Short Course on Computer Viruses, Second Edition. New York: John Wiley & Sons, pp. 35-36.

Cohen, F.C. (1990) Computer Viruses. Computer Security Encyclopedia. Available at: http://www.all.net/books/integ/encyclopedia.html (Date of access: February 20, 2005).

Common Criteria Project Sponsoring Organizations (1999). Common criteria for information technology security evaluation, version 2.1. Available at: http://csrc.nist.gov/cc/Documents/CC%20v2.1%20-%20HTML/CCCOVER.HTM (Date of access: October 1, 2004).

Department of Defense (1997). DOD Information Technology Security Certification and Accreditation, DOD Instruction 5200.40. Washington, DC: Department of Defense. Available at: http://www.dtic.mil/whs/directives/corres/pdf/i520040_123097/i520040p.pdf (Date of access: October 1, 2004).

Department of Defense (1985a). Trusted Computer System Evaluation Criteria (Orange Book), DOD 5200.28-STD. Washington, DC: Department of Defense. Available at: http://www.radium.ncsc.mil/tpep/library/rainbow/index.html#STD520028 (Date of access: October 1, 2004).

Department of Defense (1985b). "Specification Practices," MIL-STD 490A, 4 June 1985. Washington, DC: Department of Defense.

Federation of American Scientists (1997). Imagery Intelligence: FAS Space Policy Project - Desert Star. Available at: http://www.fas.org/spp/military/docops/operate/ds/images.htm (Date of access: August 1, 2004).

Fine, T. (1996). Defining noninterference in the temporal logic of actions. Proceedings of the 1996 IEEE Symposium on Security and Privacy, pp. 12-21.

Fraim, L.J. (1983). SCOMP: a solution to the multilevel security problem. IEEE Computer 16 (7) 26-34.

Haigh, J.T., and Young, W.D. (1987). Extending the noninterference version of MLS for SAT. IEEE Transactions on Software Engineering SE-13 (2).

Hoffman, L.J. (1973). IBM's Resource Security System (RSS). In L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 379-401). Los Angeles: Melville Publishing Company.

Kang, M.H., Moskowitz, I.S., and Lee, D.C. (1996). A network pump. IEEE Transactions on Software Engineering 22 (5) 329-338.

Karger, P.A. and R.R. Schell (1974). MULTICS Security Evaluation, Volume II: Vulnerability Analysis, ESD-TR-74-193, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at http://csrc.nist.gov/publications/history/karg74.pdf (Date of access: August 1, 2004).

Kemmerer, R.A. (2002). A practical approach to identifying storage and timing channels: twenty years later. Proceedings of the 18th Annual Computer Security Applications Conference.

Kemmerer, R.A. (1983). Shared resource matrix methodology: an approach to identifying storage and timing channels. ACM Transactions on Computer Systems 1 109-118.

Lampson, B. (1973). A note on the confinement problem. Communications of the ACM 16 (10) 613-615.

LaPadula, L.J., LeMoine, J.E., Vukelich, D.F. and Woodward, J.P.L. (1990). DNSIX Detailed Design Specifications, Version 2. Bedford, MA: MITRE Corporation.

Nibaldi, G.H. (1979). Proposed Technical Evaluation Criteria for Trusted Computer Systems, M79-225. Bedford, MA: The Mitre Corporation. Available at: http://csrc.nist.gov/publications/history/niba79.pdf (Date of access: August 1, 2004).

Orlowski, A. (2004). MS Trusted Computing back to drawing board. The Register, May 6, 2004. Available at: http://www.theregister.co.uk/2004/05/06/microsoft_managed_code_rethink/ (Date of access: August 1, 2004).

Proctor, N.E., and Neumann, P.G. (1992). Architectural implications of covert channels. Proceedings of the Fifteenth National Computer Security Conference pp. 28-43. Available at: http://www.csl.sri.com/users/neumann/ncs92.html (Date of access: November 15, 2004).

St. Johns, M. (1988). Draft Revised IP Security Option, RFC 1038. Available at: http://www.ietf.org/rfc/rfc1038.txt (Date of access: October 1, 2004).

Saydjari, O.S. (2004). Multilevel security: reprise. IEEE Security and Privacy 2 (5) 64-67.

Saydjari, O.S. (2002). LOCK: an historical perspective. Proceedings of the 2002 Annual Computer Security Applications Conference. Available at: http://www.acsac.org/2002/papers/classic-lock.pdf (Date of access: November 15, 2004).

Saydjari, O.S., Beckman, J.K., and Leaman, J.R. (1989). LOCK Trek: navigating uncharted space. Proceedings of the 1989 IEEE Symposium on Security and Privacy 167-175.

Smith, R.E. (2005). Observations on multi-level security. Web pages available at http://www.smat.us/crypto/mls/index.html (Date of access: October 31, 2005).

Smith, R.E. (2001). Cost profile of a highly assured, secure operating system. ACM Transactions on Information and System Security 4, 72-101. A draft version is available at http://www.smat.us/crypto/docs/Lock-eff-acm.pdf (Date of access: February 20, 2005).

Smith, R.E. (2000). Trends in government endorsed security product evaluations. Proceedings of the 23rd National Information Systems Security Conference. Available at: http://www.smat.us/crypto/evalhist/evaltrends.pdf (Date of access: February 20, 2005).

Smith, R.E. (1994). Constructing a high assurance mail guard. Proceedings of the 17th National Computer Security Conference 247-253. Available at: http://www.smat.us/crypto/docs/mailguard.pdf (Date of access: February 20, 2005).

Ware, W.H. (1970). Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. Santa Monica, CA: The RAND Corporation. Available at: http://csrc.nist.gov/publications/history/ware70.pdf (Date of access: August 1, 2004).

Weissman, C. (1969). Security controls in the ADEPT-50 time-sharing system. Proceedings of the 1969 Fall Joint Computer Conference. Reprinted in L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 216-243). Los Angeles: Melville Publishing Company, 1973.

Wray, J.C. (1991). An analysis of covert timing channels. Proceedings of the 1991 IEEE Symposium on Security and Privacy 2-7.


Observations on Multi-Level Security

Multilevel security (MLS) is an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less.

Click here for a general introduction to MLS.

MLS: The Objective

A system or device achieves the objective of being "multilevel secure" if it can handle information at a variety of sensitivity levels without disclosing information to an unauthorized person. In a perfect world, this yields two more specific objectives:

  • If a document contains classified information, then the system will prevent the document from being delivered to a user who lacks the appropriate clearance. For example, a document containing Top Secret information can't be given to a user who only has a Secret clearance. This is the information flow problem.
  • If a document contains no information above a particular classification level, then it can be shared with users who have that clearance. For example, if a Top Secret user takes a Top Secret document and removes all information classified above the Secret level, then the system should allow Secret users to look at the document. This is the sanitization problem.

The information flow problem is very hard to solve through automated access restrictions. The sanitization problem is almost impossible to solve through automated access restrictions.

A device enforces MLS information flow if it can't be induced (accidentally or intentionally) to release information to the wrong person. The U.S. government has a body of laws and regulations that make it illegal to share classified information with people who do not possess the appropriate security clearance. In theory, an "MLS device" will automatically enforce those restrictions. Some defense officials refer to MLS devices as felony boxes since they can automatically commit felonies if they malfunction.

While it might seem easy to implement such a thing just by setting up the right access restrictions on files, this approach isn't reliable enough for large-scale use. For one thing, it's hard to reliably establish and review the permission settings for hundreds or even thousands of files and expect to have them all set correctly at all times. Another problem is that a malicious user could leak enormous amounts of classified information with a few simple keystrokes. Moreover, an innocent user could be tricked into releasing comparable amounts of classified information if subjected to a virus or other malicious software.

The sanitization problem is made difficult by two things. First, it assumes that software applications don't hide data from their users, and that simply isn't true. A Microsoft Word file carries all sorts of information, including reams of text that its owner may have tried to remove. There are even circumstances where cut-and-paste may copy deleted information from one Word file to another. The second problem is that it requires a certain level of intellect to distinguish more-sensitive information from less-sensitive information. While classifying authorities may sometimes attempt to make it obvious which data is Top Secret, which is Secret, and which is unclassified, it is often impossible in practice to build an automated tool to identify sensitivity simply on the basis of the unmarked material.

MLS: The Mechanism

In practice, the devices we generally recognize as implementing "MLS" are based on a hierarchical, lattice-based access control model developed for the Multics operating system in 1975. In this model, the system assigns security classification labels to processes it runs and to files and other system-level objects, particularly those containing data. The mechanism enforces the following rules:

  • A process is always allowed to read and write an object if the process's label matches the object's label (subject to other permissions, like those linked to file ownership).
  • A process is allowed to read the data in an object if the object's classification label is lower than the process's label. For example, a "secret" process can read data from an "unclassified" object, but never vice versa.
  • A process is never allowed to write data to an object when the object's classification label is lower than the process's label. This prevents programs from accidentally leaking classified information into files where unauthorized users might see the data. More importantly, it prevents users from being tricked into leaking information by trojan horse programs. (These rules are sketched in code below.)
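Here is a minimal sketch of these rules in Python, assuming a simple hierarchy of numeric levels with no categories or compartments:

```python
# Sketch of the lattice rules listed above, with an invented numeric
# encoding of levels. Categories and compartments are omitted.

UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

def may_read(process_level, object_level):
    # "Read down": a process may read objects at or below its level.
    return process_level >= object_level

def may_write(process_level, object_level):
    # "Write up": a process may never write to an object below its
    # level, which blocks accidental leaks and trojan horses alike.
    return process_level <= object_level

assert may_read(SECRET, UNCLASSIFIED)       # secret reads unclassified
assert not may_read(SECRET, TOP_SECRET)     # but not top secret
assert may_write(SECRET, TOP_SECRET)        # writing upward is allowed
assert not may_write(SECRET, UNCLASSIFIED)  # writing downward is not
```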

The mechanism is also referred to as mandatory access control because it is always enforced and users cannot disable or bypass it. One of the reasons why you can't reliably enforce MLS (the objective) with conventional access control mechanisms is that such mechanisms are usually under the complete control of a file's owner. Thus the owner can accidentally or intentionally violate the access control rules simply by changing the permissions on a sensitive file. The MLS mechanism is mandatory, which means that individual users can't control it themselves. Once a file is labeled "top secret," the MLS mechanism won't permit the user to share it with merely "secret" users or to mingle its contents with that of "secret" files. Once "top secret" information is mixed with "secret" information, the aggregate becomes "top secret."

The original motivation for building MLS mechanisms was the perception that it would make applications development safe from serious security risks. The notion was that applications programmers would be saved from the bother of having to worry about security labels. Existing applications would be able to handle labeled data safely and correctly without reprogramming. Even more important, programmers would not be able to sneak data between security levels by writing clever programs: the MLS protection mechanism would prevent any attempt to violate the most important security constraints on the system.

MLS: Hard to Implement

Although this basic MLS mechanism may seem simple, it proved very difficult to implement effectively. Several systems were developed that enforced the security rules, but experimenters quickly found ways to bypass the mechanism and leak data from high classification levels to low ones. The techniques were often called covert channels. These channels used operating system resources to transmit data between higher classified processes and lower classified processes. For example, the higher process might systematically modify a file name or other file system resource (like the free space count for the disk or for main memory) that is visible to the lower process in order to transmit data.
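The free-space example can be made concrete. In the toy Python below, a "high" thread signals bits to a "low" thread through a shared counter standing in for the disk's free-space count. Real channels must fight noise and synchronization problems, but the principle is the same.

```python
import threading
import time

# Toy covert storage channel: two threads that MLS rules would forbid
# to communicate share one observable value (a stand-in for a
# free-space counter) and use it to move bits.

free_space = 1000      # the shared, observable system resource
BIT_PERIOD = 0.01      # seconds per transmitted bit

def high_sender(bits):
    global free_space
    for bit in bits:
        # Allocate or free a block: the counter's value carries the bit.
        free_space = 999 if bit else 1000
        time.sleep(BIT_PERIOD)

def low_receiver(count, out):
    time.sleep(BIT_PERIOD / 2)           # sample mid-bit to avoid edges
    for _ in range(count):
        out.append(1 if free_space == 999 else 0)
        time.sleep(BIT_PERIOD)

received = []
sender = threading.Thread(target=high_sender, args=([1, 0, 1, 1],))
sender.start()
low_receiver(4, received)
sender.join()
print(received)  # likely [1, 0, 1, 1], timing permitting
```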

Attempts to block covert channels led to very expensive operating system development efforts. The resulting systems were hard to use in practice and often had performance problems. While it's never been conclusively proven that you can't build a high performance system that avoids or blocks covert channels, there are no working examples.

The high cost of MLS systems was also driven by the high cost of security product evaluations performed by the US government.

MLS: An Impractical Mechanism

Although MLS (the objective) remains a requirement for many military and government systems, the objective is not met by MLS (the mechanism). The fundamental problem is that the mechanism only allows data to flow "upwards" in terms of sensitivity. So, automatic software can easily take unclassified data, mix it with secret data to make more secret data, and then with top secret data to produce more top secret data. There are a few applications in the intelligence community where this is a very useful property.

However, this upward flow works against modern notions of "information dominance" and "sensor to shooter" information flow. The modern concept is that technical intelligence assets identify targets and pass the information to mission planners, who assemble a mission and pass the mission details to tactical assets, who in turn share details with support and maintenance assets. The problem is that technical intelligence, mission planning, tactical assets, and support assets tend to operate at decreasing security levels. The flow of information runs in exactly the opposite direction from the one the MLS mechanism allows.


This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.


LOCK - A trusted computing system

The LOCK project (short for LOgical Coprocessing Kernel) developed a "trusted computing system" that implemented multilevel security. LOCK was intended to exceed the requirements for an "A1" system as defined by the old Trusted Computing System Evaluation Criteria (a.k.a. the TCSEC or "Orange Book").

The project was modestly successful in that we actually deployed a couple dozen systems in military command centers. A major design feature of LOCK was type enforcement, a fine grained access control mechanism that could tie access restrictions to the programs being run.

The work was performed at Secure Computing Corporation in Minnesota.

LOCK technology, particularly type enforcement, lives on in two 'children' that still exist:

  • the Sidewinder Internet Firewall product line
  • the "SELinux" security enhancements to Linux

Here are links to papers about LOCK and the Standard Mail Guard (the deployed version of LOCK).


Multilevel Security and Internet Servers

I wrote the following message in 1996 as part of a discussion on the old Firewalls mailing list about using MLS technology to protect Internet servers from attack. The basic concepts still apply in some ways, though the threats have evolved in many other ways.

 

The message opens with a summary of my background in MLS (through 1996, anyway) motivated by some questions raised in the discussion. Then it explains why, based on classical MLS reasoning, you can't use MLS to enforce separation between different Internet servers that share a common network interface. The article ends by explaining how an MLS-based Internet server can enforce the separation once we change our assumptions about how MLS works. An unspoken assumption of this discussion is that a "firewall" might be a large scale device that hosts a variety of "secure" services, like e-mail and DNS. Only a handful of firewalls (notably Secure Computing's Sidewinder) do anything like this today.

Someday I'd like to reconstruct the entire discussion from my own archives and from copies on the Internet, since it was a particularly satisfying one. It led to a paper I presented at ACSAC later that year. Click here to see the paper (PDF).


FROM: Rick Smith

DATE: 01/31/1996 15:09:50

SUBJECT: Re: Mandatory protection (was: product selection)

 

I think we've covered most of the issues so far in the Type Enforcement (TE) versus Multilevel Security (MLS) discussion pretty well, but there are two remaining issues that need clearing up.

 

I don't think the unresolved topics arise from ignorance or a simple failure to communicate; we have a genuine and fully unintended culture clash.

 

The first is a matter of credibility. Since the relevance of anything else I say probably hinges on this, I'll start here:

 

"Does Rick Smith have a clue regarding MLS?"

 

There are several people at the National Computer Security Center and the MISSI Program Office that would be astonished by this question. Before moving to firewalls I was a key designer and the lead systems engineer on the SNS Mail Guard, one of the few MLS systems that comes close to being a turnkey device (I bring this up as evidence and not as a topic of Firewalls discussion - comment privately if you must). I've also done a variety of other MLS related analysis, design, and implementation tasks. So I do have some credentials.

 

But my background is entirely in high assurance MLS systems. Those are systems where MLS has only one meaning: obsessive protection of confidentiality in accordance with the Bell-LaPadula access control rules. Labels define barriers to information disclosure, and nothing in the platform architecture or services is permitted to compromise confidentiality. My statements on what MLS systems can and can't do are based on the implications of highly assured confidentiality, not on some "strawman" MLS notion nor on "misconfigured" MLS systems.

 

That's where the culture clash comes in. My colleagues in this discussion are using B1 MLS systems. These are systems where confidentiality protection is not pursued to such an extreme. This is *not* intended as a put-down, especially in the firewalls environment. Firewalls don't need obsessively strong confidentiality. They need integrity protection. That's why we put TE in Sidewinder and left out MLS -- we see MLS as a confidentiality mechanism and that's not what we needed. But if you're using MLS for mandatory protection and don't have an obsessively strong confidentiality objective, then the picture changes a bit.

 

Here's how this relates to the last open technical issue:

 

"Can MLS systems protect Internet servers from one another?"

 

I've always recognized that MLS systems can impose mandatory protection barriers between processes by using levels, categories, and compartments, but I still concluded "No." This is based on my view of high assurance MLS obsessed with confidentiality. The argument goes as follows:

 

  1. Typical Internet TCP/IP traffic does not contain labels.
  2. The network interface in an MLS system is always assigned a label.
  3. If a network interface receives a packet that does not already contain a label, then the packet must be assigned the network interface's label.
  4. All packets sent or received as typical Internet TCP/IP traffic carry the same label (from 1, 2, 3). Call this label the "Internet Label."
  5. If two processes have the same label, there is no way to enforce mandatory MLS protection between them.
  6. Every network server process is assigned a label.
  7. A network server process can only send and receive packets if the packets' labels are identical to the label of the network server process.
  8. Any network server process that handles Internet traffic must be assigned the "Internet Label" (from 4, 7).
  9. All Internet server processes must be assigned the "Internet Label" (from 6, 8).
  10. You can't enforce MLS between Internet servers (from 5, 9).

I suspect our misunderstandings are tied to statement 3) above. On Sidewinder we can associate TCP/IP port numbers with separately labeled domains in the TE system. The only way you can get a similar result in an MLS system is to associate TCP/IP port numbers with MLS confidentiality labels. For example, the B1 system might define a category or compartment label for "Mail" and restrict Port 25 traffic to processes with the Mail label. If so, this changes how statement 3) is phrased, and completely changes the conclusion.

 

The problem is, you can't assign MLS labels that way if you're obsessed with confidentiality. I can think of three reasons immediately as to why not:

 

1) Port numbers aren't confidentiality labels and aren't intended to be. There's no reason to believe that any other system you're communicating with is going to keep traffic separate according to port numbers. This means you can't depend on confidentiality, the prime objective. This leads to the next reason:

 

2) Treating a port number like a label establishes an uncontrolled data channel between processes with different labels, independent of the label enforcement rules. This is because there's nothing in the TCP/IP operating concept to prevent one such process from opening a connection directly or indirectly to a process on the same system associated with a different port number, bypassing the MLS barriers between differently labeled processes.

 

3) You can almost certainly construct a nifty timing channel between two processes that have different labels/ports and share the same TCP/IP stack. So even if the rest of the network behaves, there are ways to circumvent the labels.

 

But the bottom line answer to the question, in the context of *firewalls* and the irrelevance to them of a high assurance obsession with confidentiality, appears to be "Yes, If."

 

IF the vendor puts in the trusted code to associate different port numbers with different MLS process labels, THEN their firewall *can* enforce mandatory MLS protection between Internet servers. It's not clear that a firewall is "misconfigured" if this degree of protection is omitted, but a thorough implementation really should include it. So, if you're buying an MLS based firewall, look for this feature.

 

Peace?

 

Rick.

 

smith at secure computing corporation

This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.


Cross Domain Security

I first encountered the term cross domain security (or cross domain systems, or cross domain solutions, or just CDS) at a workshop in the late 1990s. We were discussing the problem of how to share information with coalition forces even though different countries had different, treaty-based access to US defense information. Even worse, there were coalitions that contained countries who were not on the best of terms (like Japan and Korea).

These days the term has often replaced MLS in the defense community. Some argue that the terminology was changed in hopes that the community could lower the assurance requirements, thus putting information at risk. Time will tell if this is indeed true.


Security Evaluations

Starting in the 1980s, the US government established a program to evaluate the security of computer operating systems. Since then, other governments have established similar programs. In the late 1990s, major governments agreed to recognize a Common Criteria for security product evaluations. Since then, the number of evaluations has skyrocketed.

The following figure summarizes the number of government endorsed security evaluations completed every year since the first was completed in 1984. The different colors represent different evaluation criteria, with "CC" representing today's Common Criteria.

Number of evaluations completed per year since their inception

Starting in 1999, I have occasionally run projects to track down every report I could find of a security product evaluation. My first project led to some preliminary results.

At the Last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations. I also produced a summary page of those results.

In 2006, I ran another survey that yielded the chart above and a paper (PDF) reviewing current trends. This was published in Information Systems Security (v. 16, n. 4) in 2007. The work was done with the help of several undergrads at the University of St. Thomas.

Other Observations

For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with the way the process works in practice; while the process may be somewhat more stringent in the US, the US process simply produces different failure modes.

In the US, cryptographic products are certified under the FIPS 140 process, administered by the National Institute for Standards and Technology (NIST). Evaluation experts are quick to point out that the process and intentions are different between FIPS 140 and the Common Criteria. For the end user, they may appear to yield a similar result: a third-party assessment of a security device or product. In practice, different communities have different requirements. In the US and Canada, there is no substitute for an up-to-date FIPS 140 certification when we look at cryptographic products. Other countries or communities may acknowledge FIPS 140 certifications or they may require Common Criteria certifications.

In any case, security evaluations and certifications simply illustrate a form of due diligence. They do not guarantee the safety of a device or system. In 2009, for example, researchers found that many self-encrypting USB drives contained an identical, fatal security flaw. All of the drives had completed a FIPS 140 evaluation that did not highlight the failure.

This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.

Earlier evaluation notes

For a more recent view of this topic, visit my Security Evaluations page.

Date: 4/20/2005.

In 1999, I tracked down every report I could find of a product that had completed a published, formal security evaluation in accordance with trusted systems evaluation criteria. This led to some preliminary results. At the Last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations.

I collected all of my data in an Excel 97/98 spreadsheet that contained an entry for every evaluation I could find through the end of 1999. At the moment the spreadsheet includes the reported evaluations by the United States (TCSEC/NCSC and Common Criteria), United Kingdom (ITSEC and Common Criteria), Australia, and whatever evaluations were reported from Canada, France, and Germany by the US, UK, and Australian sites. I am not convinced that this is every published evaluation that took place, but it's every report I could find.

Key Observations

  • Product evaluations still take place: There was a big surge of evaluations in 1994 followed by a drop, but the number of evaluations never fell back to 1993 levels, and a few dozen still take place every year.
  • Few products actually bother with evaluation: Even the most limited estimates would suggest that hundreds of security products enter the market every year, yet it appears that no more than a tenth of them ever enter the evaluation process.
  • Flight from the US: Despite the fact that the US pioneered security evaluation and US companies arguably dominate the international market in information technology, most products, including US products, are evaluated overseas. The trend is increasing in this direction. This is probably because evaluations in the US tend to be far more expensive and time consuming.
  • Popularity of middle ratings: The trend in the vast majority of evaluations is to seek EAL 3 or EAL 4 ratings, or their rough equivalent in ITSEC ratings. Very few go for lower or higher ratings. The higher ratings appear to all go to government or military specific products.
  • Network security products (including access control) have come to dominate evaluations: originally, operating systems were the main type of product evaluated. Then the trusted DBMS criteria were published, and several DBMSes were evaluated. However, other products like access control products, data comm security devices, and networking devices like firewalls have come to dominate evaluations in recent years.

For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with the way the process works in practice; while the process may be somewhat more stringent in the US, the US process simply produces different failure modes.

I would be thrilled if anyone interested in a weird research project would use my spreadsheet as a starting point to further analyze the phenomenon of security evaluations. There are probably other facts to be gleaned from the existing data, or other information to be collected. As noted, I stopped collecting data at the end of the last century.

Notes on File Systems

This is a collection of notes intended to introduce the fundamentals of file systems. This section summarizes the challenges of using hard drives and the general objectives of file systems. Subsections introduce simple file systems that are for the most part obsolete today.

Hard Drive Challenges and File System Objectives

Three Challenges

When faced with a large expanse of hard drive space, one has three problems:

  • How to store distinct groups of data in a unified manner - these are usually called files
  • How to find files - this usually leads to directories, or folders as they are called in graphical interfaces
  • How to manage the free space on the hard drive - avoid losing space to fragmentation or errors

Five Objectives

Over the decades, different file systems have produced different solutions to these problems. Usually the differences can be traced back to the following, sometimes mutually exclusive, objectives:

  • Easy to implement
  • FAST, FAST, FAST
  • Direct access versus sequential access of data in files
  • Support for hard drives of a particular maximum size
  • Robustness

As hard drives have grown in capacity, file systems have grown in complexity. Still, the systems' weird features usually trace their origins back to the problems being solved or the particular objectives being pursued.

If we look back into ancient history, when semi-trailer-sized behemoths were being out-evolved by refrigerator-sized creatures in university computer labs, we find many comprehensible file systems.

This work (which includes its subsections) by Dr. Rick Smith is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.

Classic Forth

(circa 1970-85, maybe later)

The Forth programming system was developed in the late 1960s by Chuck Moore. It provided a very powerful, text based mechanism for controlling a computer and writing programs when RAM and hard drive space were extremely tight. Early implementations were routinely restricted to 8KB of RAM. Some early implementations relied exclusively on diskette drives that stored less than half a megabyte of data.

Starting in the 1970s, typical Forth systems treated hard drives as consisting of a linear set of numbered blocks, each 1KB in size. The first block on the drive (block 0) contained the bootstrap program to get Forth started, and a small number of subsequent blocks might also contain binary executable code that was loaded into RAM when Forth started.

Following the blocks of executable code, the remaining hard drive blocks generally contained ASCII text and were referred to by number. If a programmer needed to modify part of a Forth program, he would edit the hard drive block that contained that program, and refer to the block by its number.
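The scheme is simple enough to sketch in a few lines. The Python below treats an ordinary file as the "disk" and is purely illustrative: real Forth systems talked to the drive directly, and the image file name here is invented.

```python
# Sketch of Forth-style block storage on top of an ordinary file.
# Everything is identified by block number, never by name.

BLOCK_SIZE = 1024

def read_block(disk_path, n):
    with open(disk_path, "rb") as disk:
        disk.seek(n * BLOCK_SIZE)
        return disk.read(BLOCK_SIZE)

def write_block(disk_path, n, data):
    # Pad or trim the data to exactly one 1KB block.
    data = data.ljust(BLOCK_SIZE, b" ")[:BLOCK_SIZE]
    with open(disk_path, "r+b") as disk:  # image file must already exist
        disk.seek(n * BLOCK_SIZE)
        disk.write(data)

# Usage: block 0 would hold the bootstrap; source text lives in later
# blocks that programmers remember by number.
# write_block("forth.img", 20, b": SQUARE DUP * ;")
# print(read_block("forth.img", 20)[:16])
```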

Here is an assessment of Forth's file system in the context of the eight concepts noted above:

  • File storage - each file was essentially of a fixed, 1KB size. Other measures had to be used to link multiple blocks together into longer sequences of data.
  • Locating files - files were referred to by block number, and the number had to be remembered by the programmers.
  • Free space management - programmers had to remember what blocks had not been used, or which blocks contained obsolete text that could be erased so the block could be reused.
  • Easy to implement - Yes, yes, yes!
  • Speed - Very fast at the hardware level, since no hard drive searching had to take place
  • Direct vs. sequential - supported both
  • Storage sizes - no built-in limit, but this obviously became impractical as drives increased dramatically in size
  • Robustness - there is little on-disk structure to destroy, so robustness becomes a social issue having to do with the reliability of the people using the system: will they forget, or resign, or otherwise be unavailable? Can others fill in the gaps left by missing people? Will the hard drive get too big for humans to keep track of its contents without a more conventional file system?

 

Boston University's RAX Library

(circa 1973-8)

Boston University (BU) developed its own timesharing system in the 1970s for its IBM 360 and 370 mainframes. The system was based on the batch-oriented Remote Access Computing System (RACS) developed by IBM. McGill University also participated in RAX development, but their version was renamed "McGill University System for Interactive Computing" (MUSIC). Although many of the details are lost in the mists of time, both systems used some text processing tools developed at BU.

At the time, IBM had developed a few timesharing systems, but they were generally expensive and slow. IBM's standard operating systems for the 360 series had a file system; files were referred to as data sets. To put matters as charitably as possible, IBM's data set support was not suited to the dynamic nature of file access in timesharing environments. Frankly, it was a beast. So RAX really needed its own file system.

In accordance with the traditions of IBM data processing, a RAX file looked more-or-less like a deck of punched cards. Files consisted of "records" that carried individual lines of text. Unlike punched cards, trailing blanks were omitted and the individual records (lines) could vary in length. More significantly, files were either read sequentially in a single pass, or written sequentially in a single pass. There wasn't any notion of random access or of modifying the middle of a file without rewriting the whole thing. While RAX did support random access to hard drive files, the function was limited to specially allocated files (standard IBM data sets, actually) and used special operations that were only available to assembly language programmers.

Each file had a unique name and was 'owned' by the user that created it. Users could modify the permissions on files to share them with other users.

The RAX system's timesharing hours were generally limited to daytime and evenings. Overnight, the CPU was rebooted with IBM's OS/360 or OS/VS1 to run batch jobs. Thus, the RAX hard drives had to be compatible with IBM's native file system, such as it was. The RAX library was implemented inside a collection of IBM data sets, each data set serving as a pool of disk blocks to use in library files. These disk blocks were called space sets and contained 512 bytes each.

A complete RAX library file name contained two parts: an 8-character index name and an 8-character file name. While this gave the illusion of there being a hierarchical file system, there was no true 'root' directory. All files not used by the RAX system programming staff resided in the "userlib" index; if no index name was given, RAX searched in userlib. The directory arrangement apparently worked as follows:

There were a small number of IBM data sets that served as library directories (indexes). A file's index name selected the appropriate data set to search for that file's directory entry. These index files were apparently set up using IBM's Indexed Sequential Access Method (ISAM). Such files were specially formatted to use a feature of the IBM disk hardware. Each data block in the file contained a key field along with space for a library file's directory entry. The "key" part contained the file name. The IBM disk hardware could be told to scan the data set until it found the record whose key contained that name, and then it would retrieve the corresponding data. This put the burden of directory searching on the hard drive, and freed up the CPU to work on other tasks.
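As a rough software analogy, the search behaves like the following sketch (my own illustration, not RAX code); the crucial difference is that on RAX the disk controller itself did the comparing:

    def find_directory_entry(index_records, file_name):
        # index_records: (key, entry) pairs read from one index data set.
        # On RAX the disk hardware scanned and compared the key fields,
        # involving the CPU only once a match was found.
        for key, entry in index_records:
            if key == file_name:
                return entry
        return None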

The directory entry contained the usual timestamps (date created, accessed, modified, etc.), ownership information, access permissions, size, and a pointer to the first space set in the file.

Once the system knew the location of the file's first space set, it could retrieve the file's contents sequentially. A space set address was a 32-bit number formatted in 2 fields:

[Figure: RAX space set ID - a 32-bit address split into a lib file number and a space set number within that lib file]

Remember that the library consisted of numerous data sets that served as pools of data blocks. These pools were called lib files, and were numbered sequentially. The data blocks, or space sets, were numbered sequentially inside each lib file.

Files within the RAX library were implemented as a list of linked space sets. The first four bytes of each space set carried the pointer to the next one in the file. The pointer bytes were managed automatically by the system's read and write operations; they were invisible to user programs. The net result was that user programs perceived space sets as containing only 508 bytes, since 4 bytes were used for the link pointer.

A single library file could contain space sets from many different lib files. Since each lib file tended to represent a contiguous set of disk space, file retrieval was most efficient when all space sets came from the same lib file. In practice, however, a file would incorporate space sets from whichever lib file had the most available.

Free space was managed within individual lib files. Each lib file kept a linked list of free space sets. Space sets from deleted files were added back to the free list in the appropriate lib file.
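Here is a short Python sketch of how sequential retrieval follows the links. It is my own illustration: the read_space_set helper is invented, and I assume a zero pointer marks the end of a chain:

    import struct

    SPACE_SET_SIZE = 512
    POINTER_SIZE = 4    # the first four bytes hold the next space set's address

    def read_library_file(read_space_set, first_address):
        """Collect a file's contents by chasing its chain of space sets."""
        data = bytearray()
        address = first_address
        while address != 0:                  # assume 0 ends the chain
            block = read_space_set(address)  # fetch one 512-byte space set
            # Big-endian 32-bit pointer, as on the IBM 360/370.
            address = struct.unpack(">I", block[:POINTER_SIZE])[0]
            data += block[POINTER_SIZE:]     # programs see only these 508 bytes
        return bytes(data)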

Here is a review of the eight issues listed above:

  • File data structure - variable length records that more or less corresponded to lines of text.
  • File block structure - uses a linked list to organize randomly located disk blocks into a sequential file
  • Directories - effectively a single level directory structure with user permissions and timestamps
  • Free space - managed in arbitrary, locally maintained lists. Any block can be in any file, eliminating fragmentation problems.
  • Easy to implement - built atop a rich IBM-oriented I/O mechanism - simple to implement in that environment, but hard to replicate in non-IBM environments.
  • Speed - Directory lookup is very fast. File data access is rarely optimized, though
  • Sequential vs. direct - system really only supports sequential access
  • Storage sizes - can combine data sets on multiple drives to store perhaps 2TB of data, assuming 512 byte space sets. Individual files are probably limited to 4GB by the size field in the directory entry.
  • Robustness - Links are brittle. System crashes could cause link inconsistencies, including the risk of a file's link pointing to a space set on the free list. However, replacing a file did not eliminate the old file and its chain of space sets until the new file's chain had been completely built, so a system crash during such an update would usually leave the previous file intact.

 

Digital's RT-11 File System

(Circa 1975-199?)

The PDP-11 computer, built by Digital Equipment Corporation (DEC) in the late 20th century, was a classic machine of the minicomputer era. At the time of the -11's introduction, DEC really had no idea what to do about software for its machines, and wasn't even sure what was appropriate in the way of operating systems. Over the next several years, DEC (and others) introduced a flotilla of operating systems for the PDP-11. Here are DEC's contributions:

  • Paper Tape Operating System - name says it all.
  • DOS-11 - the Disk Operating System; spent too much time visiting the disk.
  • RT-11 - a real time operating system.
  • RSX-11 - a resource sharing executive, produced in three flavors.
    • RSX11-S - a single-user or stand-alone version
    • RSX11-D - disk-based, multi-user, a resource hog.
    • RSX11-M - multitasking - a prototype for later systems, notably VMS
  • RSTS-11 - a resource sharing, time sharing system.

The RT-11 operating system came in several flavors, but all shared the same, simple file structure. The file system consisted of a single directory configured at a fixed location at the start of the disk volume. The directory could consist of multiple non-contiguous "segments" as defined in the directory's header.

Searching for files was very simple: when the system booted, the boot code would search the RT-11 directory for the name "MONITR.SYS" which indicated the system image to load and run.

An RT-11 directory entry consisted of the following:

  1. 16-bit flag word to indicate if the directory entry is permanent, temporary, or currently unused.
  2. 6-character file name using DEC's Radix-50 character set (A-Z, 0-9, $, and "."), packed three characters per 16-bit word (see the sketch below).
  3. 3-character extension suffix, also in Radix-50
  4. File creation date
  5. File's starting block number on the hard drive
  6. File's length in blocks
  7. Additional data, as configured when the directory was created

When creating a directory, the operator could configure it to contain extra space for application-specific file data. It was the application's responsibility to update the additional data in a file's entry.
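To show how those 6.3 names were stored, here is a small Python sketch of DEC's Radix-50 packing: three characters per 16-bit word, so a 6-character name plus a 3-character extension fit in three words. The character table follows the usual PDP-11 convention (position 29 was officially unused; it is shown here as "%"), and the code is illustrative rather than RT-11 source:

    RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"   # 40 characters

    def rad50_encode(chars):
        """Pack three characters into one 16-bit word."""
        a, b, c = (RAD50.index(ch) for ch in chars.upper().ljust(3))
        return (a * 40 + b) * 40 + c

    def rad50_decode(word):
        """Unpack one 16-bit word into its three characters."""
        first, rest = divmod(word, 40 * 40)
        second, third = divmod(rest, 40)
        return RAD50[first] + RAD50[second] + RAD50[third]

For example, the file name "MONITR" packs into the two words rad50_encode("MON") and rad50_encode("ITR"), and the extension "SYS" into a third.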

To create a new file, even temporarily, RT-11 had to allocate space by updating the directory. If the program specified a particular file size, RT-11 would try to fit that size. If the program did not specify a file size, RT-11 would allocate half of the largest block of free space available on the hard drive.

Once RT-11 had determined the amount of space for the new file, and its intended location on the hard drive, it would usually allocate the hard drive space by creating two adjacent directory entries. The first entry was for the new file and contained the number of blocks allocated to it. The second entry covered any free space left over after allocating the desired amount of space for the file.

When first opened, a file was generally marked as "tentative" in the directory, and updated to "permanent" when and if the creating program actually closed the file.
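The allocation policy is easy to express in code. Below is a minimal sketch, assuming an in-memory directory of (status, start, length) entries; the first-fit search for explicitly sized requests is my guess at the policy, and error handling is omitted:

    def allocate(directory, requested=None):
        """Create a tentative file entry and return its starting block."""
        empties = [i for i, (s, _, n) in enumerate(directory) if s == "empty"]
        if requested is None:
            # No size given: RT-11 took half of the largest free region.
            i = max(empties, key=lambda j: directory[j][2])
            size = directory[i][2] // 2
        else:
            # Size given: pick a free region big enough to hold it.
            i = next(j for j in empties if directory[j][2] >= requested)
            size = requested
        _, start, length = directory[i]
        # Split the empty entry into two adjacent entries: the new
        # tentative file, then whatever free space is left over.
        directory[i:i + 1] = [("tentative", start, size),
                              ("empty", start + size, length - size)]
        return start

Note how free space management needs no separate data structure: the leftover "empty" entry is the free list.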

Here is an assessment of RT-11's file system in the context of the eight concepts (Challenges and Objectives) noted earlier:

  • File storage - each file consists of a contiguous group of blocks on the hard drive.
  • Locating files - files were located via the directory, which resided in a fixed location at the beginning of the hard drive. The directory consisted of a single array of entries, each with a 6.3 character file name formatted in DEC's Radix-50 format. A file's directory entry indicated the address of the first block of the file.
  • Managing free space - unallocated disk space is represented by "empty" directory entries that indicate the starting disk address and length of the empty space.
  • Easy to implement - Yes. Consists of a single directory data structure.
  • Speed - Efficient only on very small hard drives, since finding the last file in the directory requires reading every block in the directory.
  • Direct vs. sequential - supported both. Direct access simply required the offset to the start of the file and a seek limit based on the file's length.
  • Storage sizes - limited to the offset sizes in the directory entries, which were generally 16-bit values.
  • Robustness - since files were made up of sequential blocks, accidental deletions were easy to undo. A trained developer could recover an entire directory with a lot of work by looking at block contents to identify the start and ending blocks for individual files.

The RT-11 file system's simplicity led to its use in various other applications. For example, the microcode disk used by the original VAX hardware carried an RT-11 file system, and the LOCK trusted computing base used the RT-11 file system for accessing files on its embedded SIDEARM processor.

 

Curriculum and Courseware Certification

These articles are for instructors or curriculum developers who want to have their courseware certified by the National Security Agency's Information Assurance Courseware Evaluation (IACE) program. These articles focus on certifying compliance with NSTISSI 4011, the national training standard for information security (INFOSEC) professionals.

NOTE: This information is based on the previous CNSS standards. The government has since replaced those standards and changed the process, rendering this discussion obsolete.

I recommend that instructors adopt Elementary Information Security for one or more courses in their curriculum and take advantage of the textbook's NSTISSI 4011 mapping information. Existing courses may already cover many of the training standard's required topics, and the mapping information makes it easy to develop course notes and to identify assigned readings that fill the remaining gaps.

NSTISSI 4011 was issued in 1994. This predates the widespread use of technologies like firewalls and SSL. To ensure up-to-date coverage of essential topics, the book also incorporates information from curriculum recommendations published jointly by the ACM and the IEEE Computer Society. Specifically, the book covers the topics and core learning outcomes listed in the Information Security and Assurance knowledge area of the Information Technology 2008 curriculum recommendations.

To best ensure up-to-date coverage, the book also reflects a review of recent malware implementations and techniques, and of web server vulnerabilities. Additional coverage of cryptographic techniques serves as a sort of update to a different, aging book I wrote, Internet Cryptography.

Here are some resources to support the mapping effort:

The following articles further outline the certification process.

Earning IACE Certification Using a Certified Textbook

The U.S. government certifies courses of study in information security under the Information Assurance Courseware Evaluation (IACE) program. If a course is certified under one of the approved standards, then students are eligible to receive a certificate carrying the seal of the U.S. Committee on National Security Systems (CNSS) to indicate they have completed an approved course of study.

My new textbook, Elementary Information Security, has just earned certification that it conforms fully to the CNSS national training standard for information security professionals (NSTISSI 4011).

It can be challenging for an institution to get its course of study certified. Many of the topics are obvious ones for information security training, but others are relatively obscure. Several topics, like TEMPEST, COMSEC, and transmission security, have lurked in the domain of classified documents for decades.

This new text provides a comprehensive and widely available source for all topics required for NSTISSI 4011 certification. An institution can use the textbook along with the details of its NSTISSI 4011 topic mapping to establish its own certified course of study.

IACE Certification Explained

The U.S. National Security Agency (NSA) operates a program to evaluate programs of study for compliance against published U.S. government training standards. Institutions may apply to have their courseware certified. Numerous two- and four-year colleges, universities, and private training academies have earned this certification under the NSA's Information Assurance Courseware Evaluation (IACE) program. Certified courses of study may issue certificates to their students that carry the seal of the U.S. Committee on National Security Systems (CNSS) and indicate they have completed an approved course of study.

In 2012, the IACE program certified a textbook for the first time: Elementary Information Security was certified to conform fully to the CNSS training standard NSTISSI 4011. Institutions may use this textbook to efficiently develop a course of study eligible for government certification under this same standard. Consult the posted topic mapping to NSTISSI 4011 for further details.

The Curriculum or Courseware Mapping Process

IACE uses a mapping process to show that a particular curriculum or set of courseware complies with a standard. While IACE will certify compliance with several different standards, this discussion focuses on NSTISSI 4011. The mapping process typically goes through the following steps:

  1. Contact IACE to apply for certification. The IACE will establish login credentials on the mapping web site.
  2. Download a copy of the training standard and the spreadsheets provided to assist in mapping.
  3. Identify the courses, current or planned, that will cover the required topics. Consult Elementary Information Security for any topics not covered in existing courses.
  4. Each course should be broken down into separate topics or lectures covered in the course. Each topic required by the training standard must map to at least one of these course topics/lectures.
  5. Fill out a mapping spreadsheet to indicate which courses, and which topics/lectures within each course, cover each required topic. Consult the Elementary Information Security topic mapping for an example.
  6. If any required topics are not currently covered by a course, either establish a course to cover those topics, or modify lectures in existing courses to cover those topics. Course lecture notes or presentation slides must be available on-line in order to be part of the course mapping.
  7. When all required topics are covered by on-line course notes, the actual mapping may begin. Log in to the IACE mapping web site and start by defining all courses used in the mapping. Each course must be broken into individual topics or lectures, and there should be an obvious relationship between those topics or lectures and the on-line course material (i.e. files or pages containing course notes or lecture slides).
  8. Once all courses are entered, including course topics or lectures, go to “Map Standard” and select NSTISSI 4011. Follow the instructions provided by the site. For each topic, be sure to provide at least one reference to a course topic or lecture. 
  9. Once the standard is 100% mapped, double-check the mapping for accuracy and then submit it.

Both the public IACE web site and the mapping web site have help and instructions to guide and simplify the process. However, it will still require several hours of work to enter all of the details.

The mapping instructions sometimes refer to “Entry,” “Intermediate,” and “Advanced” coverage of topics. These do not apply to NSTISSI 4011. To comply with this standard, the curriculum must make the students aware of all topics covered by the standard.

Mapping Schedule

Note that the mapping process is not available at certain times of the year. At present, it is not available between January 15 and March 1. That is the time period during which IACE evaluations take place. Here is the current calendar for IACE certification:

  • January 15: Deadline for the IACE certification cycle. All mappings must be entered and submitted by this time.
  • March 1: Evaluation of submitted mappings is generally complete by this date, and recipients are notified. This is not an official deadline and may vary with circumstances.
  • mid-June: Official certificates are presented to successful institutions at the annual Colloquium for Information Systems Security Education (CISSE).
  • Five years later: Certification expires

Disclaimer

While Elementary Information Security should make courseware mapping clearer and more accessible for institutions, the process described here is not guaranteed to work. Moreover, the process as described could be changed by IACE without notice. CNSS only certifies that the textbook's contents fully conform to NSTISSI 4011.

Elementary Information Security Topic Mapping for NSTISSI 4011

Elementary Information Security has been certified to conform fully to the Committee on National Security Systems' national training standard for information security professionals (NSTISSI 4011). To do this, I had to map each topic required by the standard to the information as it appears in the textbook. Instructors who map their courses to the standard must map the topics to lectures, readings, or other materials used in those courses.

I have exported the textbook's mapping to an Excel spreadsheet file. Curriculum developers may use this information to develop a course of study that complies with NSTISSI 4011 and is eligible for certification. I'm describing the courseware mapping process in another post. Read that post first.

The topic mapping for Elementary Information Security relates a single “course,” the textbook itself, to the required topics in the NSTISSI 4011 standard. The mapping is made available in a spreadsheet. The first column contains row numbers. The next three columns, Subsection, Element, and Topic, contain topic identifiers from the standard. The Chapters column lists the chapters by number that cover a particular topic. The column may also contain the letter “B” to point to Appendix B.

The Notes column points directly to each topic by section number and page number; these notes were placed in the "Additional Comments" field of the mapping. The extra detail was needed because the textbook is organized for readability and a natural progression of topics rather than to mirror the standard, so the comments ensure that appropriate information about every topic appears in the mapping.

Most mappings should not require additional comments, or at least not this level of detail. If the course and topic mappings link to files in which the material is easily found, then additional comments shouldn't be required.

The phrase “summary justification” is used when an earlier or higher-level topic covers several subtopics as well. The mapping web site allows summary justification for selected topics. When used, the mapping only needs to specify the earlier or higher-level topic. The subsequent, related topics are then filled in automatically and locked from editing.

The Elementary Information Security mapping may be downloaded in spreadsheet form, and used as a starting point for mapping the institution's curriculum. However, this particular spreadsheet maps all topics to a single “course,” the Elementary Information Security textbook. Mapping is by chapter numbers in the Chapters column.

In a curriculum containing several courses, the spreadsheet should be modified to allow mapping of multiple courses and the topics they contain. One approach is to include a column with two-part entries: one part identifies a course and the other selects a particular topic within that course. Another approach is to provide a separate column for each course; if a course covers a particular required topic, then that course's column identifies the corresponding course topic.

The First Textbook Certified by the NSA

I received an email this morning announcing that Elementary Information Security has been certified by the NSA's Information Assurance Courseware Evaluation program as covering all topics required for training information security professionals. Here is the certification letter.

This is the first time they have certified textbooks. In the past they've only certified training programs and degree programs.

The evaluation is based on the national training standard NSTISSI 4011. The book also covers the core learning outcomes for Information Assurance and Security listed in the Information Technology 2008 Curriculum Recommendations from the ACM and IEEE Computer Society.

The textbook is currently available from the publisher, Jones and Bartlett, and I notice that they offer a PDF-ish version as well as the 890-page hardcopy edition.

It was a bit of a challenge trying to fit all of that information into a single textbook, and to target it at college sophomores and two-year college programs. The book contains a lot of tutorial material to try to bridge the gap between the knowledge of introductory students and the required topics.

 


Textbook Materials

Draft supporting materials for the textbook

Visit the textbook site eisec.us for the latest supporting materials and study guides.

Draft materials are provided below:

About the Book

Jones & Bartlett Learning, November, 2011.

The only textbook verified by the US Government to conform fully to the Committee on National Security Systems' national training standard for information security professionals (NSTISSI 4011).

ISBN-10: 0763761419

This comprehensive, accessible information security text is ideal for a one-term undergraduate college course. It integrates risk assessment and security policy throughout, since security systems work best at achieving the goals they were designed to meet, and security policy ties real-world goals to security mechanisms. Early chapters discuss individual computers and small LANs, while later chapters deal with distributed site security and the Internet. Cryptographic topics follow the same progression, starting on a single computer and evolving to Internet-level connectivity. Mathematical concepts are defined throughout, and tutorials with mathematical tools are provided to ensure students grasp the information at hand.

See below for sample contents. Sample chapters are also available from the publisher's site.

Table of Contents

Preface

1: Security From the Ground Up

1.1. The Security Landscape

1.2. Process Example: Bob’s Computer

1.4. Identifying Risks

1.5. Prioritizing Risks

1.6. Ethical Issues in Security Analysis

1.7. Security Example: Aircraft Hijacking

1.8. Resources

2: Controlling a Computer

2.1. Computers and Programs

2.2. Programs and Processes

2.3. Buffer Overflow and The Morris Worm

2.4. Access Control Strategies

2.5. Keeping Processes Separate

2.6. Security Policy and Implementation

2.7. Security Plan: Process Protection

2.8. Resources

3: Controlling Files

3.1. The File System

3.2. Executable Files

3.3. Sharing and Protecting Files

3.4. Security Controls for Files

3.5. File Security Controls

3.6. Patching Security Flaws

3.7. Process Example: The Horse

3.8. Chapter Resources

4: Sharing Files

4.1. Controlled Sharing

4.2. File Permission Flags

4.3. Access Control Lists

4.4. Microsoft Windows ACLs

4.5. A Different Trojan Horse

4.6. Phase Five: Monitoring The System

4.7. Chapter Resources

5: Storing Files

5.1. Phase Six: Recovery

5.2. Digital Evidence

5.3. Storing Data on a Hard Drive

5.4. FAT: An Example File System

5.5. Modern File Systems

5.6. Input/Output and File System Software

5.7. Resources

6: Authenticating People

6.1. Unlocking a Door

6.2. Evolution of Password Systems

6.3. Password Guessing

6.4. Attacks on Password Bias

6.5. Authentication Tokens

6.6. Biometric Authentication

6.7. Authentication Policy

6.8. Resources

7: Encrypting Files

7.1. Protecting the Accessible

7.2. Encryption and Cryptanalysis

7.3. Computer-Based Encryption

7.4. File Encryption Software

7.5. Digital Rights Management

7.6. Resources

8: Secret and Public Keys

8.1. The Key Management Challenge

8.2. The Reused Key Stream Problem

8.3. Public-key Cryptography

8.4. RSA: Rivest-Shamir-Adleman

8.5. Data Integrity and Digital Signatures

8.6. Publishing Public Keys

8.7. Resources

9: Encrypting Volumes

9.1. Securing a Volume

9.2. Block Ciphers

9.3. Block Cipher Modes

9.4. Encrypting a Volume

9.5. Encryption in Hardware

9.6. Managing Encryption Keys

9.7. Resources

10: Connecting Computers

10.1. The Network Security Problem

10.2. Transmitting Information

10.3. Putting Bits on a Wire

10.4. Ethernet: A Modern LAN

10.5. The Protocol Stack

10.6. Network Applications

10.7. Resources

11: Networks of Networks

11.1. Building Information Networks

11.2. Combining Computer Networks

11.3. Talking Between Hosts

11.4. Internet Addresses in Practice

11.5. Network Inspection Tools

11.6. Resources

12: End-to-End Networking

12.1. “Smart” Versus “Dumb” Networks

12.2. Internet Transport Protocols

12.3. Names on the Internet

12.4. Internet Gateways and Firewalls

12.5. Long Distance Networking

12.6. Resources

13: Enterprise Computing

13.1. The Challenge of Community

13.2. Management Processes

13.3. Enterprise Issues

13.4. Enterprise Network Authentication

13.5. Contingency Planning

13.6. Resources

14: Network Encryption

14.1. Communications Security

14.2. Crypto Keys on a Network

14.3. Crypto Atop the Protocol Stack

14.4. Network Layer Cryptography

14.5. Link Encryption on 802.11 Wireless

14.6. Encryption Policy Summary

14.7. Resources

15: Internet Services and Email

15.1. Internet Services

15.2. Internet Email

15.3. Email Security Problems

15.4. Enterprise Firewalls

15.5. Enterprise Point of Presence

15.6. Resources

16: The World Wide Web

16.1. Hypertext Fundamentals

16.2. Basic Web Security

16.3. Dynamic Web Sites

16.4. Content Management Systems

16.5. Ensuring Web Security Properties

16.6. Resources

17: Governments and Secrecy

17.1. Secrecy In Government

17.2. Classifications and Clearances

17.3. National Policy Issues

17.4. Communications Security

17.5. Data Protection

17.6. Trustworthy Systems

Preface

The goal of this textbook is to introduce college students to information security. Security often involves social and organizational skills as well as technical understanding. To solve practical security problems, we must balance real-world risks and rewards against the cost and bother of available security techniques. The text uses continuous process improvement to integrate these elements.

Security is a broad field. Some students may excel in the technical aspects, while others may shine in the more social or process-oriented aspects. Many successful students fall between these poles. The text offers opportunities for all types of students to excel.

Introducing Technology

If we want a solid understanding of security technology, we must look closely at the strengths and weaknesses of underlying information technology itself. This requires a background in computer architecture, operating systems, and computer networking. It’s hard for a typical college student to achieve breadth and depth in these subjects and still have time to really study security.

Instead of leaving a gap in students’ understanding, this book provides introductions to essential technical topics. Chapter 2 explains the basics of computer operation and instruction execution. This prepares students for a description of process separation and protection, which illustrates the essential role of operating systems in enforcing security.

Chapter 5 introduces file systems and input/output in modern operating systems. This lays a foundation for forensic file system analysis. It also shows students how a modern operating system organizes a complex service. This sets the stage for Chapter 10’s introduction to computer networking and protocol software.

Introducing Continuous Process Improvement

The text organizes security problem-solving around a six-phase security process. Chapter 1 introduces the process as a way of structuring information about a security event, and presents a simple approach to risk analysis. Chapter 2 introduces security policies as a way to state security objectives, and security controls as a way to implement a policy. Subsequent chapters introduce system monitoring and incident response as ways to assess system security and improve it.

Each step in the process builds on earlier steps. Each step also provides a chance to assess how well our work addresses our security needs. This is the essence of continuous process improvement.

In order to give students an accurate view of process improvement, the text introduces document structures that provide cross references between different steps of the process. We use elements of each earlier phase to construct information in the following phase, and we often provide a link back to earlier data to ensure complete coverage. While this may seem like nit-picking in some cases, it builds mastery of essential forms of communication in the technical and professional world.

Intended Audience

When used as a textbook, the material is intended for lower-division undergraduates or for students in a two-year community college program. Students should have completed high school mathematics, and typically will also have taken an introductory computing or programming course.

Instructors may want to use this book for either a one- or two-semester course. A one-semester course would usually cover one chapter a week; the instructor may want to combine a couple of the earlier chapters or skip the final chapter. Some institutions may find it more effective to teach the material over a full year, giving the students more time to work with the concepts and to cover all topics in depth.

Following the style of my earlier books, this text focuses on diagrams and practical explanations to present fundamental concepts. This makes the material clearer to all readers and makes it more accessible to the math-phobic reader. Many concepts, particularly in cryptography, can be clearly presented in either a diagram or in mathematical notation. This text uses both, with a bias towards diagrams.

Many fundamental computing concepts are wildly abstract. This is also true in security, where we sometimes react to illusions perceived as risks. To combat this, the text incorporates a series of concrete examples played out by characters with names familiar to those who read cryptographic papers: Bob, Alice, and Eve. They are joined by additional classmates named Tina and Kevin, since different people have different security concerns.

Curriculum Standards

The material in this text fulfills curriculum requirements published by the US government and the Association for Computing Machinery (ACM). In particular, the text covers all required topics for training information systems security professionals under the Information Assurance Courseware Evaluation Program (NSTISSI #4011) established by the US National Security Agency (NSA). The text also provides substantial coverage of the required topics for training senior system managers (CNSSI #4012) and for system administrators (CNSSI #4013).

The text also covers the core learning outcomes for information security education published in the ACM's "IT 2008" curricular recommendations for Information Technology education. As a reviewer of and contributor to the published recommendations, the author is familiar with their guidelines.

Students who are interested in becoming a Certified Information System Security Professional (CISSP) may use this book as a study aid for the examination. All key areas of the CISSP Common Body of Knowledge are covered in this text. Certification requires four or five years of professional experience in addition to passing the exam.

Teaching Security

Information security is a fascinating but abstract subject. This text introduces students to real-world security problem solving, and incorporates security technology into the problem-solving process. There are lots of “how to” books that explain how to harden a computer system.

Many readers and most students need more than a "how to" book. They need to decide which security measures are most important, or how to trade off between alternatives. Such decisions often depend on what assets are at stake and what risks the computer's owner is willing to take.

The practitioner’s most important task is the planning and analysis that helps choose and justify a set of security measures. In my own experience, and in that of my most capable colleagues, security systems succeed when we anticipate the principal risks and design the system to address them. In fact, once we have identified what we want for security (our policy requirements), we don’t need security gurus to figure out what we get (our implementation). We just need capable programmers, testers, and administrators.

Teaching Aids

As each chapter unfolds, we encounter certain key terms indicated in bold italics. These highlight essential terms that students will meet again in the information security community. Successful students will recognize these terms.

The Resources section at the end of each chapter lists the key terms and provides review and exercises. The review questions help students confirm that they have absorbed the essential concepts. Some instructors may want to use these as recitation or quiz questions. The problems and exercises give students the opportunity to solve problems based on techniques presented in the text.

Additional Reading Materials

Here is an incomplete collection of reading materials associated with the textbook. Visit the Elementary Infosec site for a complete set of reading and study materials.

Locating copies of on-line articles and papers

Some links lead to on-line articles published by professional societies like the ACM (Association for Computing Machinery) or IEEE (Institute of Electrical and Electronics Engineers). Serious computing experts often join one or both of these and sign up for electronic library subscriptions. Many college and university libraries also provide free access to these materials for students and faculty.

1: Security From the Ground Up

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.

Security Overview - Recent Events

The following blogs provide readable reports and commentary on information security.

Bruce Schneier (of Schneier on Security) coined the term security theater in his book Beyond Fear (New York: Copernicus Books, 2003).

Continuous Improvement

ASQ (the American Society for Quality, which has now gone international) provides tutorials on basic quality concepts including continuous improvement.

In Japan, the term kaizen embodies the continuous improvement process.

Systems Engineering

During World War II, military enterprises poured vast resources into various technical projects, notably radar, codebreaking, and the atomic bomb. Those successes encouraged the peacetime military to pursue other large-scale technical projects. A typical project would start with a very large budget and a vague completion date a few years in the future. But in practice, many of these projects vastly exceeded both the budget and the predicted timetable.

A handful of projects, notably the Polaris Missile project, achieved success while adhering closely to their initial budget and schedule estimates. Pressure on defense budgets led the US DOD to identify features of the successful projects so they might be applied to future work. This was the genesis of systems engineering. The DOD's Defense Acquisition University has produced an introduction to systems engineering (PDF format) in the defense community.

The International Council on Systems Engineering (INCOSE) provides a more general view of systems engineering. NASA also provides on-line training materials on their Space Systems Engineering site.

Basic Principles of Information Security

Chapter 1 introduces the first three of eight basic principles of information security.

  1. Continuous Improvement
  2. Least Privilege
  3. Defense in Depth

Different authorities present different lists of principles. Standards bodies, including NIST in the US, tend to produce very general lists of principles, reflecting notions such as "be safe," "keep records," and other generalizations (for example, see NIST's SP 800-14: "Generally Accepted Principles and Practices for Securing Information Technology Systems"). These principles represent basic truths about security, but few are stated in a way that helps one make security decisions.

Saltzer and Schroeder produced a now-classic list based on experience with the Multics time sharing system in the 1970s: "The Protection of Information in Computer Systems," Proceedings of the IEEE 63, 9 (September, 1975). Some of these principles reflect features of the Multics system while others reflect some well-known shortcomings with most systems of that time. Copies exist online at Saltzer's own web site and at the University of Virginia.

There is also a Cryptosmith blog post that compares the textbook's list of principles with those in Saltzer and Schroeder.

Microsoft's Technet archives also contain a well-written summary of principles titled "10 Immutable Laws of Security," followed by "10 Immutable Laws of Security Administration."

High-Level Security Analysis

A high-level security analysis provides a brief summary of a security situation at a given point of time. The next section provides a checklist for writing a high-level security analysis.

Risk Assessment

The risk assessment processes noted in the textbook are all available online:

Ethical Issues

CERT has published the policy it follows when disclosing vulnerabilities. Changes to Microsoft's disclosure policy reliably draw news coverage. Microsoft publishes its policy in MS Word "docx" format.

Several news sources including the BBC reported on Michael Lynn's canceled talk on Cisco router vulnerabilities at the 2005 Black Hat conference.

Aircraft Hijacking

The fundamental reference for everything related to the events of September 11, 2001, is the Final Report of the National Commission on Terrorist Attacks Upon the United States, a.k.a. "The 9/11 Commission Report," published in 2004.

Following 9/11, the BBC published a brief history of airline hijackings. The 2003 Centennial of Flight web site provides a more general summary of violent incidents in aviation security. In 2007, New York Magazine published a more detailed hijacking time-line in conjunction with breaking news on the D. B. Cooper hijacking case. The US FBI web site contains a lot of information about the D. B. Cooper case, including a 2007 update.

Here is a 30-second video clip of a traditional aircraft hijacking. The History.com web site provides several videos of the 9/11 attack from different vantage points on its Witness to 9/11 web page.

Checklist for a High-Level Security Analysis

The textbook presents a high-level security analysis as a short writing exercise that summarizes a security situation. The analysis generally describes a situation at a particular point in time. For example, the 9/11 discussion in the textbook describes air travel security before 9/11. The analysis describes the six phases of the security process:

  • Assets
  • Risks
  • Policy
  • Implementation
  • Monitoring
  • Recovery

Here is a checklist of the basic properties of a high-level analysis:

  1. Is the analysis at least seven paragraphs long?
  2. Is the analysis too long? A typical analysis should be no more than 14 paragraphs long unless it addresses a really unusual subject area that requires extra explanation.
  3. Do the initial paragraphs introduce the scenario?
  4. Are there six headings following the introductory paragraph, one per phase?
  5. Is there at least one paragraph discussing each phase?

Note that a complete security plan will also cover the six phases, but it is not limited to this length. A complete plan covers each phase thoroughly.

2: Controlling a Computer

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.

Computer Internals

Here is a 26-minute video of middle school students visiting a "walk through" computer to learn the basics of computer operation, taken from the 1990 PBS TV series "Newton's Apple." Although the technology is over 20 years old, the fundamental components remain the same, except for speed, size, and capacity.

There are, of course, countless images and videos available through on-line searching that show specific elements of computer systems.

Binary Numbers and Hexadecimal Notations

Students who have not yet studied these topics in detail will want to visit web sites that provide an introduction to binary and hex. YouTube user Ryan of Aberdeen has created a video tutorial (9 minutes). There are also written tutorials:
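As a quick taste of the notation, here is a tiny Python illustration (my own, purely for flavor) of the same value rendered in decimal, binary, and hex:

    value = 0b101001      # binary literal for decimal 41
    print(value)          # 41
    print(bin(value))     # 0b101001
    print(hex(value))     # 0x29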

The Morris worm

A faculty member at NC State University maintains a site that provides an overview of the Morris worm.

Eugene Spafford (aka spaf) wrote a report describing the worm, its operations, and its effects (PDF), shortly after the incident.

Eichin and Rochlis of MIT published a report of the worm incident from the MIT perspective (PDF). This was presented at the IEEE Symposium on Security and Privacy the following year.

In 1990, Peter Denning published a book that brought together several papers on the Morris worm and other security issues emerging at that time, titled Computers Under Attack: Intruders, Worms and Viruses.

Spafford also maintains an archive of worm-related information at Purdue University.

Open Design

Auguste Kerckhoffs' original paper on cryptographic system design recommended that cryptographic systems be published and that secrecy should reside entirely in a secret key. The paper was published in French in 1883. Portions of Kerckhoffs' paper are available on-line including partial English translations.

Claude Shannon's "Communication Theory of Secrecy Systems" (Bell System Technical Journal, vol. 28, no. 4, 1949) contains his assumption "the enemy knows the system being used" (italics his). Bell Labs provides general information about Shannon's work and publications.

Eric Raymond published a famous essay on the benefits of open design and of sharing program source code in general, called "The Cathedral and The Bazaar." This essay has inspired many members of the Open Source community.

Program and Process Security

Butler Lampson introduced the access matrix in his 1971 paper, "Protection." Lampson has posted a copy of his paper on-line in several formats.


Identifying Motherboard Features

This is, unfortunately, a lot harder than it seems. As RAM has grown smaller and I/O grown more complex, motherboard components have changed dramatically in size and appearance. Here are suggestions on identifying key features in older and newer motherboards.

  • CPU - The CPU is generally covered by a large, louvered object called a heat sink, and is often held in place by a lever or catch. The heat sink is often glued to the top of the CPU. In some cases you must remove the heat sink to remove the CPU, while in other cases they are removed as a single unit. Higher performance CPUs may have fans or liquid cooling systems attached to the heat sink.
  • RAM - these have resided on daughterboards for many years now. The daughterboards usually - but not always - plug directly into the motherboard. Each daughterboard will typically be long and thin to handle a row of individual RAM circuits. As RAM circuits have become more dense, newer daughterboards carry fewer and fewer of them.
  • I/O circuits - the latest CPUs have special-purpose I/O circuits that are as large and power-hungry as the CPUs themselves. These may often carry their own heat sinks. Unlike the CPUs, though, a motherboard is often built around the I/O circuits. Thus, the I/O circuits are built in to the motherboard, while the CPU is added by the computer builder. In Figure 2.1 of the textbook (page 44), the large, unmarked metal object in the center of the motherboard is the heat sink for an I/O circuit.
  • I/O connectors - these are the easiest items to identify. They always mate with standardized cables to power supplies or to peripheral devices. There are often PCI connectors that accept daughterboards with specialized I/O circuits. There is often an additional high-performance connector for attaching a graphics circuit.
  • BIOS ROM - this is hard to spot on the newest devices because ROM circuits have grown so tiny. On old motherboards, the BIOS appears as a distinct row of ROM chips. On the latest, there may be only one or two chips interspersed among others on the board.

The most reliable way to identify a motherboard's contents is to locate a copy of its installation manual. These are usually posted on the web by the motherboard's manufacturer. Most boards clearly include the manufacturer's name and the board's model number.

If the manufacturer and model number aren't obvious, it may be possible to identify the motherboard using Google Images. Enter the word "motherboard" as a search term along with other textual items on the board. Compare the images displayed with the color and layout of the motherboard in question. Keep in mind everything should match when you find the correct board. Missing or misplaced features indicate that the boards don't match. A popular motherboard may appear many times, but most images will lead to pages that indicate the manufacturer and model. In some cases, the image may lead directly to the manufacturer's own pages.

3: Controlling Files

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.

Computer Viruses

Security consultant Fred Cohen performed much of the pioneering analysis of computer viruses. His web site contains several useful articles on virus technology. Even though some of the material is 30 years old, the basic technical truths remain unchanged.

Some anti-virus vendors provide summaries of current anti-virus and malware activities:

Modern Malware

Here is a list of malware briefly described in the textbook, plus links to in-depth reports on each one. Check recent news: security experts occasionally make progress in eradicating one or another of these, but the botnets sometimes recover. Many of these are PDFs.

Videos: Ralph Langner, a German expert in control systems security, gave a TED talk describing Stuxnet (~11 minutes). Bruce Dang of Microsoft also gave a detailed presentation about Stuxnet (75 minutes) at a conference.

Access Rights and Capabilities

Butler Lampson introduced the access matrix in his 1971 paper, "Protection" (PDF). Lampson has posted a copy on-line in several formats.

Although most modern systems use resource-oriented permissions to control access rights, there are a few cases that use capabilities, which associate rights with active entities like programs and users. Jack Dennis and Earl Van Horn of MIT introduced the notion of capabilities in their 1965 report "Programming Semantics for Multiprogrammed Computers," which was published in Communications of the ACM in 1966.

Marc Stiegler has posted an interesting introduction to capability-based security that ties it to other important security concepts. The EROS OS project has also posted an essay that explains capability-based security. For thorough coverage of capability-based architectures circa 1984, see Henry Levy's book Capability-Based Computer Systems, which he has posted on-line.

Microsoft has posted an article that describes access control on Windows files and on the Windows "registry," a special database of system-specific information.

State Diagrams

Electrical engineers have relied on state diagrams for decades to help design complicated circuits. The technique is also popular with some software engineers, though it rarely finds its way into courses on computer programming. Any properly-constructed state diagram may be translated into a state table that provides the same information in a tabular form. Tony Kuphaldt's free on-line textbook Lessons in Electric Circuits explains state machines in the context of electric circuits in Volume IV, Chapter 11: Sequential Circuits-Counters.

Upper-level computer science students may encounter state diagrams in a course on automata theory in which they use such diagrams to represent deterministic finite automata. Such mechanisms can handle the simplest type of formal language, a regular grammar. Most people encounter regular grammars as regular expressions, an arcane syntax used to match text patterns when performing complicated search-and-replace operations in text editors.
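For readers who have not met them, here is a tiny Python example (my own, purely illustrative) of a regular expression doing the kind of pattern matching that a deterministic finite automaton can perform:

    import re

    # A regular grammar for hex constants like 0xF000; the regex engine
    # compiles the pattern into (in effect) a finite automaton.
    pattern = re.compile(r"0x[0-9A-Fa-f]+")

    text = "The BIOS lives at 0xF000, and the reset vector at 0xFFF0."
    print(pattern.findall(text))    # prints ['0xF000', '0xFFF0']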

Students introduced to modern structured design techniques using the Unified Modeling Language (UML) often use state machine diagrams or state charts (a diagram in table form). On-line tutorials about UML state machines appear at Kennesaw State University and the Agile Modeling web site.

Tracking and Managing Security Flaws

In the US, there are several organizations that track and report on information security vulnerabilities. Many of these organizations provide email alerts and other data feeds to keep subscribers up to date on emerging vulnerabilities. Some organizations provide their services to particular communities (e.g. government or military organizations, or customers of a vendor's products) while others provide reports to the public at large.

The SANS Internet Storm Center also provides a variety of on-line news feeds and reports, as well as a continuously-updated "Infocon Status" to indicate unusual changes in the degree of malicious activity on the Internet. Click on the image below to visit the Internet Storm Center for further information on current vulnerabilities and malicious Internet activity.

[Image: Internet Storm Center Infocon Status]

In 2000, Arbaugh, Fithen, and McHugh wrote an article describing a life-cycle model of information security vulnerabilities titled "Windows of Vulnerability: A Case Study Analysis", (IEEE Computer 33, December 2000). The authors have posted a copy of the article online (PDF).

The Trojan Horse

The library at Stanford posted a brief history of the Trojan War. Although Homer's Iliad tells the story of the Trojan War, it says very little about the Greek trickery that led to the city's fall. The story is more the province of Virgil's Aeneid.

Trojan horse by Henri Motte, Paris, 1873

In the 1970s, Guy Steele at MIT started collecting bits of jargon used in the computer community. This yielded "The Jargon File," which Steele maintained for several years until it was passed on to Eric Raymond. According to the Jargon File, the term Trojan horse entered the computing lexicon via Dan Edwards of MIT and the NSA.

US-CERT has published a two-page guide on how to deal with a Trojan horse or virus infection on a computer (PDF).

4: Sharing Files

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.

File Permission Flags - Unix Style

There are numerous on-line tutorials on Unix and/or Linux file permissions, including ones provided by:

Several people have also posted videos explaining file permissions, including thedangercobra, theurbanpenguin, and elithecomputerguy.

Access Control Lists

ACLs first appeared in the Multics timesharing system, as described in the paper "A General-Purpose File System For Secondary Storage," by R. Daley and Peter Neumann (Proc. 1965 Fall Joint Computer Conference) and on the Multicians web site.

Since ACLs could provide very specific access restrictions, they became recommended features of high-security systems. When the US DOD developed the "Trusted Computer System Evaluation Criteria," (PDF) (a.k.a. the TCSEC or Orange Book) ACLs were an essential feature of higher security systems. Modern security products are evaluated against the Common Criteria.

POSIX ACLs

While traditional Unix systems did not have ACLs, more advanced versions of Unix incorporated them, partly to meet high security requirements like those in the Orange Book. This led to the development of POSIX ACLs as part of a proposed POSIX 1003.1e standard. The standards effort was abandoned, but several Unix-based systems did incorporate POSIX ACLs. Here are examples:

Mac OS-X ACLs

The ACL user interface on Mac OS-X is very simple. In fact, the OS-X ACLs are based on POSIX ACLs and may incorporate more sophisticated settings and inheritance rules than we see in the Finder's "Information" display. These features are available through special ACL options of the chmod shell command. One developer has produced an application called Sandbox that provides a more extensive GUI for managing the ACLs.

Windows ACLs

It can be challenging to find accurate online information about Windows ACLs, because the computer-based access controls are often confused with network-based access controls. The MS Developer Network provides general information about ACLs.

Researchers at CMU evaluated the Windows XP version of ACLs in a series of experiments documented in "Improving user-interface dependability through mitigation of human error," Intl J. Human-Computer Studies 63 (2005) 25-50, by Maxion and Reeder.

5: Storing Files (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

5: Memory Sizes: kilo mega giga tera peta exa

Here is a summary of memory size names and their corresponding address sizes. Many people memorize this type of information naturally through working with computer technology over time or during a professional career.

Names for large numbers

Practicing with Quizlet

If you want to memorize these values, visit the Quizlet page. The page tests your knowledge of the smaller sizes (K, M, G, T), how these sizes are related (e.g. a terabyte is a thousand billion bytes), and how they relate to memory sizes (a TB needs an address approximately 40 bits long).

Address Size Shortcut

Here is a simple shortcut for estimating the number of bits required to address storage of a given size.

10^3 ≈ 2^10

To put this into practice, we do the following:

  1. Count the number of sets of three zeros (the "thousands") in the storage size.
  2. Multiply the number of thousands by ten.

Let's work out an example with a terabyte: a trillion-byte memory.

  1. In a trillion (1,000,000,000,000) there are 4 sets of three zeros.
  2. Multiply 4 by 10, and we get approximately 40 bits.
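The shortcut is easy to check with a few lines of Python, comparing its estimate against the exact number of address bits:

import math

# Decimal memory sizes: KB, MB, GB, TB.
sizes = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

for name, size in sizes.items():
    shortcut = 10 * round(math.log10(size) / 3)   # sets of three zeros, times ten
    exact = math.ceil(math.log2(size))            # bits actually needed
    print(f"{name}: shortcut {shortcut} bits, exact {exact} bits")

For all four sizes, the shortcut and the exact calculation agree: 10, 20, 30, and 40 bits.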

6: Authenticating People (tbd)

UNDER CONSTRUCTION

Cryptographic hash functions

Cryptographers develop new hash functions every few years because cryptanalysts and mathematicians find weaknesses in the older ones. Valerie Aurora provides a graphic illustration of this.
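To see several generations side by side, here is a short Python sketch using the standard hashlib module. MD5 and SHA-1 still compute digests happily, which is part of the problem: broken functions linger in deployed systems long after cryptanalysts find weaknesses.

import hashlib

message = b"attack at dawn"
for name in ["md5", "sha1", "sha256"]:
    digest = hashlib.new(name, message).hexdigest()
    # Show the algorithm, its output size in bits, and the digest itself.
    print(f"{name:8} {len(digest) * 4:3} bits  {digest}")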

7: Encrypting Files (tbd)

TBD

8: Secret and Public Keys (tbd)

TBD

9: Encrypting Volumes (tbd)

TBD

10: Connecting Computers (tbd)

TBD

11: Networks of Networks (tbd)

TBD

12: End-to-End Networking (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

13: Enterprise Computing (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

14: Network Encryption (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

15: Internet Services and Email (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

16: The World Wide Web (uc)

Under Construction

Study Acronyms and Terms

Visit the Quizlet web site to find study aids for the chapter's acronyms and terms. Many students prefer to study the acronyms first.

TBD

17: Governments and Secrecy (tbd)

TBD

Appendix: Alternative Terms and Concepts

This section of the textbook provides details not otherwise addressed by the main text.

A Comprehensive Model of Information System Security

Two educational standards in information system security refer to closely-related models of information system security. First, we have a US government training standard:

Second, we have an academic curriculum standard:

Detailed Chapter Outlines

1: Security From the Ground Up

1.1 The Security Landscape

Not Just Computers Any More

1.1.1 Making Security Decisions

1.1.2 The Security Process

1.1.3 Continuous Improvement: A Basic Principle

The Roots of Continuous Improvement

1.2 Process Example: Bob’s Computer

Assets

Risks

Policy

Implementation

Monitoring

Recovery

Bob’s Response

1.3 Assets and Risk Assessment

Identifying Risks

Fine Points of Terminology

Hackers

1.3.1 What Are We Protecting?

Identifying Goals

Identifying Assets

1.3.2 Security Boundaries

Least Privilege: A Second Basic Principle

Example: Boundaries in a Dorm

Analyzing the Boundary

The Insider Threat

1.3.3 Security Architecture

Defense In Depth: A Third Basic Principle

1.3.4 Risk Assessment Overview

1.4 Identifying Risks

1.4.1 Threat Agents

1.4.2 Security Properties, Services, and Attacks

1.5 Prioritizing Risks

1.5.1 Example: Risks to Alice’s Laptop

Step 1: Identify Computing Assets

Step 2: Identify Threat Agents and Potential Attacks

Step 3: Estimate the Likelihood of Individual Attacks

Step 4: Estimate the Impact of Attacks over Time

Step 5: Calculate the Impact of Each Attack

1.5.2 Other Risk Assessment Processes

NIST Recommendation

OCTAVE Process

1.6 Ethical Issues in Security Analysis

Laws, Regulations, and Codes of Conduct

1.6.1 Searching for Vulnerabilities

1.6.2 Sharing or Publishing Vulnerabilities

1.7 Security Example: Aircraft Hijacking

1.7.1 Hijacking: A High-Level Analysis

Assets

Risks

Policy

Implementation

Monitoring

Recovery

1.7.2 September 11, 2001

1.8 Resources

1.8.1 Review Questions

1.8.2 Exercises and Problems

2: Controlling a Computer

2.1 Computers and Programs

The Motherboard

2.1.1 Input/Output

Parallel Versus Serial Wiring

The BIOS

2.1.2 Program Execution

Separating Data and Control

2.1.3 Procedures

The Stack

Buffers

2.2 Programs and Processes

2.2.1 Switching Between Processes

Observing Active Processes

2.2.2 The Operating System

I/O System

Security

2.3 Buffer Overflow and The Morris Worm

The ‘finger’ program

2.3.1 The ‘finger’ Overflow

Exploiting ‘finger’

The Shellcode

The Worm Released

2.3.2 Security Alerts

Establishing CERT

2.4 Access Control Strategies

Islands

Vaults

2.4.1 Puzzles and Patterns

Open Design: A Basic Principle

Cryptography and Open Design

Pattern-based Access Control

Biometrics

2.4.2 Chain of Control: Another Basic Principle

Controlling the BIOS

Subverting the Chain of Control

2.5 Keeping Processes Separate

Separation Mechanisms

Program Modes

RAM Protection

User Identities

Evolution of Personal Computers

Security on Personal Computers

Operating System Security Features

2.5.1 Sharing a Program

Access Matrix

Access Rights

2.5.2 Sharing Data

2.6 Security Policy and Implementation

Constructing Alice’s Security Plan

Writing a Security Policy

2.6.1 Analyzing Alice’s Risks

2.6.2 Constructing Alice’s Policy

2.6.3 Alice’s Security Controls

Alice’s Backup Procedure

2.7 Security Plan: Process Protection

Policy for Process Protection

Functional Security Controls

The Dispatcher’s Design Description

The Design Features

The Dispatching Procedure

Security Controls for Process Protection

2.8 Resources

2.8.1 Review Questions

2.8.2 Problems and Exercises

3: Controlling Files

3.1 The File System

File and Directory Path Names

3.1.1 File Ownership and Access Rights

File Access Rights

Initial File Protection

3.1.2 Directory Access Rights

3.2 Executable Files

3.2.1 Execution Access Rights

Types of Executable Files

3.2.2 Computer Viruses

Virus Infection

Malicious Viruses

3.2.3 Macro Viruses

3.2.4 Modern Malware: A Rogue’s Gallery

Waledac

Conficker, also called Downadup

Pushdo/Cutwail

ZeuS

Stuxnet

3.3 Sharing and Protecting Files

Objectives

Threats

Risks

3.3.1 Policies for Sharing and Protection

Underlying System Policy

User Isolation Policy

User File Sharing Policy

3.4 Security Controls for Files

3.4.1 Deny by Default: A Basic Principle

The opposite of Deny by Default

3.4.2 Managing Access Rights

3.4.3 Capabilities

Capabilities in Practice

Resource-oriented Permissions

3.5 File Security Controls

3.5.1 File Permission Flags

System and Owner Access Rights in Practice

3.5.2 Security Controls to Enforce Bob’s Policy

3.5.3 States and State Diagrams

Information States

3.6 Patching Security Flaws

The Patching Process

Security Flaws and Exploits

Windows of Vulnerability

3.7 Process Example: The Horse

3.7.1 Troy: A High-Level Analysis

3.7.2 Analyzing the Security Failure

3.8 Chapter Resources

3.8.1 Review Questions

3.8.2 Exercises

4: Sharing Files

4.1 Controlled Sharing

Tailored File Security Policies

Bob’s Sharing Dilemma

4.1.1 Basic File Sharing on Windows

4.1.2 User Groups

Administrative Groups

4.1.3 Least Privilege and Administrative Users

Administration by Regular Users

User Account Control on Windows

4.2 File Permission Flags

4.2.1 Permission Flags and Ambiguities

4.2.2 Permission Flag Examples

Security Controls for the File Sharing Policy

4.3 Access Control Lists

Multics ACLs

Modern ACL Implementations

4.3.1 POSIX ACLs

4.3.2 Macintosh OS-X ACLs

4.4 Microsoft Windows ACLs

4.4.1 Denying Access

Determining Access Rights

Building Effective ACLs

4.4.2 Default File Protection

Inherited Rights

Dynamic ACLs

Moving and Copying Files

4.5 A Different Trojan Horse

A Trojan Horse Program

Transitive Trust: A Basic Principle

4.6 Phase Five: Monitoring The System

Catching an intruder

4.6.1 Logging Events

A Log Entry

The Event Logging Mechanism

Detecting Attacks by Reviewing the Logs

4.6.2 External Security Requirements

Laws, Regulations, and Industry Rules

External Requirements and the Security Process

4.7 Chapter Resources

4.7.1 Review Questions

4.7.2 Exercises

5: Storing Files

5.1 Phase Six: Recovery

Incidents and Damage

Compromised Systems

5.1.1 The Aftermath of an Incident

Digital Forensics

Fault and Due Diligence

5.1.2 Legal Disputes

Legal Systems

Resolving a Legal Dispute

5.2 Digital Evidence

The Fourth Amendment

Legal Concepts

5.2.1 Collecting Legal Evidence

Collecting Evidence at The Scene

Securing the Scene

Documenting the Scene

5.2.2 Digital Evidence Procedures

Authenticating a Hard Drive

5.3 Storing Data on a Hard Drive

Magnetic Recording and Tapes

Hard Drive Fundamentals

5.3.1 Hard Drive Controller

5.3.2 Hard Drive Formatting

High Level Format

Fragmentation

Quick Format

Flash Drives

5.3.3 Error Detection and Correction

Parity Checking

Checksums

Cyclic Redundancy Checks

Error Correcting Codes

5.3.4 Hard Drive Partitions

Partitioning to Support Older Drive Formats

Partitioning in Modern Systems

Partitioning and Fragmentation

Hiding Data with Partitions

5.3.5 Memory Sizes and Address Variables

Address, Index, and Pointer Variables

Memory Size Names and Acronyms

Estimating the Number of Bits

5.4 FAT: An Example File System

Volume Layout

5.4.1 Boot Blocks

The FAT

5.4.2 Building Files from Clusters

Cluster Storage

An Example FAT File

FAT Format Alternatives

5.4.3 FAT Directories

Long File Names

Deleting Files

Undeleting a File

5.5 Modern File Systems

File System Design Goals

Conflicting File System Objectives

5.5.1 Unix File System

Inodes

Directories

5.5.2 Apple’s HFS Plus

5.5.3 Microsoft’s NTFS

5.6 Input/Output and File System Software

Device Independence

The Hourglass

File System Software

Programming Assumptions

5.6.1 Software Layering

An Example

Layering Logic

Abstraction

5.6.2 A Typical I/O Operation

Part A: Call the operating system

Part B: OS constructs the I/O operation

Part C: The driver starts the actual I/O device

Part D: The I/O operation ends

5.6.3 Security and I/O

Restricting the devices themselves

Restricting Parameters in I/O Operations

File Access Restrictions

5.7 Resources

5.7.1 Review Questions

5.7.2 Problems/Exercises

6: Authenticating People

Chapter Outline

 

6.1 Unlocking a Door

6.1.1 Authentication Factors

Two-Factor Authentication

Three-Factor Authentication

6.1.2 Threats and Risks

Risks

Attack Strategy: Low Hanging Fruit

6.2 Evolution of Password Systems

Password Hashing

Procedure Diagrams

Password Hashing in Practice

6.2.1 One-way Hash Functions

Modern Hash Functions

A Cryptographic Building Block

6.2.2 Sniffing Credentials

6.3 Password Guessing

DOD Password Guideline

Interactive Guessing

Network-based Guessing

Off-line Password Cracking

6.3.1 Password Search Space

6.3.2 Truly Random Password Selection

6.3.3 Cracking Speeds

6.4 Attacks on Password Bias

Bias and Entropy

6.4.1 Biased Choices and Average Attack Space

Average Attack Space

Biased Password Selection

Measuring Likelihood, not Certainty

Making Independent Guesses

Example: 4-digit luggage lock

6.4.2 Estimating Language-Based Password Bias

Klein’s Password Study

6.5 Authentication Tokens

Passive Authentication Tokens

6.5.1 Challenge-Response Authentication

Implementing Challenge-Response

Another Cryptographic Building Block

The Nonce

Direct Connect Tokens

6.5.2 One-time Password Tokens

A Token’s Search Space

Average Attack Space

Attacking One-Time Password Tokens

Guessing a Credential

6.5.3 Token Vulnerabilities

6.6 Biometric Authentication

6.6.1 Biometric Accuracy

6.6.2 Biometric Vulnerabilities

6.7 Authentication Policy

6.7.1 Weak and Strong Threats

Weak Threats

Strong Threats

Effect of Location

6.7.2 Policies for Weak Threat Environments

A Household Policy

A Workplace Policy: Passwords Only

A Workplace Policy: Passwords and Tokens

6.7.3 Policies for Strong and Extreme Threats

Passwords Alone for Strong Threats

Passwords Plus Biometrics

Passwords Plus Tokens

Constructing the Policy

6.7.4 Password Selection and Handling

Simple Passwords

Strong But Memorable Passwords

The Strongest Passwords

6.8 Resources

6.8.1 Review Questions

6.8.2 Problems/Exercises

7: Encrypting Files

Chapter Outline

 

7.1 Protecting the Accessible

7.1.1 Process Example: The Encrypted Diary

7.1.2 Encryption Basics

Categories of Encryption

A Process View of Encryption

Shared Secret Keys

Effective Encryption

7.1.3 Encryption and Information States

Illustrating Policy with a State Diagram

Proof of Security

7.2 Encryption and Cryptanalysis

7.2.1 The Vigenère Cipher

7.2.2 Electromechanical Encryption

7.3 Computer-Based Encryption

The Data Encryption Standard

The Advanced Encryption Standard

Predicting Cracking Speeds

7.3.1 Exclusive Or: A Crypto Building Block

7.3.2 Stream Ciphers: Another Building Block

Generating a Key Stream

An Improved Key Stream

7.3.3 Key Stream Security

RC4 Biases

Pseudo-Random Number Generators

The Effects of Ciphertext Errors

7.3.4 The One-time Pad

Soviet Espionage and One-time pads

Modular Arithmetic

Practical One-time pads

7.4 File Encryption Software

7.4.1 Built-in File Encryption

7.4.2 Encryption Application Programs

Ensuring Secure File Encryption

Protecting the Secret Key

7.4.3 Erasing a Plaintext File

Risks That Demand Overwriting

Preventing Low Level Data Recovery

Erasing Optical Media

7.4.4 Choosing a File Encryption Program

Software Security Checklist

File Encryption Security Checklist

Cryptographic Product Evaluation

7.5 Digital Rights Management

Policy Dilemmas

The DVD Content Scrambling System

7.6 Resources

7.6.1 Review Questions

7.6.2 Problems/Exercises

8: Secret and Public Keys

Chapter Outline

 

8.1 The Key Management Challenge

Cryptonets

Levels of Risk

Key Sharing Procedures

8.1.1 Rekeying

Cryptoperiods

Distributing New Keys

Public-Key Cryptography

8.1.2 Using Text for Encryption Keys

Taking Advantage of Longer Passphrases

Software Checklist for Key Handling

8.1.3 Key Strength

Operating Recommendations

8.2 The Reused Key Stream Problem

8.2.1 Avoiding Reused Keys

Changing the Internal Key

Combining the Key with a Nonce

Software Checklist for Internal Keys Using Nonces

8.2.2 Key Wrapping: Another Building Block

Multiple Recipients

Key Wrapping and Cryptoperiods

Key “Splitting”

Software Checklist for Wrapped Keys

8.2.3 Separation of Duty: A Basic Principle

Separation of Duty with Encryption

8.2.4 DVD Key Handling

8.3 Public-key Cryptography

Attacking Public Keys

8.3.1 Sharing a Secret: Diffie-Hellman

Perfect Forward Secrecy

Variations of Diffie-Hellman

8.3.2 Diffie-Hellman: The Basics of the Math

8.3.3 Elliptic Curve Cryptography

8.4 RSA: Rivest-Shamir-Adleman

Digital Signatures

RSA Applications

8.4.1 Encapsulating Keys with RSA

8.4.2 An Overview of RSA Mathematics

Brute Force Attacks on RSA

The Original Challenge

The Factoring Problem

Selecting a Key Size

Other Attacks on RSA

8.5 Data Integrity and Digital Signatures

8.5.1 Detecting Malicious Changes

One-way Hash Functions

Birthday Attacks

8.5.2 Detecting a Changed Hash Value

Keyed Hash

8.5.3 Digital Signatures

Non-Repudiation

8.6 Publishing Public Keys

8.6.1 Public-Key Certificates

Certificate Authorities

8.6.2 Chains of Certificates

Certificate Hierarchy

Web of Trust

Self-Signed Certificates

Trickery with Certificates

8.6.3 Authenticated Software Updates

8.7 Resources

8.7.1 Review Questions

8.7.2 Problems/Exercises

9: Encrypting Volumes

Chapter Outline

 

9.1 Securing a Volume

9.1.1 Risks To Volumes

Eavesdropping

Discarded Hard Drives

9.1.2 Risks and Policy Trade-offs

Identifying Critical Data

Policy for Unencrypted Volumes

Policy for Encrypted Volumes

9.2 Block Ciphers

Building a Block Cipher

The Effect of Ciphertext Errors

9.2.1 Evolution of DES and AES

DES and Lucifer

Triple DES

The Development of AES

AES Finalists

9.2.2 The RC4 Story

Export Restrictions

RC4 Leaking, Then Cracking

Lessons Learned

9.2.3 Qualities of Good Encryption Algorithms

Explicitly designed for encryption

Security does not rely on its secrecy

Available for analysis

Subjected to analysis

No practical weaknesses

Cryptographic evaluation

Choosing an Encryption Algorithm

9.3 Block Cipher Modes

9.3.1 Stream Cipher Modes

Weaknesses

Ciphertext Errors

Counter Mode

9.3.2 Cipher Feedback Mode

Ciphertext Errors

9.3.3 Cipher Block Chaining

Ciphertext Errors

9.4 Encrypting a Volume

Choosing a Cipher Mode

Hardware Versus Software

9.4.1 Volume Encryption in Software

Files as Encrypted Volumes

9.4.2 Adapting an Existing Mode

Drive Encryption with Counter Mode

Constructing the Counter

An Integrity Risk

Drive Encryption with CBC Mode

Integrity Issues with CBC Encryption

9.4.3 A “Tweakable” Encryption Mode

9.4.4 Residual Risks

Untrustworthy encryption

Encryption integrity

Looking for plaintext

Data Integrity

9.5 Encryption in Hardware

Recycling the Drive

9.5.1 The Drive Controller

Security Boundary

Drive Formatting

9.5.2 Drive Locking and Unlocking

9.6 Managing Encryption Keys

Key Generation

Rekeying

9.6.1 Key Storage

Working key storage in hardware

Persistent Key Storage

Protected storage

Key Wrapping

Managing removable keys

9.6.2 Booting an Encrypted Drive

Pre-boot authentication

BIOS integration

Disk-based authentication

Automatic reboot

9.6.3 Residual Risks to Keys

Intercepted Passphrase

Intercepted Keys

Eavesdrop on the encryption process

Sniffing keys from swap files

Cold boot attack

Recycled Password Attack

The “Master Key” Risk

9.7 Resources

9.7.1 Review Questions

9.7.2 Problems/Exercises

10: Connecting Computers

Chapter Outline

 

10.1 The Network Security Problem

10.1.1 Basic Network Attacks and Defenses

Example: Sharing Eve’s Printer

Policy Statements

Potential Controls

10.1.2 Physical Network Protection

Protecting External Wires

10.1.3 Host and Network Integrity

Network Worms

Botnets

Botnets in Operation

Fighting Botnets

The Insider Threat

10.2 Transmitting Information

10.2.1 Message Switching

10.2.2 Circuit Switching

10.2.3 Packet Switching

Mix and Match Network Switching

10.3 Putting Bits on a Wire

Synchronous versus Asynchronous Links

10.3.1 Wireless Transmission

Frequency, Wavelength, and Bandwidth

AM and FM Radio

Frequency sharing

Radio Propagation and Security

10.3.2 Transmitting Packets

Network Efficiency and Overhead

Acknowledgement Protocol

10.3.3 Recovering a Lost Packet

10.4 Ethernet: A Modern LAN

Today’s Ethernet

Wired

Optical

Wireless

10.4.1 Wiring a Small Network

Network Cables

10.4.2 Ethernet Frame Format

LAN Addressing

Address Format

MAC Addresses and Security

10.4.3 Finding Host Addresses

Addresses from Keyboard Commands

Addresses from Mac OS

Addresses from Microsoft Windows

10.4.4 Handling Collisions

Wireless Collisions

Wireless Retransmission

10.5 The Protocol Stack

10.5.1 Relationships Between Layers

10.5.2 The OSI Protocol Model

The Orphaned Layers

10.6 Network Applications

Servers

Peer-to-Peer

Network Applications and Information States

10.6.1 Resource Sharing

10.6.2 Data and File Sharing

Delegation: A Security Problem

10.7 Resources

10.7.1 Review Questions

10.7.2 Problems/Exercises

11: Networks of Networks

Chapter Outline

 

11.1 Building Information Networks

Network Topology: Evolution of the Phone Network

11.1.1 Point-to-Point Network

11.1.2 Star Network

11.1.3 Bus Network

11.1.4 Tree Network

11.1.5 Mesh

11.2 Combining Computer Networks

Traversing Computer Networks

The Internet Emerges

11.2.1 Hopping Between Networks

Routing Internet Packets

Counting Hops

11.2.2 Evolution of Internet Security

Protecting the ARPANET

Early Internet Attacks

Early Internet Defenses

11.2.3 Internet Structure

Autonomous Systems

Routing Security

International Rerouting

Starting an ISP

11.3 Talking Between Hosts

Socket Interface

Port Numbers

Socket Addresses

Socket API capabilities

11.3.1 IP Addresses

IP Version 6

11.3.2 IP Packet Format

Packet Fragmentation

11.3.3 Address Resolution Protocol

The ARP Cache

11.4 Internet Addresses in Practice

Network Masks

IPv4 Address Classes

11.4.1 Addresses, Scope, and Reachability

11.4.2 Private IP Addresses

Assigning Private IP Addresses

Dynamic Host Configuration Protocol

11.5 Network Inspection Tools

Wireshark

11.5.1 Wireshark Examples

Address Resolution Protocol

IP Header

11.5.2 Mapping a LAN with nmap

The Nmap Network Mapper Utility

Use Nmap with Caution

11.6 Resources

11.6.1 Review Questions

11.6.2 Problems/Exercises

12: End-to-End Networking

Chapter Outline

 

12.1 “Smart” Versus “Dumb” Networks

The End-to-End Principle

12.2 Internet Transport Protocols

User Datagram Protocol

End-to-End Transport Protocols

12.2.1 Transmission Control Protocol

Sequence and Acknowledgement Numbers

Window Size

TCP Connections

Connection Timeout

12.2.2 Attacks on Protocols

Internet Control Message Protocol

Ping Attacks

Redirection Attacks

TCP/IP Attacks

12.3 Names on the Internet

The Name Space

12.3.1 Domain Names in Practice

Using a Domain Name

12.3.2 Looking Up Names

DNS Lookup

12.3.3 DNS Protocol

Resolving a Domain Name Via Redirection

DNS Redundancy

12.3.4 Investigating Domain Names

12.3.5 Attacking DNS

Cache Poisoning

DOS Attacks on DNS Servers

Botnets

DOS Attacks and DNS Resolvers

DNS Security Improvements

12.4 Internet Gateways and Firewalls

12.4.1 Network Address Translation (NAT)

Configuring DHCP and NAT

12.4.2 Filtering and Connectivity

Packet Filtering

Inbound Connections

12.4.3 Software-based Firewalls

12.5 Long Distance Networking

12.5.1 Older technologies

Analog broadcast networks

Circuit-switched telephone systems

Analog-based digital networks

Microwave networks

Analog two-way radios

12.5.2 Mature technologies

Dedicated digital network links

Cell Phones

Cable TV

12.5.3 Evolving technologies

Optical fiber networks

Bidirectional satellite communications

12.6 Resources

12.6.1 Review Questions

12.6.2 Problems/Exercises

13: Enterprise Computing

Chapter Outline

 

13.1 The Challenge of Community

13.1.1 Companies and Information Control

Reputation: Speaking with One Voice

Companies and Secrecy

Accountability

Need To Know

13.1.2 Enterprise Risks

Insiders and Outsiders

13.1.3 Social Engineering

Thwarting Social Engineering

13.2 Management Processes

13.2.1 Security Management Standards

Evolution of Management Standards

13.2.2 Deployment Policy Directives

Risk Acceptance

13.2.3 Management Hierarchies and Delegation

Profit Centers and Cost Centers

Implications for Information Security

13.2.4 Managing Information Resources

Managing Information Security

13.2.5 Security Audits

Compliance Audits

Security Scans

13.2.6 Information Security Professionals

Information Security Training

College Degrees

Information Security Certification

13.3 Enterprise Issues

Education, Training, and Awareness

13.3.1 Personnel Security

Employee Clearances

Employee Life Cycle

Employee Roles

Administrators and Separation of Duty

Partial Insiders

13.3.2 Physical Security

Power Management

Information System Protection

Environmental Management

13.3.3 Software Security

Software Development Security

Configuration Management

Repeatability and Traceability

Formalized Coding Activities

Avoiding Risky Practices

Software-based access controls

13.4 Enterprise Network Authentication

Local Authentication

13.4.1 Direct Authentication

13.4.2 Indirect Authentication

Ticket-based Authentication

Service-based Authentication

Redirected Authentication

Properties of Indirect Authentication

13.4.3 Off-Line Authentication

13.5 Contingency Planning

13.5.1 Data Backup and Restoration

Full Versus Partial Backups

File-Oriented Synchronized Backups

File-Oriented Incremental Backups

Image Backups

RAID as Backup

13.5.2 Handling Serious Incidents

Examining a Serious Attack

13.5.3 Disaster Preparation and Recovery

Business Impact Analysis

Recovery Strategies

Contingency Planning Process

13.6 Resources

13.6.1 Review Questions

13.6.2 Problems/Exercises

14: Network Encryption

Chapter Outline

 

14.1 Communications Security

Cryptographic Protections

14.1.1 Crypto By Layers

Link Layer Encryption and 802.11 Wireless

Network Layer Encryption and IPsec

Socket Layer Encryption with SSL/TLS

Application Layer Encryption with S/MIME or PGP

Layer-Oriented Variations

14.1.2 Administrative and Policy Issues

Sniffing Protection

Traffic Filtering

Automatic Encryption

Internet Site Access

End-to-end Crypto

Keying

14.2 Crypto Keys on a Network

The Default Keying Risk

Key Distribution Objectives

Key Distribution Strategies

Key Distribution Techniques

14.2.1 Manual Keying: A Building Block

14.2.2 Simple Rekeying

Self-Rekeying

New Keys Encrypted With Old

14.2.3 Secret-Key Building Blocks

Key Wrapping

Key Distribution Center (KDC)

Shared Secret Hashing

14.2.4 Public-Key Building Blocks

Secret Sharing with Diffie-Hellman

Wrapping a Secret with RSA

14.2.5 Public-Key versus Secret-Key Exchanges

Choosing Secret-Key Techniques

Choosing Public-Key Techniques

14.3 Crypto Atop the Protocol Stack

Privacy Enhanced Mail

Pretty Good Privacy

Adoption of Secure Email and Application Security

14.3.1 Transport Layer Security - SSL and TLS

The World Wide Web

Netscape Software

Secure Sockets Layer/Transport Layer Security

14.3.2 SSL Handshake Protocol

14.3.3 SSL Record Transmission

Message Authentication Code

Data Compression

Application Transparency and End-to-End Crypto

14.4 Network Layer Cryptography

Components of IPsec

14.4.1 The Encapsulating Security Payload (ESP)

Tunnel Mode

Transport Mode

ESP Packet Format

14.4.2 Implementing a VPN

Private IP Addressing

IPsec Gateways

Bundling Security Associations

14.4.3 Internet Key Exchange (IKE) Protocol

14.5 Link Encryption on 802.11 Wireless

Wireless Defenses

WEP

Wi-Fi Protected Access: WPA and WPA2

14.5.1 Wireless Packet Protection

Encryption processing

Integrity processing

Decryption and Validation

14.5.2 Security Associations

Establishing the Association

Establishing the Keys

14.6 Encryption Policy Summary

Prevent Sniffing

Apply Encryption Automatically

Additional Issues

14.7 Resources

14.7.1 Review Questions

14.7.2 Problems/Exercises

15: Internet Services and Email

Chapter Outline

 

15.1 Internet Services

Traditional Internet Applications

15.2 Internet Email

Message Formatting Standards

Message Headers

The To: Field

The From: Field

Additional Headers

MIME Formatting

15.2.1 Email Protocol Standards

Mailbox Protocols

POP3: An Example

Email Delivery

Port Number Summary

15.2.2 Tracking an Email

#1: From UC123 to USM01

#2: From USM01 to USM02

#3: From USM02 to MMS01

#4: From MMS01 to MMS02

15.2.3 Forging an Email Message

Authenticated Email

15.3 Email Security Problems

Connection-based attacks

15.3.1 Spam

Classic Financial Fraud

Evolution of Spam Prevention

MTA Access Restriction

Filtering on Spam Patterns

15.3.2 Phishing

Tracking a Phishing Attack

15.3.3 Email Viruses and Hoaxes

Virus Execution

Email Chain Letters

Virus Hoax Chain Letters

15.4 Enterprise Firewalls

Evolution of Internet Access Policies

A Simple Internet Access Policy

15.4.1 Controlling Internet Traffic

15.4.2 Traffic Filtering Mechanisms

Session Filtering

Application Filtering

15.4.3 Implementing Firewall Rules

Example of Firewall Security Controls

Additional Firewall Mechanisms

Firewall Rule Proliferation

15.5 Enterprise Point of Presence

Internet Service Providers

The DMZ

Intrusion Detection

Intrusion Prevention Systems

Data Loss Prevention Systems

15.5.1 POP Topology

Single Firewall Topology

Bastion Host Topology

Three-Legged Firewall

Dual Firewalls

15.5.2 Attacking an Enterprise Site

Protocol Attacks

Tunneling

National Firewalls

15.5.3 The Challenge of Real-Time Media

15.6 Resources

15.6.1 Review

15.6.2 Problems/Exercises

16: The World Wide Web

Chapter Outline

 

16.1 Hypertext Fundamentals

Web Standards

Formatting: Hypertext Markup Language

Hypertext Links

Cascading Style Sheets

Hypertext Transfer Protocol

Retrieving data from other files or sites

16.1.1 Addressing Web Pages

Email URLs

Hosts and Authorities

Path Name

Default Web Pages

16.1.2 Retrieving a Static Web Page

Building a Page from Multiple Files

Web Servers and Statelessness

Web Directories and Search Engines

Web Crawlers

Crime via Search Engine

16.2 Basic Web Security

Client Policy Issues

Policy Motivations and Objectives

Internet Policy Directives

Strategies to Manage Web Use

Traffic Blocking

Monitoring

Training

The Tunneling Dilemma

Firewalling HTTP Tunnels

16.2.1 Security for Static Web Sites

16.2.2 Server Authentication

Mismatched Domain Name: May be Legitimate

Untrusted Certificate Authority: Difficult to Verify

Expired Certificate: Possibly Bogus, Probably Not

Revoked Certificate: Always Bogus

Invalid Digital Signature: Always Bogus

16.2.3 Server Masquerades

Bogus Certificate Authority

Misleading Domain Name

Stolen Private Key

Tricked certificate authority

16.3 Dynamic Web Sites

Web Forms and POST

16.3.1 Scripts on the Web

Scripting Languages

Client Side Scripts

Client Scripting Risks

“Same Origin” Policy

Sandboxing

16.3.2 States and HTTP

Browser Cookies

16.4 Content Management Systems

16.4.1 Database Management Systems

Structured Query Language

Enterprise Databases

Database Security

16.4.2 Password Checking: A CMS Example

Logging In to a Web Site

An Example Login Process

16.4.3 Command Injection Attacks

A Password-Oriented Injection Attack

Inside the Injection Attack

Resisting Web Site Command Injection

Input Validation

16.5 Ensuring Web Security Properties

Web Confidentiality

Serve confidential data

Collect confidential data

Web Integrity

16.5.1 Web Availability

High Availability

Continuous Operation

Continuous Availability

16.5.2 Web Privacy

Client anonymity

Anonymous proxies

Private browsing

16.6 Resources

16.6.1 Review

16.6.2 Problems/Exercises

17: Governments and Secrecy

Chapter Outline

 

17.1 Secrecy In Government

Hostile Intelligence Services

Classified Information

17.1.1 The Challenge of Secrecy

The Discipline of Secrecy

Secrecy and Information Systems

Exposure and Quarantine

17.1.2 Information Security and Operations

Intelligence and Counterintelligence

Information Operations

Operations Security

17.2 Classifications and Clearances

Classification Levels

Other Restrictions

Legal Basis for Classification

Minimizing the Amount of Classified Information

17.2.1 Security Labeling

Sensitive But Unclassified

17.2.2 Security Clearances

17.2.3 Classification Levels in Practice

Working with classified information

Higher levels have greater restrictions

17.2.4 Compartments and Other Special Controls

Sensitive Compartmented Information

SCI Clearances

Example of SCI processing

Special Access Programs

Special Intelligence Channels

Enforcing Access to Levels and Compartments

17.3 National Policy Issues

Legal elements

Federal Information Security Management Act

NIST Standards and Guidance for FISMA

Personnel roles and responsibilities

Threats, vulnerabilities, and countermeasures

17.3.1 Facets of National System Security

Physical Security

Communications Security

Information Security

Life Cycle Procedures

17.3.2 Security Planning

System Life Cycle Management

Security System Training

17.3.3 Certification and Accreditation

NIST Risk Management Framework

17.4 Communications Security

Key Leakage Through Spying

17.4.1 Cryptographic Technology

Classic Type 1 Crypto Technology

17.4.2 Crypto Security Procedures

Two-Person Integrity

Controlled Cryptographic Items

Key Management Processes

Data Transfer Device

Electronic Key Management System

17.4.3 Transmission Security

Electronic Warfare

Spread Spectrum

17.5 Data Protection

Media Handling

Media Sanitization and Destruction

17.5.1 Protected Wiring

17.5.2 TEMPEST

TEMPEST Zones

TEMPEST Red/Black

TEMPEST Separation

17.6 Trustworthy Systems

Reference Monitor

High Assurance

Trusted Systems Today

National Standards

17.6.1 Integrity of Operations

Nuclear Operations

Positive Control

Force Surety

Achieving Nuclear High Assurance

17.6.2 Multilevel Security

Rule- and Identity-Based Access Control

Covert Channels

Other Multilevel Security Problems

17.6.3 Computer Modes of Operation

Dedicated Mode

System High Mode

Compartmented or Partitioned Mode

Multilevel Mode

17.7 Resources

17.7.1 Review Questions

17.7.2 Problems/Exercises

Appendix: Alternative Security Terms and Concepts

This provides brief articles on certain topics relevant to CNSS training standards.

  • Application dependent guidance
  • Attribution of information
  • Comprehensive model of information system security
  • Configuration management of programming standards and control
  • Critical and essential workloads
  • Threats to systems
  • Information identification
  • Network security privileges (class, node)
  • Object reuse
  • Power emergency shutdown
  • Procedural controls on documentation, logs, and journals
  • Public switched network
  • Public versus private networks
  • Remanence
  • Segregation of duties
  • Threats and threat agents
  • Transmission security countermeasures

Drupal

This is a collection of pages describing experiences I've had with Drupal. 

Basics of Views Bulk Operations (VBO)

After reading and re-reading several descriptions of how to do bulk changes to a Drupal site, the Views Bulk Operations (VBO) mechanism sounded most promising. It did, however, take another couple of hours of poking at explanations to figure out how THAT works.

The Views mechanism allows the site manager to create a page or a block that displays information retrieved from the Drupal database. It's mostly intended to pull data out of nodes and display the data as a table. 

The VBO mechanism uses Views to perform database operations on nodes in the database. To use it, we create a page that, instead of simply retrieving data from the database, applies a "bulk operation" to the data. 

Here is how I used it, step-by-step:

1. Start up Views and add a new view

Give the view a name, and select "Create a page." You'll need the page in order to execute your bulk operations. Click "Continue and Edit."

2. Allow entry of fields

By default, a view will display a block of "content" text. We need to change it to display fields of data. To do this, we click on Format: Settings in the leftmost column under Page details.

Click on Force using fields, and apply.

3. Add a field to do bulk operations.

Click on the Add button to the right of the term Fields. This presents a list of fields, including two marked "Bulk Operations." Select a bulk operation.

4. Set Access: Permission so that non-privileged users can't run it.

Click on the Permission link and make sure no non-privileged users can run this view.

5. Save the whole thing to execute later

Views will display a "preview" of the operation just below the settings area. I misunderstood the preview and thought it would allow me to run the bulk operation right there. You have to save the thing, probably with a menu entry, and execute it from the menu.

6. Bring up a page on the site, and select the menu entry to run the operation.

The operation may demand details from you about the settings to change and the nodes to affect. Choose the appropriate inputs and execute the thing.

In context, I see why it works this way. From a usability standpoint, it looks like nonsense.

Migration to Drupal


After spending a few years with WordPress, I decided to migrate to Drupal. I installed WordPress in December 2007 and replaced it in February 2011.

Existing users had to recover their passwords, since there was no clean way to use WordPress's hashed passwords on a Drupal site. I also had the site off-line for part of the day while I replaced the WordPress software with the Drupal software.

My main reason for the migration was Drupal's "book" feature. Drupal makes it easy to structure short articles into a long, hierarchically structured narrative. This is essentially how I write books anyway, and it makes it easier to present complex topics built from a series of short articles.

For example, the site contains a long presentation on multilevel security that consists of several sections and subsections. The text began as a long article, and I broke it into pieces that I hooked together manually using links. Ideally, each subsection would be a separate article; instead, I kept each section as a single article and had to hand-craft the navigation between sections.

The site also contains a detailed discussion of stream ciphers and the risk of reused key streams. At present the discussion consists of several separate articles that are interconnected via hyperlinks. This makes it hard to add details to the discussion.

For example, I need to post a discussion of the folly of using statistical tests to prove the effectiveness of a stream cipher's key stream. Believe it or not, there are graduate students who seriously believe this is a valid technique. Where do I put this explanation, and how do I hook it in with the existing articles? I have to fiddle with the navigation by hand in WordPress.

Drupal Security

Drupal also supports a more flexible authentication and access control discipline. I'm skeptical that this will provide better security - it's harder to mark everything correctly when you have finer-grained permissions, especially when still learning about the system.

As before, visitors need to set up an account to post comments, but don't need an account to read the site's public contents.

Instructor Discussions

Jones and Bartlett, the publisher of the upcoming Elementary Information Security, will be providing a protected area to distribute materials to course instructors who use the textbook. This would include lecture notes, figures, and answers to problem sets. So I doubt I'll need to set up such a mechanism on this site. However, I would like to encourage discussion of the text and the topics, and how best to present them to students. This may involve a mailing list or an on-line forum. I'll see how things develop.


Overview of Drupal Migration

The migration did not go without a few hitches, but it went as smoothly as might be expected for such a thing.

My inexperience with Apache's .htaccess files caused an unnecessary delay and a lot of "500 Internal Server Error" messages.

The process began with a Drupal clone of my WordPress site. I created it on a separate domain hosted by GoDaddy, my ISP. Once the clone reached a sufficient degree of done-ness, I removed the WordPress files and installed a fresh copy of Drupal atop the site. The installation was actually performed by GoDaddy's "Host Connection" feature. Then I added the custom files I needed (graphics, themes, and plug-in modules). Once all those parts were in place, I copied the clone's Drupal database.

Migration Objectives

As I approached the migration, I realized that there were certain things I wanted to achieve:

  • Existing links and URLs should continue to work. I was especially careful about links identified by Google. One of my earliest experiences with migrating a web site was at the hands of AT&T, when they briefly owned our local cable company. They made a hideous mess of it, breaking every link I'd managed to collect.
  • The RSS feed for site articles should continue to work. I initially assumed this would happen by default. I'm glad I tested it: Drupal's feed normally links to a specific XML document while my older WordPress feed went to a path.
  • Preserve active users as much as possible. I doubted the move would be transparent to active users, but I did the best I could to cushion the impact.
  • Provide the same general features with Drupal blogging that I relied upon in WordPress. This was a huge challenge. WordPress values ease of use, while Drupal seems to value traditional defaults, even if they arose in ancient times.
  • Do as much as possible through WYSIWYG interfaces. Modern sites thrive on CSS and PHP, neither of which I know very well. I expect to learn a lot more about both in the coming months.

Migration Process

The migration involved the following steps which I'll discuss in other postings:

  • Migrate my existing content from WordPress to Drupal. This looks easy, but Drupal default behaviors actually make it difficult.
  • Ensure that Drupal provides the same text handling and post handling features as WordPress. Again, Drupal's default behaviors make this difficult.
  • Migrate the user community. This was separate from migrating the rest of the site. Meanwhile, I needed to suspend user activities on the Cryptosmith site while I actually performed the migration.

Note that these steps took place before the actual migration - I did all this to the cloned Drupal site while the WordPress Cryptosmith site remained on-line. The actual site migration involved these activities:

  • Provide a "Cryptosmith is Off-Line" page while moving from one hosting package to the other. This was easy, though it made me unrealistically confident about my ability to edit .htaccess files. 
  • Use GoDaddy's hosting features to replace the WordPress site with the cloned Drupal site. This took two or three false starts due to .htaccess errors.

I expect there will be a few more topics to cover as the site evolves.


Migrating Blog Entries from Wordpress to Drupal

This process looks deceptively simple. WordPress happily exports all entries into a nicely formatted XML file. Drupal has a "WordPress Import" module that appears to do a comfortable import. What could possibly go wrong?

Well, in Drupal, everything comes down to surprising default choices - at least, the defaults seem surprising if you are expecting ease of use.

I performed about a half-dozen imports before I was finally satisfied with the results. No doubt a Drupal expert would have nailed it the first time. But that's the problem: the person importing a site from another package is the one least likely to be a Drupal expert. In my case, the import was my first significant experience with Drupal, and it went badly.

My specific problem was that the default import format discarded most of my paragraph breaks.

Input Formats

Drupal has a configuration feature called "input formats." Each format consists of three parts: a set of filters to select, configurations for those filters, and the order of those filters. One filter discards all HTML tags that aren't on an approved list. Another filter converts newlines into line or paragraph breaks. The default input formats did not automatically convert the newlines.
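To illustrate what the line-break conversion is supposed to do, here is a toy Python sketch of the idea (Drupal's actual filter is PHP and more careful than this): blank lines become paragraph boundaries, and lone newlines become <br /> tags.

def convert_line_breaks(text):
    # Blank lines separate paragraphs; single newlines become <br /> tags.
    paragraphs = text.strip().split("\n\n")
    html = ["<p>" + p.replace("\n", "<br />\n") + "</p>" for p in paragraphs]
    return "\n".join(html)

print(convert_line_breaks("First line\nsecond line\n\nNew paragraph"))
# Prints:
# <p>First line<br />
# second line</p>
# <p>New paragraph</p>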

After a couple of attempts, I realized that there were these things called input filters, and that they should have been performing the conversion. Eventually I even realized that there was a special WordPress input format. However, the WordPress format proved to be more trouble than help.

  • The WordPress format remains invisible until after you first try to import WordPress content. Not very helpful. I suppose there might be a way to make the secret format appear before I need it, but I never figured it out. I fell back on the expedient of importing a skeleton file of WordPress content first, and imported the serious content afterward.
  • The WordPress format doesn't actually select automatic line break conversion. I'm not sure why, since this is default behavior on WordPress.

The last thing I wanted to do was to break things at the start of my Drupal experience, so I was cautious about changing default configurations. In fact, I think I would have saved myself a world of effort if I had simply enabled the Line Break filter.

Instead, I used a text editor to manually update the paragraph markup in the input text. This took about a half hour, which was a bit quicker than reinstalling a clean Drupal system and repeating the import.

I actually ended up performing this conversion twice. The first time, I used a global change to add paragraph tags, but that ran into trouble with line breaks that fell inside paragraphs. A global change inserting "<br />" would have been a smarter choice.

The Probable Right Answer

If I had it to do over again, I would have enabled the Line Break Converter in the WordPress input format. I have no idea why the thing isn't activated by default. The documentation suggests that it should do the right thing.

 


Blogging in Drupal

WordPress is well designed for blogging. I got used to the TinyMCE editor and easy-to-reach features to import graphics when using WordPress. I also got used to less sophisticated things like paragraph breaks and section subheadings. And I like the email alert when there's something to moderate.

I was appalled to discover that these things are omitted by default in Drupal.

A Litany of Ease-of-Use Oversights

Notice how I stuck an HTML header on the previous line. I like to use these things to break up a lengthy missive to make it easier to read. Drupal omits such things from formatted text by default.

Notice how I just started a second paragraph to address a second issue: and paragraphs, too, are considered unnecessary by Drupal's default configuration. When I talk about the "default" here I speak of "Filtered HTML." Web managers don't want regular users to enter "unfiltered" HTML because it opens the site to such things as "cross site scripting" attacks. On the other hand, we end up with hard-to-read text if our list of permitted HTML is too short.

Drupal's problems aren't limited to the defaults allowed for HTML, but they provide a good start to the list:

  • Filtered contents don't include conventional line or paragraph breaks.
  • Filtered contents don't include subsection headers, like HTML H2, H3, and so on.
  • Filtered contents don't include tables (I'm still fiddling with this one).
  • There's no built-in WYSIWYG editor - you have to install a special module and then you can install the editor software.
  • Once you install the TinyMCE editor, the default configuration provides the user with no toolbar or menu for configuring the text. You must configure such things yourself after installation.
  • Drupal does not provide an easy mechanism to upload images. That requires a separate, optional module even after you take the time to install the WYSIWYG modules and libraries.
  • There is no built-in mechanism to send alerts when it's time to moderate a comment or other new content. In fact, I couldn't find a module that obviously added this feature.

How I Fixed Them

First, I installed the WYSIWYG editor module.

To install modules in Drupal, you generally use FTP. I'm on a Macintosh and I've decided I like CyberDuck. It supports WYSIWYG uploading, downloading, and synchronization. I used to rely on DreamWeaver for synchronization, which is like killing a gnat with a hammer. Even with a nice donation, CyberDuck is much cheaper and easier to live with than DreamWeaver.

The WYSIWYG editor module does not, by itself, include editors. It provides the framework within which you can install editors.

Since I like TinyMCE, I installed it as a Drupal library, which is the format that's compatible with the editor module. I simply went to the TinyMCE web site and downloaded their standard package, which is a bunch of Javascript.

Then, I fixed the WYSIWYG editor to work the way I wanted

Off the shelf, TinyMCE provides just about no functions. So you have to burrow into the "Wysiwyg profiles" under the Drupal Site Configuration.

To enable the editor, you first assign it to an input format. Then, the input format gives you a way to configure the editor.

Click on the format's "Configure" link. Then select "Buttons and plugins." This displays all of the features that may appear on the editor's GUI. The features appear as checkboxes.

Some checkboxes appear as plaintext and some appear with underlined links. I generally enable everything that's in plaintext, which is the first half of the list (about 20 boxes), plus "context menu" and "HTML block format." I also check "Teaser Break," which appears in plaintext at the bottom of the list.

Now, if you want to use the same editor with any other input formats - trust me, you do - then you must repeat the 20-checkbox procedure for each input format. (At least, this is the default behavior. As with most annoyances in Drupal, there may be a module to fix it, assuming you have the patience to look for one and use it.)

Then, I added image uploading

In keeping with Drupal's design philosophy, everything is optional. By itself, the WYSIWYG framework and TinyMCE allow you to create text that includes the usual, desirable elements. You can even include graphics with your text. Well, you can as long as you have a URL that points to your image.

If you want to upload a particular image to go with your text, that's another story. All together now:

For that, you need a different module

I installed the IMCE module for this purpose. This provides a convenient little button on the image insertion dialog so that you can upload the image and retrieve its new URL.

Again, however, we must go back to the WYSIWYG configuration and enable IMCE for each input format.

I added paragraph and heading support to "Filtered HTML"

The only really restrictive input format is Filtered HTML, and it's generally too restrictive by default. I revised it to allow paragraph breaks, line breaks, and headings. To do this, I simply added the tags I wanted: <p>, <br>, <h2>, <h3>, and <h4>.

Finally, I added anti-spam support

Oddly enough, the AntiSpam module seems to be the only thing that knows how to contact me when it's time to moderate new content. There are various other notifier modules, but they're primarily intended to tell subscribers when new content has been published. If input is awaiting moderation, it's unpublished and these modules do nothing.

Unfortunately, the anti-spam module doesn't do a perfect job of notifying me. I get emails for every spam posting as well as every legitimate one.

Rant

Why isn't more of this done correctly by default?


Migrating a WordPress User List to Drupal

I'm always annoyed when I register for a web site only to have my user ID mysteriously disappear. The "scouting.org" web site has recreated itself about four times in the past decade. Each time has led to re-registration by the entire user community.

Therefore I decided to make a strong effort to retain my user community while migrating my site. The easy part was to contact those who provided email addresses and tell them what was happening. The hard part was to deal with passwords.

Step 1: Export the WordPress User List

The first step was to export the user database from WordPress. For this I installed the AmR Users plug-in produced by one AnMari, a WordPress developer. This plug-in provides an interface to construct tailored lists of registered users. I configured a list to display user IDs, "readable" names, email addresses, URLs, and comment counts. Unfortunately I could not display OpenID credentials, so I didn't have a clean way to incorporate them into the Drupal user database.

Another problem: I couldn't transfer user passwords. The passwords were safely stored in a hashed format. In a perfect world, both Drupal and WordPress would have used an identical hashing mechanism. Perhaps the mechanism could have been a standard PHP function used by both packages. In fact, each rolled their own. There were hacks that might have allowed using WordPress hashes on Drupal, but that would have required PHP coding.

A perfect world would have also allowed me to transfer the OpenID credentials. All I should have needed to do was store the appropriate information in the Drupal user database, and everything would have worked.

But that's not how things are. Maybe things will move more smoothly in a few more years.

Step 2: Clean up the user list

Like most webmasters, I get a lot of spam comments. I relied on Akismet to identify them on my WordPress site. I continue to use that service on my Drupal blog via the AntiSpam module.

Akismet killed thousands of bogus comments on Cryptosmith. To post those comments, the spammers had to create user accounts. So I had a lot of bogus accounts. I sorted the wheat from the chaff by retaining only those users whose comments had actually been kept.

I also had to delete a bunch of escaped quotation marks that the exported AmR Users list included in its CSV format.

And I deleted users who hadn't provided email addresses. After all, existing users couldn't retrieve lost passwords without a working email address.
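This cleanup is the sort of thing a short script handles nicely. Here is a hypothetical Python sketch of the filtering I did with Excel and a text editor; the file name and column names are assumptions, not necessarily what AmR Users emits.

import csv

# Keep only users with kept comments and a working email address,
# and strip stray escaped quotes from every field.
with open("users.csv", newline="") as src, \
     open("users-clean.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row = {k: v.replace('\\"', '') for k, v in row.items()}
        if int(row["comment_count"] or 0) > 0 and row["email"].strip():
            writer.writerow(row)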

Step 3: Import the user list to Drupal

I installed the "User Import" module to do this. Now, this may simply reflect my ignorance of Drupal, since there seem to be some very general-purpose import mechanisms. It may be that users are a special case of "nodes" or "taxonomies" or some other basic Drupal thing, and that I could import a list of users as a special case of one of those.

In any case, the User Import module accepted the CSV from AmR Users, after I massaged it with Excel and a text editor.

Drupal may - or may not - automatically send email to new users as soon as they are activated. I chose not to send such emails, since I imported the user list into my Drupal clone system while the main Cryptosmith site was still running WordPress. The automated message would have arrived too soon. Instead, I had a chance to review the imports and make sure that they looked correct. I also tested a few imported users.

Step 4: Tell the users about the transition

Shortly before I shut down the WordPress site, I sent an email to everyone who provided an email address to the site. The message announced the change, and assured everyone that the site contents would still be available without logging in first. I also explained that I would retain login names for users who had posted comments.

Then I performed the transition. I'll talk about that experience elsewhere.

Once the transition was finished, I sent an email to all registered users. Since the updated site was based on Drupal, I needed to install a module to send the email. I chose the "Mass Contact" module. Basic Drupal includes a "Contact" module that provides a web page for contacting the site administrator or other users (depending on how it's configured). The Mass Contact module provides a similar interface to contact specific classes of site users. I configured it to send email to all site users.

Surprising features of Mass Contact

Those of you who received the email were subjected to some of the, well, surprising implications of emailing from Drupal.

  • The Mass Contact module, working with a WYSIWYG editor, does best at producing HTML email.
  • I have a prejudice in favor of raw text email, and Mass Contact lets you select it.
  • You can't really produce raw text with a WYSIWYG editor, though. The editor naturally inserts HTML tags to indicate the required formatting. Even if the only formatting consists of newlines for line and paragraph breaks, you still get tags to produce those breaks (see the sketch after this list).
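
A sketch of the scrubbing it takes to turn WYSIWYG output back into genuine raw text; the sample markup is hypothetical, and this isn't what Mass Contact does internally:

<?php
// The WYSIWYG editor emits markup even for simple paragraph breaks:
$wysiwyg_output = "<p>Cryptosmith has moved to Drupal.</p>\n"
                . "<p>Your login name still works.</p>";

// Turn closing paragraph tags into blank lines, then drop whatever
// other tags remain.
$text = preg_replace('#</p>\s*#', "\n\n", $wysiwyg_output);
$text = strip_tags($text);

echo trim($text), "\n";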

Thus, my bias in favor of raw text email produced a jumble of text and HTML tags in my first attempt. Mass Contact also uses a scheduling discipline that keeps it from sending too much email at once, so messages aren't always transmitted immediately - in some cases you have to wait a while to see the results of a Mass Contact mailing.

Taking a Site Solidly Down

If you visited Cryptosmith during the afternoon of February 5, you may have seen this:

[Screenshot: the Cryptosmith "Site Down" page]

This appeared while I was removing WordPress files from the site and inserting Drupal files. The "Site Down" display was controlled by the ".htaccess" file stored in the site's root directory. As soon as Drupal stored a new .htaccess file, links were redirected to Drupal's scripts.

I constructed the Site Down page from raw HTML and a JPEG of the Cryptosmith logo. Such pages are easy to build.

The HTML File

To begin with, here is the HTML text for such a page:

<html>
<head>
<title>Cryptosmith</title>
</head>
<body>
<p align="center"><img src="cslogo.JPG" alt="Cryptosmith logo" /></p>
<h1 align="center">SITE DOWN</h1>
<p align="center">We are making major changes to the hosting software. The site
should return to service in a few hours. For more information, contact the
<a href="mailto:webmaster@cryptosmith.com">Webmaster</a>.</p>
</body>
</html>

The working files for this display consist of the "index.html" file containing the above contents, a logo image referred to in the "<img src=" tag, and the .htaccess file.

The .htaccess File

The .htaccess file gives the Apache web server some specific instructions on how to respond to URLs. Those of us who rent server space from others must use .htaccess files to adjust the server's behavior. Those who actually run their own server may put this information directly in the Apache configuration file (it's more efficient that way).

If we put the following into the .htaccess file and store it in the web site's root directory, it will direct everything to the index.html page:

#
# Apache .htaccess setting for the 1-page web site
#

# Don't show directory listings for URLs which map to a directory.
Options -Indexes

# point Apache at the directory/site's home page.
DirectoryIndex index.html

# If the URL points elsewhere in the site, take it back to the home page.
ErrorDocument 404 /index.html

In the above text, the Apache server ignores the lines starting with a "#" since they are comments. Each comment explains the line following it. The "Options -Indexes" line disables directory listings: no matter how complex the site might be, visitors can't retrieve anything unless the explicit URL points to an actual file. The "DirectoryIndex" line directs Apache to the "index.html" file, which contains the contents listed earlier.

In most CMS-based web sites, including those run by WordPress, a URL path either leads to a PHP script (a file with the ".php" suffix) or it's interpreted by a PHP script at the root level. These systems use a .htaccess file that leads to an "index.php" file instead of a .html file.
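
For comparison, the stock WordPress rewrite rules look like this - any request that doesn't match a real file or directory gets handed to index.php:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Hand anything that isn't a real file or directory to index.php.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress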

When we replace the CMS .htaccess file with our own, we redirect paths to lead to "index.html." However, we've only provided one such file at the root level. The third Apache statement above, "ErrorDocument," redirects all nonexistent pages (404 errors) to our solitary home page.

The "500 Internal Server Error"

Treat all .htaccess files with respect. Any syntax errors may yield the "500 Internal Server Error" when you try to visit the web site from a browser. While adjusting the Drupal site configuration I managed to stick an error in my .htaccess file, and this yielded a couple of hours of misdirected work. The mistake probably came down to an extra space in a URL specification somewhere.
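
As a hypothetical example of how little it takes, one stray space turns a valid directive into a syntax error - Apache sees three arguments instead of two, rejects the directive, and answers every request with a 500 error:

# Broken: the stray space splits the URL into two tokens, so Apache
# complains that ErrorDocument takes two arguments.
ErrorDocument 404 /index .html

# Fixed:
ErrorDocument 404 /index.html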

The easiest way to eliminate a 500 Server Error is to delete your .htaccess file. That should allow a complete URL (one that includes the file name) to work on your site. Then go back, try to fix the .htaccess file, and see whether the 500 error reappears.

Moving a Site on GoDaddy

I created the new Drupal-based site in parallel with the existing WordPress site. Once the Drupal site seemed solid and provided the necessary content, I installed it over the old WordPress site.

Why GoDaddy?

I moved my hosting from a local provider to GoDaddy about five years ago. The local provider found it easier to offer virtual hosts to customers who did their own host management, while I prefer shared hosting since it requires less management on my part. The provider handled shared-hosting support via email - I'd ask for something and it would happen in the next few hours.

GoDaddy offered the whole package: domain registration, SSL certificates, shared hosting, and even some package installation, via a web interface. So I no longer had to change domain aliasing or host configuration via email.

I'm currently evaluating Drupal performance under GoDaddy. Page handling seems excessively slow, especially when logged in. The customer service people suggest trying their new "4GH" service, which apparently hosts the site on scalable multiprocessors.

The Transition Process

The transition process moved the Drupal prototype site onto my main Cryptosmith domain. This was the cheapest and easiest way for me to do it.

  1. I created the prototype Drupal site. This contained a copy of every file I wanted to have on the new account. I created a subdirectory on Cryptosmith and assigned a full .COM domain name to it. Then I told GoDaddy to install Drupal there. Once installed, I migrated the posts and users from the WordPress site. Then I configured Drupal to mimic WordPress blogging features.
  2. I did a complete backup of the Drupal site and its database. I gave the backed-up database a special name so I'd recognize it when moving the site.
  3. I took the WordPress site offline. I used a hand-crafted web page and .htaccess file to do this, since I knew the WordPress files would be disappearing part way through the process.
  4. I told GoDaddy to delete the WordPress site. This really only deleted the database: I have several separate GoDaddy hosting accounts, and I had originally hosted WordPress on a different one. The WordPress software on Cryptosmith used a database on that other account, under a domain name that no longer exists.
  5. I deleted the remaining Cryptosmith files by hand.
  6. I told GoDaddy to install Drupal on Cryptosmith. This provided a skeletal Drupal site that GoDaddy could identify and track with its built-in host support.
  7. I copied the prototype Drupal files atop the skeleton files in Cryptosmith.
  8. I restored the prototype Drupal database atop the skeleton database in Cryptosmith.
  9. I fixed the .htaccess file and the site went live.

What Really Happened

Like true love, the course of migration rarely runs smooth.

I used Cyberduck to copy my web sites down to my desktop, and Cyberduck choked when it hit access-restricted files. It's annoying to perform a large-scale file copy only to have it die tragically part way through the process. I had to restart the process and carve around the access-restricted files. I'm not exactly sure why GoDaddy has stuck a few files in my site directory over which I don't have control, but that's what I found.

Once I had copied the needed files over to the Cryptosmith domain, I was greeted with a series of "500 Internal Server Error" messages. Instead of immediately searching help fora for obvious causes of such a message, I repeated my migration steps a few times, hoping to eliminate the problem through brute force.

The typical cause of "500 Internal Server Error" is a flaw in a .htaccess file. Once I went back to an earlier .htaccess file and took more care with my editing, the problem went away and the new site went live.