Trusted system
In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy.
The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, regardless of whether those programs are innocent or malicious, or whether they execute tasks that are undesired by the user.
A trusted system can also be seen as a level-based security system in which protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified, confidential, secret, top secret, and beyond. Such systems also enforce the policies of no read-up and no write-down.
Trusted systems in classified information
A subset of trusted systems implement mandatory access control labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.
Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is:
- tamper-proof
- always invoked
- small enough to be subject to independent testing, the completeness of which can be assured.
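As an illustration of this mediation requirement (not drawn from any particular system; all names and the toy policy below are hypothetical), the following Python sketch funnels every access request through a single, deliberately small decision function:

    # Toy reference monitor: every access decision passes through one small,
    # isolated check. Illustrative only; names and policy are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AccessRequest:
        subject: str   # e.g. a process acting on behalf of a user
        obj: str       # e.g. a file or memory segment
        mode: str      # "read" or "write"

    class ReferenceMonitor:
        """Single chokepoint: all access decisions are made here and only here."""
        def __init__(self, policy):
            # `policy` stands in for the system's access-control decision logic;
            # it is kept small so it can be tested independently and completely.
            self._policy = policy

        def check(self, request: AccessRequest) -> bool:
            return self._policy(request)

    def example_policy(req: AccessRequest) -> bool:
        # Trivial stand-in policy: anyone may read, only "alice" may write.
        return req.mode == "read" or req.subject == "alice"

    monitor = ReferenceMonitor(example_policy)
    assert monitor.check(AccessRequest("alice", "payroll.db", "write"))
    assert not monitor.check(AccessRequest("bob", "payroll.db", "write"))

Tamper-resistance and the guarantee of always being invoked are, of course, properties that must come from the surrounding hardware and operating system, not from application-level code such as this.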
The dedication of significant system engineering toward minimizing the complexity of the trusted computing base (TCB) is key to the provision of the highest levels of assurance. The TCB is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems: the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".
The Trusted Computer System Evaluation Criteria (TCSEC) has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3, differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" and support, even encourage, an inter-mixture of security requirements culled from a variety of predefined "protection profiles." While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command, devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects and subjects. The entire operation of a computer system can indeed be regarded as a "history" of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. She defined a generalized notion of "labels" that are attached to entities—corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, entitled Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject.
The concepts are unified with two properties, the "simple security property" and the "confinement property," or "*-property". Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks.
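A minimal sketch of these two rules, assuming a simple linear hierarchy of levels plus compartment sets in the spirit of Denning's labels (the specific levels, compartments, and function names are illustrative, not taken from the Bell–LaPadula report):

    # Illustrative Bell-LaPadula-style checks; a label is (level, compartments).
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(a, b):
        """Label a dominates label b: a's level is at least b's and
        a's compartment set contains all of b's compartments."""
        (level_a, comps_a), (level_b, comps_b) = a, b
        return LEVELS[level_a] >= LEVELS[level_b] and comps_a >= comps_b

    def may_read(subject_label, object_label):
        # Simple security property: no read-up.
        return dominates(subject_label, object_label)

    def may_write(subject_label, object_label):
        # *-property (confinement property): no write-down.
        return dominates(object_label, subject_label)

    analyst = ("SECRET", frozenset({"WNINTEL"}))
    report = ("TOP SECRET", frozenset({"WNINTEL", "TK"}))
    memo = ("CONFIDENTIAL", frozenset())

    assert not may_read(analyst, report)   # read-up is blocked
    assert may_read(analyst, memo)         # reading down is permitted
    assert not may_write(analyst, memo)    # write-down is blocked
    assert may_write(analyst, report)      # writing up is permitted

Under these checks, a Trojan horse running with the analyst's label may read data at or below that label, but it cannot write what it learns into any object labeled lower, which is the confinement argument made above.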
The Bell–LaPadula model technically only enforces "confidentiality" or "secrecy" controls, i.e. it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to inappropriately disclose it. The dual problem of "integrity", i.e. the attendant trustworthiness of subjects not to inappropriately modify or destroy sensitive data, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark–Wilson model and Shockley and Schell's program integrity model, "The SeaView Model."
An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Familiar protection mechanisms such as permission bits and access control lists are examples of DACs.
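As a toy comparison of the two kinds of control (the ACL format, level ordering, and names below are hypothetical), an owner-managed ACL grants or withholds access at the user's discretion, while the mandatory label check is applied regardless of what the ACL says:

    # Illustrative sketch: discretionary (owner-managed) checks layered under
    # a mandatory check that no user can alter. Names are hypothetical.
    ORDER = ("UNCLASSIFIED", "SECRET", "TOP SECRET")

    def dac_allows(acl, user, mode):
        # ACL is owner-controlled, e.g. {"alice": {"read", "write"}, "bob": {"read"}}.
        return mode in acl.get(user, set())

    def mac_allows(subject_level, object_level, mode):
        s, o = ORDER.index(subject_level), ORDER.index(object_level)
        return s >= o if mode == "read" else o >= s   # no read-up, no write-down

    def access(user, subject_level, acl, object_level, mode):
        # The mandatory check applies no matter what the owner granted.
        return mac_allows(subject_level, object_level, mode) and dac_allows(acl, user, mode)

    acl = {"alice": {"read", "write"}, "bob": {"read"}}
    assert access("bob", "SECRET", acl, "SECRET", "read")
    assert not access("bob", "UNCLASSIFIED", acl, "SECRET", "read")  # MAC overrides the DAC grant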
The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon applicable operational and administrative constraints. The model takes the form of a finite-state machine with state criteria, state transition constraints, and a descriptive top-level specification (DTLS), each element of which engenders one or more model operations.
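A highly simplified sketch of that finite-state view (the state layout, invariant, and single operation below are invented for illustration): each operation proposes a successor state, and the transition is accepted only if the state criteria still hold.

    # Illustrative sketch: operations are admitted only if the resulting
    # state still satisfies the declared state criterion (a toy invariant).
    def secure(state):
        # State criterion: every subject holding an object open for reading
        # must have clearance at least equal to the object's label.
        return all(state["clearance"][s] >= state["label"][o]
                   for s, o in state["open_reads"])

    def transition(state, op, *args):
        candidate = op(state, *args)                       # proposed next state
        return candidate if secure(candidate) else state   # constraint: remain secure

    def open_for_read(state, subject, obj):
        return {**state, "open_reads": state["open_reads"] | {(subject, obj)}}

    state = {"clearance": {"alice": 2, "bob": 0},
             "label": {"report": 2},
             "open_reads": frozenset()}

    state = transition(state, open_for_read, "alice", "report")   # accepted
    state = transition(state, open_for_read, "bob", "report")     # rejected; state unchanged
    assert ("bob", "report") not in state["open_reads"]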
Trusted systems in trusted computing
The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.
Trusted systems in policy analysis
In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which
- probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats; or
- deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
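Reduced to a toy sketch (the thresholds, scores, and parameter names are purely hypothetical), the two patterns above amount to something like:

    # Illustrative only: (1) a probabilistic risk score gates authorization;
    # (2) a deviation check flags behaviour outside authorized parameters.
    def authorize(risk_score: float, threshold: float = 0.2) -> bool:
        # Conditional prediction: grant access only if estimated risk is low.
        return risk_score < threshold

    def deviates(observed: float, expected: float, tolerance: float) -> bool:
        # Deviation analysis: flag behaviour outside the expected envelope.
        return abs(observed - expected) > tolerance

    assert authorize(0.05)
    assert not authorize(0.7)
    assert deviates(observed=950.0, expected=100.0, tolerance=200.0)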
In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.
Trusted systems in information theory
Trusted systems in the context of information theory are based on the following definition: In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect—as measured by the uncertainty of the party as to what the message will be.
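The source passage gives no formula, but the standard Shannon quantification of this idea is the self-information of a message and the entropy (average uncertainty) of the source; if the destination already knows the message, its probability is 1 and the information transferred is zero, matching the statement above:

    I(m) = -\log_2 p(m)
    H(X) = -\sum_{x} p(x) \log_2 p(x)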
Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological—trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust, where all necessarily different subjective and intersubjective realizations of trust in each subsystem may coexist.
Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system in which it is conceived—higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.
An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".
The IBM Federal Software Group has suggested that "trust points" provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.