HK1183182A - Method and apparatus for providing security to devices - Google Patents
- Publication number
- HK1183182A (application HK13110391.9A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- tree
- node
- validation
- data
- platform
- Prior art date
Description
Cross Reference to Related Applications
The instant application claims the benefit of U.S. provisional application serial No. 61/314,395, filed on March 16, 2010, and U.S. provisional application serial No. 61/311,106, filed on March 5, 2010, both of which are incorporated herein by reference in their entirety.
Technical Field
The present application relates to providing security to a device.
Background
With the advent of machine-to-machine (M2M) communication, applications in electronic health, geographical tracking, home automation, and consumer electronics devices have become possible. Many such applications require placing the network operator's equipment at the user end. This equipment is vulnerable to malicious attacks. To combat such attacks, user-side equipment requires device integrity verification in addition to other forms of device protection, such as firewalls and virus protection.
Several methods for device integrity protection have been discussed. These include secure boot, in which a trusted execution environment loads and executes only software components that pass integrity verification. However, these methods produce an unorganized set of measurements that becomes cumbersome to manage when the number of measurements is large. What is needed, therefore, are methods and related apparatus that collect, sort, and organize these measurements so that components failing integrity verification can be searched for efficiently.
Disclosure of Invention
Various techniques for generating verification data are disclosed, including a method for generating verification data usable for validation of a wireless transmit/receive unit (WTRU). The WTRU may have one or more components and a secure environment having a plurality of secure registers. According to one embodiment, a value representing a measurement of a WTRU component may be obtained for each of a plurality of components of the WTRU. A Measurement Log (ML) may be generated containing records of the component measurements, and other component-specific data may be stored on the WTRU. Verification data may be generated from the component measurements for each component, and the verification data may be stored in one or more secure registers within a trusted platform module. The verification data and the ML may be organized as a tree structure. The secure registers containing the verification data may define the root of the tree structure. The ML may define internal nodes of the tree structure, and the measurements contained in the ML may define leaves of the tree structure. The tree structure may be formed using secure extend operations of the secure environment.
According to another embodiment, a value representative of a measurement of a WTRU component may be obtained. Verification data may be generated based on the measurement, and the verification data may be stored in a register within a secure environment on the WTRU. The measurement value may be stored at a leaf node of the tree structure. One or more extend operations may be performed in the secure environment to extend the value stored at the leaf node up to the root node of the tree structure. The root node may comprise data in a secure register in which the generated verification data is stored.
In accordance with another embodiment, a method of validating tree-formed verification data generated by a wireless transmit/receive unit (WTRU) is described. The tree-formed verification data may include verification data elements, Measurement Logs (MLs), and component measurements organized in a tree structure. A verification data element may define the root node of the tree structure. The ML may define internal nodes of the tree structure. The component measurements may define leaf nodes of the tree structure. The tree-formed verification data may be received in an organized tree structure. The tree structure is traversed starting from the verification data element at the root of the received tree-formed verification data. As part of traversing the tree structure, values at a branch node of the received tree structure and at its child nodes may be compared to values at the same node locations in a reference tree. It may then be determined whether to validate the WTRU, or individual components of the WTRU, based on the comparison of the node values.
According to another embodiment, a method for certifying a node value of a Measurement Log (ML) generated by a wireless transmit/receive unit (WTRU) is described. The values of the ML may be stored as nodes of a tree structure comprising a root node, internal nodes, and leaf nodes. An attestation packet may be received indicating a node value to be certified by a subtree certificate authority (SCA). The node value may be identified as one that can be verified by the SCA. A manifest associated with the node value may be created that includes validation information associated with the node value. A certificate for the node value may be created that is configured to bind the validation information to the security context of the WTRU. The certificate may be issued with the manifest and provided to the WTRU's security context, which stores the certificate in its ML.
Other features and aspects of the systems, methods, and apparatus described herein will become apparent from the following detailed description and the associated drawings.
Drawings
The foregoing summary, as well as the following detailed description, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the systems, methods, and apparatus described herein, exemplary embodiments are shown in the drawings; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
fig. 1 illustrates an exemplary long term evolution wireless communication system/access network;
fig. 2 is an exemplary block diagram of the long term evolution wireless communication system of fig. 1;
FIG. 3 illustrates the general structure of a tree-formed Stored Measurement Log (SML) and verification data;
FIG. 4 shows an example of one algorithm (Algorithm 1) showing tree formation;
FIG. 5 shows the correct configuration at the right edge;
FIG. 6 shows one algorithm (Algorithm 2) for cleanup of an incomplete tree;
FIG. 7 shows sequential formation/branching of an incomplete tree of depth 3;
FIG. 8 shows a maximum capacity arrangement of tree validation data, where the measurement values at the leaves are denoted as m;
FIG. 9 illustrates the classification of node configurations in a tree SML;
FIG. 10 shows the expected fraction of bad internal nodes for a random distribution of f bad leaves among 2^d leaves, where d = 16;
FIG. 11 shows the correct configuration of the values of each elementary triangle in the SML tree;
FIG. 12 shows Algorithm 1 for finding the first failure point in a linear hash chain;
FIG. 13 shows an example of a Huffman coding tree;
FIG. 14 illustrates an example of tree pruning;
FIG. 15 illustrates an optimal tree verification system diagram/system diagram and associated communications;
FIG. 16 illustrates a tree with alternate child links in which a software component or function represented by one module uses another module;
FIG. 17 shows Algorithm 2 — for determining the population of binary trees with measurements;
FIG. 18 shows Algorithm 3 — an outline for determining the tree using TPM commands;
FIG. 19 shows Algorithm 4 — for determining the population of n-ary trees with measurements;
FIG. 20 shows Algorithm 5-for determining the population of a binary tree with alternate child links;
FIG. 21 illustrates compare-and-prune Algorithm 6 — for determining the nodes and leaves that fail the integrity check;
FIG. 22 shows Algorithm-1, displaying TPM_Reduced_Tree_Verify_Load;
FIG. 23 shows Algorithm-2, showing TPM_Reduced_Tree_Verify;
FIG. 24a shows Algorithm-3, showing TPM_Tree_Node_Verify;
FIG. 24b shows Algorithm-4, showing TPM_Reduced_Tree_Update;
FIG. 25 shows data categories for PVM;
FIG. 26 illustrates a subtree validation protocol for subtrees having roots;
FIG. 27 illustrates certificate sub-tree binding;
FIG. 28 shows a multi-tree structure with left-side imbalance;
FIG. 29 shows one exemplary embodiment of a tree structure as described herein;
FIG. 30 is a component sub-tree structure;
fig. 31 shows split validation step 1: collecting the measurements;
fig. 32 shows split validation step 2: verification of subtrees;
fig. 33 shows split validation step 3: connecting to a service;
FIG. 34 illustrates the H(e)NB use case for split validation;
fig. 35 shows an H(e)NB blocking access by a malicious device;
fig. 36 shows an M2M GW grouping devices based on device type, device class, device characteristics, or connection profile, and providing group certificates for a device validation tree;
fig. 37 shows M2M GW P2P split validation;
FIG. 38 is a system diagram of an exemplary communication system in which one or more disclosed embodiments may be implemented;
figure 39 is a system diagram of an exemplary wireless transmit/receive unit (WTRU) that may be used in the communication system shown in figure 38; and
fig. 40 is a system diagram of an exemplary radio access network and an exemplary core network that may be used in the communication system shown in fig. 38.
Detailed Description
When referred to hereafter, the term "wireless transmit/receive unit (WTRU)" includes but is not limited to a User Equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a Personal Digital Assistant (PDA), a computer, or any other type of device capable of operating in a wireless environment. When referred to hereafter, the term "base station" includes but is not limited to a node B, a site controller, an Access Point (AP), or any other type of interfacing device capable of operating in a wireless environment.
Various techniques for generating verification data are disclosed, including a method for generating verification data that may be used to validate a wireless transmit/receive unit (WTRU). The WTRU may have one or more components and a secure environment having a plurality of secure registers. The secure environment may include a secure hardware and/or software environment that provides a secure execution environment. For example, the secure environment may be a Trusted Platform Module (TPM), a smart card, a Universal Integrated Circuit Card (UICC), or any combination thereof. The secure environment may be used to protect secure functions (such as, for example, cryptographic functions) and secure resources (such as, for example, operational registers, memory, random number generators, timers, and/or clocks).
According to one embodiment, the verification data may be generated by obtaining, for each of a plurality of components of the WTRU, a value representative of a measurement of the WTRU component. A Measurement Log (ML) may be generated that contains a record of the component measurements, and other component-specific data may be stored on the WTRU. For each component, verification data may be generated from the component measurements and stored in one or more secure registers within a trusted platform module. The verification data and the ML may be organized as a tree structure. The secure registers containing the verification data may define the root of the tree structure. The ML may define internal nodes of the tree structure, and the measurements contained in the ML may define leaves of the tree structure. The tree structure may be formed using secure extend operations of the secure environment.
According to another embodiment, a value representative of a measurement of a WTRU component may be obtained. Verification data may be generated based on the measurement, and the verification data may be stored in a register within a secure environment on the WTRU. The measurement value may be stored at a leaf node of the tree structure. One or more extend operations may be performed in the secure environment to extend the value stored at the leaf node up to the root node of the tree structure. The root node may comprise data in a secure register in which the generated verification data is stored.
In accordance with another embodiment, a method of validating tree verification data generated by a wireless transmit/receive unit (WTRU) is described. The tree-shaped verification data may include verification data elements, Measurement Logs (ML), and component measurements organized as a tree structure. The verification data element may define a root node of the tree structure. ML may define internal nodes of a tree structure. The component measurements may define leaf nodes of the tree structure. The tree verification data may be received in an organized tree structure. The tree structure is traversed starting from the authentication data element at the root of the received tree-shaped authentication data. As part of traversing the tree structure, values at the branch nodes and child nodes of the branch nodes of the received tree structure may be compared to values at the same node locations in the reference tree. It may then be determined whether to acknowledge the WTRU or individual components of the WTRU based on the comparison of the node values.
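The traversal described above can be sketched in plain Python. This is only an illustration of the compare-and-descend idea, not the patent's algorithm: the nested-tuple tree layout and the function name `find_failed_leaves` are assumptions made for the example. A subtree whose node value matches the reference tree is pruned (not descended), so failing leaves are located in a number of comparisons on the order of the number of failures times the tree depth.

```python
def find_failed_leaves(node, ref, path=()):
    """Compare a received hash tree against a reference tree.

    Each node is a tuple (value, left, right); leaves have
    left = right = None. Returns the paths (sequences of 0/1
    branch choices) of leaves whose values differ.
    """
    value, left, right = node
    ref_value, ref_left, ref_right = ref
    if value == ref_value:
        return []            # whole subtree validates; prune descent
    if left is None:         # mismatching leaf: a failed component
        return [path]
    return (find_failed_leaves(left, ref_left, path + (0,)) +
            find_failed_leaves(right, ref_right, path + (1,)))
```

For example, if only the left leaf of a two-leaf tree was tampered with, the root and the left child mismatch while the right subtree is pruned after a single comparison.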
According to another embodiment, a method is described for certifying a node value of a Measurement Log (ML) generated by a wireless transmit/receive unit (WTRU). The values of the ML may be stored as nodes of a tree structure comprising a root node, internal nodes, and leaf nodes. An attestation packet may be received indicating a node value to be certified by a subtree certificate authority (SCA). The node value may be identified as one that can be verified by the SCA. A manifest associated with the node value may be created that includes validation information associated with the node value. A certificate for the node value may be created that is configured to bind the validation information to the security context of the WTRU. The certificate may be issued with the manifest and provided to the WTRU's security context, which stores the certificate in its ML.
Structured validation is a validation method in which the data and operational aspects of validation are structured. Separate but related concepts and methods of structured validation are described herein. For example, tree-formed validation (TFV) is described herein, focusing on methods using subtree validation, extensions and variants of TFV, and split validation, in which validation tasks are distributed among two or more network entities. This allows the network entities to perform device integrity validation for (connected) devices in a distributed manner, so that each validating entity may validate a portion of the device without necessarily having to validate the entire device.
Fig. 1 illustrates a Long Term Evolution (LTE) wireless communication system/access network 400 that includes an evolved universal terrestrial radio access network (E-UTRAN) 605. The E-UTRAN 605 includes a WTRU 610 and a plurality of evolved Node Bs (eNBs) 620. The WTRU 610 communicates with an eNB 620. The eNBs 620 are connected to each other using an X2 interface. Each of the eNBs 620 is connected to a Mobility Management Entity (MME)/Serving Gateway (S-GW) 630 through an S1 interface. Although a single WTRU 610 and three eNBs 620 are shown in fig. 1, it should be appreciated that any combination of wireless and wired devices may be included in the wireless communication system/access network 400.
Fig. 2 is an exemplary block diagram of an LTE wireless communication system 500 including a WTRU 610, an eNB 620, and an MME/S-GW 630. As shown in fig. 2, the WTRU 610, eNB 620, and MME/S-GW 630 are configured to perform a method for providing security to a device.
In addition to the components found in a typical WTRU, the WTRU 610 includes a processor 716 with an optional linked memory 722, at least one transceiver 714, an optional battery 720, and an antenna 718. The processor 716 is configured to perform a method for providing security to a device.
The transceiver 714 communicates with the processor 716 and the antenna 718 to facilitate the transmission and reception of wireless communications. In the case where the battery 720 is used in the WTRU 610, the battery 720 powers the transceiver 714 and the processor 716.
In addition to the components found in a typical eNB, the eNB 620 includes a processor 717 with an optional linked memory 715, a transceiver 719, and an antenna 721. The processor 717 is configured to perform a method of providing security to a device. The transceiver 719 is in communication with the processor 717 and the antenna 721 to facilitate the transmission and reception of wireless communications. The eNB 620 is connected to a mobility management entity/serving gateway (MME/S-GW), which MME/S-GW 630 includes a processor 733 having an optional link memory 734.
Typically, to ensure device integrity, measurements of software/firmware or hardware (such as, for example, cryptographic hash values) are taken and compared to trusted reference values. This comparison of measurements, or of a function or grouping of such measurements, referred to as verification data, against trusted reference values may be performed inside the device (autonomous validation) or externally by a validation entity (semi-autonomous or remote validation). In the case of semi-autonomous or remote validation, the measurements may be sent as an unorganized set in a payload, which may be encrypted, integrity protected, and cryptographically certified.
To find a component that fails integrity verification, the set of measurements may be compared to a set of reference values, yielding the index set of failed integrity measurements. However, such an unorganized measurement set may be difficult to manage if the number of measurements is large.
To optimize the search for modules that fail the integrity check, verification data is generated in the form of a hash tree over measurement logs, such as Stored Measurement Logs (SMLs). The term SML, as used by the Trusted Computing Group (TCG) architecture and specifications, may be used in describing various embodiments of the measurement log described herein, but an SML is only one exemplary embodiment of a measurement log. One example of such an organization is tree-formed validation (TFV). The stored measurement logs may be organized into a balanced binary tree, and an algorithm may be proposed to generate the tree using TPM commands. The algorithm may be generic and may be extended beyond balanced binary trees. The SMLs resulting from tree formation may be similar in content to TCG SMLs, but may be structured, formed, computed, used, stored, retrieved, updated, deleted, transmitted and/or received in a communication, or processed or operated upon, differently from how TCG SMLs are handled.
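As a rough illustration of this organization (not the patent's TPM-command algorithm), the measurements of a stored measurement log can be arranged as the leaves of a balanced binary Merkle tree whose root plays the role of the verification data register. The helper names and the duplicate-last-leaf padding rule below are assumptions made for the sketch; SHA-1 is used only because the TPM uses it.

```python
import hashlib

def h(x: bytes) -> bytes:
    # SHA-1, as used by the TPM for digests
    return hashlib.sha1(x).digest()

def merkle_root(leaves):
    """Build a balanced binary hash tree bottom-up.

    Returns the root (the verification data value) and the list of
    levels (leaves plus inner nodes), mirroring a tree-formed SML.
    """
    level = list(leaves)
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:                 # pad odd levels by duplication
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return level[0], tree
```

The root alone suffices to integrity-protect the whole log, while the inner levels allow a validator to descend only into mismatching subtrees.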
TFV may be implemented in a TrE, for example as described below. This enables many other types of use cases and is a reference enabler for the complex network-side platform validation targeted by the TrE. The technical content of TFV is described herein, together with an overview of its aims, and TFV is also put into the context of the TrE, whose applications are described herein.
One element of TFV is a hierarchy of verification data that brings various benefits to platform validation by a remote validator. Algorithms for creating and managing tree-formed verification data are described herein. Furthermore, attestation of substructures of the platform (i.e., subtrees of the verification data) may be used to validate the platform in TFV. The verification data at the highest level in the hierarchy, called the root, may be protected in TFV, such as, for example, by a verification data register with special (e.g., hardware) protection. TFV may operate on the tree-formed verification data through small and efficient algorithms. The full implementation of TFV may reside in the TCB, or even in a hardware-protected secure execution environment. The complex structure of validation data may be associated with and/or protected by the tree hierarchy of tree-formed verification data. Subtrees can provide inherent semantics to validation and verification data through structure-to-substructure relationships. A subtree can thus identify a functional part of the platform to the validator.
One embodiment of TFV may use a modified TPM. Trusted computing is a potential entry point for the implementation and/or standardization of TFV as prior art. TFV also easily generalizes from trees (such as binary trees) to more general structures with sub-structures.
Described below are elements that may be provided by TFV.
Security: hardware protection is provided in TFV by protecting the root of the tree-formed verification data. The reference protection level is that of a PCR in the TPM, which is shown to be maintainable by TFV. The small size and low platform-side complexity of the algorithms enable implementation on small TCBs or chips.
Management: TFV includes methods and apparatus for securely picking out substructures of a platform and managing the platform based on the collection of such substructures. The modules represented by TFV subtrees can be changed, updated, or even moved between platforms flexibly, with the required security features maintained in every case.
Distribution: the hierarchy of TFV allows for a hierarchical segmentation of acknowledgements between entities. This makes the design of communication scenarios and use cases more flexible and beneficial to efficiency.
Efficiency: one efficient data structure for searching, a binary tree, can be implemented in the TFV. It shows that, for example, a search for failed components (with unwanted integrity measurements) can be more efficient in TFV than in TCG-like platform proofs. The natural hierarchy introduced by TFV provides the option of load distribution during validation.
One exemplary feature of TFV is that, by virtue of its built-in hierarchical order, it can be designed for validation of a large number of low-capability platforms by "more core" entities. Thus, TFV may be applicable to devices connected to a gateway and/or connected to a network via a gateway. One exemplary use case is an M2M communication scenario. This property makes TFV orthogonal to many existing concepts that provide more semantics to state attestation by a trusted platform. Other approaches to platform attestation follow the opposite philosophy: they are suitable for platforms that can generate complex validation data, without assuming much of a hierarchical, modular structure. TFV may be combined with such orthogonal methods, including PBA, semantic attestation, virtualization, and/or HIM.
The following is a list of terms and acronyms used in this description.
RIM Reference integrity metric. Provides a reference value to which actual measurement data may be compared. RIMs are counterparts of the measurement values provided by a device for validation. They serve as references for comparing desired target values with reported measured values. RIMs provide proof of integrity in the sense that they are uniquely associated with a component, for example as a cryptographic digest value of the component code obtained in a secure test facility. They are a metric in the sense that they allow direct (deterministic) comparison with measured values.
RIM certificate A certificate containing the RIM for a particular component, signed by a TTP.
AuV autonomous validation.
IDS Intrusion detection system.
DoS Denial of service (attack).
PVM Platform validation and management. A combination of platform validation by the PVE and OTA management of the platform by the HMS, including all functions that can be derived from the combination.
DMS Device management system. A generalization of the 3GPP LTE Home (e)NodeB management system (HMS) (TS 32.583, [4]) to general devices, enhanced by PVM functionality.
RIMman RIM manager. The entity that manages the validation database V_DB. It is the only entity authorized to do so, and the only entity performing cryptographic operations on digital certificates (verification, generation).
SAV Semi-autonomous validation. The validation variant upon which PVM is based.
TCG trusted computing group.
TSS Trusted software stack.
TPM Trusted platform module.
TTP Trusted third party.
TCB Trusted computing base. Parts of a system that cannot be verified at run-time and therefore must be unconditionally trusted.
CN operator core network.
SHO Selected home operator.
PKI public key infrastructure.
PDP policy decision points.
PEP policy enforcement points.
D-H Diffie-Hellman.
TFV Tree-formed validation.
Verification data Data that uniquely identify the overall result of internal verification during the secure boot process. The prime example is a PCR value, i.e., a digest of a large number of measurements in the form of a hash chain.
Validation data In distinction to verification data, validation data are all data submitted to another party (the validator) and used to assess the trustworthiness of the platform state.
Validation The submission of validation data to the validator, for example via remote attestation as defined by the TCG, and the process of its evaluation by the validator. Validation data may generally include verification data, such as referenced verification data register (PCR) values. Validation may include policy evaluation and action triggering by the validator in addition to cryptographic verification of the validation data.
Verification data register Hardware storage that protects verification data from unauthorized access and modification.
The establishment of a trust relationship with a trusted platform relies on a validation process. Validation allows an external entity to establish trust in the expected behavior of the platform based on provided evidence of the platform configuration. In a validation mechanism such as remote attestation, the trusted platform exhibits verification data generated during the boot process. These data may be hardware-protected values of platform configuration registers, containing nested measurements (e.g., hash values) of all loaded or launched components. These values may be generated in linear order by a secure extend operation. Low-granularity diagnostics of components by a validator, based on the linear order of verification data and the associated measurement logs, may be inefficient. A method of generating tree-formed verification data is therefore provided, in which component measurements represent the leaves and protected registers represent the roots. The functionality of the method is shown using a limited number of hardware-protected registers and standard extend operations. In this manner, the security of the verification data may be maintained, while the stored measurement logs may be organized as a tree. A basic mechanism for validating a platform using tree-formed measurement logs and verification data is also discussed.
The process of establishing trust in a computing platform may follow a general pattern. During platform startup, components are measured by a protected entity on the platform. For example, a component may be measured before being loaded and/or executed. The generation of a chain of trust is an important concept for trusted computing systems. This chain extends without gaps from the system's root of trust up to the current system state, including all executed instructions and programs. Each component may be required to measure and report the next component before it is executed. Measuring the direct successor prevents unmonitored execution of code between measurement and actual execution. The measurement process may be protected by a root of trust for measurement and may be implemented, for example, by computing a digest value over the component's code and configuration data.
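The measure-before-load pattern above can be sketched as follows. This is purely illustrative: `measure`, `secure_launch`, and the callback parameters are names invented for the example, and the digest (SHA-1 here, as in the TPM) stands in for the root of trust for measurement.

```python
import hashlib

def measure(code: bytes) -> bytes:
    # Root-of-trust measurement: a digest over code/configuration data
    return hashlib.sha1(code).digest()

def secure_launch(components, extend, load):
    """Measure and report each component immediately before it is
    loaded, leaving no gap in the chain of trust."""
    for code in components:
        extend(measure(code))  # report to protected storage first...
        load(code)             # ...then hand over execution
```

Because each measurement is extended into protected storage before its component runs, a component cannot falsify its own measurement.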
The verification data may be compiled from the measured values by a protected operation and/or stored in protected storage. The verification data may uniquely identify the platform state, such as after completion of a secure boot. Embodiments of these processes are authenticated boot and secure boot, as specified by the Trusted Computing Group (TCG). Authenticated boot may be used for the PC client; secure boot may be used for mobile platforms. The difference between the two is that secure boot adds a local verification and enforcement engine that boots a component only if its measured value equals a trusted reference value.
The TCG specifies extend operations, via the Trusted Platform Module (TPM) and the Mobile Trusted Module (MTM) respectively, for computing verification data from measurements that are hash values of component code and/or data. The data may be stored in Platform Configuration Registers (PCRs). As an example, there may be a minimum of 16 PCRs according to version 1.1 of the specification, and at least 24 PCRs in version 1.2 of the TPM. A PCR may be accessed by authorized commands. The extend operation establishes a linearly ordered, nested chain of hash values, similar to the Merkle-Damgard transform, as follows:
Vi ← H(Vi || m) (Equation 1)
where Vi represents a verification data register (PCR i = 0, ..., 23), H is a collision-resistant hash function (SHA-1 in the case of the TPM), and m = H(data) is a measurement value. The verification data of a TCG trusted platform are thus secured by the protection functions and shielded capabilities of the TPM.
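A minimal sketch of the extend operation of Equation 1, using Python's hashlib with SHA-1 as in the TPM. The `extend` helper and the component byte strings are illustrative assumptions; this is not the TPM command interface.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    # Vi <- H(Vi || m): nested hash over the previous register
    # value concatenated with the new measurement digest
    return hashlib.sha1(register + measurement).digest()

# A PCR starts at a known initial value (all zeros).
pcr = bytes(20)
for component in [b"bootloader", b"kernel", b"app"]:
    m = hashlib.sha1(component).digest()  # m = H(data)
    pcr = extend(pcr, m)
```

The final register value depends on every measurement and on their order, which is what makes the linear chain tamper-evident.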
The verification data may be accompanied by more expressive measurement records and/or other component-specific data in a Stored Measurement Log (SML). For validation by an external entity, the verification data and/or other data, such as the SML, may be signed by the platform and passed to the validator. The validator can assess the trustworthiness of the platform to any desired granularity, limited only by the total information passed in the validation process. Exemplary implementations of validation are defined by the TCG in attestation protocols. The TCG assumes that validation may ultimately be used to take corrective steps on the trusted platform (e.g., upon first network or service access), as envisioned by the TCG's Trusted Network Connect working group.
A method is provided for organizing verification data and SMLs in a tree (such as, for example, a Merkle hash tree), in contrast to the linear order envisioned by the TCG specifications. From an application point of view, the efficiency problems of linear-chain verification data are prominent. The main security issue in organizing verification data as a tree is to make their generation as secure as the measurement extend operations of the TCG specifications. A method and algorithms are presented for generating verification data, in a limited set of hardware-protected registers, that faithfully represent hash tree root nodes. It is also shown how tree-formed verification data and SMLs can be used efficiently and effectively for validation. Implementation options and experiments performed on tree-formed validation data are also discussed.
The verification data provide information about the state of the system with unconditional security. For example, they may be secure independently of the SML, which according to the TCG standards is not specially protected on the platform or in validation (it may not be part of the signed attestation data). The signed PCR values, i.e., the verification data themselves, may provide implicit integrity control for the SML. To this end, the verification data may be recalculated by tracing back all the extend operations based on the measurements in the SML.
The TCG-standardized method of securing the measurement log using PCR values during authenticated boot may be based on techniques introduced by Schneier and Kelsey for securing audit logs on untrusted machines. In fact, it may be a simplification thereof, since the last element of the hash chain remains in the PCR, while the SML may contain the measurement values rather than the intermediate terms of the hash chain. Integrity measurement using a TPM may be implemented in the Integrity Measurement Architecture (IMA), a Linux kernel module that measures integrity using a TPM and generates a linear SML.
The validation data generated by the linear chain of extend operations may be of limited value for remote diagnostics and/or advanced management of the platform, such as component-wise remediation. Essentially, a manipulation of the SML cannot be localized with certainty, whether the measurement values were tampered with before being extended into the PCR, or the SML itself was tampered with after secure boot. Furthermore, the space complexity of real-world SMLs with hundreds or thousands of measured components can make it costly to sift for the components that fail validation (components whose measured values differ from "good" reference values). To check code and/or data, various cryptographic checksum functions are available, and they may require that the integrity of the checksums of the "correct" data be maintained. The need for a centralized database of software in valid versions on various machines can be a significant management problem, requiring an efficient solution. Further, large-scale deployment of networked devices, such as, for example, required in machine-to-machine communication scenarios, may require a balanced and efficient trust infrastructure on both the device and the network side. Devices that are loosely connected to the network and that operate semi-autonomously have high security requirements. Industry requirements may impose a high bar for remote integrity verification or validation of connected devices. The methods and apparatus described herein may be used to make validation expressive, efficient, and/or secure.
The specifications of the TCG Infrastructure working group include an approach to this problem, hierarchically differentiating verified components and subcomponents. The concept and notation of a tree of trust (ToT) representing a platform structure are described. A ToT may represent platform components, annotated with trust and security statements, from the TPM up to applications. It can be used to evaluate the trust that should be placed in the platform, or even to reorganize the platform according to certain constraints.
Another technology area in which the disadvantages of relying only on linear chains become apparent is virtualization. Virtual machines can be dynamically generated and/or destroyed at many potential levels, resulting in a tree-like dynamic structure of trust dependencies. While this community may have acknowledged that structured validation data are needed to truly assess the trustworthiness of such a platform, a fine-grained association of such tree-formed data hierarchies with verification data (PCR values) may be lacking.
The verification data and SMLs may be organized into a binary tree structure. In such a structure, a verification data register holds the root, the SML data structure may include the internal nodes, and the leaves may be the component measurements. The entire structure may be tree-formed, such as, for example, in the category of Merkle hash trees. The method can be generalized to n-ary and arbitrary trees. The Christmas tree of FIG. 3 is used to illustrate the general concept of tree-formed verification.
FIG. 3 shows the general structure of a tree-formed SML and its associated verification data. The star represents the root of the tree, stored in a verification data register. The components (code and/or data) are represented by the packages at the leaves. The measurement hash value of a component is represented by a slipknot. The internal nodes (balls) transport verification information upward to the root. The lines hint at the traversal of the tree for validation, as explained in more detail later.
The secure generation of verification data representing the root node of a hash tree may present a problem. In the normal extend operation, only the measurement value taken by the root of trust for measurement (RoTM) of a component and the current verification data register value V_i are used, and the operation itself is performed in the hardware-protected TPM.
Thus, in particular, unprotected measurements stored in the SML are not used in the generation process. This is not directly possible with a hash tree, where adding a new leaf may affect d-2 internal nodes of the tree, where d is the depth of the tree. A challenge may be to generate tree-formed verification data within a limited number of hardware-protected registers (PCRs), such as by using a single leaf measurement as input together with TPM extend operations and other TPM capabilities.
From the minimal requirements a system needs to create and protect tree-formed verification data, it is clear that the methods and apparatus in the following description may not be limited to platforms and secure hardware units that comply with the TCG standards.
Verification of a program may be performed before loading and concurrently with startup. Attestation may also be used as described herein. Code authentication is one of the goals of trusted computing. The executed code may be protected by a secure boot of the platform. For example, hardware mechanisms may be used to bootstrap trust into a host with a secure coprocessor on standard PC hardware. A trusted platform may be applied. The secure hardware may be included in a secure boot process. For example, if an anomaly is detected, the secure coprocessor may stop the boot process. This assumes the boot ROM is secure. To ensure this, the address space of the system may be configured so that the boot vector and boot code are provided directly by the secure coprocessor, or the boot ROM itself may be a piece of secure hardware.
In any event, the secure coprocessor verifies the system software (OS kernel, system-related user-level software) by checking software signatures against known values. Tamper-resistant code can be implemented in this process. One approach to this problem may be to anchor trust for programs in hardware, such as the XOM (eXecute Only Memory) processor architecture and the XOM operating system built on it. This may not address the issue of securely loading programs and/or attesting to external entities. AEGIS implements secure boot on a PC. For example, AEGIS uses signed hashes to identify each layer in the boot process, as does Terra, which attests to loaded components with a complete certificate chain down to attestation of the virtual machine.
The TCG specifications define binary remote attestation to remotely verify the integrity of a platform by verifying binary executables. Executed code may be measured when it is loaded. The measurements may be stored in PCRs as verification data, which the TPM can attest to by signing them with a TPM-protected key. The verifier, upon receiving these measurements, may decide whether the platform can be considered trustworthy. Because the full configuration is communicated and verified, the verifier learns the configuration of the machine. Binary attestation thus discloses the configuration, posing a privacy risk. As a different solution, "properties" and "property-based attestation" (PBA) have been discussed. PBA allows the verifier to be assured of security properties of the verified platform without revealing detailed configuration data. A Trusted Third Party (TTP) is used to issue certificates that map platform configurations to the properties (in particular, desired/undesired functionality) that these configurations can fulfill. The TPM may then use zero-knowledge proofs to attest these properties to the verifier without disclosing the complete configuration.
PBA thus transfers the infrastructure problem of platform validation to the TTP, similarly to the Privacy Certification Authority (CA) of the TCG, but extends its role. Another alternative is proposed by the Nexus OS, which establishes a minimal Trusted Computing Base (TCB) and strong isolation between user space and privileged programs. Nexus has secure memory regions, and monitoring and enforcement machinery to protect them. One application moves device drivers into user space. Nexus attestation attaches descriptive labels to monitored programs, thus allowing PBA-like behavior, but native to the system. Neither the PBA concept nor the Nexus approach provides a way to validate complex systems that are composed of numerous components and that will furthermore be dynamically managed. Both methods are orthogonal to the present one and can be combined with it.
Hierarchical Integrity Management (HIM) has been proposed as a framework for component-wise integrity measurement and policy-enabled management of platform components. Components and subcomponents are related in HIM by a dependency graph (the most general structure useful for this purpose). The aim of HIM, however, is not remote platform validation, and it does not protect structured platform validation data in PCRs. Instead, it keeps the measurements in a global, table-like component configuration register (a software register).
One application of integrity-protected hash trees to large data sets, such as, for example, the hash trees introduced by Merkle, may be certificate management in a PKI. This can yield long-term accountability of CAs, using tree structures such as Merkle trees or authenticated search trees. The use of hash trees can be extended to general long-term secure archiving of digital data. Hash trees may also be used for runtime storage protection.
A system may use a hash tree for storage and/or memory protection, partitioned into untrusted storage and a TCB. A program running on the TCB may use the hash tree to maintain the integrity of data stored on the untrusted storage, which may be, for example, an easily accessible bulk store in which the program regularly stores and loads data that does not fit into the TCB. The root of the tree may be stored in a fixed-size, on-chip trusted register, while other nodes may be stored in main memory or cache. Another use of hash trees is in supporting authentication of distributed code in Wireless Sensor Networks (WSNs). Also in WSNs, data aggregation involving multiple nodes may be integrity-protected using hash trees.
Another approach to making validation data searchable may use authenticated append-only skip lists, which are ordered linked lists designed to allow fast lookup of stored data elements by employing "shortcuts".
However, trees may be better suited for validation of platform state, such as, for example, for efficiently determining the subset of failed components at the leaves. Systems, methods, and apparatus are described herein for generating a tree structure (e.g., a binary Merkle tree) from component measurements using a limited set of tamper-resistant verification data registers. The tree structure may be generated using the capabilities of a TPM, such as, for example, the standard extend operation. The algorithm may be small enough to be executed within the TCB, in particular on-chip. This part of the method may improve the security of the generation of the root of the hash tree, which in turn may lend more security to the tree nodes. Systems, methods, and apparatus are also described herein that increase the security of remote platform validation while benefiting from the efficiency of tree structures for failure search, exploiting the enhanced diagnostic capabilities of tree-formed PCR values and SMLs for efficient validation. Such tree-structured data may be used for security diagnostics, validation, and/or attestation.
The systems, methods, and apparatus described herein may securely generate a root verification value using a limited number of verification data registers. References to particular implementations of trusted computing specified by the TCG, such as, for example, TPM operations, PCRs, and/or SMLs, are illustrative realizations used in implementing the systems, methods, and apparatus described herein. The algorithms and/or processes may be adapted to any security technology offering the minimal capabilities they use.
One of the hardware-protected registers, e.g., a PCR, may hold the root of the final tree. The tree may be binary, which keeps the algorithm compact and, for example, provides the finest-grained detection of failed components. The leaves may carry the measurements, and the internal nodes may be stored in a modified SML. The SML may be modified in a manner that supports the tree structure of the validation data, i.e., it may not be a linear list of measurements; rather, the data structure may support standard tree operations and traversal. For efficient searching during platform validation, the SML may support adding new leaves and maintaining edge relationships. Adding a new measurement at a leaf of a tree of depth d may require recalculating the root of the tree stored in a verification data register V, and the d-1 internal nodes of the reduced hash tree of that leaf. The "left" and "right" sides of a Merkle tree carry a natural coloring, since the binary extend operation (1) is not commutative. The leaves inherit this order, increasing from left to right. The binary, d-digit representation of a leaf n, 0 ≤ n ≤ 2^d - 1, written <n>, yields natural coordinates for the internal nodes and edges on the unique path from that leaf to the root. That is, the k-th digit (counted from the MSB, k = 1, ..., d), <n>_k, determines whether the node at depth k-1 on the path is connected via its left or its right edge, according to whether <n>_k = 0 or <n>_k = 1, respectively.
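The coordinates derived from the binary representation of a leaf index can be illustrated with a small helper; the function name is hypothetical and the sketch assumes a full binary tree:

```python
def leaf_path(n: int, d: int):
    """Edge choices on the path from the root to leaf n in a full binary
    tree of depth d, read from the MSB of n's d-digit binary form:
    digit 0 means the left edge, digit 1 the right edge."""
    assert 0 <= n <= 2 ** d - 1
    digits = format(n, f"0{d}b")              # <n>_1 ... <n>_d, MSB first
    return ["left" if b == "0" else "right" for b in digits]

# Leaf 5 in a depth-3 tree: binary 101, i.e., right, left, right.
```

This mapping is what lets the update algorithm know, for each new measurement, which internal nodes lie on the path that must be recomputed.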
The root of each subtree produced during execution of the algorithm can be securely stored in a verification data register V. If there are two subtrees of the same depth d' (a measurement being a subtree of depth 0), they can be merged into a single tree of depth d' + 1. In such a merge operation, one of the two registers protecting the subtree roots may be released after the merge. An update algorithm for newly arriving measurement values can be formulated, whereby the registers V_1, ..., V_{d-1} contain the current state of the "active" subtrees of depths 1, ..., d-1, and V_d may contain the current global root value.
Here, "active" describes a subtree whose root is waiting to be merged with a subtree of the same depth. The algorithm is carefully designed to use only the actual measurement, the protected registers, and/or the normal extend operation, and not to involve unprotected memory. Empty nodes of the full binary tree of depth d are denoted by nil. The formation of the tree may be performed by Algorithm 1, shown in FIG. 4.
The operations involved in Algorithm 1 are:
M, adding a measurement m to V_d; V_d ← m.
S_V, storing a verification data register to the SML; V_k → SML.
S_m, storing a measurement to the SML; m → SML.
V, copying a verification data register; V_k → V_{k+1}.
E1, extending V_d with a measurement; V_d ← V_d ◇ m.
E2, extending an internal node register; V_k ← V_k ◇ V_{k+1}.
The symbols above may interchangeably denote the operations and the number of times they are performed. A measurement operation m ← RoTM can be subsumed into S_m.
If n < 2^d, the tree is incomplete at the right, and the clean-up procedure shown in Algorithm 2 of FIG. 6 may be performed.
Algorithm 2 (FIG. 6) performs a final merge of the remaining subtree roots, so that V_1 eventually contains the information of all subtrees. Because of the tests in lines 17-21 of Algorithm 1 (FIG. 4), the clean-up procedure may be performed if the tree is not yet full. The rule by which the tree is completed is such that the arrangement shown in FIG. 5 is correct at the right edge.
Internal nodes can be written to the SML even if they result from a merge along the left edge (with some redundancy). Formally, the completion rule described above can be interpreted as modifying the operation ◇ so that x ◇ nil = x, as explained herein.
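The register-pipeline idea behind Algorithms 1 and 2 (merging active subtrees of equal depth as they appear, then a clean-up pass using x ◇ nil = x) can be sketched as follows. This is an illustrative model only, assuming SHA-1 extends; it omits the SML bookkeeping and the TPM protection of the registers, and the function names are hypothetical:

```python
import hashlib

def exd(a: bytes, b: bytes) -> bytes:
    """Merge two subtree roots; nil (None) is absorbed: x <> nil = x."""
    if b is None:
        return a
    return hashlib.sha1(a + b).digest()

def tree_root(measurements):
    """Fold leaf measurements into one root using O(log n) register slots.
    Two active subtrees of equal depth are merged as soon as both exist
    (Algorithm 1); leftover roots are merged right-to-left (Algorithm 2)."""
    regs = {}                                  # depth -> root of an "active" subtree
    for m in measurements:
        node, depth = m, 0
        while depth in regs:                   # merge with the waiting subtree,
            node = exd(regs.pop(depth), node)  # freeing its register
            depth += 1
        regs[depth] = node
    root = None                                # clean-up: rightmost (shallowest) first,
    for depth in sorted(regs):                 # so deeper subtrees stay on the left
        root = exd(regs.pop(depth), root)
    return root
```

For a power-of-two number of leaves the clean-up pass is trivial and the result is the ordinary Merkle root; for other counts the nil-absorption rule completes the tree at the right.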
If the leaves and internal nodes are added to the SML in the order specified by Algorithm 1 (FIG. 4), a natural ordering (seriation) of the resulting tree is obtained. This sequence is shown in FIG. 7 for a partial tree of depth 3.
In FIG. 7, the entries labeled 10 and 11 in the resulting SML are identical, since entry 11 was created by the forwarding operation of clean-up Algorithm 2. A binary search can use the SML ordering to address tree nodes in the SML. Given a sequence number K in an SML of length 2^(d+1) - 1, such a search starts from the last entry, the root. The remaining 2^(d+1) - 2 entries are divided equally into portions of size 2^d - 1, and it is determined whether K lies in the left or the right portion. This process is repeated until K points to the rightmost element of the current portion. The sequence of decisions made yields the left-right path from the root to the node with index K in the SML.
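The addressing scheme just described can be sketched as a binary search over the 1-based SML positions of a complete tree; `path_to_index` is a hypothetical name, and the sketch assumes the post-order-style seriation above:

```python
def path_to_index(K: int, d: int):
    """Left/right decisions from the root down to the node stored at 1-based
    position K of a seriation of a complete binary tree of depth d, where each
    subtree is laid out as (left part, right part, own root)."""
    lo, hi, path = 1, 2 ** (d + 1) - 1, []
    while K != hi:                  # K == hi: K is the current subtree's root
        hi -= 1                     # drop the root entry of this portion
        mid = (lo + hi) // 2        # the rest splits into two equal halves
        if K <= mid:
            path.append("left")
            hi = mid
        else:
            path.append("right")
            lo = mid + 1
    return path
```

For d = 2 (seven entries), position 7 is the root, position 3 the root of the left subtree, and position 1 the leftmost leaf.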
The tree formation Algorithm 1 of FIG. 4 can easily be adapted to an arbitrary, uniform arity (the arity of a function or operation being the number of arguments or operands it takes), e.g., to b-ary trees. For this, the binary coordinates <n> must be replaced by b-ary coordinates <n>^(b), whose d-th and k-th digits are evaluated at lines 4 and 12, respectively, of Algorithm 1 (FIG. 4), where the evaluated expressions must be changed to their b-ary counterparts.
Algorithm 2 (FIG. 6) can be adjusted accordingly. Further generalization to arbitrary trees may require establishing associated node coordinates, i.e., n → node mappings. At nodes of arity greater than 2, since the hash extension over the legs attached to a node is linear, the disadvantages mentioned above may reappear and a loss of detection granularity may occur.
It is clear from the generation process that a limited number of verification data registers V_1, ..., V_r can cover only a limited number of components at the leaves of trees. The maximum capacity can be calculated as follows. For the first register V_1, the r - 1 other registers can be used as a pipeline of length r - 1 to build a tree of depth r. When V_1 is occupied, the second register can support a tree of depth r - 1, and so on, until the last register V_r, for which the pipeline length is 0 and the tree depth is 1. The total number of leaves carried by the r trees of the registers is thus given by:
Σ_{d=1}^{r} 2^d = 2^(r+1) - 2.  (Equation 2)
For the r = 24 PCRs of a TPM adhering to the v1.2 specification, this yields 33,554,430 positions for component measurements at the leaves of the r trees. If one is restricted to the last 16 PCRs, since, for example, PCRs 0-7 are reserved according to the PC Client specification of the TCG, this still yields 131,070 measurements. Although this capacity is large, it is not unbounded, as the linearly formed chain of the standard is. Thus, since the number of measurements taken during start-up or at run-time is not known in advance, the last register can, as a fallback, be extended linearly after the capacity limit is reached. FIG. 8 illustrates this arrangement, the maximum-capacity arrangement of tree-formed validation data. In FIG. 8, the measurement values at the leaves are denoted m.
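The capacity figures quoted above follow directly from Equation 2 and can be checked with a one-line helper (the function name is illustrative):

```python
def tree_capacity(r: int) -> int:
    """Total leaves coverable by r registers: sum of 2^d for d = 1..r,
    i.e., 2^(r+1) - 2 (Equation 2)."""
    return sum(2 ** d for d in range(1, r + 1))

# tree_capacity(24) gives the figure for all 24 PCRs of a v1.2 TPM,
# tree_capacity(16) the figure when PCRs 0-7 are excluded.
```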
The space complexity of the tree formation algorithm is small. The internal data explicitly required are three numbers, d ∈ {1, ..., r}, n ∈ {0, ..., 2^d - 1}, and k ∈ {1, ..., d}, so the size of this data is at most r + 2⌈log_2 r⌉ bits.
Additionally, depending on the implementation, a register may be needed to receive and hold the current measurement, and/or to serve as an intermediate register for operating on the verification data registers. The SML increases moderately in size. For a fully populated binary tree of depth d, 2^(d+1) - 2 node values, including the leaf measurements, are stored in the SML (the root node is contained in V_1). That is, a tree-formed SML is less than double the size of a linearly formed SML containing only the measurement values.
For the estimation of time complexity, consider a complete tree of depth d, i.e., 2^d leaf measurements. The occurrences of the operations can be counted from the structure of the tree. S_m occurs at each leaf, i.e., 2^d times. E1 and M occur at each internal node of depth d - 1, i.e., 2^(d-1) times. V and E2 occur at each internal node of depth d - 2 and above, i.e., 2^(d-1) - 1 times. Finally, S_V occurs at each internal node of the tree except the root, which remains in V_1; that is, S_V occurs 2^d - 2 times. Altogether, disregarding flow control, this yields an execution time estimate for the algorithm of 2^(d-1)(E1 + M) + (2^(d-1) - 1)(V + E2) + 2^d S_m + (2^d - 2)S_V. Grouping similar operations {E1, E2} and {M, S_V, S_m} yields 2^(d-1)(E1 + E2) - E2 + 2^(d-1)(M + 2S_V + 2S_m) - 2S_V + (2^(d-1) - 1)V.
Assuming that the storage operations are approximately equal in cost and bounded by a common constant S, where a factor of 2 is included in V for a simple read/store implementation and in S_m for the measurement operation mentioned above, and estimating the extend operations as E1 ≈ E2 ≤ E, a coarse estimate of the time complexity of tree formation for d > 1 is dominated by roughly 2^d extend operations plus a term linear in 2^d for storage. When the extend operation is the dominant factor, tree formation may require one extend operation fewer than the linear chain of an authenticated boot.
For validation of the tree verification data generated by the process described above, a validation strategy using the information available at each tree node is described. The average computational cost can be calculated as a function of the relative share, i.e., the number, of failed measurements.
Using as reference case a linear chain of measurements generated in a normal authenticated boot, stored in the SML and then extended into a PCR, it can be seen that validation by traversal of the tree is significantly different. In the former case, a manipulation of the SML can in principle not be localized, whereas traversing a tree-formed SML allows identification of the subtree in which a manipulation has taken place. The same considerations apply to diagnostic validation, i.e., searching for the components that do not conform to the desired reference configuration of the validated platform (herein called failed components). For a linearly chained SML, this may require comparing each measurement to a reference value and/or recalculating the chain of extend operations up to the PCR to verify the integrity of the SML. Since a manipulation of a linear SML cannot be localized, failure to reproduce the PCR value may mean that diagnostic validation is impossible, and failed components cannot be distinguished from good ones.
For tree-formed SMLs, the situation is better. If a subtree is identified in which a manipulation of the SML is suspected, its complement in the SML tree can still be validated. Moreover, for diagnostic validation, significant speed in determining the set of failed components, while simultaneously verifying the contents of the root verification data register, may be desired.
Validation of a tree-formed SML may be used to find the subset of leaves that cannot be validated and/or, where possible, to detect manipulations of the SML. It may be assumed that a reference tree is locally available for comparison at the validator. Validation may traverse the tree of SML data starting from the root, i.e., the verification data element V. This generates the set of leaf components whose measurements differ from the reference values, called failed components. In traversing the tree, a depth-first search with pruning may be applied, making a decision at each branch node. The tree may be binary. The SML tree values at a branch node and its two children may be compared with the reference tree values at the same node positions, and the result may be denoted g (good) for agreement and/or b (bad) for disagreement. With this notation, the cases shown in FIG. 9 can occur. FIG. 9 shows the classification of node configurations in a tree-formed SML.
In FIG. 9, case (a), the entire subtree below the parent node is positively validated, and traversal ends at that node. In FIG. 9, case (b), the parent node may be recomputed by the validator by applying the extend operation to the child node values. If the recalculated value does not match the value at the parent node, this may indicate a manipulation of the SML in a subtree whose root is marked as bad. This may be handled as an exception.
Otherwise, validation continues to the next tree level, traversing the subtrees in which bad values were found, i.e., the left, the right, or both subtrees in (b), respectively. In case (c), a tree manipulation may be detected. This detection may occur without recalculating the extend operation. The last case, (d), may occur when the binary tree is incomplete and/or the right branch is empty. Then the value x should equal the value y, in which case the traversal proceeds to the left; otherwise, a tree manipulation exception occurs.
One advantage of validating a tree-formed SML is that subtrees with correct roots can be discarded from the further search for failed components. A simple probabilistic model is now provided to quantitatively evaluate the performance of tree validation. Assume, for example, that the SML is a complete tree of depth d and that the validator has a complete reference tree representing the known, desired platform configuration. The recomputation of hash operations can be used to estimate the main cost factor of validation complexity, while comparisons are cheap. A random set of failed leaves is assumed.
An optimistic validation strategy, called diagnostic validation, may be used that traverses only the paths from the root to the failed components (i.e., the components with bad measurements at the leaves, relative to the reference tree). One characteristic of this strategy is that it finds the failed components with authentic measurements. Diagnostic validation can proceed as follows. When an internal parent node differing from the corresponding node of the reference tree, i.e., a bad parent node, is visited, one of the configurations of FIG. 9 case (b), or the rightmost configuration of case (c), is encountered. In the latter case, no recalculation of the parent node need be performed, since it is already clear that SML integrity has failed. A subtree with this root configuration may be discarded from further traversal, since it cannot yield trusted information about failed components; the further steps in this case depend on the policy of the validator. The node configurations of case (b) are such that the parent hash may need to be recalculated from the child values by an extend operation to confirm that this configuration, which is not known from the validator's reference tree, is authentic. Subtrees whose roots are good child nodes of the bad parent under scrutiny can be discarded from further traversal. This procedure of diagnostic validation implicitly excludes configuration (a) of FIG. 9 and the three left-hand configurations of case (c) from diagnostic validation. Where meaningful, they can be considered in a further forensic evaluation of the SML tree.
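A simplified sketch of this diagnostic traversal over nested-tuple trees follows, assuming SHA-1 for the extend operation; it prunes good subtrees (case (a)), recomputes suspect parents from their children (case (b)), and flags manipulations when the recomputation fails (case (c)). The data layout and names are illustrative, not the patent's data structures:

```python
import hashlib

def exd(a: bytes, b: bytes) -> bytes:
    """Extend operation used to recompute a parent from its children."""
    return hashlib.sha1(a + b).digest()

def diagnose(sml, ref):
    """Depth-first diagnostic validation. Trees are nested tuples
    (value, left, right) for inner nodes and bare hashes for leaves.
    Returns (failed_leaf_measurements, manipulation_detected)."""
    failed, tampered = [], False
    val = lambda t: t[0] if isinstance(t, tuple) else t
    def walk(s, r):
        nonlocal tampered
        if not isinstance(s, tuple):          # leaf: bad measurement = failed component
            if s != r:
                failed.append(s)
            return
        if val(s) == val(r):                  # case (a): subtree validates, prune
            return
        sv, sl, sr = s
        if exd(val(sl), val(sr)) != sv:       # case (c): parent is not the extend of
            tampered = True                   # its children; no trusted info below
            return
        walk(sl, r[1])                        # case (b): descend; good children are
        walk(sr, r[2])                        # pruned by the case (a) test one level down
    walk(sml, ref)
    return failed, tampered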
Diagnostic validation requires visiting, and performing a hash operation at, the bad interior nodes in the union of the paths from the failed (bad) leaves to the root. In an untampered tree, this implicitly excludes the rightmost configuration of case (c) with a bad parent. Assume a subset of bad leaves, independently and identically distributed (i.i.d.), constituting a fraction f ∈ [0, 1] of the leaves; the number of bad leaves is then 2^d f. The expected number of bad internal nodes, E_inner(f), can be calculated as explained below.
The problem addressed here is the two-coloring of a binary tree (bad vs. good interior nodes) generated by a random i.i.d. selection of leaves, coloring each selected leaf and the path connecting it to the root. Such a random selection of a leaf and its path corresponds to a random selection of an i.i.d. bit string of length d. Calculate the expected number E_N(k) of leaves colored after k selections from the set of N = 2^d leaves. Recursively, E_N(k+1) = E_N(k) + (1 - E_N(k)/N), with E_N(0) = 0. Solving this recursion yields E_N(k) = N(1 - (1 - 1/N)^k).
Since all substrings of the selected bit strings are statistically independent, the same argument applies at the internal nodes of level d - 1, and so on. Thus, the expected number of colored internal nodes is obtained as the sum over the levels 1, ..., d - 1. What remains is to find the number of selections k corresponding to a desired expected number of colored leaves, E_{2^d}(k) = 2^d f, where 0 ≤ f ≤ 1 is the target fraction of bad leaves. Solving this equation for k, inserting N = 2^d, yields k = ln(1 - f) / ln(1 - 2^(-d)). Thus the expected number of bad internal nodes E_inner(f) as a function of f can be calculated as the sum of E_{2^l}(k(f)) over the levels l = 1, ..., d - 1.
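The expectation E_inner(f) can be computed numerically from the formulas above; the function names are illustrative, and the f = 0 and f = 1 edge cases are handled separately:

```python
from math import log

def expected_colored(N: int, k: float) -> float:
    """E_N(k) = N(1 - (1 - 1/N)^k): expected distinct leaves hit by k i.i.d. draws."""
    return N * (1 - (1 - 1.0 / N) ** k)

def expected_bad_inner(d: int, f: float) -> float:
    """Expected number of bad internal nodes (levels 1..d-1, root excluded)
    when an i.i.d. fraction f of the 2^d leaves is bad."""
    if f <= 0:
        return 0.0
    if f >= 1:
        return float(2 ** d - 2)              # every internal node below the root is bad
    k = log(1 - f) / log(1 - 2.0 ** -d)       # draws giving E_{2^d}(k) = 2^d * f
    return sum(expected_colored(2 ** level, k) for level in range(1, d))
```

For d = 16 and f = 0.85, this yields roughly 64,800 of the 65,534 internal nodes, consistent with the 85% break-even fraction discussed below.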
FIG. 10 shows the fraction of the 2^d - 1 internal nodes, for d = 16, at which a hash operation must be performed according to the above description. This represents the number of hash operations required to positively determine the bad components. The reference case of a linear SML needs 2^d + 1 hash operations to recalculate the final PCR value. This case is represented by the upper end of the vertical axis of FIG. 10.
The situation is slightly different with respect to comparisons with reference values. The tree traversal of diagnostic validation descends through bad internal nodes, which fail the comparison with the corresponding internal nodes of the reference tree. For this, the two child nodes of each bad node are compared, so that the comparison complexity is twice the number E_inner(f). A linear SML requires all 2^d measurements to be compared with reference values.
If h is the cost of a hash operation at the validator and c the cost of comparing two hash values (160 bits for SHA-1), the total validation cost in the linear case is (2^d + 1)h + 2^d c = 2^d(h + c) + h > 2^d(h + c). This obtains, with minimal effort, the same information from a linear SML as diagnostic validation does from a tree-formed SML. On the other hand, for the tree-formed SML (including the root in the count), the cost is (E_inner(f) + 1)(2c + h). Tree validation is therefore more efficient if (E_inner(f) + 1)/2^d < (1 + λ)/(1 + 2λ), where λ = c/h << 1. Even with a large margin, λ < 0.01, a bound of 0.99 results for the right-hand side. Then, for d = 16, tree validation is more efficient for fractions of bad leaves up to 85%.
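The cost comparison can be reproduced numerically. The sketch below re-derives E_inner(f) from the preceding formulas so it is self-contained; the names and the sample cost ratio λ = 0.01 are illustrative:

```python
from math import log

def e_inner(d: int, f: float) -> float:
    """Expected bad internal nodes (levels 1..d-1) for a bad-leaf fraction f."""
    if f <= 0:
        return 0.0
    if f >= 1:
        return float(2 ** d - 2)
    k = log(1 - f) / log(1 - 2.0 ** -d)
    return sum(2 ** l * (1 - (1 - 2.0 ** -l) ** k) for l in range(1, d))

def linear_cost(d: int, h: float, c: float) -> float:
    """(2^d + 1) hash recomputations plus 2^d comparisons for a linear SML."""
    return (2 ** d + 1) * h + 2 ** d * c

def tree_cost(d: int, f: float, h: float, c: float) -> float:
    """(E_inner(f) + 1) visited bad nodes, each costing one hash and two comparisons."""
    return (e_inner(d, f) + 1) * (h + 2 * c)
```

With h = 1 and c = 0.01 (λ = 0.01) and d = 16, tree validation is cheaper at f = 0.85 and becomes more expensive than the linear recomputation only above that fraction, matching the 85% figure quoted above.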
In terms of hash operations, diagnostic validation of tree-formed SMLs may perform better than validation of linear SMLs, even for large fractions of bad components. For small fractions of failed components, diagnostic validation of tree-formed SMLs is vastly advantageous. Tree validation may be yet more efficient when the bad leaves are not evenly distributed, e.g., appear in clusters.
Where both linear validation and diagnostic tree validation by direct comparison are available, linear validation is impossible if the final PCR recalculation fails, since then the comparison of individual measurements yields no reliable information: each measurement in the SML can be faked (fake) to hide the one that breaks the hash chain. That validation data can be generated at all in this case is an advantage of trees beyond the reduced computational complexity for the validator.
For the tree formation algorithm itself, operations on the verification data registers may run within the hardware-protected TPM environment in order to achieve the same level of security as a trusted boot process compatible with the TCG standards. However, some of the operations in the tree formation algorithms listed above are not standard TPM functions that can be performed on standard-compliant PCRs; in fact, only the normal extend operation E_1 is an internal standard function, while S_v and S_m may be implemented by PCR read operations.
Extensions of the TPM with the minimal modifications required to turn PCRs into tree verification data registers are discussed first, while the tree formation algorithm runs outside the TPM. Next, TPM-internal commands for tree formation are described. Another implementation described is a software-based realization of tree-formed verification data, where the root registers may be soft registers managed by a trusted application, the current state of such registers being protected by "real" registers, such as PCRs. Finally, a test implementation, "etembra", of tree formation with a TPM emulator integrated in a TPM software emulation environment is described.
A minimally invasive approach to implementing tree formation is to initiate changes to the standard TPM that enable PCRs to be used with algorithms 1 and 2. This approach allows the basic operations outlined above to be realized by TPM commands or modifications thereof. The core of the algorithm, including the bookkeeping tasks on the registers representing the current state of internal nodes, may be implemented as trusted software performing tree formation in a system integrity measurement process, such as authenticated or secure boot.
The operations S_v and S_m pose no problems and can be implemented by the TPM_PCRRead command or directly in the tree formation software. E_1 occurs at each right leaf of the lowest level of the tree and extends a V, containing the measurement value from the left sibling, with the current measurement. Thus, E_1 may be realized by the standard TPM_Extend operation defined by equation (1). E_2 occurs at right nodes inside the tree and may in turn be modeled by TPM_PCRRead followed by TPM_Extend.
The operations M and V occur at left nodes inside the tree and on the left of the lowest level, respectively. They pose problems for two reasons. First, PCRs cannot be written to directly, and the natural method of resetting them via TPM_PCR_Reset as a first step of M or V is problematic, since only the PCRs above 16 of a standard TPM can be reset, and only from the correct locality. It is necessary that enough PCRs be resettable and that they respond to the locality in which the tree formation software executes as trusted code.
Second, even after a reset, the only operation that can modify a PCR, TPM_Extend, cannot copy a value directly into the register; instead it performs (1) with the existing value of the reset PCR, which is 160 bits of binary zeroes, and the input value, producing a result different from the input value. One option avoiding the exposure of new commands for direct writes, or for shifting (shift) values between PCRs, is to augment the PCRs with a reset flag indicating that they are in the pristine state after a reset. TPM_Extend may then be modified to write the input value directly to the PCR when this flag is true, and then set the flag to false.
A third option, which deviates slightly from the definition of the Merkle hash tree, recognizes that M and V always occur on the left side of the tree, and that the output is produced deterministically from the two siblings involved when the right sibling is empty (nil). The correct configuration of the values in each elementary triangle of the SML tree is then as shown in fig. 11.
That is, V or M can be modeled by TPM_PCR_Reset followed by TPM_Extend, obtaining 0 ◇ x = H(0 ∥ x) in a first step, where 0 denotes the 160-bit zero value. The right sibling may then be extended into that register as usual and the result written to the SML. A manner of processing nil node values consistently, in the intermediate stages and in the final completion of the tree, is also described below.
In many cases, the hash tree stored in the SML may be incomplete, i.e., contain empty leaves and interior nodes. During the sequential measurement process, such nodes, whose value is denoted nil, can be handled by the operations M and V, meaning that nil siblings on the right side are ignored. This occurs at lines 8 and 18 of algorithm 1 for the intermediate stages of tree formation, and at line 29 of algorithm 2 for the completion of the tree after the last measurement.
In general, i.e., beyond the constraints of a standard TPM, nil may be treated as a two-sided unit for the extend operation ◇, i.e., x ◇ nil = nil ◇ x = x, and nil ◇ nil = nil. (equation 3)
This convention expresses rule/case (d) as described above. It is a re-interpretation of the usual extend operation and can also be used to eliminate the operations M and V from the algorithms. That is, M and V may be replaced by a reset of the register V to nil, followed by the operations V ← V ◇ m and V ← V ◇ V', respectively.
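The nil convention of equation 3, and the replacement of M and V by a reset-then-extend sequence, can be sketched in a few lines of illustrative Python (a model only, not TPM code; `NIL` and the function name `extend` are our own labels):

```python
import hashlib

NIL = None  # marker for an empty (nil) node or register


def extend(x, y):
    # Merkle-Damgard extend with nil as a two-sided unit:
    # x <> nil = nil <> x = x, and nil <> nil = nil (equation 3)
    if x is NIL:
        return y
    if y is NIL:
        return x
    return hashlib.sha1(x + y).digest()


# M modeled as: reset the register to nil, then extend with m --
# the extend then acts as a direct write.
v = NIL
m = hashlib.sha1(b"component").digest()
v = extend(v, m)
assert v == m  # the measurement was written through unchanged
```

The nil unit makes a reset register behave as an identity for the first extend, which is exactly what lets M and V collapse into ordinary extend operations.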
For an implementation of this convention, nil may be expressed as an additional flag of the PCR register, and of inputs and outputs. For a PCR, the nil flag may be set by a specific reset command. When nil is encountered as input to an extend operation on a PCR, modified logic in the TSS or TPM may prevent the hash operation (1) from being performed and write the other input directly to the PCR.
The split TPM/software implementation of tree formation entails compromises (compromises) in the security level of the produced root verification data register values. Alternatively, the tree verification data may be generated by a TPM-internal implementation of the proposed algorithms. To this end, a TPM modification may operate as follows. The modified TPM exposes a command TPM_Tree_Extend with the same input parameters as the ordinary TPM_Extend command. The TPM maintains flags for the PCRs indicating which of them are currently designated tree roots, which are occupied and locked, and which may be used by the algorithm as intermediate V. Moreover, the TPM maintains the additional data mentioned above. In the simplest case, internal logic may prevent the parallel use of more than one PCR for tree formation. Whereas TPM_Extend outputs the updated target PCR value, TPM_Tree_Extend may return a variable number 1, ..., d of updated verification register data values, in an order such that they result in the natural order described above. These return values correspond to the output of the SML write operations of algorithms 1 and 2. When d values are returned, the receiver knows that the tree is exhausted and the corresponding root V is locked. Another option is for TPM_Tree_Extend to return all intermediate V on each call.
A method has been described for protecting the integrity of the secure boot process of a trusted platform using a Merkle hash tree, in the same way as is traditionally done with PCRs. Efficiency and flexibility gains due to the use of tree-like validation data in platform validation have been demonstrated. This may be effective especially in remote validation and management of platforms via a network. Given the small size and low complexity of the tree formation algorithm, it could be implemented as an operation within the TPM in a straightforward manner, if the specifications were modified accordingly. This may be a viable approach for future TPMs.
In terms of generalization, trees are of course not the most general structures to which integrity protection using cryptographic digests can be applied. For example, some researchers have extended hashes to provide identification of directed graphs. Others apply variant one-way functions, such as multi-set hashes, to uniquely identify complex data structures, such as RDF graphs. Along these lines, it is conceivable to generalize tree-formed verification data to, for example, directed acyclic graphs and dependency graphs. While this may be of interest for complex platform management and protection tasks, each such generalization would incur increased complexity and lose the efficiency that the binary tree brings to validation. Such generalizations are therefore not pursued at this stage.
However, together with the proposed TPM integrity measurement functionality described above, the single command extension TPM_Tree_Extend may be the starting point of a flexible architecture for TPM-based tree verification data management. Secure updating of subtree roots may be enabled, for example for dynamic platform management, and ultimately internal nodes of the tree SML may be quoted with the same security assertion (assertion) that TPM_Quote provides to a remote validator for PCR values. These further developments are described below.
Structured measurements help identify components that fail an integrity check. In one such embodiment of structuring, the measurements may be organized in a linear chain, and the verification data may be obtained as V_i = f(V_{i−1}, m_i), i = 0, 1, ..., n−1, where n is the length of the linear chain, m_i denotes the i-th measurement, and V_i represents the verification data obtained from the i-th iteration of the linear chain. The final value V_{n−1}, obtained by processing the hash measurement values of n components, may be used as verification data. For each V_i, i = 0, 1, ..., n−1, there exists a reliable reference value R_i. If, in the verification process, the final value V_{n−1} is found to differ from its reference value R_{n−1}, this indicates that at least one intermediate value V_k does not match its reference value R_k. Such a mismatch may be due to a failure of the integrity of some component, or simply to a correctly calculated (and/or integrity-intact) verification value V_j being compared against corrupted or erroneous reference data R_j. Since the measurement log is chained (V_0 to V_{n−1}), the first occurrence of a mismatch of the verification data with its reference values can be found using algorithm 1 shown in fig. 12. There are n elements, the indices range from 0 to n−1, and integer division is used in the index operations.
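Under the chaining property, a mismatch at index k persists for all later indices, so the first mismatch can be located by a binary search with integer division on the indices. The following is an illustrative sketch of such a search (figure 12 itself is not reproduced here, so the function names and the all-zero genesis value are assumptions):

```python
import hashlib


def chain_extend(v, m):
    # linear-chain verification data: V_i = H(V_{i-1} || m_i)
    return hashlib.sha1(v + m).digest()


def first_mismatch(V, R):
    """Smallest index i with V[i] != R[i], or None if the chains agree.

    Binary search is valid here because once the hash chain diverges
    from its reference it stays divergent for all later indices.
    """
    n = len(V)
    if V[n - 1] == R[n - 1]:
        return None  # final values agree: no detectable mismatch
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2        # integer division on the index
        if V[mid] == R[mid]:
            lo = mid + 1            # first mismatch lies to the right
        else:
            hi = mid                # mid already mismatches
    return lo
```

As the text notes, this locates only the first failure; later failures are masked by the chain divergence.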
Assume the reference values {R_i : i = 0, 1, ..., n−1} are trusted and uncompromised. In this case, any mismatch indicates either a failure of the integrity of the i-th component, or that the i-th component itself is integrity-intact but its measurement m_i has been compromised. In either case, in an operational verification system, the i-th component may be declared untrusted and may be remediated before loading. Algorithm 1 finds the first occurrence of a failure, but cannot find subsequent failures.
To optimize the search for modules failing the integrity check, verification data may be generated in the form of a hash tree over the Stored Measurement Log (SML), as described in the tree verification process above. There, the stored measurement log is organized as a balanced binary tree, and an algorithm is proposed to generate the tree using TPM commands. The algorithm is generic and can be extended to balanced n-ary trees, where n > 2.
The concept of tree-formed verification as described above is now extended to trees that may be unbalanced. It can further be extended to take into account interdependencies between software components. The term device is used to indicate any entity whose software/firmware/hardware integrity is to be verified. Note that the entity may be a home gateway, an M2M gateway, an access point, an H(e)NB, an HNB, a relay node, a mobile device, a tablet device, or any other gateway or device.
Without loss of generality, some software components may be subject to more attacks than others. For example, in a device, the communication protocol stack may be attacked more frequently than components implementing cosmetic features. The same argument applies to functions. In remote validation and semi-autonomous validation, integrity measurements are sent to a platform validation entity in the network. Such a validation entity may observe and maintain a history of failed components or functions.
Moreover, it is generally the case that when a device (and/or its software components and/or functions) is attacked and/or its integrity is compromised, the impact or cost to the stakeholders of the device may vary depending on the component/function.
The above aspects necessitate component/function verification strategies in which the frequency and/or cost of attacks, or the severity of their impact, can be considered and reflected in the design of the verification data structures and verification algorithms.
First, consider the case where attack frequencies (or probabilities of occurrence) differ. Based on the frequency of integrity failures of software components or functions, a probability distribution function (PDF) of integrity failures over the software components or functions may be estimated and used in constructing the verification data structure. The probability distribution is a discrete distribution function and, for a device with n components/functions, can be represented as a vector (P_0, ..., P_{n−1}). Such a probability distribution function can be generated by averaging over all observations, or by a windowed averaging in which the average is taken over the most recent, sufficiently rich set of samples. In addition, the averaging may be unweighted, or weighted using a weighting function W[n]. Examples of weighting functions include, but are not limited to, exponential, Gaussian, etc. A PDF varying in the time domain may also be obtained using overlapping or non-overlapping sliding windows. This probability distribution vector (or vectors) may be communicated to the device via a message or configuration. Without loss of generality, a set of temporally fixed PDFs {P_0, ..., P_{n−1}} is assumed.
At the device, using the distribution (PDF) over the software components and the hash integrity measurements, an optimal hash tree may be constructed using Huffman's algorithm, arithmetic coding, or similar optimal code tree construction algorithms. Fig. 13 shows an example of a tree structure using a Huffman code. The advantage of constructing such a tree is that the most frequently attacked components have shorter code lengths, thereby reducing the expected search time. This reduces the overhead of searching for failed components or functions. Also, the components expected to be under the most frequent attack are searched earliest.
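A standard Huffman construction over the attack PDF can be sketched as follows (illustrative only; the patent does not mandate this particular realization, and the function name is ours):

```python
import heapq
import itertools


def huffman_codes(probs):
    """Build a Huffman code tree over component indices.

    probs: non-zero attack probabilities, one per component.
    Returns {component_index: code_string}; the most frequently
    attacked components receive shorter codes, i.e. shallower
    positions in the verification tree.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)    # two lowest probabilities...
        p1, _, right = heapq.heappop(heap)
        merged = {i: "0" + c for i, c in left.items()}      # label left branch 0
        merged.update({i: "1" + c for i, c in right.items()})  # right branch 1
        heapq.heappush(heap, (p0 + p1, next(counter), merged))  # ...are merged
    return heap[0][2]
```

For probabilities (0.5, 0.25, 0.125, 0.125), the most-attacked component gets a one-bit code and sits directly under the root, consistent with the "searched earliest" property described above.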
For the Huffman algorithm or an arithmetic coding algorithm to work, the probabilities cannot be zero. Components or functions with an attack probability of zero can be discarded when constructing the tree. This results in a tree of reduced size, lowering traffic overhead and search time. Alternatively, if all nodes are to be included, a very small non-zero value δ may be assigned to components or functions with zero attack probability. In general, the resulting trees will be unbalanced due to the non-uniform probability distribution of attacks.
In the network, such a tree may be pre-constructed and populated with the derived hash values by a certifying authority (authority), based on the production-time configuration of the device or after an authenticated, authorized and integrity-verifiable software/firmware update, and securely stored for reference. At the device, on every secure boot, this tree can be reconstructed, if not pre-constructed and stored, using the PDF of attack frequencies by applying an optimal code tree algorithm (such as Huffman coding or arithmetic coding), and its nodes and root are populated with the evaluated hash values (or extended PCR values).
Algorithms may be developed to populate the nodes and the root of the tree using measurements of the software components at the device. The device may then send such a validation tree to a platform validation entity in the network, where it is compared with the reference tree to establish the trust status of the device and uniquely identify failed nodes. Such algorithms are described herein.
To reduce traffic overhead, a pruned tree of a particular depth d, including the root and the nodes at depths 1, ..., d, may be sent. The depth d may be chosen based on a threshold value. Another method of pruning is to discard any node whose cumulative probability of attack occurrence is less than a specified threshold. Yet another, possibly more reasonable, approach is to prune all remaining nodes of the tree once the cumulative distribution value of the nodes already included in the tree exceeds a specified threshold. Fig. 14 shows tree pruning for depth d = 2.
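The depth-based and cumulative-probability pruning rules can be sketched over the leaf code map of such a tree (illustrative; the function names and the representation of the tree as a leaf-to-code map are our assumptions):

```python
def prune_by_depth(codes, d):
    # keep only leaves whose code length (tree depth) is <= d; a full
    # implementation would also retain the internal nodes above them
    # together with their hash values
    return {i: c for i, c in codes.items() if len(c) <= d}


def prune_by_cumulative(codes, probs, threshold):
    # include leaves in decreasing attack probability until the
    # cumulative probability of included leaves reaches the
    # threshold, then prune all remaining nodes
    order = sorted(codes, key=lambda i: probs[i], reverse=True)
    kept, total = {}, 0.0
    for i in order:
        if total >= threshold:
            break
        kept[i] = codes[i]
        total += probs[i]
    return kept
```

On the Huffman example with probabilities (0.5, 0.25, 0.125, 0.125), both a depth limit of 2 and a cumulative threshold of 0.7 retain only the two most-attacked components.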
Fig. 15 shows a system diagram and the communication exchange. At 1, based on the attack history, a discrete probability distribution of attacks on the components/functions is generated. At 2, the Huffman coding algorithm is used to generate the code tree. At 3, the probability distribution of attacks is communicated to the device. At 4, the Huffman coding algorithm is used to generate the code tree at the WTRU. At 5, during secure boot, the tree is populated with trust values. At 6, the trust tree is sent from the WTRU. At 7, the tree is verified against a reference and searched for failed nodes. At 8, corrective action is taken.
The description above considered a non-uniform attack probability model for the components, resulting in the use of an unbalanced tree. In addition to the attack probability, the cost or impact of an attack may also be considered. Assume that the i-th component (i = 0, 1, ..., n−1) of the device is associated with {P_i, C_i}, where P_i denotes the probability of attack occurrence and C_i represents the cost or impact on the stakeholder. The components may then be ranked by the "expected cost (impact) due to attack/damage", where the ranking uses the comparison E(cost)_i = P_i × C_i. A normalized fractional expected cost may then be determined as E(cost)_i / Σ_j E(cost)_j.
Once the components are ordered (from highest normalized expected cost to lowest), the same Huffman coding or arithmetic coding can be used to form the tree. A verification tree formed in this manner gives the greatest weight (for searching) to the components whose failure would have the greatest expected cost/impact on the relevant stakeholders.
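The expected-cost ranking can be sketched as follows (illustrative; the function names are ours, and the normalization divides by the sum of all expected costs as described above):

```python
def normalized_expected_costs(P, C):
    # E(cost)_i = P_i * C_i, normalized so the fractions sum to 1
    e = [p * c for p, c in zip(P, C)]
    total = sum(e)
    return [x / total for x in e]


def rank_components(P, C):
    # component indices ordered from highest to lowest expected cost;
    # this ordering then feeds the Huffman/arithmetic tree construction
    e = normalized_expected_costs(P, C)
    return sorted(range(len(e)), key=lambda i: e[i], reverse=True)
```

Note how a low-probability, high-impact component can outrank a frequently attacked but low-impact one.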
The tree structure itself may not fully capture the interdependencies of the software structure. For example, there may be shared libraries used by different components. To represent such interdependencies, links are added between nodes and nodes/leaves of the initial optimal code tree after the tree is built. The tree is thereby extended to capture the interdependencies between the software architecture or components of the device. Leaves can be mapped to software components, and nodes to libraries and documents. The interdependencies of software modules and shared libraries may be mapped by alternate child links; the tree thus becomes a generalized tree. The algorithm for populating the nodes with measurements may be modified to additionally traverse the alternate child links. Fig. 16 shows a tree with such alternate child links, where the software component or function represented by module A relies on or uses the shared library represented by module D, and similarly the software component or function represented by module B uses leaf component/function C.
If the device includes a trusted execution environment, such as, for example, a TrE, the algorithm may use the trusted operations and instructions provided by the TrE to ensure that its operations, and the population of the nodes and/or root with the obtained hash values, are performed in a trusted manner.
If such a TrE is constructed using a Trusted Platform Module (TPM) and an upper Trusted Software Stack (TSS), appropriate command extensions for the TPM and TSS may be considered.
As described above, any optimal code construction algorithm may be used to construct the tree. The Huffman algorithm is presented as one example for constructing trees.
In the following description, a leaf represents a software component or function. For simplicity, components are referred to, but the arguments apply equally to functions. A collision-resistant hash function H may be used to compute the measurements of the components. A function F is used to compute the metric of a node from the metrics of the children of the node under consideration. The algorithm Populate_metric may be called (call) with its node argument set to the root of the optimal code tree. The leaf metrics of the tree may be initialized to the measurements of the corresponding components.
Algorithm 2, shown in fig. 17, performs a post-order traversal of the tree, computes the metrics of the internal nodes, and populates the nodes with the computed metric values. To adapt the algorithm to a TPM, the function F is a PCR extend operation. In the algorithm described below, Reg_Number is a subscript index indicating the PCR register number.
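The post-order population that algorithm 2 performs can be sketched as follows (an illustrative model: `F` is shown as a plain hash of the concatenated child metrics rather than a PCR extend, and the class and function names are ours):

```python
import hashlib


class Node:
    def __init__(self, left=None, right=None, metric=None):
        self.left, self.right, self.metric = left, right, metric


def F(left_metric, right_metric):
    # node metric from the child metrics; modeled here as a hash of
    # the concatenation, standing in for a PCR-extend style operation
    return hashlib.sha1(left_metric + right_metric).digest()


def populate_metric(node):
    # post-order traversal: children first, then the node itself;
    # leaf metrics are assumed pre-initialized to component measurements
    if node.left is None and node.right is None:
        return node.metric
    l = populate_metric(node.left)
    r = populate_metric(node.right)
    node.metric = F(l, r)
    return node.metric
```

After the call on the root, every internal node (and the root itself) carries a metric derived from all component measurements beneath it.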
Algorithm 3, shown in fig. 18, is a modification of algorithm 2 (fig. 17) to act on PCR registers and use the PCR extend operation. Algorithms 2 and 3, shown in figs. 17 and 18 respectively, may be modified to act on an n-ary tree; the Huffman algorithm is then modified to generate such trees accordingly. Instead of merging the lowest 2 probabilities as left and right children, the n lowest probabilities are merged. The label of each corresponding branch (branch) contains log2(n) bits. For example, for a 4-ary tree, 4 probabilities are merged and the 4 branches are labeled 00, 01, 10, 11. As shown in fig. 19, algorithm 4 is an update of algorithm 2 for an n-ary tree. As shown in fig. 20, algorithm 5 may be similarly updated.
To handle trees generalized with alternate child links as described above, algorithm 3 as shown in fig. 18 may be updated. Algorithm 5 is an update of algorithm 2 that takes the alternate child links into account. A binary tree is initially constructed using the Huffman algorithm; the interdependencies are then added as alternate child links. The labels 0 and 1 attached to the left and right children are still used to find the code of any leaf. The alternate child links are used in populating the metrics of the nodes; an alternate child link itself carries no label.
After the nodes are populated with metrics, the validation tree may be sent to a validation entity in the network, where a comparison of the integrity tree with the reference tree may be performed. This may be done according to algorithm 6 shown in fig. 21.
Algorithm 6, as shown in fig. 21, applies to a (generally unbalanced) tree that has no alternate child links arising from inter-node dependencies.
The same algorithm can be used if trees with alternate child links are to be processed. In this case, the algorithm is run without any special handling of the alternate child link relationships. When run in this manner, the algorithm still obtains the same pruned tree of failed components. However, the alternate child links provide additional visibility into failures: when a component is identified as having an integrity check failure, the alternate child link relationships provide in-depth knowledge of which other nodes (which may not lie directly above the failed component) may also be affected by the identified failed component.
In another variation of the above algorithms, these and similar fault detection algorithms may be executed in an "interactive" manner. In one embodiment, referring back to fig. 14, for example, the sender may send measurements only for nodes located up to some internal depth D ≤ d, where d is the depth of the longest branch. The verifier may verify those internal nodes.
If a failure is detected, the verifier may request that more data, corresponding to the subtree below the failed internal node, be sent from the sender. The sender sends the data, and the verifier can then check the integrity of the measurements at a deeper level. The method may involve more than two such rounds of interaction.
An "interactive" protocol such as the one described above makes sense if it is difficult to transmit the whole tree SML at once due to, for example, communication bandwidth or payload limitations. A fully formed binary tree contains at most twice as much data as the linear chain structure of an SML. The additional amount of data that needs to be transmitted depends on the depth and fullness of the formed tree, but in many cases even for a fully formed binary tree it may be modest, e.g., some kilobytes (kbyte) more than the measurements alone. However, if the tree is annotated with more expressive semantic validation data, such as detailed certificates, RIM certificates, OMA DM information or the like, and is thus large in data size, then an interactive protocol may be more reasonable.
The particular selection of validation system optimization criteria, namely 1) assurance against compromise of the internal SML and 2) fast search for compromised components, leads to a validation data structure formed as a binary tree. However, the validation data structure formed as a binary tree incurs costs, such as 1) the added computational cost of fully constructing the tree at both the device and the verifier, and 2) the added storage cost due to the need to store the internal SML (up to twice the amount of a linear SML). Moreover, a tree may not be an optimal structure if there are a large number of interdependencies among the different components of the device.
Other types of system optimization criteria may result in validation structures different from a tree. For example, a linear SML structure may be the best choice if the interest lies in absolutely minimizing the cost of internal storage, at the expense of the ability to detect failures quickly and at fine granularity.
Another possible optimization criterion may be the ease/speed of quickly ascertaining the integrity of the neighborhood (such as a small grid or loop) of one or a few components known to be at risk, where neighboring components are known to have highly unstructured interdependencies. In this case, a non-tree-like validation data structure, looking for example like interconnected rings or meshes, may emerge. For a more general treatment of this subject matter, methods of graph theory may be used.
The secure operation of protected verification data registers with tree formation is described below. Functionality is conceptually added to Trusted Platform Modules (TPMs) to handle Platform Configuration Registers (PCRs) that represent the roots of hash trees protecting the integrity of a tree-formed Stored Measurement Log (SML). This enables verification and update of the SML's internal nodes, and even attestation of their values with the same security level as for ordinary PCRs. As an important application, it is shown how quoting of SML subtrees can realize attestation of platform properties.
More trusted functionality may be added for the manipulation of tree-formed verification data. It is shown that the internal nodes of a tree SML, which are protected at the root by a verification data register, can be verified for integrity and updated with new values in a controlled manner that maintains the overall security level. A variant of the TPM_Quote command for internal tree nodes is introduced that can attest their integrity exactly as TPM_Quote does for ordinary PCR values. With the defined set of commands, the integrity measurement functionality of the TPM is supplemented by a comprehensive set of commands operating on tree PCRs and SMLs. Using them, tree-formed verification and/or validation data can be used far more flexibly and expressively than ordinary, linearly chained TPM PCRs and SMLs.
The basic system model and notation are also described. The TPM command extensions mentioned above are provided and defined, together with certain related structural extensions and a basic usage classification for these operations. As an important use case for the introduced commands, a protocol for validation of root nodes of subtrees in the tree SML by trusted third parties is presented, together with a brief evaluation of the tree-formed platform validation method, and prospects for future work are described.
The minimum elements and capabilities of the platform that are needed in what follows are described. TCG terminology and certain concepts are used herein, but the embodiments described are not limited to platforms and secure hardware elements complying with the TCG standards, such as TPMs, SMLs, and systems designed according to the PC Client specifications, for example.
The tree-forming variant of the extend operation defined above operates within the TPM, taking a single measurement as input, and is otherwise inert (inert) toward the system outside the TPM. This is not the case for the update functions described herein. The latter operate not only on a certain number r of verification data registers V_1, ..., V_r protected inside the TPM, but also on hash tree data stored outside the TPM in less protected storage.
That is, the hash tree contained in the Stored Measurement Log (SML) may be managed by a Trusted Software Stack (TSS) that is authorized to access the TPM capabilities necessary for the update operations. The TSS invokes the TPM via an authorized, integrity-protected command interface. While TCG nomenclature is used for practical reasons, the concepts presented herein are not limited to TCG, TPMs, and their platforms. What is used is a set of hardware-protected verification data registers and an extend operation, defined by the ordinary TPM extend operation: V ← V ◇ m := H(V ∥ m) (equation 4), where V denotes a verification data register, H is a collision-resistant hash function (SHA-1 in the case of the TPM), and m = H(data) is a measurement value. In the following, ◇ is used freely with any register V as argument, so that no confusion arises.
The SML may contain a binary tree of depth d resulting from a binary one-way operation, such as, for example, a Merkle hash tree generated by the TPM_Tree_Extend command described above. The natural coordinates of internal nodes and leaves are binary strings of length ℓ ≤ d. For an internal node or leaf n, write ⟨n⟩ = (n_1, ..., n_ℓ) ∈ {0, 1}^ℓ for the binary representation of the coordinates of n, and let ⟨n⟩_k = n_k, 1 ≤ k ≤ ℓ, be the k-th digit of ⟨n⟩. Where no confusion arises, a node is identified with its value in the SML (e.g., a 160-bit hash value), while being distinguished from its coordinates. Otherwise, n ≃ (n, ⟨n⟩) denotes the value-coordinate pair of a node.
A trace (trace) T of n is the ordered list of all internal nodes on the path from n to the root, including n, i.e., T(n) = (t_1, ..., t_ℓ), where t_k ≃ (n_1, ..., n_k). (equation 5)
The natural partial order of nodes is written m ≤ n, corresponding to n ∈ T(m). It is extended to node sets by setting M ≤ N if and only if for every node m ∈ M there is a node n ∈ N with m ≤ n.
The reduced tree R of n is the list of all siblings of the nodes of its trace. This is easily expressed in natural coordinates: R(n) = (r_1, ..., r_ℓ), where r_k ≃ (n_1, ..., n_{k−1}, ¬n_k) (equation 5), and ¬ denotes binary negation.
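Over coordinate strings, the trace and reduced tree of equation 5 can be sketched as (illustrative helper functions, not from the patent; coordinates are binary strings with the digit nearest the root first):

```python
def trace(n):
    # T(n) = (t_1, ..., t_l), where t_k is given by the
    # first k digits of <n>
    return [n[:k] for k in range(1, len(n) + 1)]


def reduced_tree(n):
    # R(n): the sibling of each trace node, obtained by
    # flipping the last digit of t_k
    flip = {"0": "1", "1": "0"}
    return [t[:-1] + flip[t[-1]] for t in trace(n)]
```

For the node with coordinates 010, the trace is (0, 01, 010) and the reduced tree is (1, 00, 011): exactly the siblings needed to recompute the path to the root.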
A variant of the hash chain operation ◇ whose argument order depends on a binary parameter is defined on fixed-length input hash values x, y. For c ∈ {0,1}, it can be set as follows:

x ◇_c y = x ◇ y for c = 0;

x ◇_c y = y ◇ x for c = 1.

This chiral Merkle-Damgard operation is a version of the extend operation that allows left and right siblings in the tree to be distinguished and their parent node to be computed in the correct order. Implementation issues aside, an (extended) TPM can execute ◇_c internally.

In some cases, the hash tree stored in the SML may be incomplete, i.e., contain empty leaves and internal nodes, denoted by nil. For coherent processing of nil nodes in the Merkle hash tree, it is assumed that nil acts as a two-sided unit for the operation ◇, i.e., x ◇ nil = nil ◇ x = x, and nil ◇ nil = nil (equation 6).

This is a re-interpretation of the usual TPM extend operation and can also be used to model a direct write to a register V ∈ V by first resetting V to nil and then performing V ◇ x for some value x. To implement this convention, nil can be represented as a flag on the verification data registers and on the inputs and outputs of ◇_c. For a register V, the nil flag may be set by a specific reset command. When nil is encountered as an input of an extend operation on V, modified logic in the TSS or TPM may suppress the hash operation and write the other argument directly to the PCR.
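The chiral operation and the nil convention can be modeled as follows; an illustrative Python sketch (not part of the original disclosure) with SHA-1 as H and None as the nil marker:

```python
import hashlib

NIL = None  # marker for an empty node

def op(x, y):
    """x <> y = H(x || y); nil acts as a two-sided unit (equation 6)."""
    if x is NIL:
        return y
    if y is NIL:
        return x
    return hashlib.sha1(x + y).digest()

def chiral_op(c, x, y):
    """Chiral Merkle-Damgard operation: the coordinate bit c selects the
    argument order, distinguishing left (c = 0) from right (c = 1) siblings."""
    return op(x, y) if c == 0 else op(y, x)
```

Because op(NIL, x) simply passes x through, extending a freshly reset (nil) register models a direct write, as described above.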
An operational extension of a standard TPM for operating securely on a tree-formed SML is now described. The goal of protection is to achieve, for the internal nodes and leaves of such an SML, the same level of assurance as for traditional verification data register values protected in PCRs. The update of the root by a new node value is described first, followed by the data structures and command extensions used with tree-formed verification data.

The strategy for the secure update of an internal node or leaf of the SML tree is as follows. First, the authenticity of the current value of that node is verified. This is performed by recalculating the tree root protected in register V (which remains fixed for the remainder of this section to simplify notation) using the data contained in the reduced hash tree associated with that node. This verification must be a protected operation within the TPM, called TPM_Reduced_Tree_Verify_Load. The verification also loads the verified reduced tree data into a set of verification data registers for use by a subsequent update operation, TPM_Reduced_Tree_Update, which takes the new value for the node to be updated and updates its parent nodes up to the root V using the data of the reduced tree. The two commands may be used independently for various purposes, such as stand-alone node integrity verification. They may also be combined into a single node-and-root update command for convenience.

Let n be a node at level l ≤ d of an SML tree of depth d whose root is protected in the verification data register V ∈ V. The first step in updating V with a new value for n is verifying that the reduced tree R(n) in the SML has not been tampered with.

This verification may itself be performed by a TPM-protected operation in order to maintain the security level of V. To this end, the TSS calls TPM_Reduced_Tree_Verify_Load with arguments (<n>, n, R(n)). l+1 available registers are selected from V, called B_1, ..., B_l and V*. Algorithm 1, shown in FIG. 22, shows how an SML node is verified and its reduced tree loaded into a set of verification data registers.

The chiral extend operations used in this algorithm ensure the correct order of the child nodes in the computation of each parent element of the trace of n. The TSS obtains the computed trace T(n) and the verification result as return values. Algorithm 1, as shown in FIG. 22, thus occupies l+1 additional verification data registers.
A leaner variant of Algorithm 1 of FIG. 22 may be run using a single verification data register by processing the reduced tree elements one after the other, without storing the reduced tree inside the TPM. The resulting auxiliary command TPM_Reduced_Tree_Verify may be useful for plain verification of the SML by the TSS or by another party. This is shown in Algorithm 2 of FIG. 23. The serialization of R(n) required for this variant can be implemented using an input buffer in a software layer below the TSS (e.g., a TPM device driver), or by corresponding TPM-internal logic.

Like the tree formation algorithms described above (FIGS. 4 and 6), Algorithms 1 and 2 (FIGS. 22 and 23, respectively) use a non-standard operation, namely the chiral extend. Since the output destination of a chiral extend is always a verification data register, the operation can be realized by loading the other argument into another verification data register (if it is not already there, as in Algorithm 1 of FIG. 22) and performing the chiral extend TPM-internally, with register swapping as appropriate. This ensures the same level of protection for all arguments.

The verification performed by Algorithms 1 and 2 of FIGS. 22 and 23, respectively, is of limited expressiveness, since it only guarantees the integrity of the input node value with respect to the input reduced tree. In case the integrity of the SML tree is compromised, more detailed information is desirable. The level of the tree at which the integrity breach occurs may be obtained, at least, by executing a verification policy via the downward tree traversal described above.

The command TPM_Tree_Node_Verify, shown in Algorithm 3 (FIG. 24a), returns the level at which an incorrect reduced tree and/or trace element first breaks the integrity chain from the root to n. It does not allow determining which of the siblings broke the chain. Further diagnostics are possible when a reference tree is available, as described above.
For a node n to be updated with a new value n', TPM_Reduced_Tree_Verify_Load may be executed first. The command TPM_Reduced_Tree_Update, shown in Algorithm 4 (FIG. 24b), is then called with argument n' and acts on the node coordinate <n>, the register V to be updated, and the results determined beforehand by TPM_Reduced_Tree_Verify_Load. Various methods may be employed to achieve this binding of the command sequence.

First, the TPM may store and manage state and additional data for tree operations, as described below. Further, the sequence of commands TPM_Reduced_Tree_Verify_Load and TPM_Reduced_Tree_Update may be cryptographically bound, for example by a rolling nonce, as TPM-protected OIAP/OSAP authorized command sessions do. Finally, the two commands may be merged into a single update command TPM_Tree_Node_Verified_Update with arguments (<n>, n, n', R(n)). The node update command returns the new trace and the new value of V, which the TSS then uses to update the SML.
Through their association with certain nodes or roots of hash trees by the commands TPM_Tree_Extend (defined above), TPM_Reduced_Tree_Verify_Load, and TPM_Reduced_Tree_Update, the verification data registers V acquire states. Particularly important states may be:

Active Root (AR), representing the root of an SML tree currently under construction by TPM_Tree_Extend operations.

Completed Root (CR), representing the root of a tree that is the complete result of the measurement of a number of components (i.e., of TPM_Tree_Extend operations). An AR may switch to CR when the tree is complete, i.e., contains 2^d leaf measurements, or the switch may be triggered by the TSS if it is desired to close the tree at some stage. A V in CR state may be protected from further updates with TPM_Tree_Extend, but may be accessible to TPM_Reduced_Tree_Update or even normal TPM_Extend operations according to policy and authorization.

Tree Build (TB), representing a register used to build up, via TPM_Tree_Extend operations, a tree whose active root is in another register in AR state.

Reduced Tree node (RT), representing one of the registers B_k holding a result of TPM_Reduced_Tree_Verify_Load. An RT register V may be protected until the corresponding TPM_Reduced_Tree_Update or another authorized command occurs.

When more than one tree is managed, the states of the registers V need to be associated with their respective trees, for example using a unique identifier (UID). Furthermore, it may be necessary to store node coordinates for each or some of the registers. These data may be maintained in a Verification Data Allocation Table (VDAT) internal to the TPM and managed by a Tree Data Management Unit (TDMU).
TPM-protected verification of node values enables new core semantics for certified platform validation. In particular, a variant of TPM_Quote that attests a certain node value may be defined. This command, TPM_Tree_Node_Quote, may be called with the same arguments as TPM_Quote plus the arguments of TPM_Reduced_Tree_Verify. Algorithm 2 of FIG. 23 is then executed, but in addition a copy of n is kept in another PCR V'. Upon success, TPM_Quote is executed on V'. The receiver of such a quote should be able to see clearly that it obtains a signature over an internal node of an SML tree. One possibility would be to change the fixed string contained in a signed TPM_Quote, usually "QUOT", to another value, say "TREQUOT".

Attesting a node value with this command provides, for that node, the same security assertion as quoting a verification data register (PCR) value with TPM_Quote. It may, however, carry the additional semantics that the value corresponds to some internal node of an SML tree, i.e., it effectively attests the state of the subtree of which n is the root. To convey this semantics explicitly to the validator, additional data may be included in the AIK (Attestation Identity Key)-signed attestation package, e.g., a "tree node" string. The meaning of such an attribute, when assigned by TPM_Tree_Node_Quote, may be significantly strengthened if the root register is an SML root register controlled via the TPM_Tree_Extend command, i.e., if it is in the CR state discussed above. Such a control may be part of the quote generation.

For verification of the attestation message, the validator needs the value n of the quoted node n = (n, <n>). More information need not be sent to the validator, and thus the above description of TPM_Tree_Node_Quote follows a principle of minimal disclosure. If the position of the node in the SML tree is important for validation, a variant of the command may also sign the node coordinate <n>. The extended validation data sent to the validator may then include the reduced tree of n and the root verification data register.
As an alternative embodiment, the task of verifying the reduced tree may be assigned to the validator. This method has been used elsewhere to prove the integrity of web pages delivered by a web server, binding their proof to that of the server state using the ordinary TPM_Quote command. This leads to a variant implementation of TPM_Tree_Node_Quote as follows.

The command receives as arguments a node value n, the reduced tree R(n), and a selector for the root V. After checking the CR state of V, the TPM signs this (concatenated) data, using a fixed string attribute such as "REDTREQUOT". It is worth noting that, as in the method for proving web page integrity, an alternative approach is to bind the reduced tree to a quote from the TPM by inserting a hash of these extra data into the nonce input of the TPM_Quote command, which is normally used to guarantee freshness of the quote.

The first and second implementation variants for quoting internal nodes represent opposite ends of a spectrum, in the sense that the first places the verification load on the platform and the second on the validator. Accordingly, the two have different ranges of practical efficiency: the first variant suits many distributed validated platforms, as in M2M communication, while the second suits many distributed validators (such as the web clients described above). However, since the complete state represented by V is exposed to the validator, the second variant has a fundamental disadvantage with respect to the platform's information disclosure. This may compromise privacy.
The various extensions of the TPM integrity measurement functionality described herein may be grouped into the following categories. TPM_Tree_Extend is used in the continuous measurement process to build a particular SML tree with a PCR-protected root. TPM_Reduced_Tree_Verify_Load, TPM_Reduced_Tree_Verify, and finally TPM_Tree_Node_Verify are commands for platform-internal diagnostics. Apart from the use of TPM_Reduced_Tree_Verify_Load as a preparatory step for TPM_Reduced_Tree_Update, they may be used to verify certain properties of the part of the platform represented by an SML subtree before other events occur.

TPM_Reduced_Tree_Update and TPM_Tree_Node_Verified_Update may be used for the controlled update of subtrees. Specific uses of the internal node update operation include the following:

Update of an individual system component, in which case the new value updates a leaf.

Update of a system module represented by a subtree, in which case the root of the new subtree updates an internal node of the original tree.

Finally, TPM_Tree_Node_Quote is the command that makes tree-formed SMLs usable for platform validation by a remote party. It introduces the principal new element of validation with tree-formed data, namely the attestation of a subtree representing only a defined part of the system state. Specific use cases are described below.

The command extensions may be used separately in tree-formed validation. The following describes a method that combines them to validate subtrees in a modular fashion.
Validation using tree-formed SMLs and verification data registers adds semantics to remote attestation. The possibility of attesting subtrees of the SML extends its expressiveness far beyond traditional remote attestation. Tree-formed verification data is one way to realize other proposals that add semantics to platform attestation, such as the association of validation data with properties and their certification, as in PBA. One general problem of trusted computing is the association of semantics with platform attestation. The TCG specification defines binary remote attestation, in which executed code is measured as it is loaded. The measurements are stored in PCRs as verification data, and the TPM attests this data by signing it with a TPM-protected Attestation Identity Key (AIK). Since a digest of the complete configuration is transmitted, the verifier can in principle learn the configuration of the machine (at least in the absence of system dynamics). Still, the data communicated for validation lacks the expressiveness to enable multi-faceted and efficient remote platform validation. The need for semantic attestation has long been recognized; it was proposed, for instance, to limit the scope of a single attestation to a virtualized subsystem of limited complexity, allowing attestation of complex, dynamic, and high-level program properties.

Property-based attestation (PBA) is described herein. PBA allows the security characteristics of a validated platform to be assured to a verifier by a Trusted Third Party (TTP), here called the Subtree Certification Authority (SCA). The SCA issues certificates that map platform configurations to the properties (in particular, desired/undesired functionality) that these configurations can realize. PBA thus shifts the infrastructure problem of platform validation to the SCA, similar to the case of the TCG Privacy CA, whose role it extends. Validation of subtrees is one way to realize the described concepts. A related embodiment is a hardware-supported update, performed by a TPM command, that re-seals data to another platform configuration based on an update certificate.

In contrast to verification data, validation data is all the data submitted to another party, the validator, to assess the trustworthiness of the platform's state. The process of submitting validation data to the validator, realized for example by remote attestation as specified by the TCG, together with its evaluation by the validator, is properly called validation. The validation data may contain verification data, such as quoted verification data register (e.g., PCR) values. Beyond the cryptographic verification of the validation data, validation may include policy evaluation and the triggering of actions by the validator.

The tree-formed verification data and the validation data associated with an SML tree provide structured data that can be used not only in the methods described above but also to enhance the semantics of platform attestation. This provides another approach for realizing PBA-related concepts using tree-formed validation data.
The following shows how the SCA may replace a node of the SML tree, that node being the root of a subtree of measured components, with a meaningful trusted statement about that subtree: a subtree certificate. This process realizes a partial validation of the platform by the SCA, and the resulting trusted assertion is ingested into the platform's available validation data. It can later be used by another validator to validate the platform more completely. A general base protocol is described next, on which variant implementations and use cases of subtree validation may build. Various variants are then described.

Subtree validation is a process by which a trusted platform (which may be, but is not limited to, a combination of TPM and TSS, chosen here to keep the system model simple) obtains a certificate from the SCA for an internal node value of the tree-formed SML. To this end, the platform submits to the SCA a signed statement showing that the node value is contained in an SML tree whose root value is protected in a verification data register. The TPM is one possible realization, and any verification data register may be used. Based on this evidence, the SCA may issue to the platform a certificate with additional attributes for this node value, which may then be ingested into the SML tree. The process uses the protected tree operations.

The platform may have an active AIK a with a certificate C_a issued by a trusted Privacy CA (PCA). The communication between platform and SCA is encrypted and integrity-protected to mitigate man-in-the-middle attacks. An internal node s is selected for validation. In this protocol description, failure cases and negative responses are omitted. The subtree validation protocol proceeds in at least 5 stages, as shown in FIG. 26.

Stage 1 generates a quote on s. To do so, the TSS calls TPM_Tree_Node_Quote with arguments (<s>, s, R(s), a) (for simplicity, FIG. 26 does not show all arguments) and receives the returned quote P = Sig_a(s).

If the tree root, i.e., a register V, is selected for validation, TPM_Quote on V is used instead. In stage 2, the TSS generates an attestation package Q. It contains the information needed by the SCA for verification, at least the quote P, the certificate C_a, and the node value s. When the public part a_pub of a is not evident from C_a, it may be included in Q. Likewise, when the value s is already known to the SCA, it can be omitted from Q.

Further information may be included, such as the node coordinate <s> when it is part of the quote. Q is sent to the SCA, encrypted with the SCA's public encryption key and integrity-protected. This stage is similar to remote attestation as specified by the TCG.
Stage 3 comprises the activities of the SCA. First, the SCA verifies Q by verifying the signature of P and/or tracing the certificate chain up to the root certificate of the PCA. If the SCA finds that s is a node value it can validate, it generates a manifest M_s for it. This manifest may contain additional information about the platform state associated with the presence of the subtree with root s in the SML, such as a timestamp, platform functionality, the identity of the module composed of the components whose loading is represented by the leaf measurements of the subtree, and/or other platform properties. The manifest is validation data, added by subtree validation, that provides the validator with the semantic meaning of the node value s. The SCA then generates a certificate C_s for s. This certificate binds the property represented by M_s to the platform by binding it to the AIK a. This may be realized in two ways:

In the first option, the SCA signs the manifest and the AIK-signed node value, thereby establishing an indirect binding to a; if the platform later exhibits the node value s, the binding of C_s to a can be verified. In the second option, the binding is made direct by including in the SCA's signature some data bind(a) that uniquely identifies a, such as the public part of a, the serial number of C_a, or the fingerprint of C_a. In public key infrastructure semantics, C_s is then an attribute certificate associated with C_a. Finally, the SCA assembles a package R comprising at least M_s and C_s (and bind(a) in the second option), and returns R to the platform.

Stage 4 provides for updating the SML with data derived from R. The SML update results in a binding association between the subtree certificate and the position <s> of the validated node in the tree.

This enables the platform to assert to a validator that the property certified by C_s and M_s is present in the platform configuration. Various ways of binding C_s to the represented subtree by an SML update are conceivable, each suited to different specific use cases.
A set of new nodes U = {u_1, ..., u_k} is generated, i.e., values together with positions in the SML tree. First, the set must satisfy U ≤ s, so that the subtree below s is the one affected by the update. This is required because every old SML tree node strictly below U, i.e., every n ≤ U with n ∉ U, is invalidated by the update and can no longer be verified with respect to the updated root verification data register. Second, U must be free of dependencies, i.e., no two elements u, u' ∈ U satisfy u ≤ u'.

Freedom of dependencies is the property that ensures consistency of the tree update performed with U under the one-way (upward) flow of information embodied in the Merkle hash tree. In particular, it makes the update result independent of the order in which the elements of U are processed.
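The no-dependency condition can be checked directly on node coordinates; a small illustrative Python helper (names are not from the original disclosure):

```python
def independent(coords):
    """Check freedom of dependencies for an update set U, given as a list of
    node coordinates.  u <= u' means u' lies on the trace of u, i.e. <u'> is
    a prefix of <u>; U is free of dependencies when no coordinate in it is a
    proper prefix of another."""
    for a in coords:
        for b in coords:
            if a != b and b == a[:len(b)]:
                return False
    return True
```

In prefix terms the condition simply says that U forms an antichain of the natural partial order.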
Stage 5 is the SML tree update proper. Iterating over u ∈ U, TPM_Tree_Node_Verified_Update is called with arguments (<u>, n, u, R(n)), where n is the old SML node value at position <u>. This returns a new trace T(u), with which the TSS updates the SML. Performing the tree update in this manner maintains a consistent security level for the SML and the root verification data register, since the operations are performed inside the TPM. When U contains many elements, however, performing the update as described for stage 5 may be inefficient, since TPM_Tree_Node_Verified_Update would then verify many overlapping reduced trees and thus cause redundant (costly) hash calculations. A more efficient update algorithm, referred to as "efficient secure node set update", is described below.
As described above, updating the internal nodes of a large set U using TPM_Tree_Node_Verified_Update may be wasteful, because the overlap between the reduced trees of the elements of U leads to many redundant hash operations for verification. A batch update strategy can improve on the naive algorithm of stage 5 of the subtree validation protocol by using the TPM_Tree_Extend command described above. It relies on the observation that subsets of the update set U span subtrees that do not depend on old SML values, i.e., whose roots depend only on nodes in U. The roots of the subtrees spanned by these subsets can therefore be precomputed without expensive verification.

Some definitions are needed. Let U = {u_1, ..., u_k} be an update set free of dependencies. A node of the SML tree is called U-intrinsic if a) it is an element of U, b) its sibling is in U, or c) its sibling is U-intrinsic. This recursive definition captures the nodes whose updated values depend only on U and not on SML nodes in the complement of U. The span root of a subset V ⊂ U is the unique minimum of the intersection of the traces of all elements of V. The subtree spanned by V is the union of the traces of the elements of V, ignoring all nodes strictly above the span root. The subset V is called U-intrinsic if all elements of the subtree spanned by V are U-intrinsic.
Given these definitions, a more efficient update of the SML with U can be performed as follows:

1) Identify (mutually disjoint) U-intrinsic subsets V_1, ..., V_k.

2) Iterate over V_i, i = 1, ..., k:

a) Normalize V_i by:

i) truncating from each coordinate the prefix given by the coordinates of the span root, and

ii) right-padding the coordinates with zeroes to the depth of the subtree spanned by V_i, until all coordinates have the same length.

b) Sort the elements of V_i lexicographically by their normalized coordinates, yielding an ordered list V_i'.

c) Fill all gaps (in normalized coordinates) of V_i' with nil values.

d) Select a free verification data register V'.

e) Apply TPM_Tree_Extend sequentially to the elements of V_i', targeting V'.

f) Remove V_i from U.

g) Insert (V', <v_i>) into U, where v_i is the span root of V_i.

3) For the remaining elements of U, apply the normal update process described above using TPM_Tree_Node_Verified_Update.
A variant of the subtree validation protocol may combine the roles of the PCA and the SCA, for AIK certification and subtree validation respectively, in a single protocol run. The advantage is that an explicit AIK certificate C_a may not be required, since the generation, activation, and use of the AIK are bound to one session. The integration of such protocols is straightforward.

Binding a received subtree certificate to the platform state means binding it to the tree-formed SML at the correct position, namely the position of the certified subtree root. As described above, this is necessary for meaningful subtree-certificate-based validation in the context of an overall platform configuration. One particular goal of binding C_s to the SML tree is integrity protection, for example to prevent later replacement with a different certificate. Binding may be achieved by updating parts of the tree with data that uniquely and verifiably identifies the subtree certificate. A wide range of data items may be derived from the subtree certificate and inserted at various positions of the SML tree.
In one example, the SML update may be trivial and U may be empty. This is possible if the binding is of the first option described above, which requires exhibiting s. In that case s may remain in the SML tree, and whether the subtree below it also remains depends on the use case. The binding then operates through the actual node value s signed via C_s.

In another example, the data on the certified platform property may itself be protected by the updated tree, e.g., for forensic purposes. That is, the three data items s, M_s, and C_s enter the update set. While the node value s is already in the correct data format, the other two are first processed into m(M_s) and m(C_s). The operation m may be the generation of a hash value by the platform's Root of Trust for Measurement (RTM), or another suitable one-way operation. If a data item already contains appropriate unique identification data in the node value format, that data can be extracted directly and used as the node update value; one particular example is a certificate fingerprint contained within C_s. The three update nodes may then be arranged in an update set so as to generate, for example, the configuration of updated nodes in the SML tree shown in FIG. 27.

The root of the updated subtree is inserted at the old position of s and has the value k = (m(C_s) ◇ m(M_s)) ◇ s. This configuration provides independent integrity protection for the subtree certificate and the manifest, and independently preserves the old node value. In this configuration, attestation of the platform property represented by C_s can still be performed without revealing information about s, by quoting only the inner node on the left side of the subtree.

Numerous variants of binding for subtree certificates exist. The platform may also want to include (an integrity-protected value of) data generated by itself, such as an internal timestamp from a secure clock. The implementation here may be use-case specific.
To prove the property represented by the subtree certificate to a validator, the platform may use TPM_Tree_Node_Quote to quote any node in or above the updated subtree that protects the desired validation data, which suitably comprises at least the manifest and the certificate. The platform then submits to the validator, as needed, the validation data, at least all data required for verification of the asserted property, again comprising at least M_s and C_s. Validation data already protected by the submitted quote requires, in principle, no additional integrity protection.

The validator may verify the platform binding of the validation data. It may be important to validate that this is the same platform, i.e., that the validated platform is the one on which the subtree validation with the SCA was performed. One way to achieve this is to use the same AIK a in validation as was used in subtree validation. The platform then also submits a_pub and, if necessary, C_a as part of the validation data. Whether C_a is required may depend on the semantics of the subtree certificate, i.e., the SCA may already have checked the AIK certificate and C_s may assert its authenticity; corresponding information may be placed in the manifest. Reusing the same AIK partially compromises privacy, and other methods of resolving this may also be considered.
One preliminary step of subtree validation is the discovery of the subtrees for which the platform can obtain certificates from a particular SCA. Without going into the details of the interaction between platform, SCA, and validator, two subtree discovery methods are described below. The first places the workload of subtree discovery on the platform, while the second assumes a "dumb" platform, placing more load on the SCA.
In one example, the SCA sends some subtree discovery data to the platform, in the simplest case a list of node values it is prepared to validate. The platform may search its SML tree for these values and perform subtree validation for each identified node. This baseline procedure permits various refinements, in particular enriching the discovery data and extending the discovery process to a negotiation protocol between platform and SCA. For example, the discovery data may associate a node position with a verifiable root, which, for an SML generated during authenticated boot, corresponds to the fact that the components represented by the subtree's leaves are loaded during a defined phase of platform startup.

Such absolute positioning of nodes may be impractical for complex platforms whose configuration changes dynamically. A more refined variant may therefore express relative positions, for example of certain pairs of verifiable roots. The SCA may declare an expression such as: s is verifiable if r precedes s (i.e., r lies to the left of s in the ordered SML tree). This may ultimately be interpreted as: a certain function is operational on the platform if another function was made operational before it.
A more fundamentally different variant of the model is one in which the discovery data contains not subtree roots, i.e., internal nodes, but leaves, i.e., measurement values. This "bottom-up" discovery process requires the platform to make an educated guess as to which nodes are verifiable, based on the leaf measurements that the received SCA discovery data purports to be trustworthy values. One approach is to find the set of span roots of subtrees whose leaves are all in the discovery data. The platform may then quote the subtree roots and send them to the SCA together with the corresponding SML subtrees. In general, the SCA may have to verify an SML subtree and decide whether it is prepared to validate its root, since this may still depend on the ordering of the leaves. In some cases, the platform may want to obtain certificates for subtrees of which the SCA knows only some leaf values, i.e., the leaf sets of the respective subtrees may have gaps with respect to the discovery data. If the platform possesses other trusted data for such leaves, such as RIM certificates obtained from a party trusted by the SCA, it can submit these data in stage 2 of subtree validation to assist the SCA in its decision to validate the subtree root.
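The search for maximal subtrees whose leaves are all contained in the discovery data can be sketched as follows; an illustrative Python model (not part of the original disclosure) assuming a complete tree with 2^d leaves and a boolean flag per leaf indicating membership in the discovery data:

```python
def verifiable_roots(leaf_known):
    """Bottom-up discovery sketch: leaf_known[i] says whether the measurement
    of leaf i appears in the SCA's discovery data.  Returns the coordinates
    (bit tuples) of the maximal subtree roots all of whose leaves are known;
    () denotes the root of the complete tree."""
    def rec(coord, flags):
        if all(flags):
            return [coord]          # whole subtree at coord is known-good
        if len(flags) == 1:
            return []               # unknown leaf: nothing verifiable here
        mid = len(flags) // 2
        return rec(coord + (0,), flags[:mid]) + rec(coord + (1,), flags[mid:])
    return rec((), leaf_known)
```

Each returned coordinate is a candidate for a TPM_Tree_Node_Quote sent to the SCA, which may still reject it, e.g., because of leaf ordering.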
In the case where the device is unable to perform local discovery of subtrees, an implementation that moves the computation to the SCA may be used. The platform selects an internal node n with the aim of obtaining certificates from the SCA for any suitable subtree below n. In the case where the platform wants all verifiable nodes to be verified, node n may be equal to the root of the complete tree, V. The next two steps may be the same as described in stages 1 and 2 above, i.e., the platform executes TPM_Tree_Node_Quote or, if V is selected, TPM_Quote, respectively. The attestation package is transmitted together with the SML subtree under the quoted root. The SCA receives this information and then, using tree traversal techniques, verifies the integrity of the tree and at the same time finds one or more (disjoint) subtrees s_i of a set S with verifiable roots.
The SCA then repeats phase 3 of the protocol as described above, generating a certificate for each s_i in S. Since the protocol allows for multiple node updates that are merged into the update node list U, the updates of all discovered subtrees can be performed in a single protocol run. One variant could be for the platform not to send TPM_Tree_Node_Quote or TPM_Quote in the first step, but to provide the SCA with the SML starting from the selected node n. The SCA then searches for potential candidate subtrees to be validated, and then requests the platform to provide a TPM_Tree_Node_Quote for the root node of each identified subtree. This is a trade-off in the sense that the platform is allowed to send the SML without having to perform cryptographic operations in advance. Nevertheless, the platform may provide a suitable quote to integrity-protect the sent SML prior to validation by the SCA.
Described above are systems, methods, and apparatus for implementing TFV. Described below are variants and/or extensions of TFV.
The way the validator traverses the tree-formed SML submitted by the platform may be a stepwise descent as described above. Described below is a variant process in which the validator first looks for known good values of subtree roots in the SML tree, to identify known good portions of the validated platform, before traversing the SML tree to find failed leaf components.
Maintaining a reference tree for the SML tree of the entire platform can be difficult. Such trees can be large and depend sensitively on the platform configuration, such as, for example, the loading order of components. Tree traversal validation assumes a static measurement order to determine the components which the diagnostic validation described therein identifies as not conforming to the expected reference configuration of the validated platform. In fact, since the Merkle-Damgard transform is neither commutative nor associative (it is not a multi-set hash function), the tree root values depend on the exact order of measurement. Since linearly generated PCR values exhibit the same sensitivity, the performance comparisons described above remain fair. The problem of tree root order sensitivity is addressed further below. The methods described there may be combined with those described herein.
In a naive solution, one reference tree would have to be maintained for every valid (functionally and security-wise equivalent) configuration of the platform. A finer-grained data set than full reference trees may be desirable for validation.
Instead of maintaining a database of known good values for the roots of SML trees and corresponding reference trees, the validator maintains an extended database of known good subtree root values for subtrees at different depths. These values are stored as pairs (r, d), where r is the node value of the subtree root and d is the depth of the subtree. The depth d ranges from the depth D of the full SML tree down to 0, where d = 0 means that the corresponding known good reference value is actually a leaf measurement. When a certain level L is reached in the diagnostic tree traversal, and after cryptographic verification, the validator compares the node values of the received SML tree at that level with the known good values {(r, d) | d = D - L} in a breadth-first search. Subtrees with matching roots are marked as known and excluded from further diagnostic traversal, which then proceeds to level L + 1.
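The pruning of known-good subtrees during diagnostic traversal can be sketched as below. This is an illustrative assumption-laden sketch: SHA-256 stands in for the digest, trees are stored level by level, and `known_good` holds (node value, subtree depth) pairs as in the text.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def build_tree(leaves):
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def diagnose(levels, known_good, ref_levels):
    """
    Top-down diagnostic traversal of a tree-formed SML.
    levels/ref_levels: levels[0] = leaves ... levels[-1] = [root].
    known_good: set of (node_value, d) pairs; a node stored at levels[lvl]
    has subtree depth d = lvl (i.e., d = D - L for traversal level L).
    Returns the indices of leaves that remain suspect.
    """
    D = len(levels) - 1
    suspects = [(D, 0)]            # start at the root, traversal level L = 0
    bad_leaves = []
    while suspects:
        lvl, i = suspects.pop()
        if (levels[lvl][i], lvl) in known_good:
            continue               # known-good subtree: excluded from traversal
        if levels[lvl][i] == ref_levels[lvl][i]:
            continue               # matches the reference tree: subtree is fine
        if lvl == 0:
            bad_leaves.append(i)   # failed leaf measurement found
        else:
            suspects += [(lvl - 1, 2 * i), (lvl - 1, 2 * i + 1)]
    return sorted(bad_leaves)
```

A subtree whose root appears in `known_good` is skipped even when it disagrees with the reference tree, which is exactly what tolerates alternative but known load orders.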
For the validator, a database holding known subtree roots also allows for validation strategies different from descending the tree from the root. The validator may find (a subset of) the root values of known good subtrees in the SML tree by any search procedure it considers effective. The validator may then verify the integrity of these subtree roots. To do so, the validator may use the reduced tree of the subtree root node and the validation process for that value described elsewhere. If the value is integrity-verified, the corresponding subtree may be marked as OK and excluded from subsequent diagnostic validation.
Identification of known subtrees in this way leaves room for different configurations of the platform, in particular different load orders of known components and failed components. For example, there may be two subtrees A and B, each having 8 measurement leaves, with another subtree between them containing a failed component. If the validator recognizes that A and B are known, but would normally accept them in the order B, A, or in either order, or with another known subtree C in between, the validator may have to decide according to policy whether to accept the platform configuration as valid.
As an extension of the described variant, the validator may populate the reference tree database dynamically through interactive learning based on previous platform validation results. One possibility is to perform a diagnostic validation on the first validation of a platform and to generate known good subtrees from those subsequences of leaf measurements which, according to the validator's trusted reference values for components (leaves) and its policies as described herein, correspond to a good platform configuration. As more and more platforms are validated, the validator can build a statistically weighted database of known good subtrees and platform configurations, based on criteria such as commonality and risk (associated with the configuration). For example, commonly occurring trees may be searched first.
The described method is similar to the subtree discovery method described for the subtree validation method. It is a method that can be applied by SCA entities together with or separately from learning strategies.
The way the validator traverses the tree-formed SML submitted by the platform is a level-by-level descent. For this purpose, the entire tree is submitted to the validator in one communication step and evaluated against the reference tree. An alternative embodiment is described herein that transmits tree nodes of the different levels sequentially, one by one.
The entire SML tree may be submitted to the validator. This is often not optimal, especially when the proportion of failed leaf measurements is small. Many subtrees are then transmitted as part of the entire SML tree, but the node data they contain is not used for validation, because their subtree roots are correct with respect to the corresponding nodes in the reference tree.
The basic method for minimizing the data transferred in TFV is to transfer to the validator only those data, as described elsewhere, which are actually required for the current step in the tree traversal validation. The validator first receives the root verification data register of the SML tree, which is evaluated as level L = 0 of the interactive tree traversal validation. If its value coincides with the root of the reference tree, the validation ends. Otherwise, the validator requests the two child nodes of the root from the platform. After receiving them, the validator either detects the integrity failure condition c) of fig. 9 described herein and stops, or recalculates the root from the children. Messages containing child nodes need, in principle, not be cryptographically protected for integrity and freshness, since their content is verified against the parent node by recalculation of the extend operation. The root, in turn, is assumed to be transmitted protected by TPM_Quote or a similar operation, such as a digital signature.
Comparing the two children at L = 1 with the reference tree, the validator can find out which of the two is correct. Correct nodes may be discarded from further search. The validator then requests the children of all "bad" nodes at tree depth 1 (i.e., 2 or 4 child nodes at tree depth 2). For each of them, verification and comparison with the reference tree are performed as before. The process continues iteratively. Thus, only those bad internal nodes that are needed to find the failed leaf measurements are actually transmitted to the validator. As mentioned before, this can be done very efficiently, with short message sizes and no cryptographic overhead. The amount of data relative to the size of the tree-formed SML depends on the proportion of failed leaves. This is shown in fig. 10 described herein.
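The interactive descent can be sketched from the validator's side as follows. This is a sketch under assumptions (SHA-256 digest, level-indexed trees, both trees locally available to simulate the request/response exchange); `interactive_validate` is a hypothetical name, and the node count illustrates the reduced transfer volume.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def build_tree(leaves):
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def interactive_validate(platform_levels, ref_levels):
    """
    Validator-side sketch: receive the (quote-protected) root first, then
    request only the children of nodes that differ from the reference tree.
    Returns (failed_leaf_indices, number_of_nodes_transmitted).
    """
    D = len(ref_levels) - 1
    transmitted = 1                  # the root itself
    bad = [(D, 0)] if platform_levels[D][0] != ref_levels[D][0] else []
    failed = []
    while bad:
        lvl, i = bad.pop(0)
        if lvl == 0:
            failed.append(i)
            continue
        # request the two children of a bad node from the platform
        l, r = platform_levels[lvl - 1][2 * i], platform_levels[lvl - 1][2 * i + 1]
        transmitted += 2
        # failure condition check: the children must recompute the parent
        assert h(l, r) == platform_levels[lvl][i]
        for j, child in ((2 * i, l), (2 * i + 1, r)):
            if child != ref_levels[lvl - 1][j]:
                bad.append((lvl - 1, j))
    return sorted(failed), transmitted
```

For a tree of 8 leaves with one failed leaf, only 7 of the 15 nodes are ever transmitted; a fully correct tree costs a single root transmission.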
Various combinations of tree-formed verification data with other methods exist for adaptations of TFV: first, providing verification data for frequently changing, dynamic (partial) platform states, and second, providing a scalable space for verification data.
A PCR, or verification data register, is just a (hardware-)protected memory space, which is a well-known scarce resource on platforms. As described herein, hash trees in principle remove this limitation. But the secure generation process of a tree-formed SML whose root is protected in a verification data register, e.g., a PCR, may fix the tree at a certain depth and therefore limit its capacity for leaf measurements. A problem arises when a platform has more components to measure than fit into the tree-formed SML.
Another problem may arise if verification data is to protect frequently changing partial platform states, such as records of storage areas. The TPM_Extend operation, the hardware-protected operation to update PCR values, is slow. PCRs are therefore not suited to the protection of frequently changing data, and thus may not be suitable for direct runtime memory protection. Various systems apply Merkle hash trees for this purpose, but their roots cannot normally be stored in hardware-protected registers, because updates of those registers would be too slow to keep up with the dynamics of the platform.
The basic idea for protecting a dynamically changing platform state, represented by some data W(t) that depends on (discrete) time t, is a cached re-update policy for the associated verification data register R. Some trusted software APP_R protects this state sequence in R. APP_R collects intervals of the state of length L, W(kL+1), ..., W((k+1)L), buffers them, measures them using a digest function, and extends the result into R using a PCR extend operation or a similar linking operation, e.g., based on the Merkle-Damgard transform.
Various enhanced security requirements apply to APP_R. In particular, sufficient protection must be applied to APP_R to prevent modification and tampering (methods for this are known, e.g., virtualization or integration into a secure kernel). Furthermore, the operation of APP_R may be open to evaluation by external validators. The minimum requirements for this may be verification of the trustworthiness of APP_R, e.g., during secure boot, its registration by measuring APP_R before loading, and binding it to the actual sequence W protected in R in a way verifiable by the validator.
The method providing the desired characteristics is as follows. The sequence W(t) may be partitioned into intervals of length L. A digest function is applied to each partial sequence to produce the interval measurements: M_k = m(W(L(k-1)+1) || ... || W(Lk)), k > 0. As a variant, it is useful to incorporate timestamps from a secure source into the interval measurements, for example:
M_k = m(W(L(k-1)+1) || ... || W(Lk) || TPM_Tick(now))
Adding a timestamp to the interval measurement value allows reordering of the (sub-)structure without losing information about the loading order of components. Furthermore, it allows for "aging" of measurements, wherein the validator can request regeneration (renewal) of a measurement value after a given period of time. The interval measurements are protected by extending them into the verification data register, e.g., by the extend operation R(k+1) = R(k) ◇ M_(k+1), resulting in a sequence similar to a normal linearly chained SML.
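The interval caching performed by APP_R can be sketched as below. This is a minimal sketch under assumptions: SHA-256 plays the roles of both the digest function m and the ◇ linking operation, and the class name `AppR` is illustrative, not from the specification.

```python
import hashlib

def m(data: bytes) -> bytes:
    # digest function for interval measurements
    return hashlib.sha256(data).digest()

def extend(reg: bytes, val: bytes) -> bytes:
    # PCR-style extend (the diamond operation): R' = H(R || value)
    return hashlib.sha256(reg + val).digest()

class AppR:
    """Sketch of APP_R: buffers L state samples W(t), then extends one
    interval digest M_k into the verification data register R."""
    def __init__(self, L: int, r_init: bytes):
        self.L = L
        self.buf = []
        self.R = r_init          # R(0) = initialization value
        self.sml = []            # linearly chained SML of interval digests

    def record(self, w: bytes):
        self.buf.append(w)
        if len(self.buf) == self.L:          # interval complete
            M_k = m(b"".join(self.buf))      # M_k = m(W(L(k-1)+1)||...||W(Lk))
            self.R = extend(self.R, M_k)     # R(k+1) = R(k) extended with M_(k+1)
            self.sml.append(M_k)
            self.buf = []
```

A validator replaying the recorded interval digests over R(0) must arrive at the same register value, which is the verification performed on the extended SML.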
The intermediate states may also be protected in the SML, i.e., the SML protecting W then contains the state intervals interleaved with the interval measurements and register values.
the desired binding to the trustworthiness state of the management application APP _ R may be implemented as follows. The secure or authenticated boot device of the platform measures APP _ R before loading and extends the measured value m (APP _ R) into a verification data register Q (which may be initialized with the value of the previous PCR P in the transitive chain of trust). Next, the sequence of the actual recording register R is initialized with P and an initialization value R _ init, such as a session random number or a time stamp or both. This results in a structure
P, Q, and R may be one and the same register if verification data registers are scarce. Verification of W may then be performed on the extended SML, which may include [(W(Lk+1), ..., W(L(k+1))), M_(k+1), R(k+1), R(k), ..., R(0), R_init, Q, m(APP_R), P] together with quotes of P, Q, and R. The SML may also contain the values of the timestamps, if any.
The actual hardware-protected verification data register may be decoupled from the root of the tree-formed SML, which may then be stored in less protected memory space but protected by the former. To this end, specific software TFV_APP manages a hash tree whose root W may reside in normal memory space, with virtual registers V_1, ..., V_(2^d - 1) as the leaves of an internal tree of depth d. The state sequence W(t) is in turn protected by a hardware-protected actual verification data register R, for example using the interval method of the preceding subsection. The virtual registers V can be used as roots of SML trees and are managed by TFV_APP. For them, TFV_APP may expose the commands described elsewhere to external applications as if they were root verification data registers. The structure of R, and the coupled W representing the tree root, are shown in fig. 29.
The verification of V_i(t) may be performed over the following validation data: V_i(t), R(V_i(t)), W(t), R_(N+1)(t), R_(N+1)(t-1), R_N, R_(N-1), m(TFV_APP), Q({R_(N-1), R_N, R_(N+1)}), where Q is a quote over the PCRs R_(N-1), R_N, R_(N+1), and verification proceeds as follows:
i. check V_i(t) against its reduced tree R(V_i(t)) and the root W(t)
ii. check R_(N+1)(t) = R_(N+1)(t-1) ◇ W(t) according to (2)
iii. if R_(N+1)(0) and W(0) are additionally transmitted, check R_(N+1)(0) = R_N ◇ W(0)
iv. check R_N = R_(N-1) ◇ m(TFV_APP)
v. check m(TFV_APP) against its reference value
vi. check the quote Q.
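Checks ii through v can be sketched as follows. This is an assumption-laden sketch: SHA-256 stands in for the ◇ linking operation, `verify_chain` is a hypothetical helper name, and checks i (the reduced-tree check of V_i(t)) and vi (the quote signature) are elided.

```python
import hashlib

def extend(a: bytes, b: bytes) -> bytes:
    # the diamond linking operation: H(left || right)
    return hashlib.sha256(a + b).digest()

def verify_chain(W_t, R_np1_t, R_np1_prev, R_n, R_nm1, m_tfv_app, trv_tfv_app,
                 R_np1_0=None, W_0=None):
    """Checks ii-v from the text; checks i and vi are elided in this sketch."""
    ok = R_np1_t == extend(R_np1_prev, W_t)            # ii.
    if R_np1_0 is not None and W_0 is not None:
        ok = ok and R_np1_0 == extend(R_n, W_0)        # iii.
    ok = ok and R_n == extend(R_nm1, m_tfv_app)        # iv.
    ok = ok and m_tfv_app == trv_tfv_app               # v.
    return ok
```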
In addition, a timestamp may be applied as described above. The updates of TFV_APP to the root R_(N+1) are assumed to look similar to the interval scheme above, where W(0) is some initialization value.
The secure formation of SML trees has a natural capacity limitation. One way to remove this restriction is a straightforward application of the subtree validation method described above. An SML tree of a certain fixed depth is filled completely, the platform (self-)validates the tree, and inserts the obtained certificate as the first measurement into the next, empty SML tree. This results in a left-unbalanced multi-tree structure as shown in fig. 28.
A register V may protect the root v of the tree-formed SML, SML_1. An independent way to securely proceed to the next tree is as follows. The platform calls TPM_Quote(V, a_1) using an AIK a_1 to obtain a signed statement Q_1. TPM_Quote denotes a signed quote command that can be executed within a TrE (e.g., by a TPM) on the verification data register V. Thus, once the signature and the PCA certificate of a_1 are verified, Q_1 asserts to any verifier that the value v was once contained in the verification data register V. The platform saves the augmented SML_1: SML_1 := SML_1 || v || Q_1 || Cert_PCA(a_1), i.e., the original SML_1, the value of its root, the quote, and the corresponding AIK certificate are kept for later verification. The platform then resets V. By the above-mentioned properties, the augmented SML_1 is semantically equivalent to the original SML_1 together with the register V containing its root. Register V is now free for the next tree.
The latter tree may be tied to the former tree to ensure continued internal verification by the root of trust, i.e., continued integrity guarantees for the contained leaf measurements. One exemplary method for this is to start the next tree, SML_2 (e.g., by modifying the platform RTM), with a first measurement value bound to the last tree. Various examples of accomplishing this are described below.
For example, the last root value v may be used directly as the first measurement. This does not preserve the temporal sequence, since the next tree then contains no hint of any validation semantics applied to the last tree. According to another example, the measurement m(Q_1) may be used as the first leaf of SML_2. According to another example, m(v || Q_1 || Cert_PCA(a_1)) may be used as the first leaf of SML_2. According to another example, m(SML_1) may be used as the first leaf of SML_2.
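The close-and-continue step and the binding options can be sketched as follows. This is a sketch under assumptions: SHA-256 stands in for m, `sign` is a stand-in for TPM_Quote with AIK a_1, and the certificate blob is a placeholder; the function names are hypothetical.

```python
import hashlib

def m(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def close_tree(sml_1: bytes, v: bytes, sign):
    """Quote the root v of SML_1 (sign stands in for TPM_Quote with AIK a_1),
    append v, the quote Q_1 and the AIK certificate, so V can be reset."""
    q_1 = sign(v)
    cert = b"Cert_PCA(a_1)"                 # placeholder certificate blob
    augmented = sml_1 + v + q_1 + cert      # SML_1 := SML_1 || v || Q_1 || Cert
    return augmented, q_1, cert

def first_leaf_of_next_tree(v, q_1, cert, sml_1, option="m(Q_1)"):
    """The binding options from the text for the first leaf of SML_2."""
    if option == "v":
        return v                            # last root value used directly
    if option == "m(Q_1)":
        return m(q_1)
    if option == "m(v||Q_1||Cert)":
        return m(v + q_1 + cert)
    if option == "m(SML_1)":
        return m(sml_1)
    raise ValueError(option)
```

The last three options carry the quote semantics into SML_2; using v directly does not.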
The last three options are semantically equivalent, but have different practical significance in terms of validation and/or generation complexity. The validated continuation of tree-formed SMLs may proceed iteratively without limitation. Instead of an internal continuation passing on the attestation, a trusted third party may be invoked to attest to the SML_1 tree.
The validated continuation of the tree may be extended with additional semantics by the invoked TTP. In particular, SML_1 may be submitted with a root validation request and verified, i.e., its consistency checked by root recalculation. The hash value of SML_1 can then be embedded directly into the certificate obtained from the TTP. This provides a guarantee to the validator that SML_1 had not been tampered with when it was submitted to the TTP for validation. This makes the root value v of SML_1 obsolete in principle. The validator only needs to verify the global hash over the SML_1 blob and the signature of the TTP certificate over it to check the integrity of SML_1. This is advantageous because it means that a validator that wants to perform tree traversal validation using a reference tree over the SML can use direct comparisons with reference tree nodes, without recalculating internal node hash operations. This can speed up the search for failed components. As a further variation, a TPM tick stamp or TTP timestamp may be added to the validation.
Some methods and options for binding more expressive validation data to the verification data of tree-formed SMLs are described below.
By itself, the expressiveness of an SML of arbitrary structure is limited to the bare measurements of component code and data. Typically, this does not yield enough information for the validator to efficiently process the validation and to efficiently manage the platform based on validation results. For this reason, the TCG extends the concept of the SML to an event log that also records component identifiers, load times, parameters, and related information, and the TCG IWG introduces the concept of the IML, a complex, expressive data structure (which is tree-structured in nature) to capture the associated information collected during authenticated boot.
However, such rich and structured validation information may lack a binding to verification data of an appropriately associated structure, such as a tree-formed SML. Particularly for tree-formed SMLs, the question arises how context information or other useful validation data about component and platform characteristics can be securely bound into the hash tree containing the actual verification data.
During the actual measurement process, i.e., the startup phase of the platform, metadata can be added to the validation data and bound natively to the SML tree by building a small subtree for the measured component. An example realization is described above (with respect, however, to a large number of components and their measurement leaves). In the context of TFV, the measurement process may be extended to include metadata in the SML tree as follows.
1. The agent performing the measurement and SML build (equivalent to, or cooperating with, the RTM) loads the component measurement manifest, containing a specification of which data must be collected and included in the SML tree for that component.
2. If the next leaf position in the SML tree is a right sibling (the least significant bit of its binary coordinate is "1"), the agent writes a nil value to the SML tree and protects it using TPM_Tree_Extend. This ensures that the component starts a new, independent subtree.
3. The agent measures the component itself, writes the measurement to the SML, and extends it to the root PCR with TPM_Tree_Extend.
4. For each item on the manifest, the agent repeats: collecting the required metadata; appending it to the validation data sequence; measuring the metadata, e.g., by a cryptographic digest; inserting the metadata measurement into the SML tree; and extending it to the root PCR using TPM_Tree_Extend.
5. The agent fills the subtree for the component by appending nil values until the length of the measurement sequence for the component reaches a power of 2.
This results in a component subtree structure of the form shown in FIG. 30, where m(.) denotes a suitable measurement function (which may be different for different kinds of metadata).
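Steps 2 through 5 above can be sketched as below. This is a minimal sketch under assumptions: SHA-256 stands in for m, a 32-byte zero string represents the nil value, and the function only produces the leaf sequence of the component subtree (the TPM_Tree_Extend calls are elided).

```python
import hashlib

def m(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

NIL = b"\x00" * 32  # nil value written to the SML tree

def component_subtree_leaves(component: bytes, metadata: list, next_index: int):
    """Sketch of steps 2-5: align the component to a fresh subtree, measure
    the component, measure and append each metadata item, then pad with nil
    values until the leaf count reaches a power of two."""
    leaves = []
    if next_index % 2 == 1:        # next leaf position would be a right sibling
        leaves.append(NIL)         # step 2: start a new, independent subtree
    leaves.append(m(component))    # step 3: component measurement
    for item in metadata:          # step 4: metadata measurements
        leaves.append(m(item))
    while len(leaves) & (len(leaves) - 1):   # step 5: pad to a power of 2
        leaves.append(NIL)
    return leaves
```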
The metadata measurements may also be inserted elsewhere, such as at internal nodes. The insertion of metadata may be programmatically separated from the measurement process, which may, for example, help speed up the startup of the platform. The metadata may then be collected by a different process and securely stored (at least with some form of integrity protection) until it is inserted into the SML tree. The measurement agent prepares enough free space in a subtree, depending on the number of elements in the component measurement manifest, by filling it with nil values. After the tree-formed SML built from the bare component measurements during boot is complete, the metadata may be measured and then inserted into the appropriate locations using the TPM_Tree_Node_Verified_Update command.
Some useful types of metadata that may be included in a component subtree, and their potential usefulness, are described below.
Component identifier: a component identifier, including, for example, the component manufacturer, name, version, build, release date, etc., may be the most basic metadata to include. Although in principle the measurement value of the component itself already identifies it uniquely, a separate identifier adds semantics in an important way. It states that the platform attempted to load the named component, independently of the measurement result. Thus, this provides the validator with independent information about which component's integrity verification failed, in particular if the component code or data is compromised. This information is very useful because it allows the correct TRV to be found for the component in question. A component ordering criterion is also useful for the ordering methods described herein. Event logs or IMLs according to TCG standards may already include component identifiers, but there they are typically not protected by verification data. Here they may be bound to the verification data of the SML tree, thereby obtaining the higher assurance level of integrity protection provided by the SML tree. The component identifier may already be in a format suitable for inclusion (e.g., a 160-bit string), or may be measured to obtain such a format.
The RIM certificate: the appropriate identification of the TRV may be useful metadata for the SML tree bound to the component. It may be implemented, for example, as a fingerprint of the corresponding RIM certificate. It can be used, for example, (validating the verifier) to look for the included rim (trv) in its own database or via the TTP, based on the correct certificate, or to obtain certificate status information. In this case, the RIM certificate suitably need not be incorporated into the platform validation data and may not be communicated to the validator. If the boot process of the platform is secure, e.g., implemented as a secure boot process, the inclusion of the corresponding RIM certificate information obtains additional semantics. That is, it declares the TRV certified by that particular RIM certificate for comparison with the component measurements. In this case, the actual measurements of the components in the tree SML may be outdated, depending on, for example, the needs of the forensic evaluation.
Component context: the component context may include a description of the runtime environment into which the component is loaded.
Time stamping: the protected timestamp added as metadata adds context to the measurement by providing the validator with the load event of the component. This allows the validator to verify the loading order of the components, which is important if the dependencies of the components have to be validated. It also allows the freshness of individual measurements to be evaluated, and if the data structure allows this, the validator can request a new measurement for the component. This may be done based on a measurement aging process, where the measurement may have a defined life cycle after which the validator may request a new measurement of the component. Furthermore, applying a separate timestamp to the measurement allows reordering of the structured data and enables the validator to get the original boot-loader (boot-loader).
Component start parameters: parameters such as runtime/load-time settings may be set differently depending on the current situation or running environment of the device.
Component security policy: when an enforcement entity is included in the platform, the security policies associated with each component and enforced by the entity are important data for the validator to assess the trustworthiness of the platform. By organizing the component measurements and metadata in the subtree, its validation can be delegated to a TTP, such as the SCA described above, for the subtree validation method.
Geographic location: the geographic location at the time of component invocation adds another dimension (location) to the expressiveness of the measurements. This metadata field is particularly useful for mobile devices, which often change location and whose applications may also be location-dependent (e.g., asset/goods tracking and reporting software). Instead of reporting only that a component was measured at a particular time, the report may also include the location of the measurement.
For practical implementations of TFV, the component subtree may contain data that is mostly of a "static" nature, to avoid the identification problems described below. If the desired metadata includes frequently changing data or data that is quite unique to the platform, it makes sense to reserve for these data a fully independent SML tree, protected by its own root verification data register, or a designated subtree of the component SML tree. Positions in such an ephemeral-metadata SML tree may be bound to the original SML tree simply by order, by repetition of component identifiers or component measurement leaves, or by any other means linking the content of the ephemeral-metadata SML tree to the corresponding component subtree in the original SML tree.
Like linearly chained SMLs and the resulting PCRs, tree-formed SMLs and root verification data registers are sensitive to small changes in the sequence of measured values. Thus, a problem arises in efficiently identifying known trees and subtrees, because the values of known tree and subtree roots depend on the exact leaf order of the measurements. Different possible solutions to this problem are described. They comprise measures to order the measurement leaves according to an order known to both platform and validator, thus reducing the combinatorial complexity resulting from sequence sensitivity. Methods and technical procedures for ordering leaf measurements according to technical criteria for effective TFV are also described.
PCR values resulting from the Merkle-Damgard transform, i.e., from the combined application of one-way functions and non-associative, non-commutative operations, are very sensitive to the order of the inputs. This is in principle highly desirable from a security viewpoint, as one property of this transform is the ability to detect minimal changes in the input data. These properties also hold for tree-formed and linearly chained SMLs and the respective root verification data registers, e.g., PCRs. But while the Merkle-Damgard transform preserves information about the input order, this very property introduces complexity when the contained information is interpreted.
The ordering complexity problem manifests itself in TFV in particular when the validator attempts to evaluate the platform against the submitted tree-formed SML. A general tree traversal process for searching for failed components is discussed herein. It behaves very efficiently in comparison with an evaluation of a linear SML, even for a high proportion of failed (i.e., unknown to the validator) leaf measurements. However, failures in such a validation model may also be caused by modifications or ordering failures, i.e., measurements in unknown positions relative to the reference tree used by the validator. A single transposition (permutation) of two known measurement leaves results in two unknown measurements. Inserting or omitting a measurement leaf at a certain position invalidates all following leaves, i.e., the leaf positions to the right of that position. Not adequately considering this problem may forfeit the efficiency gains of TFV.
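The order sensitivity is easy to demonstrate. The sketch below, assuming SHA-256 as the digest, shows that a single transposition of two known-good leaves already changes the tree root, so both swapped positions and every ancestor appear "unknown" to the validator.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def root(leaves):
    """Compute the root of a complete binary Merkle tree over the leaves."""
    level = list(leaves)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Eight known-good leaf measurements; swap two adjacent ones.
leaves = [hashlib.sha256(bytes([i])).digest() for i in range(8)]
swapped = list(leaves)
swapped[2], swapped[3] = swapped[3], swapped[2]
# The Merkle-Damgard construction is not a multi-set hash: same leaves,
# different order, different root.
assert root(leaves) != root(swapped)
```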
It is fair to say that this is not really a problem in comparison with linear SMLs, since they have the same ordering sensitivity, and tree validation is not more complex and/or computationally expensive than the linear case. For the resulting PCR of a linear SML, the same problem arises as the requirement to maintain reference PCR values for all permutations of the input measurements, i.e., as a space complexity. TFV, on the other hand, may lose some of its advantageous properties due to ordering problems.
Merkle hash trees are used for runtime verification of memory in various research and demonstration systems. The ordering problem does not arise there, since static assignment of memory addresses to the leaves of the memory protection tree is commonplace. Restoring an order known (at least in part) to both platform and validator is one idea behind the methods described below.
The problem to be considered is of an information-theoretic nature, since it concerns the asymmetry of information between the platform (sender) and the validator (receiver of the tree-formed validation data). The sequence of measured components is the piece of information that is to be shared between the two parties in an efficient manner, where efficiency means in particular that neither does the validator need to keep an excessively large database of reference validation data (SML trees), nor does either party have to perform expensive on-the-fly pre-calculations.
Efficient pre-sharing of information is one way to meet the described needs. For this purpose, a code is used by the platform and the validator, established by a convention that both parties adhere to in the TFV procedure. The methods described below comprise realizations of this general strategy. They are not limited in their applicability to the case of tree-formed verification and validation data. Although their benefits may then be limited, they can also be applied directly to the traditional case of linear SMLs. The efficiency gains brought by ordering validation data for platform validation are increased by the inherent structuring of the latter data (assuming a proper, order-dependent realization of the validation data). Thus, the ordering methods may also be applied to other structures, such as graphs (directed, acyclic) or annotated skip lists (which may also be optimized structures for fast searches).
In addition to the described sorting methods, there is another way to solve the validator's problem of identifying the verification data: the validator may learn typical platform configurations over time. This is less effective for platforms that change configuration frequently but validate infrequently, such as personal computers, whose configuration may differ for almost every boot cycle (updates are common every week, while reconnecting the VPN of a remote workplace to the company network occurs only on certain days of the week). The configuration of such a PC, conveyed via its PCR values and attested to the network by remote attestation and additional protocols (such as TNC), may be different each time. Mobile devices, M2M gateways, and special embedded devices like HeNBs and multimedia STBs, on the other hand, have more static configurations and are connected to the network, and thus validate, more frequently. Moreover, their updates are managed by network entities that have knowledge of the known device configurations. For such devices it may suffice if the validator applies a learning strategy for device configurations. For example, a validator reference SML tree may be established at the first validation of a device from known TRVs (e.g., RIMs with corresponding certificates). Such a tree may then be used as a reference for devices having the same configuration. Subtrees of these learned reference trees can even be found at the first validation of another platform by identifying their root values, and used appropriately for the validation and the composition of newly learned reference trees.
There are two basic options for providing a desired ordering of tree-shaped SMLs. Either a known ordering is applied before the measurements are extended by the RTM, which directly results in the formation of the ordered ("rigged") SML (pre-ordering), or the measurement process remains unchanged and the ordering is applied later, in a secure way, to the complete SML tree (post-ordering).
One realization of the pre-ordering variant is to arrange the actual load order of programs so that it conforms to the given SML ordering. Where this is possible on a particular platform, it leaves the secure boot and measurement process otherwise undisturbed. Another option is to store the measurement sequence in a secure cache and insert it into the protected SML tree only after the platform has booted. This approach, however, requires that the measurements be securely transferred to the cache, where they must then be integrity-protected until all measurements have been taken. The sorting operation then has to act on the stored data without allowing them to be modified. This option therefore comes with additional security requirements which, if not adequately satisfied, may introduce security holes.
Post-ordering is interesting because it allows "lazy" execution as a background task while the system is already running; the only time constraint is that the SML must be sorted by the time it is used for validation. Since there is no straightforward way to safely shuffle the leaves of a tree-shaped SML in place, post-ordering can operate on a second, empty SML tree, protected by another root verification data register. A post-ordered SML tree is built from the original SML tree by a trusted agent on the platform as follows. The post-ordering agent iterates over the sequence of component identifiers of the desired SML ordering and: 1. locates the leaf of the next identified component (for this, identifiers of the actually measured components are required; they may be recorded in the event log of the measurement process, or directly attached, e.g., in the form of a digest, as a sibling of the component's measurement leaf); 2. verifies the leaf measurement against the root verification data register using, for example, the TPM_TreeNode_Verify command described elsewhere; and 3. upon success, inserts the measurement into the sorted tree using TPM_Tree_Extend. Pre-ordering may not be feasible for all components of the platform, e.g., due to dependencies.
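The three numbered steps above might be sketched as follows. This is a minimal illustration under simplifying assumptions (a power-of-two leaf count, SHA-1 digests); `auth_path` and `verify_leaf` stand in for the TPM_TreeNode_Verify proof check, and list appending stands in for TPM_Tree_Extend:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()  # 160-bit SHA-1 digests, as in the SML

def merkle_root(leaves):
    """Root of a binary hash tree over leaf digests (power-of-two count assumed)."""
    level = list(leaves)
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling digests from a leaf up to the root, i.e., the proof that a
    TPM_TreeNode_Verify-style command would check."""
    path, level, i = [], list(leaves), index
    while len(level) > 1:
        sib = i ^ 1
        path.append((level[sib], sib < i))  # (sibling digest, sibling-is-left flag)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify_leaf(leaf, path, root):
    node = leaf
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

def post_order(event_log, root, desired_order):
    """Post-ordering agent: verify each leaf of the original tree against its
    root register, then extend it into a second, initially empty sorted tree."""
    leaves = [m for _, m in event_log]
    index = {cid: i for i, (cid, _) in enumerate(event_log)}
    sorted_leaves = []
    for cid in desired_order:
        i = index[cid]                                             # 1. locate leaf
        assert verify_leaf(leaves[i], auth_path(leaves, i), root)  # 2. verify it
        sorted_leaves.append(leaves[i])                            # 3. extend sorted tree
    return merkle_root(sorted_leaves)
```

The sorted tree's root then protects the same set of measurements under the agreed ordering.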
Extensive post-ordering may not be desirable because it incurs computational cost. A mixed form of pre- and post-ordering is therefore suggested: a subset of the platform's components may be pre-ordered, while another, possibly smaller, portion of the components undergoes post-ordering after measurement. This results in at least two SML trees from the measurements, which are then joined into a complete measurement tree, e.g., by subtree validation.
While pre- and post-ordering are performed on the platform, before the transfer of validation data, a third option is to impose some computational burden on the validator to obtain a correctly ordered reference tree. To this end, the platform may communicate the sequence of component identifiers indicating the measurement ordering of the tree SML or a subtree of it. Alternatively, the platform may send a plain list of leaf measurements, through which the validator can also uniquely identify the component (if known) at each leaf. The validator may then use the time interval between receipt of the component identifier sequence and the actual validation to build the reference tree in the indicated order. This may require fewer hash operations than building a complete reference tree for the platform, at least if the portion of leaves that differs is not large: in that case, many subtrees of the platform's SML tree may already be contained in the validator's trees and can be reused. Upon receiving the ordering information from the platform, the validator's task is to identify the subsequences of leaves corresponding to known subtrees, which can then be inserted into the correctly ordered tree.
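The reuse of known subtrees on the validator side can be sketched with a memoized tree builder: cached subtree roots are reused whenever a run of leaves reappears unchanged. This is a simplified illustration of the idea, not the literal procedure:

```python
import hashlib

h = lambda b: hashlib.sha1(b).digest()
cache = {}  # maps a tuple of leaf digests to the root of the subtree over them

def build_root(leaves):
    """Build a reference tree root, reusing cached roots for known leaf runs."""
    key = tuple(leaves)
    if key not in cache:
        if len(leaves) == 1:
            cache[key] = leaves[0]
        else:
            mid = len(leaves) // 2  # split into left/right subtrees
            cache[key] = h(build_root(leaves[:mid]) + build_root(leaves[mid:]))
    return cache[key]
```

If only the right half of the leaves changes between two validations, the left subtree root is served from the cache, so only the changed portion incurs new hash operations.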
This procedure is efficient if most of the complete SML tree used for validation has a sequence fixed by convention and only a smaller subtree has a sequence unknown to the validator. In that case, the platform may submit the sequence of the subtree and its position in the complete tree together with, or before, the complete SML tree for validation, and the validator may build the subtree of unknown sequence from reference values and insert it at the correct position in its reference tree.
One realization is a lexicographic ordering of components, where the sorted data may be reference measurements of the components, such as RIM values (not the actual measurements obtained by the RTM, which may differ if a component is compromised). Another sorting criterion may be a unique identifier of the component, or its digest value. This data may also be included in the tree SML according to the methods described above. A variation of lexicographic ordering is to assign a code to the measurements and/or other data and use the resulting code words as the lexicographic criterion.
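A lexicographic pre-ordering over reference measurements might look as follows (component names and RIM inputs are invented for illustration). Since both platform and validator hold the RIM values, each side can derive the identical leaf order independently:

```python
import hashlib

sha1 = lambda b: hashlib.sha1(b).digest()

# Hypothetical components with their reference measurements (RIM values).
rims = {
    "kernel": sha1(b"kernel-image"),
    "driver": sha1(b"driver-blob"),
    "app": sha1(b"app-binary"),
    "lib": sha1(b"shared-lib"),
}

# Sort by RIM digest, not by the actual (possibly compromised) measurement,
# so the leaf order is a pure convention shared by platform and validator.
leaf_order = sorted(rims, key=lambda name: rims[name])
```
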
The measured components may also be ordered according to their probability of failure. To this end, the validator may define failure-probability classes for known components, for example based on estimated failure rates over a large number of validated platforms. The components of a certain failure-rate class may then be communicated to the platform and incorporated into the subtree reserved for that class. This results in an SML tree that is optimized for tree-traversal validation, since it contains subtrees (of components with low failure probability) that are likely to be dropped early from the search during traversal.
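Grouping components into contiguous leaf ranges by validator-supplied failure-rate class could be sketched as follows (the classes and component names are hypothetical):

```python
# Failure-rate classes assigned by the validator (0 = rarely fails).
fail_class = {"bootloader": 0, "kernel": 0, "vendor_lib": 1, "third_party_app": 2}

# Keep each class contiguous so the subtree over a low-failure class can be
# dropped from the search in a single traversal step; ties broken by name.
leaf_order = sorted(fail_class, key=lambda c: (fail_class[c], c))
```
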
If the components of the platform have a hierarchical organization, the measurements may be ordered accordingly. For example, if the platform is composed of functions containing subordinate modules, subtrees may be reserved for the functions and contain the measurements of their respective modules. Inside the subtree belonging to a function, any other applicable ordering may be used. In turn, the functional subtrees can be ordered for incorporation into the platform's full tree SML.
In a hierarchical organization of system components there may be dependencies that affect the resulting measurement sequence, e.g., according to function-module relationships. A very common situation is that a module belongs to more than one function (a shared library, for instance). Such a module may be loaded once during secure boot of the platform and thus measured only once. One way to correctly capture such modules in an SML tree organized by functional hierarchy is the following. The agent performing the secure boot has knowledge of all loaded modules and is thus able to identify duplicates, for example when a module identifier appears again in the specified measurement sequence. When building the hierarchically ordered SML tree, the agent, on encountering a repetition of a module, first looks up the place of its first appearance in the SML tree, verifies the actual measurement at that location, and then inserts it into the tree at the new location.
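Handling a shared module in a function-ordered tree can be sketched as follows. Here `libc` is measured once during boot, and its digest (verified at its first occurrence) is reinserted into every function subtree that references it; all names are hypothetical:

```python
import hashlib

sha1 = lambda b: hashlib.sha1(b).digest()

# Each module is measured exactly once during secure boot.
boot_log = {"net": sha1(b"net-mod"), "ui": sha1(b"ui-mod"), "libc": sha1(b"libc-mod")}

# Functional hierarchy: 'libc' belongs to both functions.
hierarchy = [("networking", ["net", "libc"]), ("display", ["ui", "libc"])]

def function_subtree_roots(boot_log, hierarchy):
    """On a repeated module, reuse the digest from its first (verified)
    occurrence instead of re-measuring; build one subtree root per function."""
    roots = {}
    for func, modules in hierarchy:
        leaves = [boot_log[m] for m in modules]  # duplicates resolved via boot_log
        roots[func] = sha1(leaves[0] + leaves[1])
    return roots
```

The per-function roots can then be ordered and incorporated into the platform's full tree SML as described above.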
A special case occurs when the platform loads a component whose integrity is correctly verified (against a reference value) but which is unknown to the validator. Inserting it out of order into the SML tree would cause that component, and subsequently placed components, to fail validation. If validator and platform share knowledge about which components are known to the validator, the platform can sort the unknown measurements into a separate subtree while arranging the subtree of known measurements in an order known to the validator. The two trees may then be combined for tree-traversal validation, e.g., with the known SML tree as the left subtree and the tree of unknown components as the right subtree. The latter is prepared in the validator's reference tree as an empty tree containing nil values. This allows tree-traversal validation of the known (left) subtree to proceed undisturbed.
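The split into a known left subtree and an unknown right subtree might be illustrated like this (component digests invented for illustration). The validator's reference tree carries a nil value in place of the unknown subtree, so traversal of the left subtree proceeds undisturbed:

```python
import hashlib

h = lambda b: hashlib.sha1(b).digest()
NIL = bytes(20)  # nil placeholder for the unknown subtree in the reference tree

# Platform side: components known to the validator go left, the rest go right.
known = [h(b"compA"), h(b"compB")]
unknown = [h(b"exotic_plugin"), h(b"local_patch")]
left_root = h(known[0] + known[1])
right_root = h(unknown[0] + unknown[1])
platform_root = h(left_root + right_root)

# Validator side: the left subtree is rebuilt from reference values alone,
# while the right position holds the nil value.
reference_left = h(h(b"compA") + h(b"compB"))
reference_tree_root = h(reference_left + NIL)
```
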
The above-described systems, methods, and apparatus generate and use a hierarchical structure, such as a tree structure, for validating data and validation. Described below are further embodiments for using such a tree structure in network validation.
Tree-formed validation (TFV) is an enabler for advanced kinds of network-side platform validation. TFV is a realization of structured validation, meaning that structured data is used by a validator to assess the state of a platform. The structure of the data may, to some extent, reflect the structure of the platform, thereby facilitating validation. Structured validation is a realization of the abstract platform validation and management processes defined for Platform Validation and Management (PVM), discussed elsewhere. A platform that allows for efficient and effective remote PVM exhibits a modular, layered architecture. The classes of data used in platform validation operations are introduced next.
In describing PVM, the basic concepts of the data transferred between the platform, the validator, and other entities facilitating platform management functions are defined. They can be classified into three categories according to their security-related characteristics and semantic content.
There are three categories of data for PVM that are generated by the platform and used throughout the PVM process; they are inherent to the validated platform. A distinct additional category is trusted reference data, used for comparing validation data with known-good values. It may be located at the validator, for use in the validation of the platform, or within the platform, where it may be used for secure boot or transmitted to the validator as validation and/or management data. This is illustrated in fig. 25, where the interrelation between the concepts is depicted.
Verification data is data that verifiably identifies the state of a platform, or a portion of that state, with a well-defined level of assurance. Verification data is generated inside the platform during a verification process (e.g., during secure boot and/or during runtime of the platform). One protection goal for verification data is integrity, which may be maintained during its generation (i.e., verification) and at least throughout an operating cycle (e.g., a boot cycle) of the platform. Another protection goal is freshness. One way to define (and increase) the assurance level of verification data is to set apart a portion of it as protected verification data with a special level of protection, e.g., hardware protection, in which case the storage space is referred to as a verification data register.
Verification data may require a protected, e.g., cryptographic, binding to the platform. The strength of the binding and of the protection defines the assurance level of the verification data. Some realizations of verification data and verification are described below.
An SML may comprise a sequence of 160-bit SHA-1 values generated from measurements of loaded components during authenticated boot. These hash values uniquely identify the loaded components. The assurance level of SML values depends on the security of the RTM performing the measurements and on the integrity protection of the SML, e.g., as provided by a verification data register (such as a PCR). One variant is the tree-formed SML of TFV.
The PCRs of a TPM represent typical protected verification data registers. One common method for internal verification of a TS is authenticated boot, which uses capabilities of the TCB to assess the trustworthiness of loaded or started software and hardware components at TS initialization (e.g., at power-up). Authenticated boot is achieved by starting certain functions of the RoT and the TCB before other parts of the TS. These parts operate as the RoT for measurement (RTM). This means that components started or loaded later are measured, i.e., they, together with their starting state and configuration, are uniquely identified, e.g., by forming a cryptographic digest value over a (binary) representation of embedded hardware component code and loaded programs. Depending on the specific requirements, the measurement values may be stored in secure storage, such as a PCR, which forms the protected part of the verification data.
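The extend operation behind authenticated boot can be sketched as follows, in the TPM 1.2 style with SHA-1; the component byte strings are placeholders:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: PCR_new = SHA-1(PCR_old || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = bytes(20)  # a PCR starts as 20 zero bytes at power-up
sml = []         # stored measurement log kept alongside the register
for component in [b"bootloader", b"kernel", b"init"]:
    m = hashlib.sha1(component).digest()  # the RTM measures the component,
    sml.append(m)                         # records the value in the SML,
    pcr = extend(pcr, m)                  # and extends it into the PCR
```

Because each extend hashes the previous register value, the final PCR commits to the whole ordered measurement sequence.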
Secure boot is an extension of authenticated boot. It is important for devices that have stand-alone and offline functional requirements, such as set-top boxes or mobile handsets. A common characteristic of devices equipped with secure boot is that they must operate in a set of trusted states even when they cannot communicate assertions about their trustworthiness to the outside, e.g., before network access. In secure boot, the TS is equipped with a local verifier (verification entity) and a local enforcer controlling the boot process, which together establish a combination of Policy Enforcement Point (PEP) and Policy Decision Point (PDP) for the secure boot process. The local verifier compares the measurement values of newly loaded or started components with Reference Integrity Measurement (RIM) values, which reside in the TCB or are protected by a TR within the TS (e.g., located in protected memory space), and decides whether they are loaded or started. Thus it is ensured that the system boots into a defined trusted state.
The sequence of RIMs used in secure boot constitutes verification data. The measurement sequence can also be extended into a verification data register to obtain protected verification data. In this case, the RIM certificates (more precisely, their cryptographic fingerprints) may also become a protected part of the verification data, with the protection provided by the TTP that issued them and the underlying certificate infrastructure.
Some researchers have proposed a peculiar variant of platform validation for WSNs, called "software attestation". It consists in the execution of probing code, sent by the remote validator to the platform, which checks the platform state, e.g., the contents of memory. Beyond conventional validation data, the information returned to the validator may be the runtime of the probe. Obfuscation ensures, to some level of assurance, that an attacker would only be able to generate the correct return value after a measurable delay.
Validation data is the superset of verification data: the data that can be collected and submitted to another party (the validator) and used to assess the trustworthiness of the platform state, in particular by checking the verification data it contains. The process of submitting validation data to the validator, realized for example as remote attestation per the TCG, and its evaluation by the validator is properly called validation. Validation data may contain verification data, such as referenced verification data register (e.g., PCR) values. Beyond the cryptographic verification of the validation data, validation may include policy evaluation and the triggering of actions by the validator, using, for example, additional management data associated with the validation data.
Like verification data, validation data identifies all or part of the system state, but it additionally provides more information so that validation is efficient for the validator, and it determines the granularity of the validation result. Examples of validation data are: named system properties, such as the names and versions of components as produced by a PBA, their state and parameters; platform certificates for the TS as defined by the TCG, and vendor certificates for system components; names of TTPs from which further data, such as certificate chain information, can be retrieved; component-subcomponent relationships, e.g., captured in nested XML data structures as specified by the TCG IWG; and the platform identity used for validation, called the validation identity, realized for instance by an AIK key pair.
One aspect of validation is the binding of validation data to the corresponding verification data. This binding conveys the assurance level of the verification data to the validation data, in the spirit of a transitive chain of trust. Such a binding should therefore be verifiable, for example via a digital signature. The concept of validation data is delimited by this binding requirement: validation data is data that can be verified by a validator using verification data, in particular with respect to data integrity. Validation data is therefore not as arbitrary as the general management data defined below.
Remote attestation is a realization of the initial phase of validation, i.e., the secure transmission of validation data, signed by the platform, to the validator. One example of such a binding is the TPM_Quote operation, which signs PCR values with an AIK. By verifying the signature, recalculating the PCR from the SML, and associating the SML measurement values with the RIMs of the known components named in the validation data, the validator can verify that the named components are those the platform measured during, for example, authenticated boot.
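The verification step, minus the AIK signature check, could be sketched as follows; the RIM database entries are hypothetical:

```python
import hashlib

sha1 = lambda b: hashlib.sha1(b).digest()

def recompute_pcr(sml):
    """Replay the extend operations over the reported SML."""
    pcr = bytes(20)
    for m in sml:
        pcr = sha1(pcr + m)
    return pcr

# Validator's trusted reference values, mapping measurements to component names.
rim_db = {sha1(b"bootloader"): "bootloader v1.0", sha1(b"kernel"): "kernel 3.4"}

def validate(reported_sml, quoted_pcr):
    """Compare the recomputed PCR with the quoted one, then name each component
    (the AIK signature over quoted_pcr is assumed already verified)."""
    if recompute_pcr(reported_sml) != quoted_pcr:
        return None  # SML inconsistent with the attested register value
    return [rim_db.get(m, "<unknown>") for m in reported_sml]
```

Any reordering or tampering of the SML changes the recomputed PCR and causes validation to fail.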
Management data comprises and complements validation data. It adds expressiveness, specifically for platform management based on validation data and validation results. The binding of management data to validation data is logical, i.e., elements of the management data are symbolically linked to associated elements of the validation data, and vice versa. The trustworthiness of management data (e.g., via its source) may be evaluated separately from the validation of the platform, in particular when the management data comes from a TTP.
Typical examples of management data are: policies to infer management actions from validation results; locations from which code updates may be retrieved; event reporting data; user notification data; and service credentials provided to a validated platform subject to the validation result.
Trusted reference data is data used to compare validation data with known-good values. The values constituting trusted reference data are called Trusted Reference Values (TRVs). Their best-known examples are RIMs as specified, for example, in the MPWG specifications of the TCG. They may be used either a) in secure boot by the platform itself, to ensure that only components whose measurements conform to a TRV are started, or b) by a validator to compare validation data with known-good values in order to assess the platform state during validation.
Trusted reference data is made trusted through security assertions about it that can be verified by the validator or agent using the TRV in question. Such verifiable assertions may be realized, for example, by digital certificates issued by TTPs, which in the concrete example leads to so-called RIM certificates. The trust assertion for trusted reference data may also include additional information, such as results of an external evaluation of the component or platform (e.g., according to Common Criteria EALs).
Split validation describes a concept that allows the validation task to be distributed among two (or more) networked entities. The idea is focused on the validation process itself, not on a specific architecture. Nevertheless, an architectural model is chosen to present the general concept: an architecture in which an M2M device acts as an M2M gateway for M2M devices connected to it, the M2M gateway in turn being connected to the MNO's network. The concept itself is not limited to that architecture and may be applied to other architectures exhibiting some layered structure (e.g., HeNBs and connected UEs/WTRUs, clusters of devices with a master node in the cluster, etc.).
The general approaches described herein, such as subtree validation procedures and/or discovery options, are the basic techniques to implement the split validation described herein.
Assume a client-server model in which a client C wants to access a service S; the server provides the service to C if C is in a trusted state. C must therefore be able to communicate validation data v (e.g., measurement values) to S.
S then needs means to deduce the state of C from the received validation data v, which may be realized by a reference value r against which S compares v. The reference value r is not limited to a reference metric providing a known-good value for v; it provides a mapping between the received data v and the trustworthiness of the device. Thus, r can be seen as a mapping from v to access policies for the service, which means that r can be any type of reference allowing S to derive a statement on the trustworthiness of C from the received v.
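Viewing r as a mapping from received validation data v to an access policy might be sketched as follows; the known-good digests and policy names are invented for illustration:

```python
import hashlib

sha1 = lambda b: hashlib.sha1(b).digest()

# r: maps known validation data v to an access policy for the service.
r = {
    sha1(b"trusted-config-A"): "full_access",
    sha1(b"trusted-config-B"): "quarantine_only",
}

def decide(v: bytes) -> str:
    """S derives a trustworthiness statement on C from the received v."""
    return r.get(v, "deny")
```
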
In some cases, C connects to S via a gateway device G (e.g., an M2M gateway, or a HeNB acting as gateway for WTRUs). If G is able to perform the initial part of the validation of the connected Cs, i.e., G is equipped with r for the connected devices, the load of the validation task may be distributed between G and S.
This distribution of validation between two networked entities is referred to as split validation. For split validation, it must be possible for G to validate a single connected C, and S must be able to trust G to perform this validation in a secure manner. Furthermore, efficient split validation is easier if the validation data v exhibits an internal (hierarchical) structure.
As one realization of split validation, the entities may use tree-formed validation (TFV) data. G may be able to generate its own structured validation data, and to perform tree updates, including removal and addition of subtrees. G may be equipped with a Measurement Agent (MA) that can integrate the validation data of the Cs into G's validation tree. These subtrees can be validated (and then certified and signed) directly by G or by a TTP.
The MA within G collects measurements from the connected Cs and integrates them into G's tree as new subtrees. The collection of measurements may be performed during the validation of C towards G (e.g., by performing SAV, hybrid validation, or remote validation on the C-to-G link) or after authentication, in one additional protocol step. This step can be tied to a device authentication of C to prevent replay and forgery of the validation data. This authentication need not be the authentication used by C to access S (e.g., MNO credentials for network access), since that authentication can be performed in an additional step when G establishes a connection to S for this particular C. This is shown in fig. 31.
After collecting the relevant data, G contacts the TTP, which issues subtree certificates to G. G can then incorporate the certificates into its own validation tree, forming a new, updated validation tree. As a variant, the TTP functionality may also be part of G, for example realized as an application running inside G's TrE or integrity-protected by the TrE. This is shown in fig. 32.
In the last step, G sends the updated validation tree, including the certificates that replace some or all of the subtrees for the connected devices, to S. For the purposes of this example, S may be decomposed into two subunits: a validation entity VE and a service entity SE. The VE validates the data received from G (autonomously or with the help of the TTP), while the SE is responsible for the actual service provided to C (and optionally to G). In the variant where G cannot authenticate C with the credentials used to access S, the SE may additionally perform authentication; e.g., S may perform 3G authentication, while G authenticates the TrE in the device to verify the authenticity of the received validation data and the device/TrE identity of the connected C. This is shown in fig. 33. After successful validation, the service can be provided to C.
In split validation, G may preserve the privacy of the connected Cs towards S, since S receives certificates replacing the subtrees of the connected devices. S must trust the TTP and the MA in G, where trust in the MA may be derived from the validation data received from G, as that data contains the measurements of the MA.
In another variant, split validation may be extended to support dynamic updates, including support for Cs roaming between different Gs, which involves handing over the subtrees of C between Gs. Yet another variant is partial validation of a single C at the gateway G, replacing only parts of its subtree with certificates.
The methods and algorithms described herein allow efficient realization of the operations needed for split validation. They enable split validation based on extended versions of the TPM that allow devices to build, report, update, and quote trees or subtrees. Moreover, the subtree certification algorithms enable efficient subtree replacement, in which a subtree node is replaced by a certificate.
One problem in split validation is determining which Cs can be validated by the TTP (and thus have their subtrees replaced at G) and which parts must be validated by S, i.e., which reference values are available at the TTP for the subtrees that G uses to validate the connected Cs. Solutions to this discovery problem can generally be divided into three types: TTP-based, gateway-based, and shared discovery.
TTP-based methods are suited to Gs that lack the computational power for the complex analysis required to discover suitable verifiable subtrees. In this approach, the TTP receives the complete tree from G and returns the result in the form of a set of subtree certificates for the subtrees that the TTP can verify; the remaining data that it cannot verify is discarded. The certificates are sent back to G with indicators of where in the tree they fit. The MA at G performs the ingestion of the certificates, i.e., it replaces each certified subtree with the received certificate. G does not have to perform discovery itself, because the TTP delivers the indicators for the subtree locations in the message carrying the certificates.
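Certificate ingestion at G can be illustrated as replacing a whole subtree by its certified root value; S then recomputes the same tree root without seeing the device's individual leaves. Entity names and digests here are illustrative:

```python
import hashlib

h = lambda b: hashlib.sha1(b).digest()

# G's validation tree: left subtree covers connected device C, right covers G.
c_leaves = [h(b"c-kernel"), h(b"c-app")]
g_leaves = [h(b"g-os"), h(b"g-measurement-agent")]
c_root = h(c_leaves[0] + c_leaves[1])
g_root = h(g_leaves[0] + g_leaves[1])
tree_root = h(c_root + g_root)

# The TTP verifies C's subtree and returns a certificate over its root,
# together with an indicator of where in the tree it fits.
certificate = {"subtree_root": c_root, "position": "left", "issuer": "TTP"}

# Ingestion: G drops C's leaves and keeps only the certified root; S can
# still recompute and check the overall tree root from the certificate.
recomputed_root = h(certificate["subtree_root"] + g_root)
```
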
In a variant of this procedure, G transmits not the complete tree but a portion of it to the TTP, allowing the TTP to search for potentially verifiable subtrees within that subset of measurements from the connected devices. This mechanism is useful, for example, if the tree validation data exhibits a structure that allows G to decompose the tree into subtrees based on certain device characteristics (e.g., device classes), so that different TTPs can each certify a particular group of devices according to device class.
In gateway-based discovery, G itself has the metrics to decide which Cs can be validated by the TTP, and the TTP receives only the necessary data. This approach minimizes the traffic between G and the TTP, since all transmitted data belongs to verifiable subtrees. However, G must be pre-provisioned with appropriate metrics that allow it to discover the right subtrees and the TTPs that can validate them. Furthermore, in this approach G has to search its own validation tree to find the appropriate subtrees, placing more computational effort on G.
In the shared discovery model, TTP and G work together to determine which devices, and therefore which subtrees, can be validated. The result of this discovery protocol step is a list of verifiable subtrees. Such a discovery protocol may include the exchange of additional (meta-)data such as device class, capabilities, device profiles, and dynamic parameters (such as load and bandwidth). This approach minimizes the amount of data transferred, since no unnecessary data passes from G to the TTP, although an initial communication is required to determine the set of verifiable subtrees. The traffic load may be higher than in gateway-based discovery, but lower than in TTP-based discovery. G can retain the results of the negotiation phase, which allows it to make gateway-based decisions later. In the shared discovery model, G may also be prepared in advance with a set of verifiable subtrees, which is then updated in the shared discovery phase.
In one variant, G fully validates certain Cs, where the selection of Cs may be based on a CSG (Closed Subscriber Group), e.g., if G is a femtocell. The remaining Cs are then fully validated by S. The set of Cs to be validated at G may be based on static or dynamic data, while the set of devices to be validated by S comprises the remaining Cs; the sets may also overlap, meaning that some devices are validated twice.
G and S may each validate aspects defined in an aspect set, which may be predefined or dynamically adjusted, and which may overlap. The aspect set may include: integrity of device profiles, integrity of boot code/OS/drivers, integrity of capillary network functions/components, integrity of WAN connection functions/components, integrity of advanced applications, and integrity of the trusted reference values used at G.
The validation may also be split dynamically based on load. In this dynamic approach, G and S, or G and the TTP, compare the computational load on themselves and on the links between G and the connected Cs and between G and S, or G and the TTP. Based on the load, the TTP or S determines the set of devices that are validated directly at G. There may be a predefined set that is then dynamically updated.
In knowledge-based partitioning, the split may be determined between S (or TTP) and G based on a knowledge set and a validation policy. The knowledge set may include several factors, such as behavior (observed in the past or anticipated in the future), credibility (past history, current value, or anticipated in the future), cost-benefit analysis, validation importance, etc., which may allow S and G to derive or update validation policies.
In one example, an H(e)NB may use split validation of UEs. In the case of an HeNB, the architecture may include an H(e)NB that can validate some or all connected UEs by the following procedure, as shown in fig. 34.
As shown in fig. 34, the WTRU sends integrity measurements to the H(e)NB at 1. At 2, the H(e)NB integrates the received data and validates some (or all) of the WTRUs. At 3, the SeGW/PVE performs validation of the H(e)NB tree, including the connected devices. At 4, the PVE identifies devices that have not yet been validated and issues a reference credential, which may be sent back to the H(e)NB to be included in future communications, e.g., to enable the H(e)NB to validate the WTRU in future connection attempts.
Another variant may be a corporate deployment scenario, in which the H(e)NB is able to validate a subset of the components of each WTRU, such as corporate software, which may not be exposed to the MNO (SeGW and PVE). The PVE may perform network-side validation of the underlying platform components of the WTRU.
In the case of a rogue device connected to an H(e)NB, the H(e)NB may include the integrity tree of the UE for reporting purposes. A PVE on the corporate network, or a validating network entity, may be able to verify the received data and can then instruct the H(e)NB to block access for the device or to perform remedial steps such as replacing components or software, performing antivirus updates, etc. This is shown in fig. 35.
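A sketch of how a validator (e.g., a PVE) might search a reported integrity tree for the failing components: because a matching subtree hash vouches for every leaf below it, the search only descends into mismatched branches. The tuple-based tree representation and the path naming are assumptions for illustration, not the data format used here.

```python
# Minimal sketch (assumed node structure): walk a reported Merkle-style
# integrity tree against a reference tree, descending only into branches
# whose hash values differ, to locate the components that fail integrity.

def find_failed_leaves(reported, reference, path=""):
    """Each node is ('hash', [children]); a leaf has an empty child list.
    Returns the paths of leaves whose measurement differs from the reference."""
    r_hash, r_children = reported
    ref_hash, ref_children = reference
    if r_hash == ref_hash:
        return []                          # whole subtree matches: prune search
    if not r_children:                     # mismatching leaf = failed component
        return [path or "/"]
    failed = []
    for i, (rc, refc) in enumerate(zip(r_children, ref_children)):
        failed += find_failed_leaves(rc, refc, f"{path}/{i}")
    return failed

good = ("L0", [])
bad = ("L1-bad", [])
reference = ("R", [("A", [("L0", []), ("L1", [])]), ("B", [("L2", [])])])
reported  = ("R'", [("A'", [good, bad]), ("B", [("L2", [])])])
print(find_failed_leaves(reported, reference))   # → ['/0/1']
```

The pruning is the point of the tree organization: subtrees whose root hashes match are skipped entirely, so a single failed component is located in a number of comparisons roughly logarithmic in the number of leaves.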
Since the M2M architecture is somewhat similar to the H(e)NB architecture, split validation may provide some benefits for the M2M case. In a typical M2M deployment, a larger number of smaller devices may be connected to a single gateway that provides access to the network. Also, in the M2M scenario, multiple different devices may be connected via a single gateway, and thus the gateway may not be able to validate all devices (or all device types). Performing split validation may allow the network to offload certain workloads to the M2M gateway.
In a further variant, the M2M GW may group connected devices based on their type, device class, device characteristics, or connection profile (protocol, connecting network, bandwidth) and then provide group credentials for the device validation trees. This is shown in fig. 36.
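The grouping step might look like the following sketch; the device records, field names, and grouping keys are hypothetical, chosen only to illustrate grouping by type and class before a group credential is issued per group.

```python
# Illustrative sketch: an M2M GW groups its connected devices by
# (type, class) so that one group credential can cover each group's
# validation subtree. Field names are assumptions, not from this text.

from collections import defaultdict

def group_devices(devices, keys=("dev_type", "dev_class")):
    groups = defaultdict(list)
    for d in devices:
        groups[tuple(d[k] for k in keys)].append(d["id"])
    return dict(groups)

devices = [
    {"id": "meter-1", "dev_type": "meter", "dev_class": "A"},
    {"id": "meter-2", "dev_type": "meter", "dev_class": "A"},
    {"id": "sensor-1", "dev_type": "sensor", "dev_class": "B"},
]
print(group_devices(devices))
# {('meter', 'A'): ['meter-1', 'meter-2'], ('sensor', 'B'): ['sensor-1']}
```

In practice the key could equally be a connection profile (protocol, connecting network, bandwidth), as the text notes.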
Another variation is an end-to-end approach in which multiple M2M gateways (GWs) form a cluster structure, which may allow them to communicate over better links (e.g., more bandwidth, less latency, etc.). Each M2M GW has its own backhaul link to the network. However, local communication may be cheaper (e.g., via WiFi, Bluetooth, ZigBee, MBUS, etc.) than using the backhaul link (e.g., 3G), meaning that offloading traffic to the local network provides certain benefits. The local network between M2M GWs is called a local area exchange network (LAEN). All M2M GWs on the LAEN can authenticate each other, which allows an M2M GW to trust messages from other M2M GWs on the LAEN. Communications on the LAEN may be protected for integrity and authenticity. Where confidentiality is required, a fully encrypted LAEN may be established.
If an M2M GW is unable to validate a subtree (e.g., no TTP is found to validate the subtree, or the device is unknown to the M2M GW), the M2M GW advertises the subtree on the LAEN (step 1 in fig. 37) with a request message to obtain the appropriate credentials. If another M2M GW is able to validate the subtree (either itself or via a TTP that may be unreachable by or unknown to the first M2M GW), that M2M GW returns a certificate to the requesting GW (step 2 in fig. 37). The requesting M2M GW may then integrate the certificate into its own validation tree. The M2M GW can also store the certificate (or a reference value required to generate the certificate, or the address of the TTP) for future validation of the connected device at the device level.
The exchange of subtrees may be performed using the Tree Hash EXchange (THEX) format, which allows Merkle hash trees to be exchanged. The exchange format consists of a serialized representation of the hash tree. It uses the Direct Internet Message Encapsulation (DIME) format (text or binary), which supports multiple payloads. The Merkle hash tree serialization format includes two different payloads: the first is XML-encoded metadata about the hash tree, and the second is a binary serialization of the tree itself.
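A loose sketch of this two-payload serialization follows. The real THEX specification uses DIME framing and the Tiger hash; this simplified stand-in uses SHA-256, a minimal XML metadata string, and a breadth-first binary dump, so it illustrates the structure rather than the exact wire format.

```python
# Simplified THEX-style serialization (assumptions: SHA-256 instead of
# Tiger, no DIME framing). Two payloads are produced: XML metadata
# describing the tree, and a binary breadth-first dump of node hashes.

import hashlib

def build_tree(leaf_data):
    """Return the Merkle tree as a list of levels, root level first."""
    level = [hashlib.sha256(b"\x00" + d).digest() for d in leaf_data]
    levels = [level]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(hashlib.sha256(b"\x01" + level[i] + level[i + 1]).digest())
            else:
                nxt.append(level[i])       # odd node promoted unchanged, as in THEX
        level = nxt
        levels.insert(0, level)
    return levels

def serialize(levels):
    """First payload: XML metadata. Second payload: breadth-first hashes."""
    meta = f'<hashtree digest="sha-256" depth="{len(levels)}" leaves="{len(levels[-1])}"/>'
    binary = b"".join(h for lvl in levels for h in lvl)
    return meta, binary

levels = build_tree([b"component-A", b"component-B", b"component-C"])
meta, binary = serialize(levels)
```

Separating metadata from the raw hash dump lets a receiver allocate and index the tree before parsing any hash bytes, which is the design rationale behind THEX's two payloads.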
To locate network entities on the LAEN that can validate a given subtree (e.g., other M2M GWs), a Distributed Hash Table (DHT) structure may be used to store the network addresses of nodes on the LAEN that can validate a particular device type or class. The DHT concept, known from P2P networks, allows fast discovery of the nodes relevant for validation. Alternatively, a hash table may be stored at each node in the LAEN and updated via inter-node LAEN multicast messages, whereby each node in the LAEN can know where a particular subtree can be validated.
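A toy sketch of the DHT idea: hashing a device class to a key on a ring of gateway IDs gives every LAEN node the same deterministic answer for which peer is responsible for validating that class. The ring size, key derivation, and class/gateway names are illustrative assumptions.

```python
# Hypothetical DHT-style lookup over the LAEN: each M2M GW occupies a
# position on a hash ring; a device class hashes to a key, and the first
# GW at or past that key is responsible for validating that class.

import bisect
import hashlib

def ring_key(s, ring=2**16):
    """Map a string to a position on the ring (parameters are illustrative)."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:2], "big") % ring

class LaenDht:
    def __init__(self, gateways):
        self.ring = sorted((ring_key("gw:" + g), g) for g in gateways)

    def responsible_gw(self, device_class):
        k = ring_key("class:" + device_class)
        ids = [i for i, _ in self.ring]
        idx = bisect.bisect_left(ids, k) % len(self.ring)  # wrap around the ring
        return self.ring[idx][1]

dht = LaenDht(["GW-1", "GW-2", "GW-3"])
gw = dht.responsible_gw("smart-meter")   # deterministic: same class -> same GW
```

Because every node computes the same mapping, no central directory is needed; this is the property that makes DHT-based discovery of validation-capable peers fast in P2P settings.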
The concepts and methods developed above for split validation and other structured validation techniques such as sub-tree validation may be used not only for system validation, but also for other security-related functions.
For example, in a split authentication model for a network containing a network entity NE, a gateway G, and a plurality of devices D, it is envisaged that the gateway G, when it interacts with the NE such that the NE can authenticate G, may include authentication messages for the devices D to enable the NE to authenticate G and the set of devices D simultaneously. In one variant, G is able to authenticate a subset D' of the devices and includes a report on the authentication steps and results in the message it subsequently sends to convey the information the NE needs to authenticate G. As mentioned above, it may also be possible to partition the devices according to several different criteria, so that G authenticates the subset D' while the NE authenticates the remainder.
Similarly, other security functions such as key derivation and key management (including, for example, assignment, refresh, and revocation), authorization, and access control may also be accomplished in a similar manner.
The techniques developed for subtree validation may also be used for these other security functions in a similar manner.
The methods, algorithms, and apparatus may be applied in any technical field, including security/privacy/rights management, hybrid networks, and cooperative communications, and may be used in or for User Equipment (UE), wireless transmit/receive units (WTRUs), handsets, data cards, laptops/netbooks, gaming devices, infrastructure devices, base stations/Node Bs, femtocells, access points, BSCs/RNCs, gateways, application servers, systems, session layers, presentation layers, application layers, DSPs, software, hardware, and ASICs. The methods and algorithms also apply to applications, system information, platform security, overall system security, call admission, Home eNodeB (HeNB), HNB, femtocells, hardware or software implementations, charging management, user management, policy control, quality of service (QoS), security, trust, decoding, registration/AAA, and the like.
Although features and elements are described above in particular combinations, each feature or element can be used alone or in various combinations with or without other features and elements. The methods or flow charts provided herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable storage medium for execution by a general purpose computer or a processor. Examples of computer-readable storage media include read-only memory (ROM), random-access memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and Digital Versatile Disks (DVDs).
Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Field Programmable Gate Arrays (FPGAs) circuitry, any other type of Integrated Circuit (IC), and/or a state machine.
A processor in association with software may be used to implement a radio frequency transceiver for use in a Wireless Transmit Receive Unit (WTRU), a user equipment (UE), a terminal, a base station, a Mobility Management Entity (MME) or an Evolved Packet Core (EPC), or any host computer. The WTRU may be used in conjunction with modules implemented in hardware and/or software (including Software Defined Radio (SDR)), such as cameras, video camera modules, videophones, speakerphones, vibration devices, speakers, microphones, television transceivers, hands-free headsets, keyboards, Bluetooth modules, Frequency Modulated (FM) radio units, Near Field Communication (NFC) modules, Liquid Crystal Display (LCD) display units, Organic Light Emitting Diode (OLED) display units, digital music players, media players, video game player modules, internet browsers, and/or any Wireless Local Area Network (WLAN) or Ultra Wideband (UWB) modules, and other components.
Further work may continue in this direction and consider a specific architecture for platform validation, called tree-formed validation (TFV), with tree-formed verification data. One conceivable option is to efficiently organize the database of reference trees at the SCA and/or validator in a manner that exploits the modular construction of subtrees from known component substructures (e.g., dependent programs that are loaded in order, or components with uniformly associated security policies). Architectures and methods for subtree discovery, for expressing dependencies between validated platform components, and for managing (updating, revising) platforms based on the results of TFV are subjects of ongoing research.
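As a concrete illustration of tree-formed verification data, the following sketch folds component measurements up a binary tree with a PCR-style extend operation V' = H(V || m), so the root plays the role of the value held in a protected register. SHA-256 and the odd-leaf convention are assumptions for illustration; in a real secure environment the extend operations would be performed inside protected hardware.

```python
# Minimal sketch of tree-formed verification data: component measurements
# sit at the leaves and are folded pairwise toward the root with a
# PCR-style extend operation. SHA-256 stands in for the secure
# environment's hash; register semantics are simplified.

import hashlib

def extend(v, m):
    """PCR-style extend: the register value v absorbs measurement m."""
    return hashlib.sha256(v + m).digest()

def tree_root(measurements):
    """Fold leaf measurements pairwise toward the root of a binary tree.
    On a real device the root would reside in a protected register."""
    level = list(measurements)
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate last leaf
            level.append(level[-1])        # (an illustrative convention)
        level = [extend(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Measurements of four hypothetical components:
m = [hashlib.sha256(c).digest() for c in (b"boot", b"os", b"driver", b"app")]
root = tree_root(m)

# Tampering with any single component changes the root value:
tampered = [hashlib.sha256(b"evil").digest()] + m[1:]
assert tree_root(tampered) != root
```

The interior values computed along the way are exactly what a measurement log would record as internal nodes, which is what lets a validator later localize a failure by descending only into mismatched branches.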
The systems, methods, and apparatus described herein may be implemented in a communication system, such as the communication system described below and shown in fig. 38, 39, and 40.
Fig. 38 is a diagram of an exemplary communication system 100 in which one or more disclosed embodiments may be implemented. The communication system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to a plurality of wireless users. Communication system 100 may enable multiple wireless users to access such content by sharing system resources, including wireless bandwidth. For example, communication system 100 may use one or more channel access methods, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal FDMA (OFDMA), Single Carrier FDMA (SC-FDMA), and the like.
As shown in fig. 38, the communication system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a Radio Access Network (RAN) 104, a core network 106, a Public Switched Telephone Network (PSTN) 108, the internet 110, and other networks 112, although it is understood that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. The WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include User Equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a Personal Digital Assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like, for example.
Communication system 100 may also include base station 114a and base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106, the internet 110, and/or the network 112. By way of example, the base stations 114a, 114b may be Base Transceiver Stations (BTSs), Node Bs, evolved Node Bs (eNodeBs), Home Node Bs, Home eNodeBs, site controllers, Access Points (APs), wireless routers, and so forth. Although the base stations 114a, 114b are each depicted as a separate element, it should be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
The base station 114a may be part of the RAN 104, and the RAN 104 may also include other base stations and/or network elements (not shown), such as Base Station Controllers (BSCs), Radio Network Controllers (RNCs), relay nodes, and so forth. Base station 114a and/or base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic area, which may be referred to as a cell (not shown). A cell may also be divided into cell sectors. For example, the cell associated with base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, one for each sector of the cell. In one embodiment, the base station 114a may use multiple-input multiple-output (MIMO) technology, and thus, multiple transceivers may be used for each sector of the cell.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which air interface 116 may be any suitable wireless communication link (e.g., Radio Frequency (RF), microwave, Infrared (IR), Ultraviolet (UV), visible light, etc.). Air interface 116 may be established using any suitable Radio Access Technology (RAT).
More specifically, as described above, communication system 100 may be a multiple access system and may use one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may use a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may use Wideband CDMA (WCDMA) to establish the air interface 116. WCDMA may include communication protocols such as High Speed Packet Access (HSPA) and/or evolved HSPA (HSPA+). HSPA may include High Speed Downlink Packet Access (HSDPA) and/or High Speed Uplink Packet Access (HSUPA).
In one embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may use a radio technology such as evolved UMTS terrestrial radio access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-advanced (LTE-a).
In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may use radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and so forth.
The base station 114b in fig. 38 may be, for example, a wireless router, a Home Node B, a Home eNodeB, or an access point, and may use any suitable RAT to facilitate wireless connectivity in a local area, such as a business, residence, vehicle, campus, etc. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a Wireless Local Area Network (WLAN). In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a Wireless Personal Area Network (WPAN). In another embodiment, the base station 114b and the WTRUs 102c, 102d may use a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown in fig. 38, the base station 114b may have a direct connection to the internet 110. Thus, the base station 114b may not be required to access the internet 110 via the core network 106.
The RAN 104 may communicate with a core network 106, which core network 106 may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102 d. For example, the core network 106 may provide call control, billing services, mobile location-based services, prepaid calling, internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in fig. 38, it should be understood that the RAN 104 and/or the core network 106 may communicate directly or indirectly with other RANs that employ the same RAT as the RAN 104 or a different RAT. For example, in addition to connecting to the RAN 104 that is using E-UTRA radio technology, the core network 106 may also communicate with another RAN (not shown) that uses GSM radio technology.
The core network 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to the PSTN 108, the internet 110, and/or other networks 112. The PSTN 108 may include a circuit-switched telephone network that provides Plain Old Telephone Service (POTS). The internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP) in the TCP/IP internet protocol suite. The network 112 may include wired or wireless communication networks owned and/or operated by other service providers. For example, the network 112 may include a connection to another core network in one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communication system 100 may include multi-mode capabilities, i.e., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102c shown in fig. 38 may be configured to communicate with a base station 114a, which may use a cellular-based radio technology, and a base station 114b, which may use an IEEE 802 radio technology.
Figure 39 is a system diagram of an exemplary WTRU 102. As shown in fig. 39, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a Global Positioning System (GPS) chipset 136, and other peripherals 138. It should be understood that the WTRU 102 may include any subcombination of the foregoing elements while remaining consistent with an embodiment.
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of Integrated Circuit (IC), a state machine, or the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functions that enable the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to a transceiver 120, which transceiver 120 may be coupled to a transmit/receive element 122. Although fig. 39 shows processor 118 and transceiver 120 as separate components, it should be understood that processor 118 and transceiver 120 may be integrated together in an electronic package or chip.
Transmit/receive element 122 may be configured to transmit signals to a base station (e.g., base station 114 a) or receive signals from a base station (e.g., base station 114 a) over air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In one embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive, for example, IR, UV, or visible light signals. In another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and optical signals. It should be appreciated that transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
Further, although transmit/receive element 122 is shown as a single element in fig. 39, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may use MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
Transceiver 120 may be configured to modulate signals to be transmitted by transmit/receive element 122 and demodulate signals received by transmit/receive element 122. As described above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers that enable the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11.
The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a Liquid Crystal Display (LCD) display unit or an Organic Light Emitting Diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. Further, the processor 118 may access information from, and store data in, any type of suitable memory, such as non-removable memory 130 and/or removable memory 132. The non-removable memory 130 may include Random Access Memory (RAM), Read Only Memory (ROM), a hard disk, or any other type of memory device. The removable memory 132 may include a Subscriber Identity Module (SIM) card, a memory stick, a Secure Digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, a memory that is not physically located on the WTRU 102, such as on a server or home computer (not shown).
The processor 118 may receive power from the power supply 134 and may be configured to distribute and/or control power to other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel cadmium (NiCd), nickel zinc (NiZn), nickel metal hydride (NiMH), lithium ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 118 may also be coupled to a GPS chipset 136, which GPS chipset 136 may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to or in lieu of the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of signals received from two or more neighboring base stations. It should be appreciated that the WTRU 102 may obtain location information by any suitable location determination method while remaining consistent with an embodiment.
The processor 118 may be further coupled to other peripheral devices 138, which peripheral devices 138 may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripheral devices 138 may include an accelerometer, an electronic compass, a satellite transceiver, a digital camera (for photos or video), a Universal Serial Bus (USB) port, a microphone, a television transceiver, a hands-free headset, a Bluetooth module, a Frequency Modulation (FM) radio unit, a digital music player, a media player, a video game player module, an internet browser, and so forth.
Claims (20)
1. In a Wireless Transmit Receive Unit (WTRU) including one or more components and having a secure environment including a plurality of secure registers, a method for generating verification data usable for validation of the WTRU, the method comprising:
for each of a plurality of components of the WTRU, obtaining a value representing a measurement of the component of the WTRU;
generating a Measurement Log (ML) and storing the ML on the WTRU, wherein the ML contains records of the component measurement values and other component specific data;
generating verification data from the component measurements for each component and storing the verification data in one or more of the secure registers within the secure environment; and
organizing said verification data and said ML into a tree structure, wherein said secure register containing said verification data defines a root of said tree structure, said ML defines internal nodes of said tree structure, and said measurements contained in said ML define leaves of said tree structure; and
wherein the tree structure is formed using a secure extension operation of the secure environment.
2. The method of claim 1, wherein the tree structure is a binary tree structure.
3. The method of claim 1, wherein the secure environment is a trusted platform module.
4. The method of claim 1, wherein the measurement log is a stored measurement log.
5. The method of claim 1, wherein the tree structure is further formed using register shift operations.
6. In a Wireless Transmit Receive Unit (WTRU) including one or more components and having a secure environment, a method for generating verification data using a plurality of secure registers within the secure environment, the method comprising:
obtaining a value representing a measurement of a component of the WTRU;
generating verification data from said measurements and storing the verification data in one of said registers within said secure environment;
storing the measurements to leaf nodes in a tree structure; and
performing one or more expansion operations within the secure environment to expand the values stored in the leaf nodes to a root node of the tree structure, wherein the root node includes data in the secure register in which the generated verification data is stored.
7. The method of claim 6, wherein the tree structure comprises at least one internal node on a path from the leaf nodes to the root node, wherein the at least one internal node forms a root of a subtree of the tree structure, and wherein the expanding values stored in the leaf nodes to the root node of the tree structure further comprises:
performing one or more expansion operations to expand the value stored at the leaf node to the at least one internal node, wherein the at least one internal node comprises another secure register within the secure environment; and
performing one or more other expansion operations to expand the value from the at least one internal node to the root node of the tree structure.
8. The method of claim 6, wherein the tree structure is a binary tree structure.
9. The method of claim 6, wherein the secure environment is a trusted platform module.
10. The method of claim 6, wherein the step of performing one or more other expansion operations to expand the value from the at least one internal node to the root node of the tree structure is based on a distance of the at least one internal node to the root node.
11. A method for validating tree-shaped verification data generated by a wireless transmit/receive unit (WTRU), wherein the tree-shaped verification data includes verification data elements, Measurement Logs (MLs) and component measurements organized into a tree structure, wherein the verification data elements define a root node of the tree structure, the MLs define internal nodes of the tree structure, and the component measurements define leaf nodes of the tree structure, the method comprising:
receiving tree-shaped verification data organized in the tree structure;
traversing the tree structure starting from a verification data element at a root of the received tree-shaped verification data;
as part of traversing the tree structure, comparing values at a branch node of the received tree structure and at a child node of the branch node with values at the same node locations in a reference tree; and
determining whether to validate the WTRU or a single component in the WTRU based on the comparison of the node values.
12. The method of claim 11, further comprising:
recalculating a branch node value by applying an extension operation to a child node value of a branch node or a child node of the branch node if the node value does not match a corresponding node value of the reference tree; and
comparing the recalculated node values with corresponding node values of the reference tree.
13. The method of claim 11, wherein the reference tree is stored locally on a validator.
14. The method of claim 11, wherein the steps are performed by a gateway device associated with the WTRU.
15. The method of claim 11, wherein the steps are performed by a trusted third party associated with the WTRU.
16. The method of claim 15, wherein the tree verification data is received from a gateway device associated with the WTRU.
17. A method for validating a node value of a Measurement Log (ML) generated by a wireless transmit/receive unit (WTRU), wherein the value of the ML is stored as a node of a tree structure comprising a root node, an interior node, and a leaf node, the method comprising:
receiving an attestation package indicating a node value to be certified by a sub-tree certificate authority (SCA);
identifying the node value as a node value verifiable by the SCA;
generating a manifest associated with the node value, wherein the manifest includes confirmation information associated with the node value;
generating a certificate for the node value, the certificate being configured to bind the confirmation information to a secure environment of the WTRU; and
issuing the certificate with the manifest, wherein the certificate and manifest are provided to the secure environment of the WTRU, which stores the certificate in the ML of the WTRU.
18. The method of claim 17, wherein the certificate is used to replace a node value of the ML of the WTRU.
19. The method of claim 17, wherein the attestation package includes a signature, the node value, a certificate issued by a trusted certificate authority, and a public attestation identity key.
20. The method of claim 17, wherein the SCA is a gateway device associated with the WTRU.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US61/311,106 | 2010-03-05 | ||
US61/314,395 | 2010-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
HK1183182A true HK1183182A (en) | 2013-12-13 |