
HK1166911A - Platform validation and management of wireless devices - Google Patents


Info

Publication number
HK1166911A
Authority
HK
Hong Kong
Prior art keywords
validation
pvm
tre
component
rim
Prior art date
Application number
HK12107351.4A
Other languages
Chinese (zh)
Inventor
A. U. Schmidt
A. Leicher
I. Cha
Y. C. Shah
S. B. Pattar
D. F. Howry
D. G. Greiner
L. L. Case
M. V. Meyerstein
L. J. Guccione
Original Assignee
InterDigital Patent Holdings, Inc.
Application filed by InterDigital Patent Holdings, Inc.
Publication of HK1166911A


Description

Platform validation and management of wireless devices
Cross Reference to Related Applications
The present application claims the benefit of U.S. provisional application No. 61/158,242, filed March 6, 2009, U.S. provisional application No. 61/173,457, filed April 28, 2009, U.S. provisional application No. 61/222,067, filed June 30, 2009, and U.S. provisional application No. 61/235,793, filed August 21, 2009, the entire contents of which are incorporated herein by reference as if fully set forth herein. This application is related to concurrently filed U.S. patent application No. 12/718,572, entitled "Method and Apparatus for H(e)NB Integrity Verification and Validation," which is hereby incorporated by reference in its entirety.
Technical Field
The present application relates to communications.
Background
Existing and standardized mobile communication network technologies provide neither a method for authenticating devices and validating their integrity to the network, nor a method for managing and provisioning such devices. Likewise, a device that needs to connect to a network cannot confirm whether the network to which it connects is actually a valid or trusted provider network.
Disclosure of Invention
Methods, components, and apparatus for implementing Platform Validation and Management (PVM) are disclosed. PVM provides the functionality and operation of a platform validation entity to remotely manage devices through a device management system, such as a home node B management system. Example PVM operation places a device in a secure target state before allowing it to connect to and access the core network.
Drawings
A more detailed understanding can be obtained in the following description, by way of example, in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates an example block diagram that represents domain separation for a trusted subsystem;
FIG. 2 illustrates an example block diagram that represents mediation of trust between platforms through organizational and technical approaches;
FIG. 3 illustrates an example flow diagram of semi-autonomous validation with an enhanced home node B (H(e)NB);
FIG. 4 illustrates an example flow diagram of a four-step secure boot method;
FIG. 5A illustrates a block diagram of an example set of entities and their relationships and interfaces for Platform Validation and Management (PVM);
FIG. 5B illustrates another block diagram of an example set of entities and relationships between them and an interface for PVM;
FIGS. 6A, 6B and 6C illustrate signaling diagrams of an example validation method using a platform validation entity;
FIG. 7 is an example block diagram illustrating an H(e)NB communication scenario;
FIG. 8 is an example block diagram illustrating a "weak" trusted environment in an H(e)NB;
FIG. 9A illustrates an example block diagram and method of indirect device connection;
FIG. 9B illustrates an example block diagram and method of direct device connection;
FIG. 10 illustrates an example flow diagram for processing various certificates;
FIG. 11A illustrates an example validation method implementing device remediation by a fallback code base after a failed integrity validation;
FIG. 11B illustrates an example flow diagram in accordance with the method of FIG. 11A;
FIG. 12 shows an example format of a reference integrity metric mask header;
FIG. 13 illustrates an example flow diagram of validation using virtual platform configuration register (PCR) values;
FIG. 14 illustrates an example diagram of module hierarchies when a component is loaded during full semi-autonomous validation; and
FIG. 15 illustrates an example functional block diagram of a wireless transmit/receive unit and a base station, respectively, for providing and implementing PVM.
Detailed Description
When referred to hereafter, the term "wireless transmit/receive unit (WTRU)" includes but is not limited to a User Equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a Personal Digital Assistant (PDA), a computer, or any other type of device capable of operating in a wireless environment. When referred to hereafter, the term "base station" includes but is not limited to a node B, a site controller, an Access Point (AP), a gateway, Customer Premise Equipment (CPE), or any other type of interfacing device capable of operating in a wireless or wired environment. When referred to hereafter, the term "HMS" includes, but is not limited to, a home node B management system (HMS), a home enhanced node B management system (HeMS) (the two being collectively referred to as an H(e)MS), a Device Management System (DMS), a Configuration Server (CS), an auto-configuration server (ACS), or any other type of system that manages the configuration or functionality of a "base station". The terms "WTRU" and "base station" are not mutually exclusive. For example, the WTRU may be an enhanced home node B (H(e)NB). When referred to hereafter, the term "information-theoretic security" includes, but is not limited to, perfect security, unconditional security, and near information-theoretic security. When referred to hereafter, the terms "trust", "trusted", and "trustworthy", as well as variations thereof, indicate a quantifiable and observable way of assessing whether a unit will function in a particular manner.
Methods and apparatus for implementing Platform Validation and Management (PVM) are disclosed. The PVM enables the functionality and operation of a Platform Validation Entity (PVE) to remotely manage the device through a device management system, such as a home node B management system (HMS). The PVM operation places the device in a secure target state before allowing connection and access to the Core Network (CN).
PVM operation is self-contained and simultaneously allows multiple variants and multiple implementations in different technical environments. Although exemplary protocols such as Internet Key Exchange (IKE) may be mentioned where a specific embodiment needs to be described, this should not be construed as restricting or limiting the overall scope of the invention. Although the H(e)NB is used as an example in some places, PVM is not limited to the H(e)NB. Straightforward technical modifications can extend PVM to machine-to-machine (M2M) and other wireless and/or network devices without departing from its spirit.
The description is top-down, as the structure from the beginning assumes the availability of most of the core concepts of trusted computing technology relating to, but not limited to, the technical standards specified by the Trusted Computing Group (TCG). For example, the embodiments described herein build the foundation for all operations and methods of PVMs based on secure boots performed by the trusted environment (TrE) and Reference Integrity Metrics (RIMs). This does not preclude the implementation of further changes based on lower confidence techniques. Other embodiments may also not use RIMs in multiple PVM steps.
Generally, in technical systems, PVM embodies statements of trust within a comprehensive definition of trust integrated into the technical system, with the emphasis on how trust is established in the system. PVM uses distribution and separation of tasks as a core paradigm. This enables scalable trust as required by evolving communication networks and the Internet, where nodes are more diverse and connection times are shorter.
The following consistent, operational notion of trust applies to relationships and interactions between technical systems (e.g., PVM) and between technical systems and humans: an entity is trusted if it predictably and observably operates in the desired manner for its intended purpose. This operational notion includes three salient traits, namely predictability, observability, and ambience.
Predictability denotes a priori knowledge of a system that can be used to a) assess the risk of interacting with the system, and b) derive the system's condition during the interaction by reasoning over observations. Observability denotes the means by which system conditions can be obtained during the interaction. It is closely related to predictability, since observations, combined with predictions, allow further conclusions about the state of the system and the next steps of action. Ambience denotes information describing the environment in which the system interacts and in which predictions and observations can be made. Together, the three allow an estimation of trustworthiness or, conversely, of the threat posed by an interacting entity.
A conceptual divide between trust and enforcement arises from the lack of methods to establish operational trust. The problem becomes more pronounced as interconnected systems diversify and are no longer limited to client-server relationships. In this situation, neither enforcement nor the operational aspects of trust can be fully realized with state-of-the-art (security) technology. Systems lack a) universal technical means to establish operational trust, b) an infrastructure for enforcement, and c) methods to convey information about trustworthiness and the appropriate security level to external entities. Only with such basic building blocks can a dynamic balance between trust and enforcement, i.e., adjustable trust reflecting real needs, be achieved in a system.
The PVM is also built from the building blocks described above. The building blocks of a trusted system construct its trust boundary and sometimes provide a way to extend this boundary and communicate this trust to external entities by making its performance and operation somewhat predictable and observable. The building blocks may include (hardware) security anchors, roots of trust (RoT), trusted (sub-) systems, and all relationships, secure storage and paths, authorizations, authentications, and secure boot processes and proofs. By combining the above approaches, a system and various components that combine trusted and mandatory features can be built in a number of ways, enabling technologies that can be tuned between these two extremes. The basic functional building blocks are described below.
Hardware security anchors are important to protect system performance. A hardware security anchor is the part of the system that is used to protect it from unauthorized access by hardware means known to be sufficiently secure for the intended purpose, effectively reducing the threat of attack on it. In particular, it maintains the RoT for its safe operation. The RoT is an abstract system component that enables a) securing of internal system operations, and b) informing external entities of the properties and/or identity of the system (individually or as a group member, e.g., make and model) in a secure and authenticated manner.
A system may contain multiple RoT for different purposes. For example, the RoT may be an asymmetric key pair incorporating its trusted third party digital certificate. Meanwhile, a symmetric secret in a Subscriber Identity Module (SIM) of a cellular network can also be considered as RoT in a closed trusted system implemented with a SIM card.
Second, the functional building blocks in a system that are considered trusted (i.e., that operate in a well-defined manner for the intended purpose) constitute the Trusted Computing Base (TCB) of the system. The trustworthy operation of the components contained in the TCB cannot be assessed in the field, i.e., during deployment and operation of the system, but only by out-of-band processes such as compliance and conformance testing and certification. Such certification is typically performed by an independent evaluator, for example on behalf of the manufacturer of a specific TCB technology component or of the TCB as a whole, based on established security evaluation criteria. For such certification to be useful, each component of the TCB should carry information identifying it as a certified component.
A system with a security anchor, RoT and TCB is called a Trusted System (TS). This is a slight refinement of the common notion of a trusted platform, which refers to a "computing platform with trusted components, typically embedded hardware, which it uses to establish a base of trust for software processes". When one or more trusted systems are contained within a TS, they are referred to as trusted subsystems (TSS). An example is a virtualized execution environment on a personal computer platform that receives specific trust from the host's trusted platform module (TPM) hardware. Another example is a trusted engine provisioned together with its own TCB. Hereinafter, "TS" may be used interchangeably as an abbreviation for "TS or TSS" unless explicitly indicated otherwise. A TS may be embodied in various devices, as shown in FIG. 1.
Various capabilities, processes and structural components of a TS, collectively referred to as Trusted Resources (TRs), are described below. TRs can generally be classified as: 1) TRs belonging to the TCB; and 2) TRs outside the TCB. Examples of the latter are trusted parts of the operating system and trusted applications built on the TCB by using TCB capabilities. The assertion of the trust level of a TR within the TCB relies on the security defined by the TCB, while the trust level of other TRs can at most be derived from that of the TCB. For this, the TCB must provide specific internal TRs that allow the trust boundary (i.e., the totality of TS components that can be considered trusted in a given context) to be extended to TRs outside the TCB, for example validation or secure boot as described below. TRs within the TCB typically share the same hardware protection as the RoT, e.g., residing on the same tamper-resistant chip. TRs outside the TCB may be realized as logical units in software. Note that trust boundaries, especially those involving TRs outside the TCB, may be short-lived; they may exist for a specific purpose for a period of time and cease to exist thereafter.
A common model of a process for extending the TCB trust boundary is verification, which is itself a TR used to perform the verification process. The terms "verification process" and the corresponding TR "verification entity" or "verifier" are used to distinguish this from the validation performed on the TS by an external entity (the validator). Verification, as a process of bringing new components within a trust boundary, comes in at least two different forms. First, the verifier measures a new component at its initialization; that is, the component and its state and configuration are uniquely identified, and the measurement result is then stored. As an extension, the verifier may compare the measurement result with a reference value and decide whether to extend the trust boundary; that is, the verifier may make and enforce policy decisions. From an operational perspective, verification corresponds to predictability of the TS, because after the verification process completes, the TS can be assumed to be in a particular, defined state. Validation, on the other hand, makes this property observable and thereby trustworthy; it presupposes a reporting entity that conveys the verification results to another party. The intermediate step performed by the reporting entity is attestation. Attestation is a logical consequence of verification and a logical precondition of validation. Attestation is the process of vouching for the accuracy of the measured information, so that a relying party (the validator) can use this information to decide whether it trusts the remote TS. Verification, attestation, and validation are core concepts of operational trust that are tightly bound to the life cycle of the TS.
A TS is owned by an entity (a person or another technical system) that is authorized to access particular TRs (e.g., the RoT) within the trust boundary. Ownership may exist implicitly, by physical possession of the platform containing the TS, or explicitly, for example through owner authentication by specific credentials. In the context of the Trusted Computing Group (TCG) Trusted Platform Module (TPM) specifications, providing such authentication data is referred to as taking ownership. An owner interacting directly with a TS is called a local owner, while an owner interacting with a TS by other means, such as via a communication network, is called a remote owner. When a TS contains multiple TSSs, each TSS may have the same or a different owner.
FIG. 1 shows the separation of computational domains for several TSSs 110, 130, 150, and 170. The TSSs 110, 130, 150, and 170 include dedicated Mobile Trusted Modules (MTMs) 112, 132, 152, and 172, respectively, the hardware security anchors of the Mobile Phone Working Group (MPWG) specifications, which hold the mentioned RoTs, TRs (trusted resources 114, 134, 154 and 174) and trusted services 116, 136, 156 and 176. Ordinary software services and components 118, 138, 158, and 178 lie outside the trust boundaries 120, 140, 160, and 180, respectively. So-called trusted engines 122, 142, 162, and 182 (all located in a secure execution environment) rely on the RoTs to provide, among other things, separate and controllable communication between the different TSSs 110, 130, 150, and 170. A TSS may share TRs, and even the functions of its MTM, with other TSSs using inter-domain verification and authorization. The trusted engines and some MTMs may be implemented in software as long as there is at least one hardware-protected RoT from which the RoTs of the software-based MTMs are derived. Each TSS may be controlled by a local or remote stakeholder, its owner. Not all stakeholders' TSSs are present throughout the life cycle of the mobile device, and a dedicated process enables a (remote) stakeholder to initiate the creation of a new TSS and take ownership of it.
PVM relies in part on established trust. Between trust and enforcement, the main connecting concept is separation of tasks. Separation of tasks is usually considered with respect to enforcement tasks, but it has a natural connection to trust: a relying party will delegate enforcement to another system only if that system's operation is trusted. Establishing operational trust between TSs relies on a controlled exchange of information to achieve observability, and on predictability established beforehand; the latter can only be established outside the TS.
The example model represented in FIG. 2 demonstrates the roles of external entities that provide organizational guarantees to the TS 200, 202. The TS 200, 202 includes a normal application 260, 262 that is outside of the trust boundary 270, 272. Within the trust boundaries 270, 272, there is a TCB 216, 218, which in turn includes a RoT 208, 210 and a TR 212, 214. The trust boundaries 270, 272 may further include the trusted operating systems 230, 232 or portions thereof that need to be protected and the trusted applications 234, 236.
The security properties of the TSs 200, 202 rest on the hardware trust anchors 204, 206 and the RoTs 208, 210. These technical components cannot be assessed while the system is deployed and operating; they therefore need to undergo security evaluation during design and development. This evaluation is performed by an independent authority that issues security certificates to the manufacturers of the security-critical components once the evaluation succeeds.
In addition to the RoTs 208, 210 and trust anchors 204, 206, the security evaluation process may cover other TRs 212, 214 in the TCBs 216, 218 and involve different certification authorities 220, 222. To ensure the consistency of the evaluation process and the quality of the different certification authorities, these are in turn assessed and accredited by an accreditation authority 224, which may, for example, be a semi-governmental body or a privately owned entity recognized by the state. The accreditation authority 224 may also provide liaison information between the certification authorities 220, 222.
The certification authorities 220, 222, or technical entities notified by them, issue credentials 226, 228 to the TSs 200, 202 for use by the TRs 212, 214. Both the integrity and the origin of these credentials 226, 228 are verifiable. The most prominent examples are the endorsement key (EK) certificates of the primary RoT, issued by the manufacturer for the TPM, as well as platform certificates and certificates of other components. The credentials, and secrets derived from them, are then used, together with cryptographic methods, in interactions with external entities, in particular other TSs. Validation 240 of the TSs 200, 202 typically requires authentication and, in many cases, privacy protection. Likewise, trustworthy secrets and credentials derived from the TS credentials are important for the operating systems 230, 232 and trusted applications 234, 236 to establish security associations 242, 244, respectively, that is, channels that provide authentication, confidentiality, and integrity for communication. On top of the security associations 242, 244, applications within the extended trust boundary may establish secure communication channels with defined operational trust properties.
A mediation entity 250 facilitates establishing trust in the various interactions shown in FIG. 2. An example of a mediation entity 250 is a Privacy Certification Authority (PCA). The mediation entity 250 conveys a basic statement of trust about a TS to another TS or relying party. The mediation entity recognizes the TCBs 216, 218, or selected components (e.g., the trust anchors 204, 206), as trusted, certified components. For this, the mediation entity 250 needs to know the certificates issued by the certification authorities, validate such certificates when received from a TS, and sign assurance statements for the relying party. The mediation entity 250 may support subsequent security associations and secure communication in a manner similar to a Certification Authority (CA) in a Public Key Infrastructure (PKI).
The building blocks required by the PVM for establishing trust are now described.
Essentially, validation is the recording and control of TS state changes at a desired granularity. It is closely tied, from beginning to end, to the life cycle of the platform on which the TS resides. The actual verification methods are therefore mostly integrated with the boot process and the operational cycle of the platform, executed by one or more processors of the physical device (e.g., a WTRU).
One method of internally verifying a TS is authenticated boot, which uses capabilities of the TCB to assess the trustworthiness of loaded or started software or hardware components at TS initialization (e.g., when the WTRU powers up). Authenticated boot is accomplished by starting certain functions of the RoT and the TCB before other components of the TS are started. These components operate as a RoT for measurement (RTM). This means that components started or loaded later are measured, i.e., the component and its state and configuration are uniquely identified after start-up by generating a cryptographic digest value (e.g., a cryptographic hash) over, for example, the embedded code of a hardware component or the (binary) representation of a loaded program. Depending on the particular requirements, the measurement values may be stored in secure memory. The measurement values, together with the data (e.g., software name and version) needed to trace the system state back from them, form the Stored Measurement Log (SML) of the TS. On a PC platform, authenticated boot may include all components from the BIOS to the Operating System (OS) loader, as well as the OS itself.
In one example of authenticated boot, the system state is measured by a reporting process, where the TPM acts as a central authority to receive the measurements and uses the hash value to compute a unique representation of the state. For clarity, the TPM may receive 1) the hash value of an application or document, i.e., the application's measurement computed by an external (software) implementation, or 2) the TPM may compute the hash value, i.e., the computed measurement itself implemented using an internal hash algorithm. To this end, the TPM has a plurality of protected Platform Configuration Registers (PCRs). Starting with system initialization at power-up, for each loaded or started component, RTM is used to report its measurement to the TPM, e.g., hash value by BIOS, and securely stored in SML. At the same time, the active PCR is updated by the extension process, which means that the measured value is appended to the current PCR value, a digest value is built from this data and stored in the PCR. This establishes a transitive chain of trust that contains all the components that were started and loaded. If a single PCR stores only one value, only "foot-print" integrity validation data can be provided. This value enables the validator to validate the chain of trust only by recalculating the footprint in conjunction with the SML.
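As an illustration of the extend operation and the SML bookkeeping described above, the following minimal Python sketch models a single PCR and a stored measurement log; the class and function names are illustrative only and do not correspond to any TPM interface.

```python
import hashlib

class AuthenticatedBootLog:
    """Minimal model of authenticated boot: one PCR plus a stored measurement log (SML)."""

    def __init__(self):
        self.pcr = b"\x00" * 32   # platform configuration register, initially all zeros
        self.sml = []             # SML entries: (component name, measurement in hex)

    def measure_and_extend(self, name: str, component_image: bytes) -> bytes:
        # Measurement: cryptographic digest of the loaded/started component.
        measurement = hashlib.sha256(component_image).digest()
        # Extend: append the measurement to the current PCR value and digest the result.
        self.pcr = hashlib.sha256(self.pcr + measurement).digest()
        # Record the measurement together with identifying data in the SML.
        self.sml.append((name, measurement.hex()))
        return measurement


def recompute_footprint(sml) -> bytes:
    """Validator side: recompute the PCR 'footprint' from a reported SML."""
    pcr = b"\x00" * 32
    for _, measurement_hex in sml:
        pcr = hashlib.sha256(pcr + bytes.fromhex(measurement_hex)).digest()
    return pcr
```

A validator that receives the SML and the final PCR value can check their consistency by comparing recompute_footprint(sml) with the reported PCR value.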
Secure boot is an extension of authenticated boot. It is of particular importance for devices such as set-top boxes or mobile phones that require stand-alone and offline functionality. A common trait of devices capable of secure boot is that they need to operate in a set of trusted states at times when they cannot convey proof of their trustworthiness to the outside, for example before a network connection exists. In secure boot, the TS is equipped with a local verifier (verification entity) and a local enforcer supervising the boot process, which together establish a Policy Decision Point (PDP) and a Policy Enforcement Point (PEP) controlling the secure boot process. The local verifier compares the measurement values of newly loaded or started components with Trusted Reference Values (TRVs) that reside in the TCB or are protected by a TR in the TS (e.g., held in protected storage), and decides whether a component is loaded or started, or not. This ensures that the system boots into a defined, trusted state.
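The following Python sketch illustrates the local verifier (policy decision) and enforcer (policy enforcement) roles in secure boot, under the assumption of a simple table of TRVs held in protected storage; the table and function names are hypothetical.

```python
import hashlib
from typing import Callable, Dict

# Hypothetical table of trusted reference values (e.g., provisioned RIMs) held in
# protected storage of the TCB: component name -> expected SHA-256 digest (hex).
TRV_TABLE: Dict[str, str] = {}

def local_verifier(name: str, image: bytes) -> bool:
    """Policy decision point: does the measured component match its TRV?"""
    return TRV_TABLE.get(name) == hashlib.sha256(image).hexdigest()

def secure_boot_step(name: str, image: bytes,
                     start_component: Callable[[bytes], None]) -> bool:
    """Policy enforcement point: start the component only if the verifier approves."""
    if local_verifier(name, image):
        start_component(image)   # chain of trust is extended to this component
        return True
    return False                 # component is not loaded/started
```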
Trusted reference data is data against which validation data is compared with known-good values. The values constituting trusted reference data are called Trusted Reference Values (TRVs). The best-known example is the Reference Integrity Metric (RIM) defined in the MPWG specifications of the TCG. TRVs may be used a) by the platform itself during secure boot to ensure that only components whose measurement values agree with a TRV are started, or b) by a validator to compare validation data with known-good values in order to assess the state of the validated platform. The term RIM is used herein as a non-limiting example of trusted reference data.
Trusted reference data is itself made trustworthy by security assertions specific to it, which can be verified by the validator or an agent using the TRV in question. Such verifiable assertions may be realized, for example, by digital certificates issued by a Trusted Third Party (TTP), such as RIM certificates. The trust assertion for trusted reference data may also contain additional information, for example regarding an external evaluation of the component or platform (e.g., an evaluation assurance level (EAL) according to the Common Criteria).
The dual nature of TRVs must be taken into account. On the one hand, they are used for local verification in the secure boot process. For this, they are provided by an infrastructure that can also update measured components, for example by supplying the TS with a new TRV corresponding to updated software. On the other hand, for an external entity to validate the TS after secure boot, the received validation data (e.g., a so-called event structure) must be compared with stored TRVs and the associated TRV certificates verified. TRVs and their certificates are therefore important not only for verification but also for validation.
For validation, it is important to demonstrate the freshness of the information. This requires extending the verification process from boot time into the operation time of the TS, which is a technically difficult task in complex open systems.
The mentioned task separation also exists in the process of validating the TS. That is, based on the verification results, the trustworthiness of the system can be evaluated and corresponding policy decisions can be made during the validation process. In this process, the separation of tasks between the TS and the validator will result in three types of validation. The common basic concepts required for various types of validation will now be described first.
The validation process of a TS must be supported by a validation identity presented to the validator. The validation identity must come, directly or indirectly, from a RoT, namely the RoT for reporting (RTR). Validation cannot be performed without a mediator providing the validation identity. The provider of the validation identity must assert that the holder of the validation identity is a TS. Providing identities for validation is an extension of providing identities in identity management (IdM) systems. The provider must check credentials of the TS (including some or all of the TRs in the TCB) to assess whether the TS can be considered trustworthy for validation. Furthermore, the validation identity must be provided in a secure process, such as a security protocol over a dedicated secure channel. In the case of remote validation, the validation identity coincides with a global identity of the TS.
Using a single, permanent identity for validation has security and privacy implications. Validation is performed frequently, for various purposes, and toward arbitrary validators. Even if the validation identities used cannot each be readily correlated with a user identity, they generally allow the behavior of the TS to be tracked. For security reasons, using the same validation identity for a group of TSs, or for all TSs, is not a viable solution: such a group identity is a single point of attack/failure, i.e., if one TS in the group is compromised, all other TSs can no longer be validated. Another option is to use short-lived validation identities generated at a certain frequency, or generated by the RTR for each validation, e.g., one per boot cycle.
Autonomous validation is a process in which validation of the TS by an external validator is implicit, on the assumption that verification of the TS has been completed locally (i.e., confined within the device, in a manner not dependent on an external entity). It is assumed that successful verification has taken place before the TS allows further communication attempts with the outside or other operations. The verification process is thus assumed to be absolutely secure, since no direct proof of verification is provided to the outside world. The outside world assumes that, by the specification and manner of operation of the TS, the TCB will prevent a TS that failed verification from performing other externally visible tasks, such as attaching itself to a network or obtaining an authenticated connection to a remote entity. Autonomous validation places all enforcement tasks on the TS.
Autonomous validation applies a closed, immutable system model to the TS, essentially the trust model used for smart cards. The TS verifies itself using the TCB, with a binary "success" or "failure" outcome. Validation is then an indirect process by which the TS is permitted certain interactions with the outside, such as network attachment. A typical example is the release of an authentication secret, such as a key, by a smart card.
Security that relies only on the device has been breached in the past and is likely to be breached further, for example as mobile devices have become open computing platforms. With autonomous validation, essentially no information satisfying higher-level security requirements is conveyed; in particular, if part of the TS is compromised, no information about its state is available externally. The inability to flag compromised devices means that an exploit can spread unnoticed, and cause serious harm to other stakeholders (e.g., network operators), before it is brought under control. Depending on the failure policy, autonomous validation may be implemented to react to verification failures in certain ways, such as disabling certain functions or shutting the device down for reboot. This avoids network connections by compromised devices and appears advantageous. However, it also provides a vehicle for denial-of-service (DoS) attacks. Since the device must not connect to the network while in a compromised state, it has little opportunity to be returned to a secure state. Remote management is also difficult; in particular, because assets (software, secrets) may be sent to a compromised device during software download and installation, a loss of security may result. Thus, devices relying on autonomous validation tend to require out-of-band maintenance. For example, a failed update of TR software may leave the device in a state in which no network connection is available.
With autonomous validation, the freshness of the proof data cannot be guaranteed by itself. To satisfy such a security property, autonomous validation would have to be performed automatically on every change of the system state. Since autonomous validation is performed only at certain times during operation, such as network attachment, the TS state may change significantly during operation without the validator observing it. An attacker may exploit this gap, for example, to implant malware. Autonomous validation is therefore quite exposed to such timing attacks.
In remote validation, the validator evaluates the validity of the TS directly from the evidence it receives for validation. In this case verification is completely passive, and the complete SML has to be conveyed to the validator. The model for this case is authenticated boot with subsequent validation. All policy decisions rest with the validator.
The prior-art form of validation is remote validation, in particular TCG remote attestation. In remote attestation, the TCG trusted platform exhibits its verification data, the SML and the PCR values signed with an Attestation Identity Key (AIK), to an external validator. An AIK is a short-lived asymmetric key pair certified by a PCA that acts as a validation identity provider. The pseudonymity provided by remote attestation is not sufficient in all cases; the TCG additionally defines Direct Anonymous Attestation (DAA), based on zero-knowledge proofs.
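As a simplified illustration of remote attestation followed by remote validation, the sketch below shows a platform signing its PCR value and a validator nonce, and the validator recomputing the PCR footprint from the SML before checking the signature; an HMAC with a shared key stands in for the asymmetric AIK signature so that the example stays self-contained.

```python
import hashlib
import hmac

def quote(pcr_value: bytes, nonce: bytes, aik_key: bytes) -> bytes:
    """Platform side: sign the PCR value and the validator's nonce.
    (HMAC is used here only as a stand-in for the AIK signature.)"""
    return hmac.new(aik_key, pcr_value + nonce, hashlib.sha256).digest()

def remote_validate(sml, reported_pcr: bytes, signature: bytes,
                    nonce: bytes, aik_key: bytes) -> bool:
    """Validator side: rebuild the PCR footprint from the SML and verify the quote."""
    pcr = b"\x00" * 32
    for measurement in sml:                      # SML given as a list of raw digests
        pcr = hashlib.sha256(pcr + measurement).digest()
    if pcr != reported_pcr:
        return False                             # SML inconsistent with the signed PCR
    expected = hmac.new(aik_key, reported_pcr + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```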
Remote validation has further deficiencies; remote and autonomous validation are in fact the extreme cases among the many options that semi-autonomous validation encompasses. Remote validation, as embodied by remote attestation, poses operational problems of scale and complexity, since it places the full computational load of validation on (central) access points to the network or service. In particular, the cost of evaluating SMLs is significant for platforms such as personal computers that have a large number of software and hardware components in various versions and configurations. It also requires a huge database of TRVs, such as RIMs, which, together with the infrastructure, would allow interested parties to define required TS target configurations. For the same reason, remote validation is impractical for remote management of the TS (i.e., controlled and validated changes of configuration). Furthermore, remote validation would call for run-time verification, since otherwise only the state after boot can be shown to the validator; the SML becomes outdated by the time of validation. Run-time verification is therefore only meaningful if validation follows immediately, which would require very frequent remote validation. Finally, since the disclosed SML of a complex, open TS may be almost unique to that TS, remote validation compromises privacy regardless of whether a PCA is used. A related economic concern is possible discrimination through remote attestation, i.e., users of other programs being pushed to switch to the most recent software versions of the major vendors, or to forgo service connections, out of concern that only those versions may be entered into the TRV database (e.g., a RIM database). Some deficiencies may be mitigated by modifications of the remote attestation approach, such as semantic or property-based attestation, which aim to expose component properties rather than implementations.
Semi-autonomous validation is another process for assessing the validity of a TS, in which verification is performed locally within the device itself, without relying on an external entity, and policy decisions are made during verification. In this case, however, specific information, hereinafter called the "validation message", such as the verification results and required evidence, is conveyed to the validator, which can make its own decisions based on the content of the validation message from the TS. The signaling from the TS to the validator must be protected to provide authentication, integrity and, if desired, confidentiality. One model of semi-autonomous validation is secure boot followed by signaling of the event structure and indications of the TRVs to the validator. Semi-autonomous validation distributes verification and enforcement tasks between the TS and the validator. In particular, during secure boot the former makes decisions when components are loaded, while the latter, upon validation, enforces decisions about the interactions allowed to the TS based on the evidence of state provided.
Semi-autonomous validation may provide advantages over the other two options. It may be more efficient to transmit validation information in the form of indicators of the RIMs used in verification, as illustrated in the sketch below. Such an approach may also protect privacy, for example when an indicator represents a group of components with the same functionality and trustworthiness (e.g., versions). This is similar to semantic and attribute-based attestation, and semi-autonomous validation may also be combined with the improvements to remote validation described above. The validator's involvement in enforcement during validation also enables remote management of the TS.
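The following sketch shows, with assumed and simplified message fields, what a semi-autonomous validation message carrying TRV/RIM indicators instead of a full SML might look like, together with a validator-side decision; the field names and policy values are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ValidationMessage:
    """Illustrative semi-autonomous validation message from the TrE to the validator."""
    dev_id: str                                   # authenticated device identity
    trv_indicators: List[str]                     # e.g., identifiers of the RIM certificates used
    failed_components: List[str] = field(default_factory=list)  # components not loaded/started
    timestamp: float = 0.0                        # freshness information, if required

def validator_decision(allowed_indicators: Set[str], msg: ValidationMessage) -> str:
    """Validator-side enforcement: allow, quarantine for remediation, or reject."""
    if msg.failed_components:
        return "quarantine"       # e.g., restrict access to the management system
    if all(ind in allowed_indicators for ind in msg.trv_indicators):
        return "allow"
    return "reject"
```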
In terms of technical realization, "support for isolation and remediation of access requesters (ARs) that fail to obtain network access permission due to failed integrity verification" can be implemented using remediation. In principle, all up-to-date integrity-related information defined by the current authorization policy may be provided to the AR. Examples include OS patches, anti-virus (AV) updates, firmware upgrades, and similar software or firmware updates. Implementing such concrete remote-management concepts may require an infrastructure that can efficiently represent and transmit TRV information (e.g., RIM information), as described herein for PVM.
The role of RIM certificates in semi-autonomous validation deserves emphasis. RIM certificates are provided by a certification authority that has evaluated the corresponding TR directly or has delegated its evaluation. Certification methods and entities can be diverse and involve different levels of operational trustworthiness. This gives more flexibility to the semi-autonomous validator, which obtains more detailed information about the TS. As described herein, RIM certificates are used as an example of data that can support on-device validation of components. Although a SAV method based on RIM certificates is described herein, other forms of SAV may be used.
For resource-constrained systems, semi-autonomous validation is also the only workable validation option, since such systems a) lack the processing power required for autonomous validation, and b) lack the storage and/or communication capacity for the numerous reports required for remote validation. For example, in a wireless sensor network the sensor nodes are subject to both limitations. In such cases, one approach is to send a memory-probing code to the sensor, which computes a digest of the statically stored content (code and parameters) to produce a predictable result that is sent back to the base station for validation. Obviously, an attacker could circumvent this "verification" by keeping a copy of the original memory content and using it to produce the correct result. As long as this is done on the sensor itself, it inevitably produces a delay, which can be amplified by randomized, self-modifying probe paths and obfuscation methods. Thus, if the delay of the sensor's response exceeds a predetermined threshold, the sensor fails validation.
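The sketch below illustrates the base-station side of such a probe, under the assumption that the base station knows the sensor's static memory content and applies a response-time threshold; the callback and parameter names are hypothetical.

```python
import hashlib
import time
from typing import Callable

def validate_sensor(query_sensor: Callable[[bytes], bytes], static_content: bytes,
                    nonce: bytes, max_delay_s: float) -> bool:
    """Send a nonce, time the sensor's digest response, and compare it with the
    digest expected over the known static content; an overly slow response is
    treated as evidence of tampering."""
    expected = hashlib.sha256(nonce + static_content).digest()
    start = time.monotonic()
    reported = query_sensor(nonce)        # callback standing in for the radio exchange
    elapsed = time.monotonic() - start
    return reported == expected and elapsed <= max_delay_s
```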
In semi-autonomous validation, the validity of the H(e)NB is evaluated internally during secure boot, without relying on external entities, and policy decisions are made during this evaluation, in particular as to which components are loaded/started and which are not, depending on the measured integrity of the individual components. In semi-autonomous validation, the evaluation results and the required evidence are sent to a Platform Validation Entity (PVE), which makes its own decision based on the content of the validation message. The signaling sent to the PVE should be protected to provide authentication, integrity and, if needed, freshness and confidentiality. Semi-autonomous validation distributes integrity verification and enforcement tasks between the H(e)NB and external validation entities (e.g., the PVE). In particular, during secure boot the H(e)NB makes decisions locally at component load/start time, and the PVE can, upon validation, enforce decisions on the interactions allowed to the H(e)NB based on the provided state evidence. Depending on the outcome of the PVE's decision, full access to the network and services may be granted, or more restrictive measures may be applied, such as quarantined network access and forced configuration changes.
A trusted entity called the trusted environment (TrE) is central to semi-autonomous validation. The procedure for semi-autonomous validation may vary. In one embodiment, the H(e)NB may perform semi-autonomous validation of its integrity as per flow diagram 300 shown in fig. 3. Before the device authentication procedure, the TrE of the H(e)NB first checks (305) the integrity of certain pre-designated components of the H(e)NB (e.g., boot code). Thereafter, the integrity check result is recorded or stored, at least temporarily (310). This step may be initiated autonomously by the TrE itself after the H(e)NB powers up, prior to the first instance of authentication (e.g., to establish a secure backhaul link). It may be regarded as a "secure boot". The TrE brings the H(e)NB to an integrity-verified state by loading and/or starting only verified components. If re-evaluation of the established trust is required (e.g., due to a change in the configuration of the H(e)NB after a previously successful network connection session), this integrity-verified boot state may be re-checked in two ways. In the first case, the check may be initiated autonomously by the TrE itself. Alternatively, it may be initiated by a request from the network (e.g., from a security gateway (SeGW) or a Platform Validation Entity (PVE)) that the TrE then fulfils.
The TrE may then check whether a predetermined portion of the remainder of h (e) NB has reached a secure start state (315). This further check may be initiated by the TrE itself, or by a metrics component in the h (e) NB that is external to the TrE, but integrity protected by the TrE (320). In such post-stage checks, the integrity of other components, configurations or parameters of the remainder of the H (e) NB are checked when loaded or started, or at other predetermined operational events, as long as the measurement component is available. The secure boot check result is recorded or stored at least temporarily (325). Preferably, secure boot check results are recorded along with integrity check results using protected memory provided by TrE or by other means of integrity protection (e.g., key hash values).
In a further approach, in addition to the freshness provided in the existing PVE protocol, the results (i.e. individual measurement results) may be additionally time stamped with security to provide freshness and replay protection for the measurements themselves. Such freshness information may be implemented by including a time-stamped value in the measurement value by adding the time-stamped value before performing the hash function and then storing the result in a protected register (e.g., PCR).
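A minimal sketch of this time-stamping of individual measurements follows, assuming a SHA-256 digest and an extend operation into a protected register; the function names are illustrative.

```python
import hashlib
import struct
import time
from typing import Tuple

def timestamped_measurement(component_image: bytes) -> Tuple[bytes, float]:
    """Prepend a timestamp to the component data before hashing, binding the
    measurement to the time at which it was taken."""
    ts = time.time()
    digest = hashlib.sha256(struct.pack("!d", ts) + component_image).digest()
    return digest, ts

def extend_register(register: bytes, digest: bytes) -> bytes:
    """Extend the timestamped measurement into a protected register (e.g., a PCR)."""
    return hashlib.sha256(register + digest).digest()
```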
The TrE then processes the check results to generate a validation message to be sent to the PVE (330). Upon receiving this message, the PVE uses it to assess the trust state of the H(e)NB (335). In one embodiment of this processing, the TrE issues a statement, using a signing key that is protected by the TrE and therefore protects the integrity of the statement, declaring that the H(e)NB has passed the autonomous validation check. The statement may also include evidence that the PVE can use to evaluate the state or results of the integrity checks performed by the TrE on pre-designated components of the H(e)NB, and may further include evidence of any binding between the autonomous validation check and the subsequent device authentication procedure. The TrE may also timestamp the statement to ensure its freshness. The signed statement attests that the message, assembled by the TrE from the recorded data or results and sent to the PVE, originates from the TrE of the H(e)NB after a secure boot process. For verification of the signature, the validation should be bound to device authentication, or a separate TrE identity should be used. This signature strengthens a purely autonomous validation check by adding some traceability, since the result of the TrE's autonomous check of the H(e)NB boot configuration can be considered authentic.
The TrE sends the signed declaration to the PVE via the SeGW, after which the PVE may use the signed declaration from h (e) NB and decide whether to allow h (e) NB to proceed with authentication (340). The PVE can use the information in the signed declaration in a number of ways. In one embodiment, the PVE may check the integrity of the TrE itself against a separate static configuration and deny access to the connection if the check fails. In another embodiment, the PVEs can be configured to make finer judgments about access control. In particular, this indicates that access can be denied based on the presence/absence and integrity of the component/components inside or outside the TrE. In yet another embodiment, the PVE can be configured to obtain information about the integrity and security attributes of the h (e) NB components from a trusted third party, in accordance with the indication contained in the validation statement. This means that the PVE can be configured to be able to obtain information about the reference values (i.e. valid data) of the components on the device. Information about the actual integrity of the component is then derived by a process of comparing the validation data with the data received from the device. The PVE does not get assertions about component integrity directly from the TTP, but only from TRVs that can compare reported values. In yet another embodiment, the PVE may be configured to make configuration changes before access is allowed. Such a correction process may include a forced software update.
As described above, the TrE is able to generate trusted and accurate timestamps and can sign with a key held in, or protected by, the TrE. In one embodiment, the external validator may verify the "time" at which the TrE performed the local autonomous device integrity check. This means that a timestamp is obtained at the first or last measurement. A timestamp may also be applied when the protocol run with the PVE starts, or a timestamp may be included with each measurement. The desired "time granularity" may determine which approach is used. In another embodiment, the TrE may be configured to insert two timestamps, one obtained before the TrE performs the local autonomous device integrity check and one after that integrity check. Such timestamps effectively bracket the time frame in which the local autonomous device integrity check actually took place, and by sending these timestamps together with data representing the result or process of the local autonomous integrity check, the TrE enables the external validator not only to assess the integrity state of the device, but also to learn when and how the integrity of the H(e)NB was locally measured and verified by the TrE. Thus, from 1) the time at which the statement was obtained (as indicated by the second, later timestamp) and the validator's own time of receipt of the timestamped validation message, and 2) the time at which the local autonomous integrity check was performed (bracketed between the two timestamps), the validator can apply its own "time window" to decide how to treat the signed statement it received from the TrE concerning the integrity state of the device. A sketch of such a scheme follows.
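The following sketch illustrates the two-timestamp scheme, with an HMAC standing in for the TrE-protected signing key and with hypothetical field names; the PVE checks the signature, the ordering of the timestamps, and the freshness of the statement against its own clock.

```python
import hashlib
import hmac
import json
import time

def sign_statement(tre_key: bytes, result: dict, t_before: float, t_after: float) -> dict:
    """TrE side: bind the check result to the bracketing timestamps and sign it."""
    payload = json.dumps({"result": result, "t_before": t_before, "t_after": t_after},
                         sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(tre_key, payload, hashlib.sha256).hexdigest()}

def pve_accept(statement: dict, tre_key: bytes, max_age_s: float) -> bool:
    """PVE side: verify the signature and apply a time window to the check."""
    expected = hmac.new(tre_key, statement["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, statement["sig"]):
        return False
    data = json.loads(statement["payload"])
    now = time.time()
    ordered = data["t_before"] <= data["t_after"] <= now   # check window is plausible
    fresh = (now - data["t_after"]) <= max_age_s           # and recent enough
    return ordered and fresh
```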
PVM may be used to implement the strategies and methods described herein using the PVM methods, apparatus, and architecture described herein. PVM generally applies a maximum separation of tasks between the active entities. The approach clearly defines the field of activity of each entity involved in platform validation and management. This has the following advantages: 1) optimizations can be applied to each entity separately; 2) devices and PVM entities need not operate at the same time (within limits); 3) the PVM methods can operate statelessly for the network entities involved; 4) entities can be maintained and managed separately; and 5) backup and failover are easier to perform. In particular, performance and availability are very important for performing validation and remote management of devices efficiently. In particular cases there may be events in which device components are updated on a large scale, or a large number of devices change their Selected Home Operator (SHO). The PVM architecture may be configured such that validation and management of a device are performed by a single operator (typically the SHO). As an exception, as described herein, special forms of PVM may have an impact on roaming access and operator change.
The PVM provides a systematic way to validate and manage devices when they first attempt to connect to a communication network and thereafter monitor their integrity, relying in part on security techniques based on trusted computing. The PVM may provide: 1) confirming the device before the network connection; 2) managing device configurations wirelessly (OtA); 3) secure boot is achieved by checking a TRV (e.g., RIM) at the time the component is loaded/booted; and 4) installing a new TRV (e.g., RIM) on the device for configuration changes — TRV acquisition (ingestion).
In the example embodiment of the PVM described herein, the following technical assumptions and pre-processing are made for the devices and networks it identifies. For the network, it is first assumed that all entities are operated by the same Mobile Network Operator (MNO) as part of the same Core Network (CN). Thus, no additional security protection (e.g., mutual authentication, integrity protection of messages, encryption) is performed between these entities for the purpose of establishing the channel and the actual communication. Additional security features will also be described if desired for particular uses. Regardless, the scope of applicability of PVM is extensible to these examples, as the PVM method can be used by entities other than the MNO's CN, and even by entities managed by another party other than the MNO.
As for the devices, they may come in many variants and under many names. PVM can be used for H(e)NBs and machine-to-machine (M2M) devices of the evolved Universal Mobile Telecommunications System (UMTS) terrestrial radio access network (E-UTRAN), and for a variety of other network devices that fulfill certain conditions. These conditions are essentially the same as for a Trusted System (TS). When PVM is used, the respective devices are configured to use the PVM methods and thereby become PVM devices.
As a precondition of the validation process, validation requires an identity that the device can authenticate. This authentication is needed to protect the PVM infrastructure from attacks by fake devices and should not be confused with the authentication of the device toward the CN (which occurs after validation, or together with the validation process). That is, a device is admitted by PVM only after PVM has authenticated the device identity, which prevents unknown devices that cannot execute the PVM protocols from mounting, for example, DoS attacks on the PVM system.
For PVM purposes, the device identity Dev_ID is an identity of the device that is bound to the trusted environment (TrE), to a Universal Integrated Circuit Card (UICC) or smart card, or to the device itself (e.g., the H(e)NB). It is assumed that the device can securely manage the authentication credentials associated with the Dev_ID and can therefore authenticate the Dev_ID. The Dev_ID may be a fully qualified domain name (FQDN), a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), a Uniform Resource Name (URN), a Media Access Control (MAC) address (e.g., an extended unique identifier EUI-48 or EUI-64), an IPv4 or IPv6 address, an IPv6 host identifier (e.g., the 64 LSBs) including the subnet address, an International Mobile Equipment Identity (IMEI) or IMEISV (e.g., GSM/UMTS), an Electronic Serial Number (ESN) or Mobile Equipment Identifier (MEID) (e.g., CDMA), an International Mobile Subscriber Identity (IMSI) or Temporary Mobile Subscriber Identity (TMSI) (where the device is identifiable via the subscriber due to a 1:1 mapping between subscriber and device), an IMS subscriber ID (e.g., an IP Multimedia Private Identity (IMPI) or IP Multimedia Public Identity (IMPU)), a Mobile Station International Subscriber Directory Number (MSISDN), or an identifier in any other alphanumeric or machine-readable format that allows reliable and unambiguous identification of a single device, e.g., unique for each operator (e.g., globally, or at least within the operator's domain).
The device may have a trustworthy TrE. The TrE in a device may be constructed from an immutable root of trust (RoT) in a secure boot process. The TrE provides a secure execution environment and other basic protected capabilities. The TrE may be a managed component, e.g., variable, such that only the RoT remains unchanged.
From a trusted computing perspective, a TrE can be viewed as a trusted computing base (TCB) built from a TPM or MTM, extended by a secure execution environment and particular protected interfaces. A TrE whose TCB is built from a TPM or MTM is used here only as a non-limiting example; other trust implementations are equally applicable.
For PVM, the TrE provides a TCB that can be unconditionally trusted. However, as a variation from traditional trusted computing, the TCB constituted by the TrE does not cover the whole platform in PVM. This is because the TrE and its surroundings in the device are treated differently in PVM: different information, specific to each of the two parts, is sent to the infrastructure and used to validate and manage them according to different policies. The TrE is the principal communication partner of the PVM infrastructure and is trusted to perform PVM-related tasks correctly.
The H(e)NB and its TrE may perform a device integrity check at startup, before connecting to the core network, or before the H(e)NB connects to the H(e)NB management system (HMS). The device integrity check may be based on one or more trusted reference values and on the TrE. The TrE may be required to securely store all trusted reference values at all times. The TrE may be required to boot securely. The TrE may also be required to support single-component or multi-component integrity checks.
In a single component integrity check, the TrE may be required to load the full code needed for trusted operation of the device as a single component. Before starting the component, the TrE may be required to perform an integrity check (e.g., by comparing the component's cryptographic hash measurement to a stored trusted reference value) to determine the integrity of the component. If the single component passes its integrity check, the component may be started. If the integrity check fails, the component cannot be started.
In multi-component integrity checking, the full code base of the device required for trusted operation of the device may be partitioned according to device functionality and placed in sequence into multiple components. The TrE may be required to load each component in sequence and, before any one component is started, may be required to perform an integrity check (e.g., by comparing the cryptographic hash measurement for that component to a stored trusted reference value) to determine the integrity of that component. If a single component passes its integrity check, the component may be started and the TrE may continue with the integrity check on the next component. If the integrity check of any one component fails, that component cannot be started, but the TrE can continue to check the integrity of the next component.
For each of the component integrity checks, the TrE may be required to look up a corresponding trusted reference value from a secure memory that provides integrity protection for the TRV and compare the integrity measurement to the trusted reference value. The secure memory includes, but is not limited to, TrE's protected memory. If all components required for trusted operation of the device are verified, the integrity of the device is verified.
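As a purely illustrative sketch of the multi-component check described above (the helper names, the comp.code and comp.name attributes, the trv_store lookup, and the loader object are assumptions for illustration, not part of any standard), each component is measured, compared against its trusted reference value, and started only on a match, while checking continues past a failure:

import hashlib

def check_and_load_components(components, trv_store, loader):
    # Sketch only: measure each component, compare against its trusted
    # reference value (TRV) from integrity-protected storage, start it only
    # if the measurement matches, and keep checking after a failure.
    failed = []
    for comp in components:                          # ordered list of components
        measurement = hashlib.sha256(comp.code).hexdigest()
        trv = trv_store.get(comp.name)               # TRV from protected memory
        if trv is not None and measurement == trv:
            loader.start(comp)                       # start only verified components
        else:
            failed.append(comp.name)                 # do not start, but continue
    return failed                                    # may later be reported for remediation

The list of failed components returned here corresponds to the information a device may later report to the PVM system for remediation.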
For secure boot, the boot process proceeds from the RoT to a fully functional state in several stages by building a chain of trust. Fig. 4 illustrates an example flow diagram of a four-stage secure boot method 400. In stage 1, TrE 410 is constructed from RoT 405 during secure boot. All components that are loaded or started are verified, and only verified components can be loaded and started. Only if stage 1 succeeds is TrE 410 given control to perform stage 2 of the secure boot.
In stage 2, TrE 410 further verifies, loads, and starts the components necessary to execute the PVM. These may include, for example, communication and protocol stacks and Radio Access Network (RAN) communication modules. All components that are loaded and started must be verified, and only verified components can be loaded and started.
Only if stage 2 succeeds is stage 3 of the secure boot started. In stage 3a, TrE 410 further verifies, loads, and starts components. Only components that pass verification can be loaded and started. In stage 3b, the TrE further measures and loads components.
It is assumed that the verification of the component is accomplished by obtaining its measurement (shown at 415) and comparing the measurement to the RIM stored in RIM memory 420 (shown at 425). As shown, fig. 4 contains RIM memory as an example or embodiment. However, as described herein, RIM and RIM certificates are just one example way of structuring data, and other ways of structuring data may be used. The description herein allows for validation of data using means and structures of embodiments other than RIM. The order of loading in all phases is considered to be managed by the locally available list. It is assumed that the differences between the components in 3a and 3b are governed by locally available policies. Alternatively, the loading and validation may be combined in one phase.
In fig. 4, the term "TrE" is used to describe the entity that contains the minimum functionality required for PVM functionality, including all the functionality required for secure boot, such as measurement 415, RIM store 420, verification engine 425 that compares RIM to actual measurement. It is clear that the description of TrE here is for simplicity, while TrE may be more complex and include other components, such as a key generator or Random Number Generator (RNG). The illustrated TrE may include all the functionality required to implement a secure boot. The RIM may be stored outside the TrE, but its integrity and (optionally) confidentiality is protected by the TrE. The engine for measurement and verification may also be implemented as an external component to the TrE. The TrE may then ensure the integrity of these components and provide a secure operating environment so that these components are not modified.
There may be finer granularity in stage 3 depending on the strategy. For example, if a component fails verification or a RIM is not available, the component may be loaded into a sandbox environment. The difference between stages 3a and 3b can be analogized to the difference between trusted and measured services in the secure launch of the Mobile Phone Working Group (MPWG) reference architecture.
A fourth stage may be added for components that fail verification in "user space".
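The staged flow of Fig. 4 might be outlined as in the following sketch; the stage structure follows the description above, while the helper names (build_and_verify_tre, verify_load_start, and so on) are hypothetical placeholders rather than a defined interface:

def secure_boot(rot, device):
    # Illustrative sketch of the staged secure boot described above.
    tre = rot.build_and_verify_tre()               # stage 1: TrE built from the RoT
    if tre is None:
        return "halt"                              # stage 1 failure stops the boot
    # stage 2: components needed to execute PVM (e.g. communication stack)
    if not tre.verify_load_start(device.pvm_components):
        return "report_failures_or_fallback"       # cf. stage-2 failure handling below
    for comp in device.stage_3a_components:        # stage 3a: verified load only
        if tre.verify(comp):
            tre.load_and_start(comp)
    for comp in device.stage_3b_components:        # stage 3b: measure and load
        tre.measure(comp)
        tre.load(comp)
    return "booted"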
Failure of one or more components (a communication module or other similar module) in stage 2 does not necessarily mean that the device is incapable of communication. This stage may be understood as covering a particular class of component types. As long as the most basic components of stage 2 are loaded, the device can report its status and the failed components to the PVM system. Such a design enables the device to perform PVM (and remediation procedures) even if the internal verification of some components fails, without rebooting.
In another embodiment, a fallback code base (FBC) may be used to cause the device to execute the PVM when an attack is detected during secure boot. The device will restart using the FBC when an attack is detected and then enter a predetermined state in which the device can be modified.
During secure boot, the TrE records and protects the following information from tampering: 1) a list of the loaded components (Clist); 2) the parameters of the loaded components; 3) measurements associated with some or all of the components; and 4) validation data that uniquely (e.g., cryptographically) tags the result of some or all of the measurements (e.g., the platform state). Some or all of these records are optional, depending on the validation method used by the PVM. For example, none of them is needed for autonomous validation (AuV).
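A minimal sketch of the information recorded during secure boot could look as follows; the field names are illustrative only and do not imply any particular encoding:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BootRecord:
    # Sketch of what a TrE records and protects during secure boot
    # (field names are illustrative, not normative).
    clist: List[str] = field(default_factory=list)      # loaded components, in order
    parameters: dict = field(default_factory=dict)       # per-component parameters
    measurements: dict = field(default_factory=dict)     # component -> hash value
    verification_data: Optional[bytes] = None            # e.g. signed platform state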
PVM may use the following terminology. The term "verification" is used to refer to the internal checking of device components during secure boot, while the term "validation" is used to refer to the checking of the device by an external entity. This avoids introducing the notions of "internal" and "external" validation. Where authentication is meant in the usual sense of cryptographic checking or matching of data, it is explicitly indicated as such herein to avoid confusion.
The PVM uses at least a security gateway (SeGW), a Platform Validation Entity (PVE), and a Device Management Service (DMS). Since the validation-relevant tasks within a device are performed by its TrE, communication with the other entities is typically performed by the TrE. Although other device components (e.g., network interfaces) required for this communication are not necessarily integral parts of the TrE, the TrE should be able to assess the integrity of these components to ensure end-to-end security.
Strict separation of tasks requires that each entity be confined to its core task. For example, the SeGW establishes a secure interface between the (un)trusted device and the MNO's CN. It serves as a barrier and as the network access control and enforcement instance for the MNO's CN. It also performs all security-related functions required of such a barrier, including authentication, encryption/decryption of communications with the device, security association, and session establishment. The SeGW may be used as an example of a network entity capable of establishing a boundary between the MNO's CN and the outside world (e.g., external devices). Device validation can also be performed using the PVM method without a SeGW. For this purpose, the device needs to connect directly to the DMS using a secure connection, for example Transport Layer Security (TLS).
For PVE, it acts as a validation entity in CN and performs integrity validation. It receives the integrity verification data and checks if the reported value is known and good. It issues claims to other entities in the CN regarding device integrity.
For the DMS, it acts as a central entity for managing device components, including software updates, configuration changes, OTA management, and failure mode correction. The DMS accomplishes this function by platform-based validation, similar to the enhanced version of the HMS.
In addition to the above entities, the PVM also includes a RIM manager (RIMman). RIMman performs functions including the following: it manages trusted reference data and TRVs and provides them for comparison during validation. It also manages certificates, in particular it obtains external RIM certificates, verifies RIM certificates, generates (operator-specific) RIM certificates, and checks certificate validity by means such as revocation, time limits, and trust relationships. That is, the RIM manager is the only entity authorized to manage the validation database (V_DB). The V_DB and RIMman are protected components of the CN. Writing to the V_DB is restricted to RIMman, so the PVE cannot write to the V_DB. RIMman is of particular importance to security because it manages the (SHO-CN) external trust relationships required by PVM. As described herein, RIMman is one embodiment, which can be extended to cover reference values and attestation of reference values for the (hierarchically) structured data of other embodiments.
The PVM also includes a configuration policy manager (CPman) for performing management and provisioning of device configurations. It also manages policies, in particular external (e.g. obtained from a Trusted Third Party (TTP)) configurations and policies, and generates (operator specific) target device configurations and policies. That is, CPman is the only entity authorized to manage the configuration policy database C _ DB. The CPman is of particular importance to security because it manages the (SHO-CN) external trust relationships required by the PVM.
Fig. 5A and 5B illustrate examples of a minimum set of entities of a PVM, their interrelationships, and interfaces. Other entities such as an Authentication Authorization and Accounting (AAA) server and a wireless transmit/receive unit (WTRU) and its interfaces are also shown.
The PVM structure or system 500 of Fig. 5A includes a device 505 having a TrE 510. The WTRU 512 may communicate with the device 505 via an I-ue interface 514. The device 505 communicates with the SeGW 520 via an I-h interface 515. In general, the interface I-h 515 between the device 505 and the SeGW 520 is not assumed to be protected, so special measures can be taken to protect the authenticity, integrity and, optionally, the confidentiality of the channel. I-h 515 may be used to establish the link between device 505 and SeGW 520 (and thus with the CN). The SeGW 520 may, for example, communicate with an AAA server via the I-AAA interface 575. The operator may take appropriate measures to secure these interfaces.
During validation, the SeGW 520 can contact the PVEs 524 using the I-PVE interface 522. PVE 524 may use I-PVE interface 522 to send the validation result to SeGW 520. The communication between DMS 535 and SeGW 520 involving the configuration of the devices can be performed using an I-DMS interface 530. The PVE 524 may use the I-pd interface 532 to communicate with the DMS 535 and vice versa. The interface I-pd 532 may be used during device management procedures for, for example, device software updates and configuration changes.
The PVE 524 may use interface I-v 526 to read RIMs from the V_DB 540, while the DMS 535 may use interface I-d 538 to read the allowed configurations from the C_DB 550. In the event that a RIM is missing from, for example, the V_DB 540, the PVE 524 can use interface I-r 528 to communicate with RIMman 560, while the DMS 535 may use interface I-c 534 to communicate with CPman 570. RIMman 560 and CPman 570 may use interfaces I-rdb 562 and I-cdb 572 to read, write, and manage the validation database V_DB 540 and the configuration policy database C_DB 550, respectively.
Figure 5B illustrates a PVM system 582 in which the device 505 can connect directly to the DMS 535. This may occur, for example, in the fallback mode, in which the device cannot execute the security protocols with the SeGW. In this case, the DMS 535 may act as the first point of contact for the device 505 through the interface I-dms_d 584, and communicate with the PVE 524 through the interfaces I-pve 586 and I-pd 588 to perform validation, or at least to learn which components failed during secure boot. The DMS 535 may carry out remediation immediately upon receiving this information.
In general, the various components, e.g., the device 505 including TrE 510, the SeGW 520, the PVE 524, and the DMS 535, are preferably configured to use PVM with a maximum separation of tasks between the entities involved. This may be accomplished by using the PVM token to pass specific information between the various entities, as described in detail below.
As described herein, PVMs may use various versions of validation. Described herein is semi-autonomous validation (SAV) operating with PVM. In this embodiment, the device contains TrE and RoT and is capable of secure boot. The device has a RIM that enables local validation of TrE components and TrE external components. In this embodiment, the device may be h (e) NB. As described herein, a RIM is merely one form and example of structural data, which is used herein as a non-limiting example.
The device can perform a secure boot in three phases, ensuring that each component is loaded if, and only if, the local verification of the component to be loaded is successful. In phase 1, the TrE is loaded by a secure boot that relies on the RoT. In phase 2, all components outside the TrE that are required to perform basic communication are loaded. In phase 3, all remaining device components are loaded.
The device may then start network authentication with the SeGW. During authentication, one or more of the following data is transmitted: dev _ ID; a security policy of the device; information about the device module whose integrity has been checked by the TrE during secure boot; hardware/software build version numbers; a device manufacturer; a model and version number; attestation information of the device and the TrE; and TrE capabilities and attributes.
Different options may be used to make this data available to the PVE (via the SeGW). The data may be sent in the notification field of the Internet Key Exchange version 2 (IKEv2) authentication protocol and then forwarded by the SeGW to the PVE. The PVE then checks the received information. The PVE checks whether the Dev_ID is blacklisted and, if so, denies access. It checks whether the security policy matches the policy required for the device; if it does not, a remediation step may be performed. The PVE can also check whether unlisted or unwanted modules and components have been loaded.
In each of the above checks, the PVE may deny or restrict (e.g., isolate limited use or resources) network connections to the device if a positive answer is obtained indicating that the verification of the device TS failed. The PVE sends a message to the SeGW regarding the decision on the validity and trustworthiness of the device. The SeGW operates according to the message.
In a second approach, the data is stored with a Trusted Third Party (TTP), and the device sends a pointer to the TTP from which the PVE can obtain the required information. This pointer may be sent in the notification payload of IKEv2.
In a third approach, all of the data, as far as it is static, may be included in the (possibly enhanced) device certificate used during authentication. Any update of a component will change the measured values and hence the RIMs used in secure boot, so new device certificates would be required.
The embodiments described herein relate to remote or semi-autonomous validation (F-SAV) operating with PVM. In phase 1, a TrE may be constructed from the RoT during secure boot. All components of the TrE may be integrity-verified and loaded when verification succeeds. In phase 2, the TrE may verify the integrity of a predetermined portion of the rest of the device and load it. The code whose integrity is checked may include, for example, the basic OS, the code for basic communication with the SeGW, and the code to format and send the PVM report message. The measurements may be stored in secure memory of the TrE.
If the phase 1 or phase 2 check fails, the TrE may stop the authentication. If both phase 1 and phase 2 are successful, phase 3 may be performed. For example, the remaining device module code, e.g., including the radio access code, may be integrity checked but not loaded. Validation data may be prepared and sent to the SeGW in an appropriate communication protocol. This data may be signed by a key held by, for example, the TrE, to provide authenticity and integrity of the data. The data may include a list of stage 3 modules that failed the integrity check.
The data may be transmitted using the notification payload of the IKEv2 AUTH_REQ message. In addition to the protection of the entire message provided by the IKE security association, the data in the notification payload may also be signed with the TrE's signature key to provide authenticity and integrity of the data. The notification payload may include a list of stage 3 modules that failed the integrity check. The validation data may alternatively be sent using any other suitable payload or field of a suitable IKEv2 message, or any suitable payload or field of a message of a protocol other than IKEv2 (e.g., TLS, TR-069, OMA-DM, HTTP, HTTPS, or another similar protocol).
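As a rough sketch of preparing such signed validation data for transport (for example in an IKEv2 notification payload), the following uses an HMAC with a byte-string key as a stand-in for the TrE's signature key; the field names and the use of JSON are assumptions for illustration only, not a defined message format:

import hashlib, hmac, json, time

def build_validation_payload(tre_signing_key, failed_stage3_modules, dev_id):
    # Sketch: assemble the F-SAV validation data and protect it with a
    # keyed MAC standing in for the TrE's signature key (key is bytes).
    body = json.dumps({
        "dev_id": dev_id,
        "failed_stage3": failed_stage3_modules,   # modules that failed the check
        "timestamp": int(time.time()),            # freshness (or a network nonce)
    }).encode()
    tag = hmac.new(tre_signing_key, body, hashlib.sha256).digest()
    return body, tag                              # carried in the notification payload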
The SeGW may forward the data to the PVE for a determination. The authentication process may continue, but the decision to authorize the network connection may wait until the PVE has checked the confirmation message and made or obtained a network-based policy decision regarding the module reported as failing the confirmation test.
In a third approach, instead of measuring and then executing the code, the code may be measured and integrity checked without being loaded. The SeGW may forward the validation message to the PVE, which may check the received list. When the device receives the result of a successful validation from the PVE, the remaining phase 3 modules can be loaded.
The process of measuring integrity and waiting for the PVE to decide whether or not to execute the code may include assuming that no further changes occur once the code has been measured and that the code may be executed if authorized by the PVE. Therefore, it is desirable to have secure memory for all component code in phase 3. Additionally, the execution environment may support authorized execution that allows code to be loaded first, and then executed after authorization. A large amount of code may be loaded and therefore the secure memory and execution environment should be of sufficient size.
The F-SAV may provide flexibility to the CN to learn what is actually done in the "local integrity check". The device may send an indication of the pass/fail of the code in stages 1 and 2 and optionally, if there are failed modules, a list of failed modules. The F-SAV may provide finer granularity and clearer awareness of device security attributes and acknowledgement measurements, may provide faster and better detection of the hacked device, may support network-initiated modifications for the hacked device, and may provide flexibility for the operator in device security management.
The TrE may also add a timestamp to the message to ensure freshness. An alternative to timestamping is for the network to provide a nonce, which the TrE incorporates into the aforementioned message after the network access protocol has been initiated. This also serves as a feature that binds device authentication to validation.
Remediation of a validation failure may be to activate a fallback mode after the first failure of an integrity check, e.g., in phase 1 or phase 2, so that the device retains sufficient functionality to connect to the SeGW and inform it of the failure. This may then trigger an operations and maintenance (OAM) process to allow the device software to be updated based on the diagnosis. This fallback mode needs to provide sufficient functionality to enable a complete rebuild of the code in a secure manner under the supervision of the TrE.
In the first approach, the measurement message data (along with device attestation) may be sent in the notify field of the IKEv2 AUTH_Request. In a second approach, the measurement message data may be sent via an appropriate security protocol before IKEv2-based device authentication is initiated. In a third approach, if the checks of any part of phase 1 or 2 fail, and if the failed module is an auxiliary function that is not essential to the basic functioning of the device, the device may be allowed to stay on/attached without loading those modules. At the same time, OAM procedures may be scheduled to update the device software.
Described herein is a high level overview of the functionality in all relevant entities. A system architecture for h (e) NB devices is described in which validation and remote management play an important role. The method may be used directly for entities in h (e) NB network structures. The methods for platform validation and management herein can be readily applied or extended to other network connected devices by using more general methods, and role definitions based on task separation. If the entities are mapped according to their functionality, they may be implemented in other environments (e.g., M2M) in a similar manner.
In the embodiments herein describing PVM functionality, SAV is used. SAV can completely protect the CN from attacks by malicious devices. With SAV, an isolated (quarantine) network can be established efficiently by the SeGW. The PVE and DMS cannot be attacked directly from the device, since each receives only data restricted to its tasks, and only over a secure connection with, or established by, the SeGW. The validation process in PVM need not be performed directly between the device and any entity of the CN. Only after successful validation using SAV is a connection to the CN allowed. This ensures that only devices validated to be in a secure state can communicate with the entities in the CN.
Fig. 6A, 6B and 6C show an example of an SAV validation method using a PVM infrastructure. The PVM infrastructure includes the entities described herein, including TrE 605, SeGW 607, PVE 609, DMS 611, V_DB 613, and C_DB 615. After mutual authentication (620), TrE 605 collects some or all of the following data: device information - e.g., Dev_ID, manufacturer, device capabilities (including but not limited to communication capabilities (e.g., supported data rates), transmit power levels, signaling characteristics, and other capabilities), TrE capabilities and attributes (including the RoT); TrE information (TrE_info), including ID, certification information, manufacturer, build version and model, composition, serial number; verification data - including Platform Configuration Register (PCR) values; a verification binding - e.g., a signature over the PCR values; a component indicator (CInd) - an ordered component list Clist, which may also include component parameters; and a timestamp (trusted or untrusted) (622). The validation message/data sent from TrE 605 to SeGW 607 may include the data described above (624).
The SeGW 607 checks/compares the received timestamp with the local time to detect a discrepancy (626). If the reported timestamp does not coincide with the local time, the SeGW acts according to the properties of the reported timestamp. If the device's timestamp is a trusted timestamp and a discrepancy occurs, the SeGW 607 should trigger a revalidation of the TrE and its trusted time source. In the case of an untrusted timestamp, the SeGW 607 appends its own trusted timestamp to the message. If the device cannot provide a trusted timestamp, a trusted timestamp may be added by the SeGW 607 to provide protection against replay attacks.
Upon receiving this message, the SeGW 607 may check whether a verification binding exists (628). This ensures the authenticity of the verification data. The SeGW 607 then creates a PVM token (T_PVM) (630) and adds a timestamp to the T_PVM before transmission to ensure its freshness and prevent asynchronous message flows (632).
The SeGW 607 forwards the T _ PVM to the PVE 609(634), which PVE 609 in turn queries the V _ DB 613(636) with TrE information. If a determination of not trustworthy is returned to the PVE 609 (638), the PVE applies a timestamp to the T _ PVM (640) and forwards it to the SeGW 607 (642). After that, the SeGW 607 stops the device authentication, prevents the device from attaching to the network, and alerts the TrE 605 (644).
If a trusted decision is returned 646 to the PVE 609, the PVE queries 648 the C _ DB using the Dev _ ID, which then returns 650 the configuration policy to the PVE 609. The PVE 609 evaluates the policy configuration (652).
If the PVE 609 determines that the configuration is not trustworthy (654), the PVE 609 modifies the T_PVM and adds a timestamp (656). The PVE 609 then forwards the T_PVM to the SeGW 607 (658), which then stops device authentication, prevents the device from attaching to the network, and alerts TrE 605 (660).
If the PVE 609 determines that the configuration is trusted and allowed (662), the PVE 609 retrieves the RIMs for all entries in the Clist from the V_DB 613 (664). The PVE 609 recalculates the correct verification data from the RIMs and compares the calculated verification data with the reported verification data (668). Thereafter, the PVE 609 modifies the T_PVM and applies a timestamp (670). The PVE 609 then forwards the T_PVM to the SeGW 607 (672). The SeGW 607 checks the PVE validation result in (or obtains it from) the T_PVM (674). The SeGW 607 signals to TrE 605 whether authentication of the device is denied or granted (676). If the PVE validation result is a rejection, TrE 605 performs a restart and revalidation (690).
Optionally, after the PVE 609 compares the calculated verification data with the reported verification data (668), the PVE 609 may send a list of failed components to the DMS 611 (678). A determination is made by the DMS 611 as to whether an update is possible (680) and if so, an OTA update is prepared (682). It is also ensured by the DMS 611 that there is a RIM for update in the V _ DB 613 (684). DMS 611 sends T _ PVM with a re-acknowledgement indication to SeGW 607 (686) and a re-acknowledgement trigger to TrE 605 (688). TrE 605 performs a reboot and reconfirms (690).
Details regarding the process of fig. 6A, 6B, and 6C are now described. To perform platform validation, the TrE is to collect the following data, include it in a validation message, and send the message to the SeGW: device information — e.g., Dev _ ID, manufacturer, TrE capabilities and attributes (including RoT); TrE information-including ID, certification information, manufacturer, build version and optional model, structure, serial number; verification data — which may include Platform Configuration Register (PCR) values or simply a list of components that failed local verification or a list of functions affected by components that failed local verification; verify binding-e.g., by signature of PCR values or failed components or a list of affected functions; component indicator (CInd) -component Clist ordered list, which may also include component parameters; and a timestamp (trusted or untrusted).
The indicator-ordered list of components and their parameters may include entries with data fields such as the following: index, component indicator (component_indicator) CInd, and component parameters (component_parameters). CInd provides a reference to a component, which may be in the form of a URN (e.g., urn://vendor.path.to/component/certificate). The component list may indicate the RIM to be used for validation, for example by pointing to the RIM certificate, RIMc.
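A minimal sketch of such a Clist entry, with a made-up example URN as the component indicator, might look as follows:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClistEntry:
    # Sketch of one entry of the ordered component list (Clist);
    # the URNs below are invented examples of component indicators (CInd).
    index: int
    component_indicator: str
    component_parameters: Optional[dict] = None

clist = [
    ClistEntry(0, "urn://vendor.example/tre/rim-cert"),
    ClistEntry(1, "urn://vendor.example/os-kernel/rim-cert", {"version": "2.1"}),
]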
In the device, the confirmation message may additionally include device information such as ID, certification information, manufacturer, model, version, structure, serial number, TrE capabilities and attributes (including RoT), security policies and modules of the device integrity checked at phase (1, 2, 3), Hardware (HW) build version number, and may include Software (SW) version number and integrity measurement data.
The TrE specific information may describe how the TrE is implemented in the device if the TrE specific information is needed. Likewise, TrE information may provide information about the device and separate information about the trusted context, e.g., whether the TrE is an authenticated IP component. The certification authority of the device may be useful information.
While SAV preferably uses RIMs for validation, their use is entirely optional. RIMs are used here as a basic example, and other approaches may differ. For example, some validation methods do not recalculate verification data from RIMs, and some do not require RIMs at all, even when performing PVM.
If the validation information is bound to the authentication (e.g., via a secure channel), then the verification binding is also optional.
The SeGW will check/compare the received timestamp with the local time to detect the discrepancy. If the reported timestamp does not coincide with the local time, the SeGW operates according to the attributes of the reported timestamp. If the timestamp of the device is a trusted timestamp and there is a discrepancy, the SeGW will trigger a re-confirmation of the TrE and its trusted time source. In the case of an untrusted timestamp, the SeGW imposes its own trusted timestamp on the message. The SeGW may add a trusted timestamp as protection against replay attacks if the device is unable to provide the trusted timestamp.
The device information and the TrE information are optional. The Dev_ID provides a reference to the device information and the TrE information. Since an MNO may not know all devices that will connect to the network and all TrEs, the complete TrE information, e.g., the mapping, can be provided by a database that the MNO can query to obtain the TrE information for any particular Dev_ID. The TrE information may be contained in a TrE certificate (TrE_certificate). This TrE_certificate should be signed by the vendor of the TrE or by a TTP.
In the first way, if there is no verification data/binding in the acknowledgement message, the PVM can be executed in a simple way. This can only be done if the attributes of the TrE are verified. Policy decisions must rely only on TrE information and component lists.
This approach presupposes mutual authentication between the SeGW and the device. Otherwise, trust issues may occur if, for example, the device changes operator. For example, it may have previously received a fake RIM from a fake SeGW/MNO during a remote management procedure.
Using the URN as an indicator to the component is advantageous because it uses a unique identification to represent both the component and the location where the RIM or RIM certificate is available.
During device validation, the device sends a validation message to the SeGW. Upon receiving this message, the SeGW checks whether a verification binding exists. This step ensures the authenticity of the verification data. Thereafter, the SeGW creates a PVM token (T_PVM). The token T_PVM may be used as a rolling token that is passed from one entity to another during the communication. Each entity adds a timestamp to the token before sending it on, to ensure its freshness and prevent asynchronous message flows. The timestamps on the token provide a means of tracking the token's state. The token is passed from one entity to another in the CN, possibly over several rounds, and can thus be tracked by the entities.
Optionally, an entity ID may be added to the chain of data loaded with a timestamp.
The T _ PVM may include a Dev _ ID. The T _ PVM may also contain a new timestamp issued by the SeGW if the original timestamp does not exist or is not trusted. Alternatively, the T _ PVM may contain the original timestamp from the confirmation message.
The time stamp may be used to protect against replay attacks. This attack may be combined with or even replaced by a nonce or a single up counter. Timestamps may also be used to evaluate the timeliness of the validation data. Preferably, the two objectives are combined and this is also provided by the time stamp.
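The rolling-token behavior described above might be sketched as follows; the class structure, method names, and use of a local clock are illustrative assumptions rather than a defined token format:

import time

class T_PVM:
    # Sketch of a PVM token passed between SeGW, PVE and DMS; each entity
    # stamps the token so its path and freshness can be tracked.
    def __init__(self, dev_id, original_timestamp=None):
        self.dev_id = dev_id
        self.claims = []                         # e.g. reasons for denying access
        self.trail = []                          # (entity_id, timestamp) chain
        if original_timestamp is not None:
            self.trail.append(("device", original_timestamp))

    def stamp(self, entity_id):
        self.trail.append((entity_id, time.time()))
        return self

# usage: token created by the SeGW, stamped before each forwarding step
token = T_PVM("dev-001").stamp("SeGW")
token.stamp("PVE")                               # before returning to the SeGW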
It is assumed that all communications between the SeGW, PVE and DMS are secure for integrity, authenticity and confidentiality. Therefore, no enforcement is taken to establish these security attributes for any internal messages. However, appropriate measures may be taken to protect all or part of the message, if desired. These measures may include encrypting the communication channel, mutual authentication, and signing on the message. The SeGW maintains a token database T _ DB containing all active T _ PVMs.
In a first approach, the T_PVM may contain a communication secret, such as a TLS certificate, for establishing a secure channel between the DMS and the TrE for later device management by the DMS.
The SeGW obtains the following data from the validation message: the validation data, the TrE information, and the Clist. Before sending these data together with the token T_PVM, the SeGW adds a timestamp to the T_PVM and forwards it to the PVE. The SeGW may check the format of the validation message and its parts to mitigate the threat of malformed-data attacks. Otherwise, an attacker might modify the data in the validation message of a compromised TrE such that the full check of the data at the PVE causes the system to produce an error or fail.
It is useful to separate between the Dev _ ID and the identity of the corresponding h (e) NB (h (e) NB _ ID). Although the association between the two is one-to-one, this separation is meaningful from a task separation (SeGW knows TrE, PVE knows h (e) NB), and possibly addressing/management perspective. In this case, there would be an intermediate step where the PVE uses the received h (e) NB _ ID to look up the Dev _ ID from the database HNB _ DB.
The PVE is the entity that decides on the validity of the device. That is, in the language of policy systems, the PVE is a Policy Decision Point (PDP). In the strict task separation approach, it is the only PDP in the PVM system. It relies on the SeGW and the DMS to enforce policies, i.e., to act as Policy Enforcement Points (PEPs). In this general description, PVM is agnostic as to how policies are generated and where they are stored and managed, e.g., from where the PVE obtains them. In some more specific approaches and secondary methods described below (in particular parametric validation and minimum validation), examples of policy scenarios and operations are given. In general, the validation policy decision depends not only on the validity of the individual components but also on other data contained in the Clist. In particular, the allowed parameters (or parameter ranges) and the loading order (the Clist is ordered) need to be evaluated.
There are some basic types of failure that occur in the validation process performed by the PVE. For example, the fail case F1 represents a case of "TrE failure". The PVE marks the device and/or TrE as untrusted by its authenticated Dev _ ID and the transmitted TrE information.
Another example failure case F2 represents three cases of "authentication data failure". Case F2a indicates that the integrity measurement/verification data does not match. This indicates that the device security boot process failed and/or that there was an erroneous and/or expired RIM and/or RIM certificate in the device, after which the device boots an invalid component. Case F2b indicates that a RIM is lost, i.e., a RIM for a component is lost, and needs to be obtained from elsewhere. Case F2c indicates that the RIM certificate expires.
Failure case F3 indicates a "Clist policy failure" for both cases. For case F3a, a single component is valid, but the configuration does not comply with policies, such as loading order, or undesirable components or parameters. Case F3b indicates that the configuration is unknown, so there is no "known good value" for Clist available.
The failure case F4 denotes "device authentication failure before validation", which is used when authentication is bound to validation and device authentication is before validation. The F4 case includes the F4a case indicating a device certificate expiration.
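For illustration, the failure cases above can be summarized in a simple enumeration; the identifiers are informal labels for this sketch, not normative codes:

from enum import Enum

class PVMFailure(Enum):
    # Sketch enumerating the validation failure cases described above.
    F1_TRE_FAILURE = "TrE not trusted"
    F2A_VERIFICATION_MISMATCH = "verification data does not match RIMs"
    F2B_RIM_MISSING = "RIM for a component is missing"
    F2C_RIM_CERT_EXPIRED = "RIM certificate expired"
    F3A_CLIST_POLICY = "components valid but configuration violates policy"
    F3B_CONFIG_UNKNOWN = "configuration unknown, no known good values"
    F4_DEVICE_AUTH = "device authentication failed before validation"
    F4A_DEVICE_CERT_EXPIRED = "device certificate expired"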
A method of detecting and handling the failure condition will now be described. For the failure case F1, the PVE uses the received TrE information to query the local validation database (V _ DB). The TrE information structure contains detailed information about the TrE's certification, manufacturer, structure, model, serial number. The validation database V _ DB stores information about which TrE can be considered trusted. For example, it may enforce policies to trust a particular vendor, model, or other similar identification. If the TrE is not trusted according to the evaluation result of the TrE information, the PVE may send a message containing the information to the SeGW. The SeGW may then operate appropriately according to the message. The PVE adds a claim (e.g., an additional data field) to the T _ PVM token that contains the reason for the denial of access, e.g., the wrong/untrusted manufacturer. The PVE adds a timestamp and signature on the T _ PVM. The T _ PVM is forwarded to the SeGW. The SeGW may then verify the timestamp (replay protection) and signature (prevent impersonation of the sender). The SeGW then denies network access and device authentication and blocks further authentication attempts.
In case network access and device authentication are denied, the authentication process needs to be stopped if the confirmation and authentication are binding.
In a first approach, a device blacklist based on specific attributes, such as manufacturer, device version, and other attributes, may be used.
PVEs can also use the Dev _ ID and TrE information to first trigger a V _ DB update procedure similar to the RIM update procedure for unknown tres.
For the failure case F2, the PVE obtains the RIM from the V _ DB for all components in the received Clist. Only certified RIM is stored in the validation database V _ DB. The corresponding RIM certificate must be securely stored in the V _ DB.
In one embodiment, the RIM certificate may be checked before querying the V _ DB, and may be discarded later. Alternatively, RIM certificates may be stored for security purposes. For example, since the MNO continuously obtains the RIM and its certificate from a trusted third party, the MNO can use the certificate to prove its compliance in device management to the inspector.
For the failure case F2a, the PVE may calculate the correct verification data from the looked-up RIM and match it with the verification data received in the acknowledgement message.
If the computed correct authentication data does not match the authentication data in the confirmation message, the secure boot process of the device may have been attacked, or a wrong RIM may have been stored in the device, and an invalid component may have been loaded in the secure boot process. The PVE may compare the measurements sent in the acknowledgement message or in the reply to the PVE's request alone with the RIM to detect the failed component.
A variety of options may be applied, depending on the F2a policy. In case of rejection, the PVE may send the result of the confirmation to the SeGW. The SeGW may reject the network connection or place the device in the quarantine network. In the case of an update, after receiving a validation result (T _ PVM) indicating that the validation data failed, the DMS may start a management process according to the management procedure to replace the component that failed the validation. The DMS may send the T _ PVM to the SeGW together with an indicator that the validation failed and the device will re-validate. The DMS may send the correct RIM to the device and trigger a reboot. When restarted, the device may re-authenticate and re-acknowledge using the new RIM. If the verification data is again erroneous, the device may not be able to recover through the remote management process. To prevent an infinite restart loop, the DMS may store the Dev _ ID with a timestamp indicating the time at which the remote restart trigger was sent. If the DMS receives a command to perform the update again, the DMS can check whether the Dev _ ID has already been stored. If multiple storage entries exist, the timestamp may indicate a short reboot period, indicating that the device cannot be recovered. The method described herein for handling the failure case type F2 is optional if RIM is not used in the validation.
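The restart-loop protection described above might be sketched as follows, with the DMS keeping per-device trigger timestamps; the class name, method names, and threshold values are arbitrary examples, not part of the described procedure:

import time

class RestartGuard:
    # Sketch of DMS bookkeeping used to detect an endless update/reboot loop.
    def __init__(self, min_interval=600, max_attempts=3):
        self.history = {}                        # Dev_ID -> list of trigger times
        self.min_interval = min_interval
        self.max_attempts = max_attempts

    def record_update_trigger(self, dev_id):
        now = time.time()
        attempts = self.history.setdefault(dev_id, [])
        attempts.append(now)
        recent = [t for t in attempts if now - t < self.min_interval * self.max_attempts]
        # several triggers within short reboot periods suggest the device
        # cannot be recovered by remote management
        return len(recent) < self.max_attempts   # False -> stop and escalate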
In another approach, based on verification data, such as PCR values, the PVE may use a special portion of the database V _ DB that buffers the trusted configuration by the PCR values. The PVE may look up a verification data table for valid configurations, such as a hash table in the case of PCR values. If a match is found, the validation succeeds immediately. Storing pre-computed PCR values in the V _ DB for efficient configuration is very useful for device types that operate in the same configuration, where the hash values are the same. Rather than comparing all components to the RIM, a single composite hash value can be compared, thereby reducing computational overhead and speeding up the validation process.
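A sketch of this fast path, assuming that the special V_DB section is exposed as a simple lookup table keyed by the composite verification value, is shown below:

def quick_validate(verification_data, known_good_db):
    # Sketch of the fast path described above: the reported verification data
    # (e.g. a composite PCR value) is looked up in a table of trusted
    # configurations before any per-component RIM comparison is attempted.
    entry = known_good_db.get(verification_data)  # dict-like, keyed by hash value
    if entry is not None:
        return True, entry                        # validation succeeds immediately
    return False, None                            # fall back to per-component checks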
If no policy failure occurs, the device is valid. The PVE may send this message to the SeGW, which may allow the connection to the CN.
For the failure case F2b, the RIM may be obtained from a Trusted Third Party (TTP). If the RIM of the component(s) is not stored in the V _ DB, the PVE sends a list of missing RIMs to RIMman. RIMman then attempts to obtain (certified) RIM from TTP. Clist contains a component indicator, CInd, (e.g., URN), by which RIMman can identify the component and obtain information about where to look up the corresponding RIM certificate. RIMman performs RIM acquisition for the new RIM, including verification of the RIMc stored in the V _ DB. RIMman performs updates to the V _ DB storing CInd, RIM and RIMc. RIMman informs PVEs of the update of the V _ DB, after which PVEs can retrieve the missing RIM from the V _ DB.
Alternatively, the RIM may be obtained from the device. If in the acknowledgement message the device indicates that it is capable of providing the stored RIMc (including the RIM) to the network, the PVE may request the missing RIM and RIMc from the device for acknowledgement. This can be used as a backup for retrieving the RIM. Since the device has used all RIM in secure boot-up, there is a full RIM in the device. If the PVE cannot find the RIM of some component, the PVE sends a list of missing RIMs, together with the T _ PVM, plus a new timestamp, to the SeGW. The SeGW executes a protocol with the device to look up RIMc. The SeGW adds a timestamp to the received RIMc and loads it on the T _ PVM and forwards the T _ PVM token to the PVE. The PVE forwards the found RIMc to RIMman. RIMman then verifies that the received RIMc was sent by the trusted entity and is valid. RIMman performs RIM acquisition on the new RIM, including verification of the RIM stored in the V _ DB. RIMman performs a V _ DB update, which is then notified to the PVE. Thereafter, the PVE can obtain the verified RIM from the V _ DB and then confirm. If the RIM of the component is still lost after the find and acquire steps, the PVE will not request RIMc from the device any more, but rather acquire the RIM from the TTP, as described above. Any RIM obtained from a device or TTP can, as well, verify trustworthiness in the form of a digital certificate.
The trust model between PVM components determines the sequence of operations to obtain the RIM from the device. PVEs do not trust RIM/RIMc from the device but wait for it to enter the V _ DB, which can only be executed by RIMman after checking the trustworthiness of the data. The PVE may also begin recalculating the verification data from the RIM received from the device at the same time that the RIM was acquired by RIMman, but must wait for the RIMman to decide on its trustworthiness.
Since it is sent only inside the CN, the RIMc may be sent in additional messages that are integrity protected. The message containing RIMc must be linkable with T _ PVM.
The process of acquiring the RIM may be performed by an external entity to the device and may be extended to a device and PVM fabric full acquisition process. This can be labeled as distributed RIM acquisition within the PVM structure.
All messages sent from the PVE to RIMman must be limited in format and content to ensure message integrity and mitigate attacks such as false messages. The message must contain a separate URN for the component indicating the location where the reference measurements can be retrieved.
For the failure case F3, the PVE retrieves the policy for the allowed configuration from the configuration policy database C _ DB. The configuration policy database C _ DB contains the allowed configurations according to Dev _ ID. The C _ DB is managed by CPman. The C _ DB may also contain policy actions, such as making desired updates to devices that have been disconnected for a period of time and have not made a determination. The PVE evaluates the policy received from CPman based on the information in Clist. If the evaluation results in either of the failure cases of F3a or F3b, a different operation may be used.
For rejection, the PVE loads a message on the failed configuration policy on the T _ PVM, and adds a timestamp and signature on the T _ PVM and sends it to the SeGW. After that, the SeGW verifies the timestamp (replay protection) and the signature (prevents impersonation of the sender). The SeGW then denies network access and device authentication (and blocks further authentication attempts). If the binding is confirmed and authenticated, the authentication process is stopped.
If Clist is unknown and thus not found in the C _ DB (failure case F3b), or there is no policy for a component in Clist (special case of F3a), then PVE calls CPman to look up the configuration policy from TTP. If CPman can obtain the new configuration policy, CPman updates the C _ DB and sends a message to the PVEs with an indicator indicating the updated configuration policy.
If the update contains a new component (cf. F3a) (by sending a message containing the new component identifier from CPman to the PVE), the C_DB can be kept consistent with the V_DB. The PVE then forwards the necessary information about the new component to RIMman to obtain an updated or new RIM for that component. Here, it is desirable to keep the configuration management and RIM management processes separate from each other, so that the components CPman and C_DB, on the one hand, and RIMman and V_DB, on the other, can be operated independently. If the policy requires an update of the device, the update process is triggered by the PVE.
As an example of a simple policy, the C _ DB may contain a list of allowed configurations. The PVE forwards the received Clist to the CPman, which in turn matches the Clist to the stored allowable configuration. If no match is found, a failure condition is detected F3 b. Since the current validation process may be a revalidation after a device update during device management, the update may need to be checked. During this management process, the device configuration may have changed and may need to be verified against the new configuration in the C _ DB.
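The simple policy match described above might be sketched as follows, where both the reported Clist and each stored allowed configuration are treated as ordered lists of (component indicator, parameters) pairs; this representation, and the example URNs, are assumptions for illustration only:

def evaluate_clist_policy(reported_clist, allowed_configurations):
    # Sketch of the simple C_DB policy: the reported (ordered) Clist is matched
    # against stored allowed configurations; no match corresponds to case F3b.
    def normalize(entries):
        return [(cind, tuple(sorted((params or {}).items()))) for cind, params in entries]
    reported = normalize(reported_clist)
    for config in allowed_configurations:
        if reported == normalize(config):
            return "valid"
    return "F3b"                                  # unknown configuration

# usage (illustrative component indicators):
allowed = [[("urn://vendor.example/os", {"ver": "1.0"}), ("urn://vendor.example/ran", None)]]
print(evaluate_clist_policy([("urn://vendor.example/os", {"ver": "1.0"}),
                             ("urn://vendor.example/ran", None)], allowed))   # -> "valid"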
Described herein is an example of a re-validation process. Once authenticated to the network, a device might never restart unless unscheduled conditions such as a power-down occur. Reconfirming (re-validating) the device may therefore be a regular part of operation. Periodic re-validation can convince the network that the device is operating in a predetermined state, thereby reducing the risk of malicious code being executed. Re-validation also allows the authentication process to be re-initiated, thereby keeping the key exchange fresh and re-establishing the secure communication channel. There are two triggers for device reconfirmation: one triggered by the network and the other by the device. The re-validation methods described herein may be used with any validation method.
Described herein are examples of device initiated reconfirmation. The device boot reconfirmation may be performed on a periodic basis. Depending on the frequency of use of the device, the MNO may set a periodic re-acknowledgement schedule during setup of the device. At the scheduled time, the device will initiate a restart sequence that will trigger a further validation process and authentication. Meanwhile, if the device needs software update, a corresponding OAM process can be started. The re-acknowledgement may be triggered by the CN if the device does not re-authenticate/re-acknowledge within the desired time frame. For reconfirmation, which can only be initiated by the device, the operator has no control over the reconfirmation process. If a large number of devices are operating on the same schedule, such as the first day of the month, the load on the CN structure may increase.
Described herein are examples of network-initiated reconfirmation. As with device-initiated re-validation, network-initiated re-validation may be performed on a periodic basis, but it may also be performed at any time the network deems necessary for security reasons. The operator may also make reconfirmation part of its policy, so that the operator programs a module in the device to perform reconfirmation at programmed time intervals. The re-acknowledgement may be triggered by sending an IKEv2 message to the device indicating a re-acknowledgement request. The notification payload may be used to carry a newly defined re-acknowledgement trigger code for the device.
The PVE may periodically send a re-acknowledgement indicator to the SeGW. In order to keep track of all requests sent, the PVE stores it along with the DEV _ ID and timestamp. The PVE then periodically checks whether any devices ignore the revalidation request. The SeGW may forward the request to the device via the IKEv2 protocol. The reconfirmation message can be set according to the request of the host side when the equipment is installed, so that the risk of equipment interruption is reduced.
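A sketch of how the PVE might track outstanding revalidation requests and detect devices that ignore them is given below; the timeout value, class name, and method names are illustrative assumptions:

import time

class RevalidationTracker:
    # Sketch of PVE bookkeeping for outstanding revalidation requests.
    def __init__(self, timeout=3600):
        self.pending = {}                        # Dev_ID -> time request was sent
        self.timeout = timeout

    def request_sent(self, dev_id):
        self.pending[dev_id] = time.time()       # stored with a timestamp

    def revalidated(self, dev_id):
        self.pending.pop(dev_id, None)

    def ignored_requests(self):
        # devices that have not revalidated in time; these can be reported
        # to the SeGW, e.g. for placement in an isolated network
        now = time.time()
        return [d for d, t in self.pending.items() if now - t > self.timeout]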
The device receives an IKE message whose advertised load indicates a re-acknowledgement request. Thereafter, the device initiates a restart sequence in which the validation and authentication of the network is re-established. If the device is attacked such that the re-confirmation request is ignored, the PVE may detect this in the process of monitoring all active re-confirmation requests. The PVE may send a failed revalidation to the SeGW, which may take appropriate action, such as placing the device in an isolated network.
Another network-initiated method of reconfirmation involves sending a reboot signal to the device, triggering a reboot, so that reconfirmation occurs in a secure boot process.
In another approach, the device reconfirmation may also be performed by requests from other network entities. If the device manufacturer suspects that its device has been subjected to a wide range of attacks, the manufacturer may contact the MNO and request a reconfirmation. This can be done by processing by MNO strategic departments to determine whether to reconfirm. The PVE or HMS may initiate revalidation and reauthentication.
Described herein are examples of platform management. The DMS is the main entity responsible for device management. Based on the received and stored device information, such as vendor, hardware/software configuration/TrE capabilities, etc., the DMS can initiate software updates, configuration changes, and OTA device management procedures. The management operation is typically determined by the acknowledgement data sent, the acknowledgement results from the PVE, and the policy in the C _ DB (e.g., the desired target configuration).
The DMS may establish a secure channel with the TrE of the device. The DMS may use the T _ PVM token to acquire the Dev _ ID, the latest reported confirmation data, and Clist of the device. The DMS interrogates the SeGW using the Dev _ ID to establish a secure channel with the device TrE by sending a T _ PVM with an indicator indicating that the state of the device is set from "working" to "management". Thus, the SeGW saves the token, may not provide a backhaul link (e.g., by quarantining), and waits for the DMS to confirm that the management operation is complete.
According to the management operation of the DMS, the device may be reconfirmed, for example by restarting, after a software update. Reconfirmation may then occur, wherein the state of the PVM system is maintained by using the T _ PVMs from the previous confirmation, and new T _ PVMs may no longer be generated. In this case, the DMS sends an updated T _ PVM token to the SeGW, with the device status indicator changed from "management" to "revalidate". The SeGW keeps a list of devices waiting for reconfirmation, from which it looks up the device when it requests network access. The SeGW may then wait for the device to re-acknowledge for a certain period of time. The reconfirmation result is sent back to the DMS to confirm the successful completion of the management process.
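The device-state indicator carried on the T_PVM during management might be sketched as a small state machine; the state names follow the text above, while the transition table itself is an illustrative assumption:

# Sketch of the device-state indicator carried on the T_PVM during
# platform management (transitions are illustrative, not normative).
ALLOWED_TRANSITIONS = {
    "working": {"management"},          # DMS takes over; SeGW withholds the backhaul link
    "management": {"revalidate"},       # update finished; device must revalidate
    "revalidate": {"working"},          # successful revalidation restores normal operation
}

def change_state(token_state, new_state):
    if new_state not in ALLOWED_TRANSITIONS.get(token_state, set()):
        raise ValueError("transition not allowed: %s -> %s" % (token_state, new_state))
    return new_state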
A need for reconfirmation may arise in the system model of the device. The new component downloaded from the DMS is inserted into the device configuration just before the next secure boot process. Therefore, a reconfirmation needs to be triggered as an end step of platform management. Since the device must be restarted, and if the platform validation is further bound to the platform authentication, the revalidation may include breaking existing connections for platform validation and management. In this case, the SeGW may maintain the state for reconfirmation, as described in the preceding paragraph.
With the secure channel established with the TrE of the device, the DMS may install/uninstall Software (SW) components (e.g., new SW components), change configurations, and trigger reconfirmation.
In another approach, the device may indicate a re-acknowledgement by a flag in the acknowledgement message. This avoids looking up the re-acknowledgement list for every device close to the SeGW. The flag may be set in a security process (e.g., a process executed by the TrE component) so that any device cannot reconfirm by not setting the flag.
This step and the steps above are performed at the SeGW rather than at the PVE, since otherwise the SeGW would automatically generate a new token. In particular, these are protocol steps for device management in which the SeGW must keep track of the reconfirmations required by device restarts. Since the device reconnects and re-authenticates after the restart, the SeGW must remember which devices are to be reconfirmed after a restart; otherwise the SeGW would treat the connection and authentication attempt as a first connection and issue a new token. Maintaining the reconfirmation list is therefore a task of the SeGW.
The continuous use of the T _ PVM in multiple rounds of reconfirmation helps detect recurring update failures and other types of operational anomalies.
If the DMS installs a new component to the device, it needs to be ensured that the RIM for the software is contained in the same management message sent from the DMS to the TrE. The TrE may be responsible for the secure storage of the RIM and its local management. The reconfirmation is triggered by the DMS after the component is installed, if necessary. The RIM for the new software may be sent to the PVE, which stores it in the V _ DB via RIMman. The DMS updates the configuration policy database C _ DB accordingly using CPman. The RIM for the new component can be used in the V _ DB to validate the new configuration by the PVE before the device reconfirms. In the event of a configuration change, for example, if the DMS changes a parameter for a given component, the DMS may update the C _ DB through CPman.
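A compact sketch of this management message and the accompanying database updates is given below; the structure and names are illustrative assumptions only, not a prescribed interface.

```python
# Hypothetical sketch: DMS installs a component together with its RIM and updates the databases.
def install_component_with_rim(dms, tre, component, rim, rimman, cpman):
    # The new RIM travels in the same management message as the component,
    # so the TrE can store it securely for later local verification.
    tre.send_management_message({"component": component, "rim": rim})
    rimman.store_in_v_db(component["cind"], rim)   # PVE can now validate the new configuration
    cpman.update_c_db(component["cind"])           # desired-configuration policy is updated
    dms.trigger_revalidation(tre)                  # reconfirmation after installation, if required
```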
The TrE may provide a secure operating environment for secure update and management functions. This ensures that a compromised device can at least enter a rescue mode if a software or component update fails. In the event of such a failure, the DMS may use a fallback code (FBC) mechanism for device recovery. This enables the device to be returned to a pristine state in which the main code can be updated by the DMS management methods.
To avoid race conditions, a re-acknowledgement may be triggered by a message sent by the DMS to the TrE after token passing. Otherwise, the device may attempt to re-acknowledge before the SeGW receives the token and prepares to re-acknowledge.
In another approach, the SeGW may count "n" for the number of recovery attempts or failed attempts for each device in the re-acknowledgement list, and upon reaching the count, may blacklist, isolate, or trigger the device for maintenance in a field, or a combination thereof.
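As a simple illustration of this counting behavior, the sketch below (hypothetical field names and hook) shows how an entry in the SeGW's reconfirmation list could track failed attempts and escalate once a configured threshold "n" is reached.

```python
# Hypothetical sketch: escalate after n failed recovery/revalidation attempts.
MAX_ATTEMPTS = 3  # "n", an operator policy value

def notify_field_maintenance(dev_id):
    # Placeholder for an operator-specific maintenance workflow.
    print(f"schedule in-field maintenance for {dev_id}")

def record_failed_attempt(entry, blacklist, quarantine):
    entry["failures"] = entry.get("failures", 0) + 1
    if entry["failures"] >= MAX_ATTEMPTS:
        # Policy decides between blacklisting, quarantine, in-field maintenance, or a combination.
        blacklist.add(entry["dev_id"])
        quarantine.discard(entry["dev_id"])
        notify_field_maintenance(entry["dev_id"])
```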
In another way, a communication secret for establishing a secure channel may be included in or may be obtained from the T _ PVM without involving the SeGW.
Another approach is not to deny the connection to the device, but rather to disable those components that cannot be validated, replaced, or updated in PVM. In general, the DMS may send a message to disable the CInd concerned and to reconfirm, which helps to mitigate the risk of an operator lock-out as described below. The PVE can thus be used to prevent a "contention for trust" between devices and operators. Various methods for preventing such contention are available. In one example approach, a device component may be disabled by forcing a reconfirmation that does not include that component in the Clist. This may be used when a valid update is not yet available. In another approach, a change of the loading order may be forced. In another approach, a change of a parameter is forced, which may or may not affect the RIM. Forcing a parameter change requires the DMS to obtain the necessary information about all device components from the PVE, not just about those that failed validation.
In PVM, it is generally not necessary to send RIM certificates to the devices. In the existing PVM architecture, their verification and management is a task of the operator network, where it resides with RIMman. Because the device trusts the network, the device can trust the RIMs and CInds received in the management process. On the other hand, the Trusted Computing Group (TCG) Mobile Phone Working Group (MPWG) defines RIM acquisition by trusted devices as a decentralized procedure, in which the devices also verify (before installation) the acquired RIM certificates, protected by the MTM. The two forms are not mutually exclusive. The DMS may send RIMs along with other data, and a TCG MPWG-compliant device may install them according to the TCG specifications. This is the difference between device management as defined by PVM and by the TCG MPWG for secure boot.
Examples of verification data are now described. Sending verification data, e.g., in the form of PCR values (which are aggregated hash values of the individual measurements), and cryptographically binding the verification data to the authentication, are techniques provided by the TCG specifications. However, establishing verification data and performing the binding computations according to the TCG specifications is costly, especially for devices with many components to be measured. The cost is mainly due to the cryptographic extend operation, which basically creates a new hash value from two old hash values. This can significantly slow down the start-up process of the device, which is undesirable in a home environment, for example.
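The extend operation referred to here can be sketched as follows; this is a generic illustration of the hash-chaining principle used by TCG-style PCRs, not an excerpt from the TCG specifications, and SHA-1 is chosen only for brevity.

```python
import hashlib

def extend(pcr_value: bytes, measurement: bytes) -> bytes:
    # New PCR value = Hash(old PCR value || measurement of the loaded component).
    return hashlib.sha1(pcr_value + measurement).digest()

# Example: aggregate the measurements of three components into one verification value.
pcr = bytes(20)                      # PCR starts at all zeros
for component_code in (b"loader", b"os_kernel", b"mgmt_client"):
    measurement = hashlib.sha1(component_code).digest()
    pcr = extend(pcr, measurement)
```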
Meanwhile, there is redundancy between the RIMs and the verification data, since both convey essentially the same information about the measurement results. If secure boot is performed correctly, the TrE compares the measurements to the RIMs and loads only the components that match. Thus, the RIMs designated in the Clist carry all the information contained in the verification data. In fact, since the RIMs are the reference values for the measurements, they may carry more information than the verification data, which is only a cryptographically condensed form of the actual measurements. In one embodiment, PCR values may be used as verification data.
The basic assumption underlying this argument against verification data is that the secure boot process correctly compares the actual measurements with the RIMs designated in the Clist. Therefore, there is essentially one security-relevant case in which verification data can increase the trust placed in the validating device: when the device holds an incorrect RIM, or when a measurement is compared against a forged RIM.
A further insight supports the retention of verification data, that is, data generated with high protection during the secure boot process that uniquely identifies the system state that was reached, even in the case of secure boot or of unary (monadic) methods such as AuV. In fact, secure boot by itself does not protect the integrity of the other validation data, which in this case may simply be a list of components without measurements. The CId list also tells the validator where and how to obtain trust information about a component (e.g., from a TTP).
An attacker may attempt to manipulate the validation data (Clist) by replacing the identifier of a component with lower assurance with the CId of a component with a higher (obtained) "component trust rating". The validating device (TrE) would sign the forged data and perform a formally correct validation; if the data is not verified, there is no way to detect the manipulation internally.
A way to mitigate this attack to some extent is for the secure boot engine to bind the data to the static system state (the up-to-date PCR data) by sealing the data to that state. Upon confirmation, the data needs to be unsealed, so the same security gap opens again. Also, the flexibility of this approach is limited, since the system must remain static after the SML has been sealed.
Thus, both the device and the validator benefit from keeping verification data alongside the validation data, even in the case of secure boot.
As used herein, "verification data" means data resulting from further processing of the raw measurement data (e.g., by hashing), which is then compared with the RIMs to find a match. Upon completion of secure boot, the verification data uniquely identifies the platform state. A wrong RIM may originate, for example, from a compromised source; such an event can affect the PVM system as a whole and therefore poses a significant risk.
One particular scenario is where a trusted RIM source located outside the operator CN is compromised, e.g., taken over or spoofed by another party. Before the attack is detected and corrected, the RIM source may distribute forged RIMs for the affected components to a large number of devices through normal PVM platform management.
In this case, the usual remedy (i.e., common practice in a Public Key Infrastructure (PKI)) is to revoke the corresponding RIM certificates. This places a burden on the devices, since the trusted reference data is located on the devices. Such a TRV revocation may force updates of RIMs, RIMcs, and components across entire device populations, although only a small fraction of them is actually affected by the attack. This causes a large amount of network traffic and inconvenience for the users. The devices must support mechanisms and protocols that enable such authorized TRV modifications to be performed.
In such a case, verification data may be generated and used in the validation. The PVM system may invoke the use of verification data for each individual validating device according to policy. The PVE can thereby detect the affected devices and manage only those devices. This is referred to herein as a "minimum acknowledgement policy".
An example of PVM operation based on token passing is now described. PVM as described herein is an asynchronous process. A PVM system, which comprises various entities, therefore has multiple states, and it should be able to recover from any current state of the process in order to mitigate the known attacks on distributed systems and their failure states.
In one example, token passing may be used as follows. The SeGW may be configured as the entity responsible for generating and managing a token that is uniquely associated with a validation process. The PVM token can be bound not only to the identity of the validating TrE, but also to the single validation process in question. The token passing method provides replay and revalidation protection. The unique association of a token with a validation attempt prevents replay of old validations and provides a method of detecting DoS attacks through frequent repeated validations. A validation session is established by the token, allowing a unique association between PVM-related data and messages and a single validation. This is also a prerequisite for assessing freshness.
The freshness of the validation data can be controlled, since the validation token can be built from a timestamp (not necessarily signed) that is initially generated by the SeGW and to which each entity appends its own timestamp in a time-ordered list as the token is passed on.
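A minimal sketch of such a time-ordered list is shown below; the token layout and field names are assumptions made only for illustration.

```python
import time

def new_token(segw_id: str, dev_id: str) -> dict:
    # T_PVM bound to a single validation of a single device/TrE.
    return {"dev_id": dev_id, "visits": [(segw_id, time.time())]}

def pass_token(token: dict, entity_id: str) -> dict:
    # Each entity (SeGW, PVE, DMS) appends its own timestamp when it handles the token,
    # so the freshness and routing history of the validation can be checked later.
    token["visits"].append((entity_id, time.time()))
    return token
```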
Another method of adding freshness may be to read the time from a secure Real Time Clock (RTC) immediately after the RoT is loaded and use the timestamp to establish the aggregated hash chain. Another alternative may be to use a sequential counter that is incremented on each restart, which the RoT can use to establish the hash chain.
Yet another approach to adding freshness is to complete the phase 1 and phase 2 checks, start communication with the SeGW and PVE, and then use a nonce provided by the SeGW/PVE to bind the phase 3 checks to it before sending the phase 3 confirmation data to the SeGW. This ensures the freshness of the validation data.
Continuous use of the T_PVM over multiple rounds of revalidation, as in standard PVM, helps to detect recurring update failures and other types of operational anomalies. The SeGW may detect and act upon various conditions of the validation token. For example, a token that remains active for too long may indicate a general failure of the PVM process. The SeGW may query the state of the token at the PVE and DMS and act according to that state. This condition may be flagged as a validation timeout. In another example, a revalidation may occur while the token is still active. This may indicate various conditions such as an unexpected restart, a power outage, or a DoS attack. In another example, temporal patterns, such as random or periodic behavior, may be detected and fed into an Intrusion Detection System (IDS). The device may be quarantined or blacklisted, and in-field maintenance may also be triggered.
Tokens may also be used to protect the integrity of data passed between PVM system entities and between the PVM system and the devices. To this end, a token may contain a hash value of the data to be protected, e.g., the Clist, or the list of missing RIMs when handling failure case F2a, together with a pointer to the data. The data objects are not carried in the T_PVM as a whole, because this would overload it, resulting in a large amount of overhead and, in fact, opening the door to a specific DoS attack.
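The following fragment illustrates the idea of carrying only a digest and a pointer in the token rather than the data itself; the field names and the use of SHA-256 are hypothetical.

```python
import hashlib, json

def attach_data_reference(token: dict, clist: list, storage_uri: str) -> dict:
    # Protect the integrity of the (externally stored) Clist without bloating the token.
    digest = hashlib.sha256(json.dumps(clist, sort_keys=True).encode()).hexdigest()
    token["clist_ref"] = {"uri": storage_uri, "sha256": digest}
    return token

def verify_data_reference(token: dict, clist: list) -> bool:
    # Any entity receiving the token can check the referenced data against the digest.
    digest = hashlib.sha256(json.dumps(clist, sort_keys=True).encode()).hexdigest()
    return token["clist_ref"]["sha256"] == digest
```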
An example of an operator RIM masking method is now described. Operator RIM masking replaces the large number of RIM certificates for device components issued by various external sources with RIM certificates generated by the operator (or, equivalently, the "selected home operator" (SHO)) with which the device wishes to establish a backhaul link. As long as such SHO RIM certificates (SHORIMc) are available for a component in the V_DB, they take precedence over the external RIM certificates in validation. In device management, SHORIMc are installed by the DMS on the device, where they likewise take local precedence over the external certificates in the secure start-up performed by the TrE.
The SHORIM may be used as a "first level cache" for obtaining RIM certificates in the validation. It may be associated with a special CInd that essentially points to a technically separate, high performance V _ DB sub-database.
Operator RIM masking is applicable to any type of highly mobile device, such as M2M devices (M2ME). When a mobile device enters the realm of a new operator and validates, that operator may be presented with CInds pointing to another operator. The new operator may accept those CInds, in a manner similar to mobile device roaming, or may substitute its own indicators as described herein.
In another variant of operator RIM masking, if the SHO decides not to release the public part of the signing key used to generate its SHORIMc, it becomes difficult, or even impossible, for another operator to validate device components coming from that SHO. This mechanism can be extended to provide the same level of locking as the traditional SIM-lock process. Operator RIM masking may also be used as a life-cycle management tool to remotely "tag" a device when it is first deployed in the field and first contacts the SHO.
To establish operator RIM masking based on PVM, the following additional steps are described with reference to the basic PVM procedure described above. In the PVM setup for RIM masking, RIMman configures the PVE and the DMS to perform their respective functions in operator RIM masking. In platform validation, the PVE sends (alone or together with the message regarding component validity) a message containing the list of components referred to by SHORIMc currently in the V_DB. The DMS is configured to perform a certificate update operation on those components in the device for which new SHORIMc are to be installed (without updating the components themselves).
During validation, the PVE marks the components for which no SHORIMc is present in the V_DB (independently of the availability of any RIM and RIMc for these components, e.g., in the normal PVM process). The PVE sends the marked list of candidate components for operator RIM masking to RIMman; the list contains the CInds and the actual RIMs, which RIMman needs in order to generate the corresponding SHORIMc (typically by signing the RIMs). RIMman determines, according to locally available policies, to which components in the received list operator RIM masking is applied.
RIMman generates SHORIMc for these components by signing the individual RIMs. Certificate parameters, such as the validity period, are determined according to local operator policy. RIMman generates SHOCInds that point to the SHORIMc in the V_DB. RIMman adds the new SHORIMc and SHOCInds to the V_DB. In one embodiment, all "old" data, such as the original CInds and RIMcs in the V_DB, is retained for future traceability and rollback. RIMman sends a list of (CInd, SHOCInd) pairs to the DMS, instructing the DMS to push RIM indicator updates to the devices in question. The DMS sends RIM indicator update messages and the SHOCInd (and, optionally, SHORIMc) data to the device TrE as part of normal device management, but without component updates. With this message, the DMS may require the device to use only the SHOCInds in subsequent confirmations.
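The masking steps can be summarized in a short sketch. The example below uses an HMAC as a stand-in for a full certificate signature purely for brevity; in an actual deployment RIMman would issue proper SHORIMc certificates with validity periods set by operator policy, and all names here are illustrative assumptions.

```python
import hmac, hashlib

def make_shorimc(operator_key: bytes, rim: bytes, validity: str) -> dict:
    # Hypothetical stand-in for a SHORIMc: the operator "signs" the original RIM.
    tag = hmac.new(operator_key, rim + validity.encode(), hashlib.sha256).hexdigest()
    return {"rim": rim, "validity": validity, "sig": tag}

def apply_rim_masking(operator_key: bytes, candidates, v_db: dict):
    # candidates: list of (CInd, RIM) pairs selected by RIMman policy.
    indicator_updates = []
    for cind, rim in candidates:
        shorimc = make_shorimc(operator_key, rim, "2025-12-31")
        shocind = "SHO:" + cind          # new indicator pointing into the SHO sub-database
        v_db[shocind] = shorimc          # old CInd/RIMc entries are kept for rollback
        indicator_updates.append((cind, shocind))
    return indicator_updates             # handed to the DMS for the RIM indicator update
```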
Which operations are performed on the device, in addition to installing the SHOCInd (and possibly the SHORIMc), depends on local policy. A prudent device will keep its original manufacturer CInds and possibly the corresponding RIMcs. For flexibility, the device may attempt to keep, for each component, a number of different CInds from the various operators and from the original component manufacturers or certifiers.
The DMS may force the device to perform a state reconfirmation. When a device's RIMc update fails, a state revalidation is required to avoid looping operations.
An example of operator component locking is now described. As an extension of operator RIM masking, the operator may be able to control and restrict the operation of devices or of their components in external networks. This can be achieved with operator component locking as follows. The part of the component that is to be locked is encrypted by the SHO using, for example, a symmetric key. Operator RIM masking is applied to the component so modified. The decryption key is placed in a protected and controlled space in the TrE (or UICC) that can only be accessed with SHO authorization. Upon validation, when the PVE encounters the SHORIMc for the component, the SHO sends the authorization data to the TrE. The encrypted portion of the component is then brought into the TrE's secure execution space, where it is decrypted and executed.
Thus, SHO-locked components can only operate when the device validates toward that particular SHO, and the same device cannot successfully validate toward another operator.
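The locking idea can be sketched as follows. The example uses the symmetric authenticated encryption provided by the Python "cryptography" package only as a stand-in for whatever cipher the SHO actually employs; the function names and the authorization flag are illustrative assumptions.

```python
from cryptography.fernet import Fernet

def lock_component(component_part: bytes):
    # The SHO encrypts the critical part of the component; only the SHO holds the key.
    key = Fernet.generate_key()
    locked = Fernet(key).encrypt(component_part)
    return key, locked      # the key is provisioned into the TrE's protected space

def run_locked_component(key: bytes, locked: bytes, sho_authorized: bool) -> bytes:
    # Inside the TrE: decryption only after the SHO has sent authorization data
    # (e.g., upon the PVE encountering the SHORIMc during validation).
    if not sho_authorized:
        raise PermissionError("component is locked to the SHO")
    return Fernet(key).decrypt(locked)
```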
In another form the decrypted portion is released for operation outside the TrE. This is weaker in terms of security than the previous approach, since all components can be restored by dumping (dump) the device memory. With the complete component obtained, the RIM can be regenerated and confirmation of another operator can be successfully made, thus unlocking.
Another way to achieve component locking does not require a cryptographic secret to be managed in the TrE or protected by another security element, such as a Universal Integrated Circuit Card (UICC). Operator-unique RIMs, SHORIMc, and CInds may be generated by applying modifications to the component itself. Techniques from the fields of code obfuscation and watermarking can then be applied to this data.
A concern with operator component locking is that a roaming operator might capture equipment components, or entire devices, of another operator. The TrE-based approach described above is expected to protect the device and the remaining stakeholders from such capture. Essentially, the device should alert the user, the hosting party, and the original SHO to such a process and enforce policies as to when locking is and is not allowed.
The example method now described is for individualizing devices in device management by using PVM, in relation to features of a particular PVM system and operator. Devices managed by a PVM are in a trusted state associated with that particular PVM system and managing operator. When a roaming device enters the realm of another PVM system and operator, a problem can arise in that the device is required to prove who previously managed its configuration and trustworthiness. One way to implement such an individualized measure, so that the device can provide this evidence to the other party, is to include in signed data an addressing of the device issued by the managing operator. The individualization of a message is then attested by the deliberate signature of the sender. One method is to include a Dev_ID issued by the operator in the signed data. Any party receiving the signed data may then consider the corresponding message and its contents as designated by the signing operator for that particular device. This is valid as long as the receiving party trusts that the signing operator correctly verified the device's authentication (e.g., by the Dev_ID). If such trust is not warranted, the signing operator may instead sign the entire authentication certificate of the Dev_ID. The signed data may also include the actual RIM, although this would effectively create another RIMc, extended by the Dev_ID, and thus add a certain redundancy.
Two efficient methods of establishing individualization based on PVM are now described. In one approach, RIMman includes the Dev_ID in the SHORIMc; this is only viable when RIMcs are kept on the device, so that the SHORIMc (including the Dev_ID) is stored inside the device. In another approach, RIMman or the DMS applies an operator signature to the (Dev_ID, CInd) pair and, if SHOCInds are used, the same operator signature to the (Dev_ID, SHOCInd) pair.
An example of blacklisting devices is now described. A blacklist may be established for devices, and network access is denied based on the blacklist. The blacklist may include at least the Dev_ID and, optionally, TrE information (certification information, structure, manufacturer, model, serial number). Such a list is typically accessible by the DMS. In one embodiment, each MNO manages its own blacklist, and the DMS can access the list or database. The Dev_ID is used to query whether a particular device is blacklisted; if so, network access for the device is denied. In another embodiment, a common blacklist may be maintained in which each MNO lists malicious devices, and the database is readable by all MNOs. It must be ensured that each MNO can blacklist only its own devices, while all MNOs can read all entries. Such a common database requires more administration and maintenance effort. The above embodiments may also be combined.
When the PVE receives the token T_PVM, the PVE timestamps it and forwards it to the DMS, which retrieves the Dev_ID, and optionally the TrE information, from the token. The DMS queries the blacklist using the Dev_ID (and the TrE information, if needed and present). If the device is on the blacklist, the DMS sends a message containing the T_PVM and marking the device as blacklisted to the SeGW. The message may contain a timestamp of the DMS. The SeGW may then deny the connection to the CN.
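A minimal sketch of this lookup, with hypothetical message fields and a generic send interface, is shown below.

```python
import time

def handle_token_at_dms(token: dict, blacklist: set, segw):
    dev_id = token["dev_id"]
    tre_info = token.get("tre_info")            # optional make/model/serial data
    token["visits"].append(("DMS", time.time()))
    if dev_id in blacklist:
        # Inform the SeGW that this device is blacklisted; the SeGW denies the CN connection.
        segw.send({"t_pvm": token, "verdict": "blacklisted", "tre_info": tre_info})
    else:
        segw.send({"t_pvm": token, "verdict": "allowed"})
```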
Another way may be implemented by using the extended information of the TrE information field. It may blacklist a particular vendor, model, serial number range, etc. Depending on the complexity of the blacklist behavior, a local MNO centric approach may be easier to implement than a centralized blacklist.
An example of whitelisting devices is now described. A white list may be established for the device based on which network access is allowed. The white list typically includes at least the Dev _ ID, optionally TrE information such as structure, manufacturer, model, serial number. Such a list is typically accessible by the DMS.
When the PVE receives the token T _ PVM, the PVE timestamps the T _ PVM and forwards it to the DMS. The DMS can obtain the Dev _ ID, and optionally TrE information, from the token. The DMS queries the white list by using the Dev _ ID (optionally using TrE information if needed or present). If the device is in the white list, the DMS sends a message containing the T _ PVM as a white list entry to the SeGW. The message may contain the time stamp of the DMS. The SeGW may then allow connection to the CN.
Another way may be implemented by using the extended information of the TrE information field. It may whitelist a particular vendor, model, serial number range, etc. Depending on the complexity of the white-listing behavior, a local MNO-centric approach may be easier to implement than a centralized white-listing. In addition, regulators may require MNOs to maintain black lists, rather than white lists.
In another embodiment, each MNO maintains a white list or database, and the DMS can access the list. The Dev ID is used to query whether a particular device is whitelisted. The device is then authorized for network access.
In another embodiment, a generic white list may be maintained in which each MNO lists its own trusted device, and this database is readable by all MNOs. It must be guaranteed that each MNO can only whitelist its own devices, but all MNOs can read all entries. Such a general database requires more management and maintenance work. The database of the generic white list device may require additional trusted relationships to be established between the MNOs. Devices that MNO a believes to be trusted can be whitelisted and can enter MNO B. This requires a standard and/or certified device validation process to compare the trustworthiness levels of the devices. Alternatively, the above-described modes may be combined.
An example of an isolated network for devices is now described. Establishing an isolated network for devices may require additional changes to the operator network. In this new network, the SeGW still acts as the enforcement barrier in front of the CN; the SeGW decides which devices to quarantine.
Devices in quarantine have no direct access to the CN and provide no, or only limited, services to users. Validation occurs when the PVE evaluates the validation data. New operations may be triggered based on the evaluation result. For example, the device may be considered trusted and may be connected to the CN. In another example, a device may be considered compromised and unrecoverable; the device is blacklisted and further connection attempts are blocked. In another example, the SeGW forwards the validation result to the DMS together with the Dev_ID and TrE information. The DMS may provide the appropriate updates/software changes to restore the device. The SeGW may be notified of the update and may then trigger a revalidation of the device. If the update is applied successfully, the revalidation succeeds and network access can be granted.
The above-described blacklisting method may be used in conjunction with the isolated network. This enables the operator to make use of the connection to the device where possible, for example to provide updates via OTA. Alternatively, the blacklist may be used to block devices altogether, for example if a device cannot be recovered by OTA measures. Such devices must be earmarked for in-field replacement or service.
Other devices may be placed on a gray list and quarantined. Devices contained in the gray list are, for example, devices newly joining the network (from another MNO); devices that have not yet been connected for a sufficiently long time; devices with suspicious capabilities; and devices for which security warnings exist (issued by the vendor or by an independent party).
An example of parameter validation is now described. During PVM, the validation data may depend on the configuration parameters of the loaded components. Since these parameters may change frequently and may differ even between two otherwise identical devices, the basic implementation of PVM allows clear-text transmission of the parameters during validation. However, this requires a complete database of parameters to be maintained and recorded at both the device and the validator ends. This has the following effects: 1) parameter settings can take up a lot of storage space and slow down the validation process when they are evaluated; and 2) extensive storage and evaluation of per-device parameters can expose too much of the device configuration to third parties, resulting in information leakage.
One method of including parameters in the PVM process is based on extending the hash values, i.e., combining a hash of the parameters with the measurement of the component. A parameter digest value is generated by ordering the component parameter values and rendering them in a binary format, and the component's existing measurement is then extended with this parameter digest. Thus, for validation, all measurements and reference values, RIMs and RIMcs, can be processed in the same way, so that parameter validation can be realized in a variety of ways.
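A sketch of the parameter-digest construction is shown below; the choice of SHA-256 and the ordering/serialization rules are illustrative assumptions, not a prescribed encoding.

```python
import hashlib

def parameter_digest(params: dict) -> bytes:
    # Order the parameters deterministically and render them in a binary form.
    serialized = b"".join(
        k.encode() + b"=" + str(v).encode() + b";" for k, v in sorted(params.items())
    )
    return hashlib.sha256(serialized).digest()

def extend_measurement(component_measurement: bytes, params: dict) -> bytes:
    # Fold the parameter digest into the component's existing measurement, so that
    # measurements and reference values can be compared the same way with or without parameters.
    return hashlib.sha256(component_measurement + parameter_digest(params)).digest()
```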
A similar problem exists in the relationship between certificates (e.g., X.509) and attribute certificates (where parameters are treated as attributes), referred to herein as "rigged hash values of attribute certificates"; it can likewise be addressed by including the parameters in the reference measurements and RIMcs.
An example of diagnostic validation is now described. One embodiment of validation based on the PVM concepts allows selected components to be loaded on the device without a RIM. A particular component may be loaded if it is not essential to the security of the device and is considered sufficiently safe, but the network needs to be made aware of the change. In another embodiment, the MNO establishes a policy that certain components are only measured by the device (e.g., because they change frequently) and are validated by the network. Also, loading and measuring unknown components, and handing the validation task to the network, may be the default behavior of the device. The network may then quarantine the device and initiate remote OAM remediation of the device, for example returning it to a pristine state, removing a component, or taking other action.
An example of PVM diagnostics for failure case F2a is now described. When the PVE cannot send the DMS a list of the components that caused the F2a failure, the failed components can be located as follows. For example, the device may not maintain an SML that could be presented to the PVE for comparison with the RIMs. In this case, the DMS cannot replace the failed components in the device, because it does not know which they are; instead, it replaces all components in the Clist with the correct components during normal management. Upon restart and revalidation, the device includes in the validation message the list of components that were not loaded, since those components failed internal verification. The PVE can then perform the diagnosis by comparing the Clist from the previous validation with the updated Clist. Components that are no longer loaded during secure boot, once verified locally against the correct RIMs, are exactly the components that actually need replacement. These components can then be replaced in a second management cycle.
If the device reports that a component cannot be loaded (e.g., the RIM is missing or wrong) and sends the measurement for that component to the CN, another method may be used for diagnostic validation. Depending on the MNO policies, an OAM repair process may be triggered to remove or repair a component. In another approach, if the TrE detects a RIM loss or error of a component, the device is allowed to directly request OAM repair.
Another approach may disable components that cannot be validated in the PVM and cannot be replaced/updated, rather than denying the connection to the device. In this case, the DMS may send a "CInd disabled" message for the component and trigger a re-validation of the device. This may be applicable to situations where unknown components are loaded.
Another approach may be for the DMS to indicate which components are allowed in a particular device. If the device loads and validates all components during secure boot, including components that are not allowed (e.g., because a security breach was recently discovered but no updates are yet available), the DMS may send a message to the device via the SeGW that causes the device to disable the component. The device is required to reconfirm. If the component is not loaded during the revalidation, the DMS informs the SeGW of this and the SeGW then allows the authentication/validation to be completed.
An example of a minimum acknowledgement policy is now described. Since the component measurements at boot time (e.g., extending PCR values and writing measurements into the SML from which the Clist is generated) may cause some delay in the boot process, the minimal validation mechanism requires the device to send verification data only under certain circumstances. Since the RIMs and the stored measurement values (e.g., PCR values) carry partially identical, redundant information, eliminating this redundancy can save messages and storage capacity.
If a local integrity measurement, verification, and enforcement process (e.g., secure boot) can be relied upon on the device, it is sufficient to send only the RIMs used in this local verification process, since the verification data (e.g., PCR values) would contain the same information as the RIMs themselves. Minimal validation may therefore send no verification data, but only the reference values used in the local verification process. In another approach, if and only if a RIM has a unique identifier, the RIM itself is not sent but only an indicator referring to it.
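The difference between a full and a minimal validation message can be illustrated as follows; the message structures are hypothetical and serve only to show which fields are omitted in the minimal case.

```python
def full_validation_message(dev_id, clist, pcr_values, sml):
    # Regular PVM: verification data (PCR values, SML) accompanies the component list.
    return {"dev_id": dev_id, "clist": clist, "pcr": pcr_values, "sml": sml}

def minimal_validation_message(dev_id, rim_indicators):
    # Minimal validation: only references to the RIMs used in the local verification
    # (secure boot) are reported; no PCR values or SML are sent.
    return {"dev_id": dev_id, "rim_indicators": rim_indicators}
```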
Two conditions for minimum acknowledgement include: 1) the local measurement, verification and enforcement (MVE) process is trusted; and 2) the RIM sources for the RIMs stored on the device are trusted. The verification data of the local MVE process may be reported to an external entity for evaluation; this serves the explicit establishment of trust. Alternatively, the MVE process may be implemented in such a way that it cannot be tampered with; the fact that the device later reports the RIMs then indicates that the MVE process is trusted. This serves the implicit establishment of trust.
RIM certificates issued by vendors, other MNOs, TTPs, and others may also be sent in order to evaluate the trustworthiness of the reported RIM. The RIM is considered trusted if the issuer of the RIM certificate is trusted. If any of the reported RIMs is not trusted, then action may be taken, such as placing the device in an isolated net or blacklisting.
The redundancy between RIMs and verification data may be exploited to increase efficiency. For example, a device may be required to transmit verification data only under certain circumstances, or only with a certain frequency: for instance, if the PVM system has detected a compromised RIM, if a new device roams into the operator's realm, or if the SHO has not seen the device for a period of time. In another example, verification data may be required only once every "N" validations.
An example of correction (remediation) for PVM is now described. Both corrections and software updates are operations required for the device to remain in service. There are many reasons why a device may need correction. In addition to normal software upgrade maintenance, bug fixes, and enhancements, correction may be part of the general security processes integrated into the device. During the validation process, the software on the device is measured and its integrity is verified. The measurement is compared to the RIM located in the TrE. If the verification fails, either the code has been tampered with, or the RIM is incorrect for that particular code base. A correction procedure may be initiated to update the code base or the RIM to ensure proper validation of the device.
If the device integrity check on one or more components fails, this indicates that either the components are under attack or that the corresponding trusted reference value is not consistent with the code base on the device. A modification procedure may be initiated that indicates to at least the CN that the device is not able to authenticate with the SeGW, while also facilitating network-initiated updates to the code base or to a new trusted reference value corresponding to the installed code base. Corrections can be made between the DMS and the device via the SeGW.
Some common safety requirements are applicable for initiating any corrections. These requirements are determined by the phase of the secure boot process at which the failure occurred. The worst case considered is a failure occurring at phase 2 of the secure boot, indicating that a TrE is established but no connection is made with an external entity. Therefore, in this case, the device cannot request correction in normal startup. Additional codebases, such as FBC, may be safely loaded into the TrE for correction. The security of such a process is characterized by: 1) FBC can be loaded into TrE completely and unchanged; 2) the TrE can safely execute the FBC; 3) communications with a network entity (e.g., a DMS) for making modifications are protected from integrity and confidentiality; and 4) protecting credentials used to modify the access network throughout the process. Alternatively, the FBC is not loaded to the TrE. The FBC may coexist with the TrE, e.g. as another (trusted) code library for separate correction purposes. Trust can be generated for the FBC since it is stored in a secure memory or protected by HW security secrets. Thus, the TrE does not need to operate the FBC. The FBC may be stand-alone and may operate directly without establishing a TrE.
An example of device-initiated correction is now described. Within the scope of device validation, correction can be an alternative to immediately quarantining a device when an error is detected. In the case of autonomous validation, the TrE is the first part to be verified. If it verifies correctly, this indicates that the device has reached a predetermined secure boot state. It can therefore be assumed that the TrE is reliable and that the RIMs stored in the TrE are authentic. However, this does not indicate that the RIMs are correct for the particular version of the code currently loaded on the device.
An example of network-initiated correction is now described. In the case of autonomous validation, if the device validation process fails, the FBC may be started, triggering a software update of the main code base containing the RIMs. The device may send an IKEv2 message with a Notify payload indicating that the device is running in fallback mode and needs immediate correction.
For the semi-autonomous validation method, the correction process does not require a full update of software or Trusted Reference Values (TRVs). When the device passes the validation of phases 1 and 2 but the phase 3 validation fails, information relating to the failed module may be returned to the PVE in the notification payload or certificate of the IKEv2 protocol. If the failed module is not deemed important by the PVE, validation and authentication may proceed while the failed module is disabled/uninstalled. However, if the failed module is important, the PVE may send information to the DMS indicating that a modification is needed.
Another situation is that the RIMs stored in the TrE are not correct for a particular code base. The failed measurements can be returned to the PVE, where analysis of the information shows that the error lies in the RIMs, and only these values need to be securely updated in the TrE.
Examples and embodiments for distress signals and fallback codes are now described. The device may have a fallback code (FBC) map that is intended to facilitate the device to make corrections when the device integrity verification fails. The FBC may be stored in a secure memory, such as a Read Only Memory (ROM). The FBC may be invoked if the local device integrity verification fails. The FBC may contain at least all necessary functions, methods and certificates needed for communicating with the entity in the CN responsible for amending the affected devices. Also, the FBC may also contain functionality for receiving all software updates from the network. A special "correction" DMS may also be considered.
The device and TrE may perform the following correction indication procedure when the device integrity check fails. First, the TrE may initiate execution of trusted code, referred to as the fallback code (FBC). The FBC may be stored in a secure memory, such as a ROM. Second, the FBC establishes a secure connection with the pre-designated "correction" DMS. Third, the FBC sends a distress signal, which may include the device ID, to the DMS. Upon receiving the distress signal, the DMS knows that the device has failed, for example, an integrity check and requests maintenance. Alternatively, the DMS may initiate a complete firmware update process upon receiving the signal, or perform diagnostics and carry out a partial code/data update.
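The three steps of the correction indication procedure can be summarized in the following sketch; the transport, the object interfaces, and the message format are placeholders for illustration only.

```python
def correction_indication(tre, fbc_image, dms_address, dev_id):
    # Step 1: on integrity-check failure the TrE starts the fallback code (FBC),
    # read from secure memory (e.g., ROM).
    fbc = tre.load_trusted_code(fbc_image)
    # Step 2: the FBC establishes a secure connection to the pre-designated "correction" DMS.
    channel = fbc.open_secure_channel(dms_address)
    # Step 3: the FBC sends a distress signal carrying the device ID, after which
    # the DMS may start a full firmware update or run diagnostics for a partial update.
    channel.send({"type": "distress", "dev_id": dev_id})
```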
An example of validation that does not require RIMs is now described. Validation without RIMs may comprise securely transferring the component code, under the control of the loading TrE, to a secure memory, such as a secure memory card. Validation without RIMs may also comprise replacing digest values by encryption, i.e., storing the components, e.g., the code, in encrypted form in ordinary memory. The encryption may use a symmetric key, which may be protected by the TrE and shared with the DMS, or keys derived from an asymmetric cryptographic algorithm, where the DMS and the TrE hold a public/private key pair. Targeted modification of the encrypted code is then not possible. Since decryption of tampered data yields meaningless results, any manipulation of the code is detected upon decryption, for example during secure boot. Detection of such changes may be supported by including a digest value in the encrypted code. Further options, such as error correction codes, may also be used.
An example of including location-based information in the validation process is now described. Some devices may be used in applications where location-based information is of great importance, such as theft prevention, cargo tracking, fleet monitoring or surveillance. Typically, the device may be equipped with a Global Positioning System (GPS) module to provide geographic location data. The secure boot may also include a GPS module and components to ensure trusted generation and storage of location-based information. The location information may additionally be securely stored in TrE secure memory. The location information may then be included in the acknowledgement message. The message may be used, for example, to: if the reported location does not correspond to the desired location, the device configuration may be changed through the OAM process. If the device reports a new location, its configuration may be changed to have it connect to the network using different parameters, triggering a software event (e.g., login, report, or power off). It may be assumed that the location information is securely operated by the trusted application.
Applications and embodiments of PVM in the context of H(e)NB and M2M are now described, which provide a mapping from the generic PVM architecture to existing standardized network entities, protocols, and mechanisms. Both applications exhibit special security requirements. They have in common that i) unlike mobile phones, viewed as a mature, classical technology, the devices are no longer closed, immutable environments for storing and processing sensitive data; and ii) typically, these special devices are controlled by interested parties other than the Mobile Network Operator (MNO) and are connected to the core network only through intermittent and insecure links.
The first application relates to h (e) NBs, known as femtocells. H (e) NB is a small portable access point that provides terminal devices (e.g., mobile phones) with connectivity to the 3G network. H (e) NB is typically located indoors, or in the premises of the interested party, called the Host Party (HP). The HP functions as a mediator of mobile communications and services within a small designated geographical area. The HP may be used to provide mobile services in areas that are currently inaccessible (due to poor radio conditions), such as indoors or in factory environments. Since h (e) NB can act as a unified access point to the broadband internet and mobile networks, it is also an option for private homes or in-home office (SOHO) sectors.
In the H(e)NB usage environment, three interested parties, the user, the HP, and the MNO, are linked together by service-level and usage agreements. The H(e)NB stores a large amount of sensitive data, such as the HP's authentication data for the mobile network, the list of Wireless Transmit/Receive Units (WTRUs) or User Equipments (UEs) allowed to connect to the H(e)NB, stored as a Closed Subscriber Group (CSG), and an Access Control List (ACL). Some of this data may be private to the HP and/or the user. Meanwhile, the location of the H(e)NB needs to be controlled to protect the mobile network from interference and to prevent illegitimate extension of services.
Fig. 7 illustrates an example communication environment between an h (e) NB 705, a WTRU or UE 710, and an operator core network 730. It introduces two network entities, one responsible for security and one responsible for serving the h (e) NB. Operations, administration, and maintenance 735(OAM) is a function located in the core network backhaul that provides remote administration functions to h (e) NB 705. In particular, it provides software downloads and updates, radio and other parameter settings, and other similar functions. The security gateway (SeGW)740 is the primary entry point for h (e) NB 705 into the operator core network 730, whose primary function is to protect the network 730 from illegal connection attempts and any type of attack issued from rogue h (e) NBs or bogus h (e) NBs.
A second contemplated application involves M2M communication. Typical examples of M2M equipment (M2ME) are vending and ticketing machines. More advanced cases include telemetry, equipment maintenance, and facility management, for example for integrated thermal power plants. If the M2ME are connected to their back-end systems through a mobile network, the MNO can provide value-added services to the owners of the M2ME, first of all Over-The-Air (OTA) management. Similar to the H(e)NB, M2ME are under the control of parties other than the MNO, and those interested parties have specific security requirements that differ from those of the MNO. The security of H(e)NB and M2ME is of comparable importance; in both cases, the respective threats, risks, and resulting security requirements are similar.
Threats may be classified into six top-level groups. Group 1 includes methods of compromising credentials. These include brute force attacks, physical intrusion, side-channel attacks on tokens and on (weak) authentication algorithms, and cloning of authentication tokens by a malicious hosting party. Group 2 includes physical attacks, such as inserting a valid authentication token into a manipulated device, booting rogue software ("re-flashing"), physical tampering, and environmental/side-channel attacks. Group 3 includes configuration attacks, such as fraudulent software updates/configuration changes, misconfiguration by the HP or the user, and misconfiguration of, or attacks on, the ACL. Group 4 includes protocol attacks on the device. These attacks threaten its functionality and are directed against the HP and the user. Major examples include man-in-the-middle (MITM) attacks upon first network access, denial-of-service (DoS) attacks, attacks on the device that exploit weaknesses of active network services, and attacks on OAM and its traffic. Group 5 includes attacks on the core network. These are the main threats to the MNO. They include impersonation of devices, tunneling of traffic between devices, misconfiguration of the firmware in the modem/router, and DoS attacks against the core network. In the case of the H(e)NB, they also include changing the location in an impermissible manner. Finally, they include attacks on the radio access network using a rogue device. Group 6 includes attacks on user data and identity privacy, including eavesdropping on other users' Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access Network (UTRAN) or Evolved UTRAN (E-UTRAN) access data, masquerading as another user, disclosure of the user's network ID to the H(e)NB owner, masquerading as a valid H(e)NB, and providing radio access services over a CSG.
The core functional requirements for H(e)NB and M2ME are novel; they mainly concern the authentication of the different stakeholders and the separation of functions and data between the stakeholders, i.e., domain separation. In particular, authentication of the HP or M2ME owner should be independent of the device's authentication toward the network. Also, the secret data of the HP must be protected from access by any other party (even the MNO). The device must perform security-sensitive tasks and enforce security policies toward the access network and the connected WTRUs. This must be possible in an at least semi-autonomous manner, to provide continuity of service and to avoid unnecessary communication over the backhaul link. Another important security area is remote management by OAM or OTA, respectively. The device needs to securely download and install software updates, data, and applications.
This requires a separation of the authentication roles, while minimizing changes to the core network in order to reuse standard 3G authentication protocols, such as Extensible Authentication Protocol-Authentication and Key Agreement (EAP-AKA). The methods contemplated thus far include separate authentication bearers for the HP and/or the M2M owner. In the former case this may be realized as a so-called HP Module (HPM), in the latter as a Managed Identity (MID). Both may be embodied on a Universal Integrated Circuit Card (UICC), i.e., a 3G Subscriber Identity Module (SIM) card. The use of removable smart cards in the M2M case raises various security concerns. On the one hand, maintenance operations in which such smart cards have to be exchanged, for example for updates or operator changes, need to be avoided, since this is very costly for large, geographically dispersed M2ME fleets. Another recent option, which has to be considered carefully, is to download the AKA credentials into a secure environment on the device. One possible architecture for this option, which would make use of genuine TC technology, is the virtual SIM.
In any case, these security requirements, as well as advanced OTA or remote management, respectively, call for special security features of the M2ME and H(e)NB. A TrE may be used for this purpose. The TrE needs to interact securely with the rest of the system. It is instructive to look at the interfaces of the TrE, since they are a generic model for how the TCB of a TS communicates with the rest of the platform. Basically, all TrE interfaces are initialized in the TrE's secure start-up process and are therefore assumed to operate correctly. There are two broad security categories of TrE interfaces. First, there are unprotected interfaces. These interfaces connect the TrE to general resources of the device that are considered insecure, and they are not assumed to provide protection against tampering and/or eavesdropping. Even an unprotected interface may benefit from other security measures, such as data encryption, or from making the interface available only after the TrE has checked the code of the peer resource across the interface, for example during secure boot.
Second, there is a protected interface. These interfaces either use security protocols or use security hardware to provide protection for the integrity and/or confidentiality of the data running thereon. If a security protocol is used, it may also provide authentication and message authentication and/or confidentiality.
An unprotected interface may be chosen when the communicating entities do not provide protection for the communicated data. A protected interface may be chosen when the integrity and/or confidentiality of the data exchanged between the TrE and another resource needs to be protected. TrE implementations may therefore differ in their capabilities. Fig. 8 shows an example of a TrE within an H(e)NB and the other resources to which it may be connected. This is a minimal configuration, comprising the functionality to compute and send to the SeGW the parameters required for H(e)NB device authentication, the functionality to perform H(e)NB validation, including a code integrity check of the rest of the H(e)NB at start-up, and minimal cryptographic functions (a true random number generator). For authentication, the TrE may be considered to logically contain the HPM.
The architecture described for generic PVM can be mapped readily onto the existing H(e)NB architecture. Its databases (V_DB and C_DB) and the associated management components are new with respect to the existing H(e)NB infrastructure. Figs. 9A and 9B show the two cases: connection of the H(e)NB via the SeGW, and direct connection of the H(e)NB to the HMS via the interface I-hms_d.
The PVM structure or system 900 of fig. 9A includes h (e) NB 905, which h (e) NB 905 includes TrE 910. WTRU 912 (or User Entity (UE)) may communicate with h (e) NB 905 via I-UE interface 914. H (e) NB 905 communicates with h (e) NB Gateway (GW)918, which includes SeGW 920, through I-h interface 915. In general, the interface I-h 915 between h (e) NB 905 and SeGW 920 may be unprotected, and special measures may be taken to ensure the authenticity, integrity and optionally confidentiality of the channel. I-h 915 may be used to establish a link between h (e) NB 905 and SeGW 920 (and thus CN). For example, the SeGW 920 may communicate with the AAA server over interface I-AAA 975. The operator can establish appropriate measures to ensure the security of the interface.
The SeGW 920 may use the I-PVE interface 922 to contact the PVE 924 during the validation. The PVE 924 may send the confirmation result to the SeGW 920 using the I-PVE interface 922. The I-dms interface 930 may be used for device configuration-related communications between the h (e) NB management system (HMS)935 and the SeGW 920. The PVE 924 can use the I-pd interface 932 to communicate with the HMS935 and vice versa. The interface I-pd 932 may be used during device management for device software updates and configuration changes.
The PVE 924 can use the interface I-v 926 to read RIMs from the V_DB 940, and the HMS 935 can use the interface I-d 938 to read the allowed configurations from the C_DB 950. The interface I-r 928 can be used by the PVE 924 to communicate with RIMman 960 (e.g., in the event of a missing RIM in the V_DB 940), and the interface I-c 934 can be used by the HMS 935 to communicate with CPman 970. RIMman 960 and CPman 970 use the interfaces I-rdb 962 and I-cdb 972, respectively, to read, write, and manage the validation database V_DB 940 and the configuration policy database C_DB 950.
Fig. 9B shows the PVM system 982, in which the H(e)NB 905 can be connected directly to the HMS 935, for example in a fallback mode in which the H(e)NB 905 cannot perform the security protocols with the SeGW. In this case, the HMS 935 acts as the first point of contact for the H(e)NB 905 via the interface I-dms_d 984 and communicates with the PVE 924 via the interfaces I-pve 986 and I-pd 988 to perform validation, or at least to learn which components failed during secure boot. The HMS 935 may perform corrective actions based on this information.
Validation using the PVE can be mapped directly onto the H(e)NB case in a variety of ways. The DMS functions are performed by the HMS, or by a suitably extended entity (an evolved HMS, eHMS) with access to the C_DB.
For policy-based updates, the C _ DB provides policies that can specify the importance of the modules and the interoperability of various published versions of the modules, e.g., some modules are important to operation and some are not. This helps to limit the size of the update and provides a patch, rather than an entire firmware update. The simplest policy may be to define all modules as important to the h (e) NB operation, so that a firmware update is performed.
When a module measurement fails, the eHMS checks the policy to determine the criticality of the module and its impact on module interoperability. Based on this check, a list of available patches is built. The patches may be sent to the device collectively or individually for application. In either case, each transmitted unit is integrity and confidentiality protected. The link must deliver the packets in sequence and without loss. When all patches have been received (e.g., as indicated by the eHMS via a termination packet or flag), the device sends the list of received patches, together with their measurements, to the eHMS if the update information needs to be verified there; or, if combined and individual patch measurements have been sent by the eHMS, the device performs local verification of the patches and starts applying them. After applying the patches, the system starts in normal mode, initiating the device validation process.
The same process can also be followed whenever the manufacturer issues a new firmware version: the eHMS sends an update notification to the device, the device starts up using the ECB and sends its measurements to the eHMS, and the eHMS provides a patch or a complete firmware update, after which the same process follows.
In the case of a non-policy based update, the HMS sends the complete new firmware over a secure link once any measurement failure occurs. The device verifies the firmware, applies it, and starts up in normal mode.
In the case of a previously known good state, if the h(e)NB supports storing system state, the eHMS may require the h(e)NB to return to a previously known good state when rolling back patches whose measurements failed. The method may also be used to return the system to the factory state. The previously known good state may be a state certified by the PVE, eHMS, or S(e)GW.
The h(e)NB may return to a previously known good state, may provide integrity protection for the stored system state, and may provide recovery operations for previously stored system states; the stored states may need to be protected in the event of an attack on the device.
An example of validating a device connected through the public internet is now described. For devices connected to the SeGW, and respectively the CN, over an insecure initial link, e.g., the public internet, special requirements need to be applied to secure the initial steps of validation. These special requirements also apply to h(e)NB-like devices, which request establishment of such a connection from the SeGW and are validated over this connection. Although the h(e)NB counterparts of the network entities are described herein (e.g., the HMS rather than the generic PVM entities), it should be clear that the methods and apparatus may also be used in settings other than h(e)NBs. Often, the validation and authentication need to be bound to the first few steps of the initial connection, or even to the same data structures. Two ways to bind validation and authentication using dedicated protocols, such as TLS and IKEv2, are now described.
ISAKMP, the transport protocol underlying IKE, defines a number of available certificate profiles that allow the use of a fully qualified domain name (FQDN) as an ID. The device certificate and the TrE certificate may be stored separately. However, the TrE certificate may also be embedded in the device certificate. If the TrE has a separate ID (TrE_ID), an FQDN may be used, but the TrE may then be identified by the manufacturer rather than by the operator domain name.
In the IKE_SA_INIT phase, and upon completion of the Diffie-Hellman key exchange in phase 1 of the IKE session, one method may have the SeGW send a first authentication exchange message, containing the CERTREQ payload, to request the Dev_CERT. The device then replies in the next message with two CERT payloads, one carrying the Dev_CERT and one carrying the TrE_CERT. In this case, the SeGW delays the Dev_CERT verification until the PVE has verified the TrE_CERT and evaluated the validation data. After that, authentication continues. In case the reply contains only the Dev_CERT, the SeGW falls back to AuV.
The distinction between Dev_CERT and TrE_CERT is advantageous if the respective IDs are used for different operational purposes. For example, the operator may assign a network address, e.g., an IP address, to the device, which the Dev_CERT can authenticate and from which an IPSec tunnel can be established directly, while some types of network address may not be suitable for the TrE_CERT. Thus, two IDs can be useful in the device. A further task of the SeGW/PVE infrastructure is then to associate the services applied for the Dev_CERT with the PVM and secondary authentication performed according to the TrE_CERT.
The IKE authentication message may carry any number of payloads of any type. The header of each payload contains a "next payload type" field, so an entire chain of payloads can be sent in one ISAKMP message. This may be used to separate the certificates into payload fields of the ISAKMP messages of phase 2 of the initial IKE session or sessions. An example process 1000 between a device 1005, a SeGW 1010 and a PVE 1015 is shown in fig. 10, using an IKE session and completely separating the certificates used for TrE and device authentication. A message is sent from the device 1005 to the SeGW 1010 containing (TrE_Cert, VAL_DAT) (1). The SeGW 1010 verifies the received TrE certificate (TrE_Cert) (2). If the TrE_Cert verification is successful, the SeGW 1010 sends the validation data message (VAL_DAT) to the PVE 1015 (3). The PVE 1015 validates the device 1005 (4) and notifies the SeGW 1010 of success (5). The SeGW 1010 then sends a certificate request (CERTREQ) to the device 1005 (6). In response to the received certificate request, the device 1005 sends at least a device certificate, (Sig_Dev(Dev_ID), Dev_Cert), to the SeGW 1010 (7). The SeGW 1010 verifies Sig_Dev(Dev_ID) (8). If the verification is successful, the device certificate (Dev_Cert) is sent to the AAA infrastructure, which replies whether the device is known or not. According to this embodiment, device authentication takes place only if the device has been validated as trustworthy, by sending validation data signed by the TrE and an identity certified by the TrE_CERT. This provides extended protection for network components behind the SeGW, helping to mitigate DoS attacks.
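The SeGW-side ordering of this exchange can be outlined as follows. The helper functions (verify_tre_cert, pve_validate, verify_device_signature, aaa_lookup) are hypothetical stand-ins for the real certificate, validation and AAA operations; only the ordering of steps mirrors the process described above.

```python
# Hypothetical SeGW-side handling of the separated TrE / device certificate flow.
def handle_initial_message(tre_cert, val_dat, verify_tre_cert, pve_validate):
    """Steps (1)-(5): verify TrE_Cert, then have the PVE evaluate the validation data."""
    if not verify_tre_cert(tre_cert):
        return "reject"            # untrusted TrE: stop before any device authentication
    if not pve_validate(val_dat):
        return "reject"            # PVE found the platform invalid
    return "send_certreq"          # step (6): request the device certificate

def handle_device_cert(sig_dev_id, dev_cert, verify_device_signature, aaa_lookup):
    """Steps (7)-(9): authenticate Dev_ID only after the platform has been validated."""
    if not verify_device_signature(sig_dev_id, dev_cert):
        return "reject"
    return "accept" if aaa_lookup(dev_cert) else "unknown_device"
```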
In another example, the TLS handshake message for supplemental data defines an extension to the TLS hello handshake messages that enables sending application-specific data, such as validation messages for PVM, in the TLS handshake. This supplemental data is not used by the TLS protocol itself, but by applications, such as the PVE validation engine. Only one supplemental data handshake message may be allowed, and receiving more than one may be considered a failure. The type and format of the data carried may be specified as a supplemental data type (SupplementalDataType) and may be known to both sender and receiver.
In one approach, a double handshake may be performed, providing protection for the PVM data carried in the supplemental data handshake message. It also ensures that both parties are mutually authenticated before either party provides the supplemental data information.
A new supplemental data type may be defined to carry the PVM validation messages. The h(e)NB may then use the first TLS handshake for mutual authentication with the SeGW, and use this first TLS session to protect the second handshake, in which the validation data is sent to the SeGW in the supplemental data field.
In another approach, the validation data may be sent in a single handshake exchange by sending the supplemental data in the first handshake messages, rather than in two. For validated connections using the TLS session ticket extension, the SeGW may use this TLS extension during validation to store the validation result in a TLS session ticket; the extension allows the server to send session tickets to clients for resuming a session and for saving the per-client session state.
Such session tickets may be used in PVM for platform management. When validation fails with a specific list of failed components, the SeGW receives the notification from the PVE and generates a session ticket. The ticket is encrypted using a 128-bit AES symmetric key that is not disclosed to the h(e)NB, and the integrity of the ticket is protected by a hash-based message authentication code (HMAC). Thus, the ticket cannot be modified by the h(e)NB, while other network entities can recognize the ticket when it is presented by the h(e)NB. The TrE may then securely store the ticket and use it for platform management in a new TLS session without, for example, sending the validation data again. The SeGW may also determine the lifetime of the session ticket.
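As a rough illustration of such a ticket, the sketch below encrypts a validation result with a 128-bit AES key and protects it with an HMAC, along the lines described above. It is an assumption-laden example only: it uses the third-party pyca/cryptography package, AES-CBC with PKCS7 padding, HMAC-SHA256, and invented field names; the actual ticket format, cipher mode and key management are not specified here.

```python
# Hypothetical session-ticket construction at the SeGW (not a normative format).
import hashlib, hmac, json, os, time
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

AES_KEY = os.urandom(16)   # 128-bit key known only to the network side
MAC_KEY = os.urandom(32)

def make_ticket(dev_id, failed_components, lifetime_s=3600):
    state = json.dumps({
        "dev_id": dev_id,
        "failed": failed_components,
        "expires": int(time.time()) + lifetime_s,
    }).encode()
    padder = padding.PKCS7(128).padder()
    padded = padder.update(state) + padder.finalize()
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(AES_KEY), modes.CBC(iv)).encryptor()
    blob = iv + enc.update(padded) + enc.finalize()
    tag = hmac.new(MAC_KEY, blob, hashlib.sha256).digest()
    return blob + tag          # opaque to the h(e)NB, verifiable by the network

def check_ticket(ticket):
    blob, tag = ticket[:-32], ticket[-32:]
    return hmac.compare_digest(tag, hmac.new(MAC_KEY, blob, hashlib.sha256).digest())
```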
The AES ticket-encryption key may then be placed in the T_PVM for further use, or sent directly to other entities. This key, together with, for example, the ticket timestamp and the detailed validation results, can then be sent from the PVE to the HMS. By using the TLS session ticket, the h(e)NB can directly establish a secure connection for platform management. This relies on the h(e)NB completing platform management in time and contacting the HMS before the ticket expires.
When the h(e)NB has completed correction with the HMS over the connection established using the session ticket, the session ticket can be used for revalidation. The first step is to establish a new TLS connection from the h(e)NB to the SeGW using the old ticket. The SeGW can then check that the ticket comes from an h(e)NB that has actually completed the management cycle with the HMS: after management is completed, the SeGW looks up the ticket data and compares it with the T_PVM returned by the HMS. If the correct T_PVM is found, the revalidation attempt using the TLS ticket may be accepted; this check prevents, for example, DoS attacks launched by replaying the TLS ticket. The TLS ticket may then be accepted for revalidation even though it might otherwise be considered out of date, since the correction process with the HMS may take a long time. This can be done without major loss of security because the SeGW has the time-stamped T_PVM available for comparison.
An example of PVM applied to autonomous validation (AuV) is now described. The AuV method does not transmit any validation data to the SeGW and therefore does not require any changes to the existing protocols for the initial network connection of the device. As a consequence, the PVM system learns nothing about the verification results produced during secure boot of the device. The only device-specific information transmitted is the Dev_ID.
AuV limits the possibilities for managing devices based on platform validation results. In particular, there is no direct way to distinguish between devices that authenticate to the network for the first time and devices that perform AuV for revalidation after an update. If device management is based on AuV, a database is required in the network for storing device state history. The example methods now described make it possible to perform at least basic device management with AuV.
Examples of h(e)NB modifications for AuV-only devices are now described. A device capable only of AuV performs a secure boot that allows it to carry out the device authentication procedure if and only if device integrity verification succeeds. If the integrity check of any component fails, the integrity check of the device as a whole may be deemed to have failed. However, by using the FBC image, the device can still contact a designated HMS for device correction.
Once the connection with the HMS designated for correction is established, the normal code image and/or the trusted reference values of the h(e)NB may be replaced. When the correction process is complete, the h(e)NB should restart and run the integrity check process again.
The PVM may use the FBC if a set of predetermined conditions is met. One example condition is that the FBC is securely stored in the device. Another condition is that the FBC can be loaded and started in case of a secure boot failure. Another condition is that the address of the designated h(e)MS is securely stored in the FBC image. Yet another condition is that the FBC can send a distress signal to the designated h(e)MS; the signal may include the device ID, and the message may be integrity protected by a key that is securely stored as part of the FBC. A further example condition is that, upon receiving the signal, the h(e)MS can determine that the integrity check of the device failed and that maintenance is required. Yet another condition may be that the FBC contains functionality that enables a network-triggered full code rebuild. Another condition may be that the FBC contains functionality that enables network-initiated TRV replacement.
Fig. 11A and 11B illustrate an example method for device correction carried out by the FBC after an integrity verification failure. The RoT 1100 checks the distress flag (1). If the flag is not set, the RoT 1100 checks the integrity of the TrE 1105 (2). If the flag is set, the RoT 1100 loads the FBC (3). If the integrity check is successful, the RoT 1100 loads the TrE 1105 (4). If the integrity check fails, the RoT 1100 sets the distress flag and restarts (5). Once loaded, the TrE 1105 checks the integrity of the normal code (6). If the integrity check is successful, the TrE 1105 loads the normal code image (7). If the integrity check fails, the TrE 1105 sets the distress flag and restarts (8). If the RoT has loaded the FBC, the FBC sends a distress signal to the HMS to initiate correction (9).
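The distress-flag logic of this boot flow can be summarized in a short Python sketch. This is only an illustration of the control flow of Figs. 11A and 11B; the check_*, load_* and signalling functions are hypothetical placeholders for the device's actual RoT/TrE operations.

```python
# Illustrative control flow for the RoT/TrE/FBC boot sequence with a distress flag.
def boot(device):
    if device.distress_flag:                      # (1) flag set by a previous failed boot
        device.load_fbc()                         # (3) load fallback code
        device.fbc_send_distress_signal()         # (9) ask the HMS for correction
        return "correction"
    if not device.check_tre_integrity():          # (2)
        device.distress_flag = True               # (5) remember the failure, then reboot
        return device.reboot()
    device.load_tre()                             # (4)
    if not device.check_normal_code_integrity():  # (6)
        device.distress_flag = True               # (8)
        return device.reboot()
    device.load_normal_code()                     # (7)
    return "normal_operation"
```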
An example of a basic method for correction and configuration change using AuV is now described. During AuV, the only information sent to the SeGW, and thus usable in platform management, is the device identity. In one embodiment, a device may therefore be assigned multiple identities to use in AuV in order to signal a (limited number of) states (e.g., component integrity verification failures). In another embodiment, the verification result may be signalled using a group ID that is not specific to any one device. The management identities may be grouped according to the stages of the secure boot process. For example, DevM_ID3b is used to signal a stage 3b failure, DevM_ID3a a stage 3a failure, and DevM_ID2 a stage 2 failure. A stage 1 failure cannot be signalled, since the device lacks communication capability at that point.
In another example of an AuV use case, the device may attempt to connect to the HMS as the next operation after failing and executing the fallback code.
Failure of one or more components in stage 2 does not necessarily mean that the device is unable to communicate; this stage should be understood as a particular classification of components. As long as the most critical components are loaded in stage 2, the device can send its status and the failed components to the PVM system. This is the case, for example, if there is a policy manager on the device that is maintained by the HMS and provides a standard framework under which connections can be made.
For security, the DevM_IDn and the associated authentication data (e.g., private keys) must be protected, since otherwise an attacker could perform a spoofing attack and thereby disrupt the management process. This is a serious threat because a management ID is shared by a large number of devices. One approach is to design the platform management process so that it relies on this limited information alone. The success of the management process for the unique device can then be established by binding the first validation, which signals only the failure of some device of unknown identity, to the subsequent revalidation. There are multiple ways to perform this binding. In one example, after the device has authenticated with one of the management identities, the SeGW runs a supplementary protocol in which the device has to authenticate its original Dev_ID. In another approach, the device and the PVM system, in particular the SeGW, establish a management session covering the first validation and the second, revalidation procedure by exchanging a specific secret.
An example of a supplementary authentication protocol is now described. The device and the SeGW have completed a first authentication protocol in which the device authenticated one of its management identities DevM_IDn; they are assumed to have established an encrypted and authenticated communication session. Thereafter, the device transmits only the Dev_ID and the authentication data for the Dev_ID, for example the signed message and the public key certificate, over the established secure channel. This ensures that no other party learns the identity of the device requesting management and can use this information to disrupt the management process, i.e., to disable the device before revalidation or to impersonate the device.
The SeGW sends the DevM_ID and the Dev_ID to the PVE, which inserts them into the list of devices that need to be managed. The PVE then notifies the DMS of the required device management operations, such as "install stage 2 fallback code". The DMS downloads the corresponding code to the device through the secure channel previously established by the SeGW. As in normal PVM, the system then initiates a revalidation of the device.
When management has succeeded, the device authenticates with its original Dev_ID in the next AuV. The SeGW informs the PVE, and the PVE finds the Dev_ID in the revalidation list and deletes it. Otherwise, the device again authenticates with a management ID; the PVE again finds it in the list and proceeds with further operations according to policy.
An example of establishing a management session is now described. This embodiment differs from the previous ones in how the PVM system keeps track of the single, individual device under management. The management session may be established in the communication protocol between the device and the SeGW. The effect of this approach is to keep the device identity unknown to the PVM system, in effect by creating a pseudonym.
In normal protocol execution, the ability of a protocol to establish such a permanent secret may be limited. For example, public key establishment protocols such as Diffie-Hellman (D-H) satisfy a property called joint key control, so that the established key depends on contributions from both parties; that is, both sides insert (pseudo-)random information so that a different key is generated in each execution. A session spanning multiple protocol executions cannot be established this way.
The SeGW and the device must therefore establish the secret in a special protocol, for example by using a challenge-response exchange. The challenge may be issued by the device or by the SeGW; the response must have the property that the reply in the second execution (i.e., the revalidation) is the same as the reply in the first round. In a simple embodiment, the device only needs to present, at revalidation, the nonce it obtained from the SeGW, and the SeGW looks the nonce up in a table; the nonce thus acts as a pseudonym. More sophisticated cryptographic protocols may also be used.
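A minimal sketch of the nonce-as-pseudonym variant follows. The table layout and function names are hypothetical; how the pending management information is stored on the network side is not prescribed by the text above.

```python
# Illustrative nonce-as-pseudonym management session at the SeGW.
import secrets

PENDING = {}   # nonce -> pending management information (pseudonymous)

def start_management_session(failed_stage):
    """First validation: issue a nonce to the device instead of learning its identity."""
    nonce = secrets.token_hex(16)
    PENDING[nonce] = {"failed_stage": failed_stage, "state": "correction_started"}
    return nonce                     # handed to the device over the secure channel

def revalidate(nonce_from_device):
    """Second execution: the presented nonce links the revalidation to the first session."""
    session = PENDING.pop(nonce_from_device, None)
    if session is None:
        return "unknown_or_replayed_session"
    return "revalidation_accepted"
```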
Thereafter, reconfirmation may be performed as described above. However, the difference is that in this method the SeGW maintains information of the reconfirmed device for operational reasons, since this information is used in the execution of the protocol for reconfirming between the SeGW and the device.
An example of an OMA Device Management (DM) based architecture for the h(e)NB is now described. OMA DM is a device management protocol jointly specified by the Open Mobile Alliance (OMA) Device Management (DM) working group and the Data Synchronization (DS) working group. OMA DM was designed for small mobile devices, such as phones or PDAs. It lacks support for a broadband wired connection between the device and the DM server, supporting only short-range wired connections, such as USB or RS232C, or wireless connections, such as GSM, CDMA or WLAN. Nevertheless, it may be used as a device provisioning and management protocol for h(e)NBs (particularly since an h(e)NB, while serving as a base station for the CSG and non-CSG WTRUs connected to it, itself appears toward the core network much like a WTRU).
OMA DM is intended to support use cases such as first-time device configuration, enabling or disabling of features, device configuration updates, software upgrades, and diagnostics reporting and queries. The OMA DM server side may support all of these functions, while the device may optionally implement some or all of them.
The OMA specification may be optimized to support the above features for small devices with limited connectivity. It may also support integrated security using authentication (e.g., by using a protocol like EAP-AKA).
OMA DM uses XML (or, more precisely, a subset from SyncML) for data exchange. This can provide a standardized and at the same time flexible way to define and transport attributes of software modules or functionalities of the h(e)NB for validation purposes.
Device management is performed between a DM server, such as a management entity of a device, and a client, such as a managed device. The OMA DM supports a transport layer such as WAP, HTTP or OBEX or similar transport. DM communication is initiated asynchronously by the DM server using notification or alert messages using any available method, such as WAP push or SMS. Once communication is established between the server and the client, message sequences may be exchanged to complete the specified DM task.
OMA DM communication is based on a request-response protocol, where requests are typically issued by the DM server and the client responds with a reply message. The server and the client are both stateful, i.e., any data exchange resulting from a specific sequence may occur only after the built-in authentication procedure.
Since DM communication is initiated by the DM server, performing PVM via DM may require a server-query-based method for validation. For example, a device authentication procedure using IKEv2, which can be initiated by the device, may be employed. A number of different message types may be considered as bearers of validation data; for example, it may be sent as a list of failed software modules or device functionalities. In another example, a management alert message may be sent from the device to the server. Alternatively, use of a generic alert message (which can only be sent from the device to the DM server after transmission of at least one management alert message from the device or the server) may also be considered. These messages, including the alert messages, may use the SyncML format, which is flexible in specifying content and metadata for that content; this may be used for transferring validation information. DM also supports segmented data transfer, which may be used for software updates whose size may be large.
Although the earliest DM communication must be initiated by the DM server, the latter communication may be initiated by the DM client using the continue session. This capability of a DM client (e.g., h (e) NB or M2ME) to initiate communication in a session may be used for device-initiated tasks, such as device-initiated re-acknowledgement or device-initiated acknowledgement messaging.
An example of binding validation into the authentication certificate is now described. Binding the validation into the authentication certificate enables a combination of validation and authentication, thereby automatically binding the authentication ID of the device to the validation. The validation message is then placed in an additional field of the authentication certificate. Such validation data may alternatively be placed in the notification payload field, for example when using the IKE protocol.
If the verification data is stored within the authentication certificate, a new combined authentication/validation certificate must be issued each time the device configuration changes. The generation of this certificate must be controlled by the SeGW, since the SeGW is the entity responsible for Dev_ID authentication for PVM purposes. This can be performed in at least two ways. First, the SeGW or a subordinate entity may generate the new certificate after receiving the updated Clist from the DMS. Second, the device may generate the certificate itself and send it to the SeGW and PVE, have it signed by the SeGW, and receive it back.
The SeGW may complete the procedure (either generating and sending the new certificate, or returning the new certificate generated by the device) only after a successful revalidation. This assures the PVM system that the new configuration has actually reached the device.
This cycle involves all three entities in the CN as well as the device, since new certificates may be needed when the device configuration changes. The DMS triggers a configuration change (e.g., an update of software and/or parameters) and saves the new required state in the policy database C_DB. After the change is applied to the device, a revalidation is required.
In an example scenario, the device applies the update and performs a revalidation. The device may use the new software but not the new certificate until the revalidation (in particular, of a successful update process) is complete. At that point, the device runs the new software configuration using the old certificate, which does not match the actual configuration of the device. Consequently, the new certificate must be provided to the device for device authentication; it must be provided if and only if the update has been applied; and it must be ensured that the certificate cannot be used without the update having been applied.
An example of revoking a device authentication certificate is now described. If during device authentication the SeGW determines that the device certificate issued by the device for device authentication needs to be revoked, the SeGW may indicate to the device that device authentication failed due to the certificate revocation and then delete the device from a white list maintained by the network, or vice versa, add to a black list maintained by the network. Upon receiving this indication, the device knows that its certificate has been revoked and that its identity has been removed from the white list or, vice versa, added to the black list. The device may then perform operations to reestablish itself as a valid entity in the network.
The SeGW may revoke the device certificate if device ID authentication fails, if the device certificate expires, or if an entity authorized by the operator or a trusted third party that issued the h(e)NB device certificate requests that the network revoke the certificate.
An example of the certificate-based validation method is now described. The binding certificate is a signed data set. It is signed by the issuer, e.g., the SHO, or its SeGW, or an equivalent entity responsible for managing the certificate. The signed data in the certificate includes at least the Dev_ID, the device public key for authentication and validation, and the Clist.
This certificate may be sent to the SeGW in a combined validation and authentication message. The latter is a message signed (in part) by the device using its private key for authentication and validation. The message may contain other data, such as a timestamp and/or a nonce, for preventing replay. The SeGW checks the certificate and the signature of the message and proceeds with validation as normal.
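The check just described can be pictured with a small sketch. The following Python fragment uses Ed25519 signatures from the pyca/cryptography package purely for illustration; the signature algorithms, field encodings and the exact contents of the binding certificate are assumptions and are not specified by the text above.

```python
# Hypothetical check of a binding certificate plus a signed validation message.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_binding(cert, message, issuer_pub):
    """cert: {'dev_id', 'dev_pub', 'clist', 'sig'}; message: {'body', 'sig'}."""
    body = json.dumps({k: cert[k] for k in ("dev_id", "dev_pub", "clist")}).encode()
    try:
        # The issuer (e.g., SHO or its SeGW) signed Dev_ID, device public key and Clist.
        issuer_pub.verify(bytes.fromhex(cert["sig"]), body)
        # The device signed the message with the key certified by the binding certificate.
        dev_pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(cert["dev_pub"]))
        dev_pub.verify(bytes.fromhex(message["sig"]), message["body"])
    except InvalidSignature:
        return False
    # The message body is expected to carry a timestamp/nonce against replay (not checked here).
    return True
```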
An example method of certificate exchange is now described. Two approaches are generally possible, referred to here as pre-certificate exchange and post-certificate exchange. The difference is whether the old or the new certificate is used for the revalidation. Both approaches must ensure that all required steps are performed atomically, i.e., either all or none of them. The start state is the device running the old configuration with the old certificate, and the end state is the new device configuration with the new device certificate. It may be necessary for the authentication certificates and the RIM certificates to be created, managed and controlled by independent TTPs or manufacturers, so that the device can be used in multiple networks rather than being tied to one operator. Alternatively, the new device certificate may be handled by, for example, Open Mobile Alliance (OMA) Device Management (DM), which may be extended to include the certificate.
In the pre-certificate exchange method, the update includes the new certificate, so the certificate reaches the device before the update is completed. When the update is applied, the device revalidates using the new certificate. The device is marked as "update in progress" in the CN using appropriate memory and data structures; for example, a flag is set in the authentication database. Another approach is to use the validation token T_PVM.
An example of pre-certificate exchange flow is now described. The DMS sends the updated and/or changed components to the device as in the standard PVM. The DMS then sends a new Clist to the SeGW. The DMS passes the T _ PVM to the SeGW. At this point, the SeGW (and thus the PVM system) enters a state in which it waits for the device to re-acknowledge the new configuration. The SeGW collects the required information (Clist, Dev _ Id, device public key, etc.) and generates a new device certificate. The SeGW then sends the new certificate to the device, after which the communication session with the device is ended.
The SeGW now holds the T_PVM obtained from the DMS and therefore knows that it should await a revalidation of the device. It stores all T_PVMs for such devices in an internal revalidation list. Assuming the device has installed the update and the new certificate correctly, the following process takes place. The device initiates a revalidation and sends the new certificate in the validation message. The SeGW authenticates the device by verifying the signed data and the device certificate. The SeGW looks up the T_PVM in the revalidation list. The revalidation is performed in a way that maintains the PVM system state by using the T_PVM from the previous validation (and no new one is generated). This step and the previous one are performed at the SeGW rather than at the PVE, since otherwise the SeGW would automatically generate a new token; maintenance of the revalidation list is therefore a task for the SeGW.
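A compact sketch of this SeGW-side bookkeeping is shown below. The dictionary-based revalidation list and the function names are hypothetical; they simply mirror the store-and-look-up behaviour described above.

```python
# Illustrative revalidation list kept by the SeGW for pre-certificate exchange.
REVALIDATION_LIST = {}   # Dev_ID -> T_PVM token from the previous validation round

def expect_revalidation(dev_id, t_pvm):
    """Called when the DMS hands the T_PVM back to the SeGW after the update."""
    REVALIDATION_LIST[dev_id] = t_pvm

def on_revalidation(dev_id):
    """Reuse the stored token so that PVM state is continued, not restarted."""
    t_pvm = REVALIDATION_LIST.pop(dev_id, None)
    if t_pvm is None:
        return None        # unexpected revalidation: fall back to normal PVM
    return t_pvm           # continue the existing PVM transaction
```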
Continuous use of the T _ PVM in multiple rounds of reconfirmation, as in standard PVMs, helps to detect recurring update failures and other forms of performance anomalies.
In a further embodiment, the TrE has a trusted update service that allows the HMS to send updates to the device, which are then applied in a secure, trusted process. Secure boot can be relied upon to ensure the integrity of the update service in the TrE. When the HMS has a new update, it may send a token to the SeGW containing the newly updated device configuration. The SeGW may then create a new authentication certificate for the device, append it to the token, and send it back to the HMS. The HMS combines the new certificate with the update data into a package for the device update service. The package may be encrypted for the TrE and signed by the HMS. The trusted update service receives the update package, verifies the signature, decrypts the data, applies the update, and stores the new certificate in secure memory. After that, the TrE notifies the HMS that the update was successful. Since the trusted update service is protected by secure boot, the update process is trusted and does not require revalidation. Depending on the type of update, a reboot may be required; in this case, the device can authenticate at the SeGW using the new certificate. The HMS must therefore ensure that the SeGW is informed about the revalidation to be expected.
In another embodiment, if there is no trusted update service available on the device, a new certificate may be provided with the new software update, such that the certificate is encrypted by a key that is bound to the successful installation of the update. This approach and the problems involved require further consideration.
In the post-certificate exchange method, the update does not include a new certificate for the new device configuration. The device revalidates using the old certificate. After the revalidation succeeds, the CN activates the new certificate and sends it to the device. Since the new certificate may not be needed for secure boot of the new configuration, the new configuration can be sent to the device even though the device does not yet hold the new certificate.
An example of operator RIM shielding is now described. A wide area network (WAN) management protocol may be used for remote management of devices. Fig. 12 shows an exemplary diagram of a signed message format 1200 that allows a software package to be downloaded from a publisher to a device. The format allows one or more files, such as a firmware update or a configuration packet, to be sent in a signed package. The receiving device is able to authenticate the source, and the package contains all instructions needed to install the content.
The header 1205 may contain the format version and the lengths of the command list and payload components. The command list 1210 contains the sequence of instructions to be executed to install the files contained in the package. The signature field 1215 may contain a digital signature whose signed message data consists of the header and the command list. Although the signed message data covers only the package header and the command list, the signature ensures the integrity of the entire package, since all commands that relate to a payload file 1220 contain the hash value of the file content.
In the case of operator RIM shielding, the DMS signs the command list and places the software update package and its respective RIMs in the payload of the message. The TrE of the device then verifies the signature of the DMS using the corresponding public key. The public key may be made available to the TrE at manufacture or configuration time, or via a CA trusted by the operator. All root certificates needed to verify the public key may be securely stored in the TrE. The command list then contains the commands to install the software and the commands for the device to take up the RIMs. This gives the operator an efficient way to retain full control over the software and RIM installation processes on the device. In such an embodiment, no explicit transfer of RIMc to the device occurs.
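The verification performed by the TrE can be outlined as follows. This Python sketch is illustrative only: the package layout, the use of SHA-256 for per-file hashes, and the verify_dms_signature helper are assumptions standing in for the real signed-package format and the operator's signature scheme.

```python
# Illustrative TrE-side check of an operator-signed update package.
import hashlib

def verify_package(package, verify_dms_signature):
    """package: {'header': bytes, 'commands': [...], 'payload': {name: bytes}, 'sig': bytes}."""
    signed_part = package["header"] + repr(package["commands"]).encode()
    if not verify_dms_signature(signed_part, package["sig"]):   # operator (DMS) signature
        return False
    for cmd in package["commands"]:
        if "file" in cmd:                                        # e.g. install / take-up-RIM commands
            data = package["payload"][cmd["file"]]
            if hashlib.sha256(data).hexdigest() != cmd["sha256"]:
                return False                                     # payload does not match the command list
    return True
```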
An example of correction using a second code library will now be described. A secure boot failure, such as a stage 2 failure in the generic PVM device model, creates a problem of wider scope than just an untrusted TrE, when the TrE is needed to extend trust to correction components loaded into the normal operating space. Therefore, to initiate the correction, the FBC needs to be invoked, but it needs to run inside the TrE, at least for the most essential functions such as cryptography and the correction protocol stack.
In certain cases, the FBC may be obtained from an external secure source, referred to herein as an FBC carrier. This may be accomplished by a process that is partially out-of-band and may require human involvement, such as inserting a smart card into the h(e)NB device. Such a process can provide enhanced security, both by using the second secure component (the smart card) as an FBC carrier that securely stores and protects the FBC code, and by explicitly requiring human involvement in the correction initialization process to mitigate simple automated DoS attacks; the carrier may be obtained from the HP on an agreed, regular basis. An external FBC carrier can be a measure allowing simple and cheap devices with a simple TrE. Where the device is located remotely or in a place that is difficult to access, the use of a separate FBC carrier is no longer practical. The process of establishing trust between the three entities described herein is similar to the various "transitive trust" processes described previously.
The following procedure can be used for external FBC carriers such as a UICC, a smart card, or a secure memory card with its own processing unit. The TrE is the relying party that requires loading of authorized and authenticated FBC code. On the other hand, there is little risk in exposing the FBC code to an unauthorized party, as long as the credentials used for correction are always protected. Authentication of the TrE to the FBC carrier is not a significant problem, because an out-of-band procedure is being performed in which the TrE and the device are not fully trusted anyway. This is also the reason why the carrier should not expose the credentials for HMS access to the device. The FBC itself may need to be exposed, but at lower risk.
Thus, the following authorization or communication order may be used. This out-of-band or manual participation step is merely intended to indicate a special use case and may be automated or integrated in other ways, such as embedding the FBC carrier in h (e) NB. In such a fallback code based procedure, the communication may be very simple, so authentication and authorization may be combined in a single protocol step.
First, the stage 1 start-up succeeds and the stage 2 start-up fails. The TrE stops, enters a "wait for FBC" state, and flashes an LED or provides another similar failure indication. The user/HP inserts the FBC carrier. In this embodiment, the FBC carrier, e.g., a smart card such as a hosting party module (HPM), authorizes itself to the TrE, for instance by using a specific physical interface to signal the presence of the FBC carrier and/or by submitting an authorization secret, e.g., an OTP or a signed nonce. Between the TrE and the FBC carrier, a security association (SA), i.e., an encrypted and integrity-protected communication session, is established. The FBC is then loaded into a secure environment, which may be provided by the TrE or the FBC carrier, or by any combination of the two. The FBC may then be checked for integrity, if desired, and then loaded and started.
After secure start-up, the FBC indicates successful loading to the carrier using the secret, and a new SA is created between the TrE(FBC) and the carrier. The credentials for correction remain in the carrier, but the FBC contains the data for HMS discovery. The FBC contacts the HMS. An end-to-end SA is established between the smart card and the HMS using the smart-card-protected credentials, which remain completely unavailable to the TrE(FBC). The HMS now knows that a valid TrE(FBC) requests correction. The smart card hands the communication session over to the TrE(FBC), which presents its ID to the HMS. The HMS initiates the correction process. The authorization secret needs to be well protected, because such a connection is available to many devices and its compromise would therefore be catastrophic. One method for performing authorization uses a TPM-protected authorization secret (e.g., a 160-bit hardware-protected value, such as is created when taking ownership). In another implementation, the FBC can be started directly from the FBC carrier, which must then provide a safe and reliable operating environment. In this case, even an attacked TrE can be replaced. One example is an FBC carrier containing a secure component with its own micro-processing unit and memory, so that the FBC runs independently of the device. The FBC carrier may be connected to the device through a common interface (e.g., USB, JTAG) and authenticate directly to components within the device, after which the attacked components, and possibly TrE components, are replaced. In another approach, if a signed code image is used, the FBC carrier device may replace the image including its signature.
Since in some cases the TrE is not fully trusted to correctly load and operate the FBC, and in most cases the FBC carrier cannot validate how the FBC it delivers is executed, some security enhancements are required so that the FBC carrier can establish trust in the remote execution of the code base. For example, the FBC carrier may generate a one-time secret and embed it in the FBC using obfuscation methods. Alternatively, the carrier may send another authorization secret with, or immediately after, the FBC, which can only be recognized and used by a successfully started FBC. This secret is then used by the successfully started FBC to obtain, from the protected space in the TrE, the communication secrets used in the communication that follows.
An example of using an internal parallel code library for fallback code is now described. The internal parallel code library may include the trigger mechanism and the fallback code library needed to implement correction. For example, the h(e)NB may contain two code images, one for normal mode and one being the fallback code image (FBC). The normal mode call flow may be performed for both AuV and SAV in the various stages. In stage 1, the RoT in ROM verifies the TrE. If the TrE is valid, the components of the next stage may be verified. If the integrity check of any later component fails, execution falls back to the beginning of the TrE code. At this point, the TrE may begin checking the fallback (e.g., correction) code. If the fallback code passes the integrity check, it is loaded and started. The fallback code may contain a minimum set of device management (DM) code for establishing a connection with the HMS. Once a connection with the HMS is established, the failed modules can be identified and updates sent to the h(e)NB. When the correction process is complete, the h(e)NB can restart and resume the validation process. The fallback code can be kept small, facilitating communication with the HMS. Since execution may be "rolled back" into the TrE and the fallback code loaded from there, no trigger mechanism or register may be required.
Another form of "hybrid (internal/external) code library" is now described. The FBC may be stored within the device, for example in the case of the parallel code library described above, but is encrypted on the device and integrity protected. The TrE itself cannot be used to decrypt the FBC, otherwise the attacked TrE would cause the FBC itself to be attacked. The hybrid scheme stores decryption and authentication keys for the FBC on an external security component (e.g., a smart card or UICC). In case of a failed start-up, the TrE notifies the failure and asks the user/HP to insert an authentication token, i.e. a smart card, into the device. Depending on the device properties, two options are available. In a first option, the authentication token stores only the key content and performs mutual authentication with the TrE, during or after which the TrE receives the required key content. The TrE performs integrity check and decryption on the FBC, and then loads and starts the FBC. In another option, the authentication token is modified to automatically verify and decrypt the FBC stored on the device before the FBC is executed either by using only device resources (e.g., using a partial TrE to provide a secure execution environment), or by providing a secure FBC-executable execution environment within the authentication token itself. This approach can allow FBC storage using the device's larger memory space, and also incorporate the security of additional external security elements.
An embodiment using an internal sequential code library is now described. The device management protocol may define protocols and commands for installing and changing software configurations on the remote device, and may include a "reboot" command. It may not, however, include a means for the device to send a "correction needed" message. By combining the validation results of, for example, SAV with the device management protocol, the HMS can nevertheless initiate a re-installation or reset of software components using the device management protocol and then issue a reboot command for revalidation.
Alternatively, the FBC can delete or offload portions of the normal code, leaving only the remaining normal code, and initiate a reboot, followed by a revalidation. A list of normal codes that need to be deleted or uninstalled may be predefined for the FBC. Alternatively, the FBC may obtain the list from an external security component, such as a smart card (e.g., HPM). Alternatively, the FBC may obtain the list from a network-based entity (e.g., h (e) MS).
For this mechanism to operate securely, a trusted application may be required on the device with the following properties: it is integrity protected; it is securely stored in the device; it can be started in case secure boot fails; it can establish a (secure) connection to the HMS; it can verify signatures on software and commands from the HMS; it can install/uninstall software on the device; and it can report that the device needs correction.
A possibly redundant second code library may be used to host this application. As the description above shows, such a second code library introduces some additional, redundant code into the device: all the features it provides may also be required in the case of a normal, successful secure boot of the device, so all features of the second code library may already be present in the first code library.
Another way is to replace the parallel design with a sequential design. This involves the following sequence. Upon success, the RoT verifies and starts the TrE. Thereafter, upon success, the TrE verifies the correction code. Upon success, the TrE verifies the remaining software components. If this verification fails, the TrE stores the failed modules, sets a flag indicating that the device needs correction, and triggers a device reboot. Upon reboot, after verification of the correction code, the TrE passes control to the correction code and releases the list of failed modules to it. The correction code can then use the list for the device correction process and contact the HMS.
An example of SAV using security policy attributes is now described. Notifying the PVE of which modules failed the internal integrity check may involve establishing a standardized list of all SW modules for all makes and models of h(e)NB. A standardized list of security policy attributes (SPAs) may also be produced. An SPA is a policy that tells the PVE what action should be taken if the integrity check of a particular SW module fails. The PVE does not need to know any other information about the failed module.
SPA codes may be standardized and may include the following. A "00" module failure may indicate that network access is denied; all modules of this type would normally be in stage 2, but allowing the code for stage 3 modules provides flexibility. A "01" module failure may indicate that temporary network access is allowed; as described in the correction section, the device may use the temporary network access for correction, for example by using a correction center to repair the failed SW modules, and network access may be stopped if the correction is unsuccessful. A "02" module failure may indicate that network access is allowed; a correction center is allowed to repair the failed SW modules, and network access may be maintained even if the correction is unsuccessful. A "03" module failure may indicate that network access is allowed; the failed SW module may be deleted/disabled/quarantined, and network access may be stopped if this operation is unsuccessful. A "04" module failure may indicate that network access is allowed; the failed SW module may be deleted/disabled/quarantined, and network access may be maintained even if the operation is unsuccessful. A "05" module failure may indicate that network access is allowed and the SW integrity failure may be ignored. "06" may indicate other failures.
A single SPA may be associated with each stage 3 SW module in the h(e)NB. The actual identifiers of the SW modules can be specific to each make and model of h(e)NB. In SAV, the h(e)NB has already sent the h(e)NB_ID to the SeGW, and the network can use the h(e)NB_ID to identify the make, model and serial number of the h(e)NB. For each stage 3 integrity check failure, the h(e)NB puts the corresponding SW module ID and SPA into the notification payload. As per the SAV mechanism, the payload is forwarded to the PVE.
The PVE checks the SPAs, and if any SPA is 00, the SeGW is not authorized to grant access to the h(e)NB. If an SPA is 01 or 02, a correction procedure is triggered, and the PVE sends the h(e)NB_ID and the SW module IDs to the correction center. The correction center may use the h(e)NB_ID to cross-reference its own SW module IDs so that it can download the correct updates to the h(e)NB.
If there is any SPA 03 or 04, the PVE may send appropriate instructions to the SeGW. If there is any SPA 05, h (e) MS or other network component may store data for administrative purposes.
Alternatively, SPAs other than 00 may involve some restart/revalidation and ACK messages. SPA 00 leads to the same final result as AuV, except that the network now has some information about the bad h(e)NB, and administrative action can be taken. Alternatively, the PVE may not be notified about modules that passed the integrity check.
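A possible PVE-side interpretation of these codes is sketched below. The code values and the resulting decisions follow the description above, but the function shape, the return values and the handling of mixed cases are illustrative assumptions.

```python
# Illustrative PVE decision based on reported (SW module ID, SPA) pairs.
def evaluate_spas(failures):
    """failures: list of (sw_module_id, spa) reported by the h(e)NB."""
    spas = {spa for _, spa in failures}
    if "00" in spas:
        return {"access": "deny"}
    decision = {"access": "temporary" if "01" in spas else "allow"}
    if spas & {"01", "02"}:
        # Trigger correction: hand the module IDs to the correction center.
        decision["correct"] = [m for m, spa in failures if spa in ("01", "02")]
    if spas & {"03", "04"}:
        # Instruct the SeGW / device to delete, disable or quarantine these modules.
        decision["quarantine"] = [m for m, spa in failures if spa in ("03", "04")]
    # SPA 05 failures are ignored apart from administrative logging.
    return decision

print(evaluate_spas([("rrm", "01"), ("gui", "05")]))
```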
If the FBC supports basic communication, PVM can be extended to cover failures of stage 2 modules. The SPA may be part of an object that contains the SW module ID and needs to be stored in the TrE. It cannot be stored as part of the SW module, since it could then not be trusted if the integrity check of that SW module fails.
Following a risk assessment process, the SPA assigned to each SW module is agreed with each h(e)NB vendor as part of the type determination process for the SW stack. Once the vendor has an established relationship with the operator, assigning SPAs to new SW modules becomes simple: based on previously successful type determinations, the established vendor is trusted to assign the appropriate SPAs.
To reduce the need to standardize the SW structure of each h(e)NB make, the SW structure of the h(e)NB can be defined in terms of code blocks, where a block is defined as the minimum atomic unit for integrity checking or correction. Individual block functions need not be defined; for example, all of the stage 3 SW may be treated as a single block from the point of view of integrity checking. Alternatively, blocks may be mapped 1:1 to actual SW applications, or to sensitive objects within an application. An SPA may then be applied to a SW block. When the correction center is called because of an SPA of 01 or 02, it downloads the required blocks. The block IDs may be vendor-specific, and their structure need not be standardized.
If an SPA is used in the device validation, the SPA may be securely stored in the TrE and bound to the SW identifier. This may ensure, for example, that a 05-SPA is not replayed for another component with a 00-SPA. Thus, the PVE is able to verify that the received SPA actually belongs to the loaded component in the h (e) NB.
The SPA may be securely transferred from the device to the C _ DB using a registration process initiated by the device's earliest initial network connection and stored for later use. The device can then report the SW _ ID of the failed component, and the PVE can retrieve the corresponding SPA policy action from the local database. This can be used for low bandwidth connected devices.
Embodiments for grouping SPAs are now described. If the SPAs are stored locally in the TrE, the TrE can check all failed codes and their SPAs, process them, and send more general phase integrity checks. The failed module and its SPA may include the cases shown in table 1.
Failed module ID | SPA
00 | 01
01 | 01
02 | 03
03 | 03
04 | 03
05 | 04

TABLE 1
The TrE may process the data shown in table 2.
SPA value | Modules
01 | 00, 01
03 | 02, 03, 04
04 | 05

TABLE 2
A list of modules with different failure levels indicated by SPAs may be sent instead of all SPA values.
Therefore, when some bit blocks are defined in the notification message, there may be a mapping relationship as shown in table 3.
SPA | Module values
00 | (none)
01 | 00, 01
02 | (none)
03 | 02, 03, 04
04 | 05
05 | (none)

TABLE 3
The compactness of this representation depends on the number of modules expected to fail; for example, if on average more than one module fails per SPA value, the data becomes more compact.
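The grouping step that turns the per-module report of Table 1 into the per-SPA report of Tables 2 and 3 is simple to express; the following fragment is just an illustration of that transformation, with hypothetical data structures.

```python
# Group failed module IDs by their SPA value (Table 1 -> Table 2 style report).
from collections import defaultdict

def group_by_spa(failed_modules):
    """failed_modules: dict of module ID -> SPA value."""
    grouped = defaultdict(list)
    for module_id, spa in sorted(failed_modules.items()):
        grouped[spa].append(module_id)
    return dict(grouped)

table1 = {"00": "01", "01": "01", "02": "03", "03": "03", "04": "03", "05": "04"}
print(group_by_spa(table1))   # {'01': ['00', '01'], '03': ['02', '03', '04'], '04': ['05']}
```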
Fig. 13 is a diagram showing an example of validation by remote attestation. The validation entity 1300 receives the SML and the signed PCR values. The SML contains an ordered list of all files that were extended into each PCR. The validation entity 1300 performs the following steps for each entry in the SML. The validation entity 1300 queries the local database 1310 for a known good hash value for the given file name (1). The database 1310 contains all file names and binary RIMs (e.g., hash values) that are considered trusted. If the file name is not found in the database, the file is considered untrusted (2). The validation entity 1300 compares the RIM with the measurement reported in the SML (3). If they do not match, the binary on the platform has been changed (by the user, malicious software, or another entity), and the platform is not trusted (4). The validation entity 1300 then performs an extend operation on a virtual PCR (5). Essentially, the operations performed by the validation entity are identical to those performed by the platform during execution and measurement. At the end of the process, the virtual PCR values are compared (6) with the values reported by the platform. If they do not match, the SML has been tampered with (e.g., if a row is deleted from the SML although its hash value was extended into the PCR, the virtual PCR will not match the reported PCR), and the platform is deemed untrusted (7).
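The verification loop just described can be illustrated with a short sketch of the virtual-PCR computation. The SHA-1 extend rule shown here follows common TPM 1.2 practice; the SML entry layout, the known-good database and the function names are illustrative assumptions rather than the exact structures of the figure.

```python
# Illustrative SML verification with a virtual PCR (SHA-1 extend, TPM 1.2 style).
import hashlib

def extend(pcr, measurement):
    return hashlib.sha1(pcr + measurement).digest()

def validate(sml, reported_pcr, known_good):
    """sml: list of (file_name, measurement_hex); reported_pcr: hex string."""
    virtual_pcr = b"\x00" * 20
    for name, measurement in sml:
        rim = known_good.get(name)
        if rim is None:                     # (2) unknown file
            return "untrusted: unknown file %s" % name
        if rim != measurement:              # (4) binary was changed
            return "untrusted: %s modified" % name
        virtual_pcr = extend(virtual_pcr, bytes.fromhex(measurement))   # (5)
    if virtual_pcr.hex() != reported_pcr:   # (6)/(7) SML tampered with
        return "untrusted: SML does not match PCR"
    return "trusted"
```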
A hierarchical relationship between modules can be employed to reduce the number of components to be reported and to meet latency requirements, whether reporting the list of loaded components in the Clist, reporting the list of failed components in the case of F-SAV, or reporting measurements. An example arrangement is shown in fig. 14. Such an arrangement automatically introduces a natural order on the modules. The number of modules may be large, due to the OS, the protocol stack, management modules, and other modules.
After a successful secure boot, the PVE or SeGW must issue a certificate to the device, indicating a successful boot. Such a certificate contains information elements such as the TrE_ID, version numbers (of software and hardware) or hash values of the software, a secure timestamp, the device location, module hash values, the module Clist, and other relevant information.
Such a certificate may also be used for failed boots. In this case, information can be sent back to the PVE, which can authentically verify that the reported version numbers are correct. Since the PVE is the issuer of the certificate, it can take appropriate steps. The difference is that the PVE does not depend on the device for trust, as is the case when the device itself indicates a successful boot. However, this only works if the PVE can believe that the information it receives from the device about its failed boot condition is not corrupted and not produced under attack. Thus, in this case, the device may be designed such that it is able to detect boot failures and to report its status to the PVE in a way that is neither corrupted nor compromised by an attack.
The certificate may also be used for boot success. In this case, the device may send the measured hash value or measurement, and the last secure boot credential issued by the PVE, or a pointer to that credential, in a subsequent secure boot process. By doing so, the PVE can verify the presence of any malicious changes.
Certificates may also be used when a device boots up and then moves, either within a geographic area or into another operator domain. This situation occurs with geo-tracking devices. To verify the tracking data, it is necessary to know whether the device booted successfully and whether the generated data is authentic. Such a successful-boot certificate may be provided along with the data generated by the device, and may include the location of the device at the time the successful boot was achieved. Thereafter, when a third-party recipient of such a certificate attempts to verify its authenticity, the current location of the device can be determined (preferably using a method that does not rely on processing of location information within the device, such as a GPS-based method) and checked against the location in the certificate. If they do not match, the certificate recipient may request a new secure boot of the device and a subsequent revalidation of integrity from the device or from the network entity managing the device revalidation. Certificates containing location information about where the last successful boot took place may also be used in case of a mid-stream failure, when the end-point network needs to know the environment and configuration (including location) of the last successful boot.
As described herein, any form of validation may be used by PVM. Generally, the three main methods are AuV, SAV and remote validation (RV). Each method handles the steps of measurement, reporting and enforcement associated with device integrity validation differently. AuV performs all three steps locally at the device. RV performs the measurements locally and then reports them to an external entity; enforcement is performed by the external entity. SAV performs a secure boot locally, reports measurements to an external entity, and allows revalidation.
In particular, a device using SAV may perform a direct evaluation of trust status measurements and establish an initial network connection. The evaluation results, as well as the associated reference metrics, may be reported to an external entity, such as a security gateway (SeGW). Optionally, a subset of the measurement and reference metrics may be reported.
The confirmation report may enable evaluation of the h (e) NB trust status based on h (e) NB features such as its platform structure, security policies, and device attestation. The confirmation report may include information about the h (e) NB, TrE capabilities, measurement and validation rules, TrE's security policy manager capabilities, measurement results, platform level attestation information, last boot time, or boot counters.
The device information may include, for example, a manufacturer, a structure, a model number, a version number, a hardware build or version number, or a software component or version number. TrE performance may include, for example, measurement, verification, reporting, and mandatory performance.
The measurement and internal validation rule information may include methods to perform trust status measurements and internal validation during secure boot. For example, coverage may be included, such as the name, type, and order of component loading. Methods of component verification may be included, such as the number and scope of trust chains in verification. Algorithms for measurement and verification may be included, such as secure hash algorithm 1 (SHA-1) extensions. Register ranges, such as Platform Configuration Registers (PCRs) involved in boot verification, may also be included.
The security policy manager capabilities of the TrE may include information relating to implementing and enforcing security policies. The measurement results may include internally reported and verified actual measurement values, such as signed PCR values. The platform-level attestation information may include information about h (e) NBs in general, or TrEs in particular. The last boot time may include a secure timestamp of when the last secure boot was performed.
The start-up counter may include a counter value that is incremented when a power cycle occurs and a secure boot operation is performed. The counter may be a protected counter that cannot be reset or reversed, but only counts forward. The counter value may be initialized to zero when the device is first initialized.
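As a brief illustration of the protected boot counter, consider the following sketch; a real device would keep the value in protected non-volatile storage, which is not modelled here.

class BootCounter:
    # Initialised to zero when the device is first initialised; incremented on
    # every power cycle / secure boot; never reset or decremented.
    def __init__(self):
        self._value = 0

    def on_secure_boot(self):
        self._value += 1          # counts forward only
        return self._value

    @property
    def value(self):
        return self._value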
The confirmation report may be bound to the h (e) NB via a combined authentication and validation procedure by binding the information to an authentication protocol, such as internet key exchange protocol version 2 (IKEv2). The confirmation report may include the certificate. Optionally, some of the information may be included in the certificate.
Alternatively, the confirmation report may include a pointer or reference to a Trusted Third Party (TTP) that provides trust status information from which the external entity may obtain the trust status information. For example, the confirmation report may include a reference to a separate device trust certificate that contains trust status information.
In response to an anomaly encountered during the evaluation, the external entity may deny network access. The external entity may also evaluate the measurements and reference metrics and may detect errors not detected or reported by the h (e) NB. Alternatively, the h (e) NB may be granted restricted network access (quarantine). Otherwise, network access may be granted to the h (e) NB. The h (e) NB may perform, evaluate and report trust status measurements in response to requests by external devices. The request may be issued by an operator. Re-validation can identify components that were not identified during startup. If a non-core validation error is detected, the external entity may send a request to the h (e) NB to perform corrective action. For example, in response to the correction request, the h (e) NB may return to a predetermined state.
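The external entity's decision described above (deny, quarantine, or grant access, with a correction request for non-core errors) can be sketched as follows; the function and its return values are illustrative assumptions only.

def access_decision(core_boot_ok, noncore_failures):
    # core_boot_ok: evaluation found no anomaly in the core secure boot.
    # noncore_failures: list of components with non-core validation errors.
    if not core_boot_ok:
        return "deny_network_access"
    if noncore_failures:
        # Restricted (quarantine) access plus a request for corrective action,
        # e.g. returning the device to a predetermined state.
        return "quarantine_and_request_correction"
    return "grant_network_access"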
SAV allows tampering to be detected via the indicator even if no attack was detected during the secure boot. Depending on the security attributes, a corrective step may be performed on the compromised device. This is possible as long as the indicator to the network shows that the core secure boot has not been compromised and the security attributes are communicated. If the core is compromised, the device will not be able to connect to the network because of local enforcement. A compromised device may be detected upon a reboot or by requesting re-validation, so the likelihood of detection is higher. Software updates can be provided OTA without requiring a service technician to replace the device. SAV enables fine-grained access control to the CN and, through the use of indicators and local enforcement, requires less bandwidth than RV.
SAV combines the advantages of AuV and RV, resulting in better granularity and better visibility into device security attributes and validation measurements. It provides lower bandwidth usage, local device resource requirements comparable to autonomous validation, faster and more convenient detection of a compromised device, and enables the use of a quarantine network for compromised devices.
Fig. 15 is an example block diagram of a wireless communication network 1500, the wireless communication network 1500 including a WTRU 1510, an h (e) NB 1520, and an h (e) MS 1530. As shown in fig. 15, the WTRU 1510, the h (e) NB 1520, and the h (e) MS 1530 are configured to perform platform validation and management.
In addition to the components found in a typical WTRU, the WTRU 1510 includes a processor 1516 with an optional linked memory 1522, at least one transceiver 1514, an optional battery 1520, and an antenna 1518. The processor 1516 is configured to perform supplemental platform validation and management functions for PVM functions passed to the processor by a base station (e.g., the h (e) NB 1520). The transceiver 1514 is in communication with the processor 1516 and the antenna 1518 to facilitate the transmission and reception of wireless communications. In the case where the WTRU 1510 uses a battery 1520, the battery 1520 powers the transceiver 1514 and the processor 1516.
In addition to the components found in a typical h (e) NB, the h (e) NB 1520 includes a processor 1517 with an optional linked memory 1515, a transceiver 1519, and an antenna 1521. The processor 1517 is configured to perform platform validation and management functions to implement the PVM method. The transceiver 1519 is in communication with the processor 1517 and the antenna 1521 to facilitate the transmission and reception of wireless communications. The h (e) NB 1520 interfaces with the h (e) MS 1530, and the h (e) MS 1530 includes a processor 1533 with an optional linked memory 1534.
Although not shown in fig. 15, the SeGW and the PVE may include a processor with an optional linked memory, a transceiver, an antenna, and a communication port, in addition to the components found in typical SeGWs and PVEs. The processor is configured to perform platform validation and management functions to implement the PVM method. The transceiver and communication port communicate with the processor and antenna as needed to facilitate the transmission and reception of communications.
The network components are selectively configured to perform the desired PVM functions as described in detail herein in connection with the various examples. In addition, the WTRU may be configured with supplemental PVM functionality, e.g., for authentication, validation, and other trusted factors, to facilitate its access and utilization of PVM-enabled networks and resources.
As an example, the various components are configured to use PVM with maximal separation of tasks between the entities involved. This may be accomplished by using PVM tokens to pass specific information between the various entities, as described herein.
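As an illustrative sketch of such a token (the field names and the hand-off sequence below are assumptions), each entity that passes the token can append a timestamp to a chronologically ordered list, which is one way the timeliness of validation can be controlled, as described in the examples that follow.

import time

class PVMToken:
    def __init__(self, dev_id, tre_info, verification_data, clist):
        self.dev_id = dev_id
        self.tre_info = tre_info
        self.verification_data = verification_data
        self.clist = clist                    # ordered component indicator list
        self.timestamps = []                  # (entity, time) pairs, oldest first

    def stamp(self, entity_name):
        self.timestamps.append((entity_name, time.time()))
        return self

# Example hand-off: the SeGW creates and stamps the token, then the PVE and the
# DMS each add their own timestamp as the token is passed along.
token = PVMToken("dev-001", {"trusted": True}, {"pcr0": "ab12"}, ["rot", "os", "stack"])
token.stamp("SeGW").stamp("PVE").stamp("DMS")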
Examples
1. A method for Platform Validation and Management (PVM) includes receiving a PVM token in response to a validation message from a device, the PVM token including at least verification information from the device.
2. The method of embodiment 1, further comprising performing validation using predetermined information from the PVM token.
3. The method of any preceding embodiment, further comprising sending a failure report to a Device Management System (DMS) to initiate the revising and reconfirming in response to the failed component.
4. A method as in any preceding embodiment, further comprising sending the modified PVM token with the validation result.
5. The method of any preceding embodiment, wherein performing validation comprises determining suitability of at least one failure condition.
6. The method of any preceding embodiment, wherein the validation is performed using at least one of Remote Validation (RV), autonomous validation (AuV), semi-autonomous validation (SAV), full SAV (F-SAV), minimum validation, or parametric validation.
7. The method of any preceding embodiment, wherein the verification information comprises at least one of a device identification, device information, trusted environment (TrE) information, verification data, a verification binding, and an ordered component list of component indicators of the components.
8. The method of any preceding embodiment, wherein performing validation comprises at least one of determining that the TrE is not authentic, determining that the integrity measurement/verification data does not match, determining a lost Reference Integrity Metric (RIM) of the component, determining that the load component list policy failed, and determining that the device is out of date or the RIM certificate is out of date.
9. The method of any preceding embodiment, wherein the PVM token is bound to an identification and validation process that validates the TrE.
10. The method according to any of the preceding embodiments, wherein the timeliness of the validation is controlled by adding a timestamp to the PVM token and adding said timestamp to the chronologically ordered list by each entity passing said PVM token.
11. The method according to any of the preceding embodiments, further comprising establishing the individualization by using the device identification in the RIM certificate.
12. A method according to any preceding embodiment, further comprising sending the PVM token to the DMS to determine the suitability of the quarantine, whitelist, blacklist and grey list.
13. The method of any preceding embodiment, wherein the grey list comprises at least one of devices that newly join the network, devices that have not been connected for an extended period of time, devices with suspicious behavior, and devices for which a security alert exists.
14. The method of any preceding embodiment, wherein the operator RIM mask replaces predetermined RIM certificates for device components from various external sources with operator RIM certificates.
15. A method as in any preceding embodiment wherein a query is sent to a validation database to check information received in the PVM token.
16. A method as in any preceding embodiment, wherein a query is sent to a configuration database to retrieve a configuration policy based on a predetermined identifier.
17. A method according to any preceding embodiment, wherein the retrieved configuration policy is evaluated.
18. A method according to any preceding embodiment, wherein in response to a failure condition, a message is sent to a validation database manager.
19. A method of validating a device connected to Platform Validation and Management (PVM) includes performing an integrity check on at least one pre-specified component of the device and storing an integrity check result.
20. The method of embodiment 19, further comprising performing a secure boot check on the device and storing a secure boot check result.
21. The method as in any one of embodiments 19-20 further comprising generating a confirmation message as a function of the integrity check result and the secure launch check result.
22. The method as in any one of embodiments 19-21 further comprising forwarding an acknowledgement message to the PVM.
23. A method as in any of embodiments 19-22 further comprising performing a secure boot in stages to ensure that each trusted environment (TrE) component is loaded on condition that a local validation of the TrE component is successful.
24. The method as in any one of embodiments 19-23 further comprising, in a first phase, loading components of the TrE via root of trust (RoT) -dependent secure boot.
25. The method as in any one of embodiments 19-24, further comprising, in a second phase, loading a component external to the TrE to enable communication with the PVM.
26. The method as in any one of embodiments 19-25, further comprising loading remaining components of the device.
27. The method as in any one of embodiments 19-26, wherein performing an integrity check is based on at least one trusted reference value and the TrE.
28. The method as in any one of embodiments 19-27, wherein the confirmation message comprises a local pass/fail indicator as a measure of integrity established in the first and second stages.
29. The method as in any one of embodiments 19-28 further comprising launching a fallback code base.
30. A method as in any of embodiments 19-29 wherein initiating a fallback code base comprises triggering a software update to a main code base comprising a RIM.
31. A method as in any of embodiments 19-30 further comprising sending a distress signal if a fallback code is loaded.
32. The method as in any one of embodiments 19-31 wherein a fallback code (FBC) image facilitates device revision and is stored in secure memory.
33. The method as in any one of embodiments 19-32 wherein the integrity check determines that only registered components are activated.
34. The method as in any one of embodiments 19-33 wherein the registered component is activated by loading into memory.
35. The method as in any one of embodiments 19-34 wherein the registered component is activated by initially entering an integrity-verified state.
36. The method as in any one of embodiments 19-35, further comprising performing a second integrity check.
37. The method as in any one of embodiments 19-36, further comprising performing a second integrity check on a condition that the device has completed a successful network connection.
38. The method as in any one of embodiments 19-37, wherein the second integrity check is initiated by the device or in response to a message.
39. The method as in any one of embodiments 19-38, wherein the integrity check result is stored in a protected storage location.
40. The method as in any one of embodiments 19-39 wherein the confirmation message comprises a cryptographically signed statement.
41. The method as in any one of embodiments 19-40, wherein the confirmation message comprises proof of a binding between the integrity check and a subsequent authentication process.
42. The method as in any one of embodiments 19-41, wherein the confirmation message comprises proof of a binding between a secure boot check and a subsequent authentication process.
43. The method as in any one of embodiments 19-42 wherein the confirmation message comprises a timestamp.
44. The method as in any one of embodiments 19-43, wherein the confirmation message comprises a first timestamp obtained before the integrity check and the start-up check and a second timestamp obtained after the integrity check and the start-up check.
45. The method as in any one of embodiments 19-44 wherein the confirmation message comprises an indication of the device configuration.
46. A method as in any of embodiments 19-45 wherein the confirmation message comprises an indication of a security attribute of the device component.
47. The method as in any one of embodiments 19-46 further comprising receiving a decision message from the PVM in response to the confirmation message.
48. A method as in any of embodiments 19-47 wherein the decision message comprises an indication of network privileges associated with the device.
49. The method as in any one of embodiments 19-48 further comprising the Trusted Resource (TR) performing an integrity check.
50. The method as in any one of embodiments 19-49, further comprising a Trusted Resource (TR) performing a secure boot check.
51. The method as in any one of embodiments 19-50 further comprising a Trusted Resource (TR) generating an acknowledgement message.
52. The method as in any one of embodiments 19-51 further comprising a Trusted Resource (TR) receiving a decision message from the PVM.
53. The method as in any one of embodiments 19-52 wherein the FBC deletes or uninstalls a portion of the normal code and restarts the device for reconfirmation.
54. A Platform Validation Entity (PVE) for facilitating Platform Validation and Management (PVM), comprising the PVE configured to receive a PVM token in response to a validation message from a device, the PVM token comprising at least verification information from the device.
55. The PVE of embodiment 54, wherein the PVE is configured to perform validation using predetermined information from the PVM token.
56. The PVE as in any one of embodiments 54-55, wherein the PVE is configured to send a failure report to a Device Management System (DMS) to initiate revision and reconfirmation in response to a failed component.
57. The PVE as in any one of embodiments 54-56, wherein the PVE is configured to send a modified PVM token with a validation result.
58. The PVE as in any one of embodiments 54-57, wherein the verification information comprises at least a security policy attribute.
59. An apparatus for performing validation via Platform Validation and Management (PVM), the apparatus comprising a processor configured to perform integrity checking on at least one pre-specified component of the apparatus and to store integrity check results in a memory.
60. The apparatus of embodiment 59, wherein the processor is further configured to perform a secure boot check on the apparatus and store a secure boot check result in the memory.
61. The apparatus as in any one of embodiments 59-60, wherein the processor is further configured to generate a confirmation message based on the integrity check result and the secure boot check result.
62. The apparatus as in any one of embodiments 59-61, further comprising a transmitter configured to send the confirmation message to the PVM.
63. A Device Management System (DMS) for facilitating Platform Validation and Management (PVM), comprising the DMS configured to receive at least one of a failure report and a PVM token from a Platform Validation Entity (PVE) in response to a validation message from a device, to initiate a fix and re-validation in response to a failed component, the PVM token including at least authentication information from the device.
64. The DMS of embodiment 63, wherein the DMS is configured to determine the availability of updates for at least the failed component.
65. The DMS as in any one of embodiments 63-64, wherein the DMS is configured to prepare wireless updates as available updates.
66. The DMS as in any one of embodiments 63-65, wherein the DMS is configured to ensure that a trusted reference value for the available updates exists in a validation database.
67. The DMS as in any one of embodiments 63-66, wherein the DMS is configured to send the modified PVM token and a revalidation indication to a security gateway (SeGW).
68. The DMS as in any one of embodiments 63-67, wherein the DMS is configured to send a re-acknowledgement trigger to the device.
69. A method for use in wireless communications, the method comprising performing Platform Validation and Management (PVM).
70. The method of embodiment 69, wherein performing PVM comprises performing semi-autonomous validation (SAV).
71. The method as in any one of embodiments 69-70 wherein performing the PVM comprises wireless device validation performed in a Platform Validation Entity (PVE).
72. The method as in any one of embodiments 69-71, wherein performing the PVM comprises:
73. The method of any of embodiments 69-72, further comprising performing a secure boot in stages to ensure that each trusted environment component is loaded on condition that local validation of the components that need to be loaded is successful.
74. The method of any of embodiments 69-73, further comprising, in a first phase, loading the trusted environment component by a root of trust (RoT)-dependent secure boot.
75. The method according to any of embodiments 69-74, further comprising in a second phase, loading a component outside the trusted environment, the component being required to perform basic communication with a security gateway (SeGW).
76. The method of any of embodiments 69-75, further comprising loading remaining components of the device.
77. The method as in any one of embodiments 69-76, further comprising collecting and sending data to the SeGW, the data comprising at least one of: a device identification; information pertaining to the enhanced home node B (H (e) NB); information pertaining to the trusted environment (TrE); verification data; a verification binding; and an ordered component list of component indicators of the components.
78. The method according to any of embodiments 69-77, further comprising reconfirming the device in response to a loss of connection to the authenticated H (e) NB.
79. The method as in any one of embodiments 69-78, further comprising determining a failure condition during PVE validation, the failure condition comprising at least one of: determining that the TrE is untrustworthy according to the transferred TrE information; determining that the integrity measurement/verification data does not match; determining that a Reference Integrity Metric (RIM) of the component is lost; determining that a load component list (Clist) policy failed; determining that a device or RIM certificate is expired; and the SeGW rejects network access and device authentication and blocks subsequent authentication attempts.
80. The method as in any one of embodiments 69-79 further comprising generating a PVM token that is bound to an identification of a valid TrE and a validation process.
81. The method as in any one of embodiments 69-80 wherein timeliness of validation is controlled by adding a timestamp to the PVM token, the timestamp being originally generated by the SeGW and added by each entity to the time-ordered list as the PVM token is passed.
82. The method as in any one of embodiments 69-81 wherein the timeliness of the validation is controlled by completing the first and second stages of communication with the SeGW and the PVE, and using the nonce provided by the SeGW/PVE to bind further validation of the stage 3 check before sending the results of the third stage to the SeGW.
83. The method of any of embodiments 69-82, further comprising generating the RIM certificate by an operator with whom the device wishes to establish a backhaul link.
84. The method of any of embodiments 69-83 wherein the RIM manager configures the PVE and DMS for platform validation, configures the DMS to perform a credential update operation on a component on the device on which the RIM credential is to be installed, and further comprising enforcing a state re-validation on the device.
85. The method as in any of embodiments 69-84, further comprising an operator controlling device operation by performing at least one of: encrypting a portion of the component to be locked using the symmetric key; sending a decryption key to the TrE in a protected and controlled space that is only accessible with operator authorization; and releasing the authorization data to the TrE when the PVE receives the indicator for the component.
86. The method as in any one of embodiments 69-85, further comprising establishing the individualization from the PVM by using the device identification in the RIM certificate.
87. The method according to any of embodiments 69-86, further comprising establishing individualization according to providing operator signatures with device identification and component indicator pairs.
88. A method as in any of embodiments 69-87 further comprising establishing a blacklist for a device and barring network access based on the blacklist, wherein the blacklist includes at least a device identification.
89. A method as in any of embodiments 69-88 further comprising establishing a quarantine network for the device where the SeGW serves as a forced bearer for the core network.
90. The method as in any one of embodiments 69-89 further comprising establishing a gray list of quarantined devices, the gray list including at least one of devices that newly joined the network, devices that have not connected for a predetermined time, devices with suspicious behavior, and devices for which a security alert exists.
91. The method according to any of embodiments 69-90, further comprising performing diagnostic validation including at least one of loading an unknown component, rejecting loading an unknown component, and disabling a component.
92. The method as in any one of embodiments 69-91, further comprising performing a minimal acknowledgement comprising sending an indicator or reference value used in a local authentication process.
93. The method according to any of embodiments 69-92, further comprising binding a confirmation in the authentication certificate.
94. The method according to any of embodiments 69-93, wherein a certificate is a set of signature data signed by an issuer, its SeGW, or a subordinate entity responsible for managing these certificates, the signature data in the certificate containing at least a device identification, a device public key for authentication and validation, and a list of components.
95. The method as in any one of embodiments 69-94 further comprising performing autonomous validation that does not send any validation data to the SeGW pursuant to the phases of the secure boot process and grouping management identities.
96. The method as in any one of embodiments 69-95, further comprising sending the device identification and the authentication data over the established secure channel on a condition that the device and the SeGW have completed a first authentication protocol for the device in which the device authenticates one of its management identities.
97. The method as in any one of embodiments 69-96, further comprising securing sending the component code to the secure memory card.
98. The method as in any one of embodiments 69-97 further comprising replacing the digest value with encryption.
99. The method as in any one of embodiments 69-98, further comprising including the device location information in the acknowledgement message.
100. The method as in any one of embodiments 69-99 wherein the device is to be identified by a device identifier (Dev _ ID).
101. The method as in any one of embodiments 69-100 wherein the Dev_ID is a fully qualified domain name (FQDN), a Uniform Resource Locator (URL), a uniform resource name, a uniform resource identifier, a Medium Access Control (MAC) address, an Internet Protocol (IP) address, an IP host identifier, a subnet address, an international mobile equipment identity, an international mobile station equipment identity and software version number, an electronic serial number, a mobile equipment identity, an IP multimedia core network subsystem user ID, or a mobile station integrated services digital network ID.
102. The method according to any of embodiments 69-101, wherein the Dev ID is in alphanumeric or machine readable form enabling unique reliable and unambiguous identification of the device.
103. The method as in any one of embodiments 69-102, further comprising performing a device integrity check at boot-up based on the one or more trusted reference values and the TrE.
104. The method as in any one of embodiments 69-103, wherein the TrE is an entity that contains minimum functionality required by the PVM.
105. A method as in any of embodiments 69-104 further comprising performing network authentication with the SeGW and transmitting data containing a device identification.
106. The method as in any one of embodiments 69-105 further comprising preparing a measurement message containing a local pass/fail indicator for use as the established integrity measurement in the first and second steps.
107. The method as in any one of embodiments 69-106 wherein performing the PVM comprises basic SAV.
108. The method as in any one of embodiments 69-107, wherein performing the PVM comprises a full SAV.
109. The method as in any one of embodiments 69-108 wherein performing the PVM comprises platform validation.
110. The method as in any one of embodiments 69-109 wherein platform validation allows protection of a Core Network (CN) against malicious devices.
111. The method as in any one of embodiments 69-110 wherein performing the PVM comprises using indirect communication between the device and the CN.
112. The method as in any one of embodiments 69-103, wherein the platform validation ensures that the device in the validated security state is able to communicate with the entity in the CN.
113. The method as in any one of embodiments 69-112 wherein performing the PVM comprises reporting confirmation data.
114. The method as in any one of embodiments 69-113, wherein reporting the acknowledgement data comprises collecting the acknowledgement data and reporting it to the SeGW.
115. The method as in any one of embodiments 69-114, wherein performing the PVM comprises validating using the PVE.
116. The method as in any one of embodiments 69-115, wherein the PVEs determine the validity of the device.
117. The method as in any one of embodiments 69-116 wherein the PVEs are Policy Decision Points (PDPs).
118. The method as in any one of embodiments 69-117 wherein the PVEs report a failure condition.
119. The method as in any one of embodiments 69-118 wherein the failure condition is a TrE failure, a verification data failure, a Clist policy failure, or a pre-validation device authentication failure.
120. The method as in any one of embodiments 69-119, wherein performing the PVM comprises reconfirming.
121. The method as in any one of embodiments 69-120, wherein reconfirming comprises periodically reconfirming.
122. The method according to any of embodiments 69-121, wherein periodically reconfirming comprises confirming that the device is operating in a predetermined state with a lesser risk of executing malicious code.
123. The method as in any one of embodiments 69-122, wherein reconfirming comprises initiating an authentication process.
124. The method as in any one of embodiments 69-123 wherein performing the PVM comprises device initiated reconfirmation.
125. The method as in any one of embodiments 69-124 wherein the device initiated reconfirmation is performed periodically.
126. The method as in any one of embodiments 69-125 wherein performing the PVM comprises a network initiated reconfirmation.
127. The method as in any one of embodiments 69-126 wherein the network initiated reconfirmation is performed periodically.
128. The method as in any one of embodiments 69-127 wherein the network initiated reconfirmation may be performed for security needs.
129. The method as in any one of embodiments 69-128, wherein performing the PVM comprises platform management.
130. The method as in any one of embodiments 69-129 wherein the platform management comprises a DMS performing device management.
131. The method as in any one of embodiments 69-130 wherein the platform management is based on the received and stored device information.
132. The method according to any of embodiments 69-131 wherein the device is an h (e) NB.
133. The method as in any one of embodiments 69-132 wherein performing the PVM comprises token passing.
134. The method as in any one of embodiments 69-133 wherein the PVM is a process.
135. The method as in any one of embodiments 69-134 wherein the SeGW is responsible for generating and managing tokens uniquely associated with the validation process.
136. The method as in any one of embodiments 69-135, wherein performing the PVM comprises validating over the public internet.
137. The method as in any one of embodiments 69-136, wherein validating over the internet comprises satisfying special requirements that ensure security of the initial validation.
138. The method as in any one of embodiments 69-137 wherein performing the PVM comprises operator RIM masking.
139. The method as in any one of embodiments 69-138 wherein the operator RIM mask replaces a large number of RIM certificates for device components from various external sources with RIM certificates generated by an operator with whom the device wishes to establish a backhaul link.
140. The method as in any one of embodiments 69-139 wherein performing the PVM comprises an operator component lock.
141. The method as in any one of embodiments 69-140 wherein the carrier component lock comprises operation of the carrier control device or a component thereof in an external network.
142. The method according to any one of embodiments 69-141, wherein performing PVM comprises individualizing.
143. The method as in any one of embodiments 69-142, wherein personalizing comprises documenting by whom device configuration and trustworthiness is managed.
144. The method according to any of embodiments 69-143, wherein the individualizing includes providing data to the device, the data issuing addressing of the device.
145. The method as in any one of embodiments 69-144, wherein performing the PVM comprises blacklisting the device.
146. The method as in any one of embodiments 69-145, wherein blacklisting the device comprises establishing a blacklist for the device and barring network access based on the blacklist.
147. The method as in any one of embodiments 69-146 wherein the blacklist is a device-specific blacklist.
148. The method as in any one of embodiments 69-147 wherein the blacklist is a network-wide blacklist.
149. The method as in any one of embodiments 69-148 wherein performing PVM comprises whitelisting devices.
150. A method as in any of embodiments 69-149 wherein whitelisting devices comprises establishing a whitelist for the devices and allowing network access based on the whitelist.
151. The method as in any one of embodiments 69-150 wherein the whitelist is a device specific whitelist.
152. The method as in any one of embodiments 69-151 wherein the whitelist is a network-wide whitelist.
153. The method as in any one of embodiments 69-152 wherein performing the PVM comprises isolating a network.
154. The method as in any one of embodiments 69-153 wherein a decision is made by the SeGW as to which devices to isolate.
155. The method as in any one of embodiments 69-154 wherein the isolated device does not have direct access to the CN and provides limited services.
156. The method as in any one of embodiments 69-155 wherein performing the PVM comprises parametric validation.
157. A method as in any of embodiments 69-156 wherein parameter validation comprises sending parameters in the clear during validation.
158. The method according to any one of embodiments 69-157, wherein performing PVM comprises diagnostic validation.
159. The method as in any one of embodiments 69-158 wherein executing the PVM comprises loading an unknown component.
160. The method as in any one of embodiments 69-159 wherein loading the unknown component allows loading the component without a RIM in the device.
161. The method as in any one of embodiments 69-160, wherein performing PVM comprises PVM diagnosing a failure condition.
162. The method as in any one of embodiments 69-161 wherein the DMS is configured to omit a failed component on the replacement device.
163. The method as in any one of embodiments 69-162 wherein the DMS is configured to replace all components in the Clist with correct components.
164. The method as in any one of embodiments 69-163, wherein performing the PVM comprises denying loading of unknown components.
165. The method as in any one of embodiments 69-164, wherein the diagnostic confirmation comprises reporting that the component cannot be loaded and sending a measurement of the component to the CN.
166. The method as in any one of embodiments 69-165 wherein executing the PVM comprises disabling a component.
167. A method as in any of embodiments 69-166 wherein disabling a component comprises sending disable CInd and reconfirm messages for components that cannot be confirmed nor replaced or updated in the PVM without denying a connection to the device.
168. The method as in any one of embodiments 69-167 wherein performing the PVM comprises a minimum validation strategy.
169. The method as in any one of embodiments 69-168, wherein the minimum acknowledgement policy comprises performing device acknowledgements only under certain circumstances.
170. The method as in any one of embodiments 69-169, wherein performing the PVM comprises binding a validation in the authentication certificate.
171. The method according to any of embodiments 69-170, wherein binding the acknowledgement in the authentication certificate comprises automatically binding an authentication ID of the device with the acknowledgement.
172. The method as in any one of embodiments 69-171, wherein performing the PVM comprises revoking the device authentication certificate.
173. The method according to any of embodiments 69-172, wherein revoking the device authentication certificate comprises determining whether to revoke the device certificate for device authentication from the device.
174. The method according to any of embodiments 69-173, wherein revoking the device authentication certificate includes indicating to the device that the device authentication failed due to revoking the certificate; and remove devices from the white list or add devices to the black list.
175. The method as in any one of embodiments 69-174, further comprising the PVM performing autonomous validation (AuV).
176. A method as in any of embodiments 69-175 wherein the AuV comprises omitting to send acknowledgement data to the SeGW.
177. The method as in any one of embodiments 69-176 wherein the AuV comprises sending only the Dev _ ID.
178. The method as in any one of embodiments 69-177, wherein performing the PVM comprises pruning.
179. The method as in any one of embodiments 69-178, wherein modifying comprises updating software.
180. The method as in any one of embodiments 69-179 wherein the modification is an action necessary to continue service to the device.
181. The method as in any one of embodiments 69-180 wherein performing the PVM comprises device initiated corrections.
182. The method as in any one of embodiments 69-181 wherein upon detecting an error, performing a device-initiated fix is performed instead of quarantining the device.
183. The method as in any one of embodiments 69-182 wherein performing the PVM comprises network initiated corrections.
184. The method as in any one of embodiments 69-183, wherein executing the PVM comprises launching a fallback code base.
185. The method as in any one of embodiments 69-184 wherein initiating the fallback code library comprises triggering a software update to a master code library containing the RIM.
186. A method as in any of embodiments 69-185 wherein performing PVM comprises sending a distress signal.
187. The method as in any one of embodiments 69-186 wherein a rollback code (FBC) image facilitates device revision and is stored in secure memory.
188. The method of any of embodiments 69-187, wherein performing the PVM comprises not validating using a RIM.
189. The method as in any one of embodiments 69-188 wherein performing PVM comprises including location-based information.
190. The method according to any of embodiments 69-189, wherein the location information is used for theft prevention, cargo tracking, fleet monitoring, or surveillance.
191. The method as in any one of embodiments 69-190, wherein the device comprises a GPS module.
192. The method as in any one of embodiments 69-191 wherein the secure boot comprises validating the GPS module and components.
193. A method according to any of embodiments 69-192 wherein performing PVM comprises an acknowledgment connection using IKEv2 protocol.
194. The method according to any one of embodiments 69-193, wherein performing the PVM comprises using a transport protocol.
195. A method as in any of embodiments 69-194 wherein the transport protocol is IKE or ISAKMP.
196. The method as in any one of embodiments 69-195, wherein the transport protocol defines a plurality of available certificate profiles.
197. The method as in any one of embodiments 69-196, wherein performing the PVM comprises a Transport Layer Security (TLS) handshake.
198. The method as in any one of embodiments 69-197 wherein performing the PVM comprises using a TLS session ticket extension.
199. A method according to any of embodiments 69-198 wherein the TLS session ticket extension allows a server to issue session tickets to clients that can be used to resume sessions without the server having to store per-client session state.
200. The method according to any of embodiments 69-199, wherein performing the PVM comprises a supplemental authentication protocol.
201. The method as in any one of embodiments 69-200 wherein the supplemental authentication protocol comprises transmitting the Dev _ ID and authentication data for the Dev _ ID over an established secure channel.
202. The method as in any one of embodiments 69-201 wherein performing the PVM comprises managing session establishment.
203. A method as in any of embodiments 69-202 wherein managing session establishment comprises using a communication protocol between a device and a SeGW.
204. The method as in any one of embodiments 69-203 wherein performing the PVM comprises certificate-based validation.
205. The method as in any one of embodiments 69-204, wherein performing PVM comprises using an Open Mobile Alliance (OMA) for a Device Management (DM) based architecture.
206. The method as in any one of embodiments 69-205, wherein performing the PVM comprises using a certificate exchange method.
207. The method as in any one of embodiments 69-206, wherein the certificate exchange method comprises merging a confirmation with an authentication and automatically binding an authentication ID of the device with the confirmation.
208. The method as in any one of embodiments 69-207, wherein the binding certificate is a signature dataset.
209. The method as in any one of embodiments 69-208 wherein the binding certificate is signed by the issuer.
210. The method as in any one of embodiments 69-209 wherein performing the PVM comprises pre-certificate exchange.
211. The method as in any one of embodiments 69-210 wherein performing the PVM comprises a post-certificate exchange.
212. The method according to any of embodiments 69-211, wherein performing the PVM comprises using a signed message format for downloading the software package from the publisher to the device.
213. The method as in any one of embodiments 69-212 wherein the signed message format allows for sending the file in a single signed data packet.
214. The method as in any one of embodiments 69-213 wherein the signed message format allows the recipient device to authenticate the source and the data format contains instructions for installing the content.
215. The method as in any one of embodiments 69-214 wherein the signed message format comprises a header containing a format version, and a command list and a length of the payload component.
216. The method as in any one of embodiments 69-215 wherein performing PVM comprises using a second code library for revising.
217. The method as in any one of embodiments 69-216, wherein performing the PVM comprises using an external FBC.
218. The method as in any one of embodiments 69-217 wherein an external FBC is used to initiate the correction and is run in TrE.
219. The method as in any one of embodiments 69-218 wherein performing the PVM comprises using an internal parallel code library.
220. The method as in any one of embodiments 69-219, wherein executing the PVM comprises using a trigger mechanism and a required fallback code base for facilitating the revising.
221. The method as in any one of embodiments 69-220 wherein performing the PVM comprises using an internal sequential code library.
222. The method as in any one of embodiments 69-221 wherein the internal sequential code library defines protocols and commands for installing or changing software configurations on the remote device.
223. The method as in any one of embodiments 69-222 wherein using an internal sequential code library comprises merging results of performing the PVM and the protocol.
224. The method as in any one of embodiments 69-223, wherein performing the PVM comprises using security policy attributes.
225. The method according to any of embodiments 69-224, wherein using the security policy attributes comprises generating a standardized list of Security Policy Attributes (SPAs).
226. The method as in any one of embodiments 69-225 wherein the SPA is a policy that informs the PVE what action should be taken if the integrity check of a particular module fails.
Although the features and components of the PVM are described above in particular combinations, each feature or component can be used alone without the other features and components or in various combinations or non-combinations with other features and components. The methods or processes provided herein may be implemented in a computer program, software, or firmware incorporated into a computer-readable storage medium for execution by a general purpose computer or a processor. Examples of the computer readable storage medium include read-only memory (ROM), random-access memory (RAM), registers, cache, semiconductor memory devices, magnetic media such as internal hard disks and removable hard disks, magneto-optical media, and optical media such as CD-ROM optical disks and Digital Versatile Disks (DVDs).
Suitable processors include, for example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of Integrated Circuit (IC), and/or a state machine.
A processor in association with software may be used to implement a radio frequency transceiver for use in a Wireless Transmit Receive Unit (WTRU), User Equipment (UE), terminal, base station, Radio Network Controller (RNC), or any host computer used in a wireless device. The WTRU may be used in conjunction with modules, implemented in hardware and/or software, such as a camera, a video camera module, a video phone, a hands-free phone, a vibrating device, a speaker, a microphone, a television transceiver, a portable headset, a keyboard, a Bluetooth module, a Frequency Modulation (FM) wireless unit, a Liquid Crystal Display (LCD) display unit, an Organic Light Emitting Diode (OLED) display unit, a digital music player, a media player, a video game player module, an internet browser, and/or any Wireless Local Area Network (WLAN) or Ultra Wideband (UWB) module.

Claims (48)

1. A method for Platform Validation and Management (PVM), the method comprising:
receiving a PVM token in response to a confirmation message from a device, the PVM token including at least verification information from the device;
performing validation using predetermined information from the PVM token;
sending a failure report to a Device Management System (DMS) to initiate a fix and re-validation in response to the failed component; and
sending the modified PVM token with the validation result.
2. The method of claim 1, wherein performing validation comprises determining suitability of at least one failure condition.
3. The method of claim 1, wherein acknowledgement is made using at least one of remote acknowledgement (RV), autonomous acknowledgement (AuV), semi-autonomous acknowledgement (SAV), full SAV (F-SAV), minimum acknowledgement, or parametric acknowledgement.
4. The method of claim 1, wherein the verification information comprises at least one of a device identification, device information, trusted environment (TrE) information, verification data, a verification binding, and an ordered component list of component indicators of the components.
5. The method of claim 1, wherein performing validation comprises at least one of determining that the TrE is not trusted, determining that integrity measurement/verification data does not match, determining that a Reference Integrity Metric (RIM) of a component is lost, determining that a load component list policy fails, and determining that a device is out-of-date or a RIM certificate is out-of-date.
6. The method of claim 1, wherein the PVM token is bound to an identification and validation process that validates the TrE.
7. The method of claim 1, wherein timeliness of validation is controlled by adding a timestamp to a PVM token and to a chronologically ordered list by each entity passing the PVM token.
8. The method of claim 1, further comprising establishing individualization by using device identification in RIM certificates.
9. The method of claim 1, further comprising sending the PVM token to the DMS to determine applicability of quarantine, whitelist, blacklist, and grey list.
10. The method of claim 9, wherein the gray list includes at least one of devices that newly join the network, devices that have not been connected for an extended period of time, devices with suspicious behavior, and devices for which a security alert exists.
11. The method of claim 1, wherein the operator RIM mask replaces predetermined RIM certificates for device components from various external sources with operator RIM certificates.
12. The method of claim 1, wherein a query is sent to a validation database to check information received in the PVM token.
13. The method of claim 1, wherein a query is sent to a configuration database to retrieve a configuration policy based on a predetermined identifier.
14. The method of claim 13, wherein the retrieved configuration policy is evaluated.
15. The method of claim 1, wherein the message is sent to a validation database manager in response to a failure condition.
16. A method of validating a device connected to Platform Validation and Management (PVM), the method comprising:
performing an integrity check on at least one pre-designated component of the device and storing an integrity check result;
performing a secure boot check on the device and storing a secure boot check result;
generating a confirmation message based on the integrity check result and the secure launch check result; and
forwarding the acknowledgment message to the PVM.
17. The method of claim 16, further comprising:
performing a secure boot in stages to ensure that each trusted environment (TrE) component is loaded on condition that a local validation of the TrE component is successful;
in a first phase, loading components of the TrE via a root of trust (RoT) -dependent secure boot;
in a second phase, loading components outside the TrE to enable communication with the PVM; and
the remaining components of the device are loaded.
18. The method of claim 16, wherein performing the integrity check is based on at least one trusted reference value and the TrE.
19. The method of claim 16, wherein the confirmation message includes a local pass/fail indicator as a measure of integrity established during the first and second phases.
20. The method of claim 16, further comprising launching a fallback code base.
21. The method of claim 20, wherein initiating the fallback code base comprises triggering a software update to a main code base comprising a RIM.
22. The method of claim 16, further comprising sending a distress signal on a condition that a fallback code base is loaded.
23. The method of claim 16, wherein a fallback code (FBC) image facilitates device revision and is stored in secure memory.
24. The method of claim 16, wherein the integrity check determines that only registered components are activated.
25. The method of claim 24, wherein the registered component is activated by loading into memory.
26. The method of claim 24, wherein the registered component is activated by beginning to enter an integrity-proven state.
27. The method of claim 16, further comprising performing a second integrity check.
28. The method of claim 16, further comprising performing a second integrity check on a condition that the device has completed a successful network connection.
29. The method of claim 27, wherein the second integrity check is initiated by the device or in response to a message.
30. The method of claim 16, wherein the integrity check result is stored in a protected storage location.
31. The method of claim 16, wherein the confirmation message comprises a cryptographically signed statement.
32. The method of claim 16, wherein the confirmation message comprises evidence of a binding between the integrity check and a subsequent authentication process.
33. The method of claim 16, wherein the confirmation message comprises evidence of a binding between the secure launch check and a subsequent authentication process.
34. The method of claim 16, wherein the confirmation message comprises a timestamp.
35. The method of claim 16, wherein the confirmation message comprises a first timestamp taken before the integrity check and the start-up check and a second timestamp taken after the integrity check and the start-up check.
36. The method of claim 16, wherein the confirmation message comprises an indication of a device configuration.
37. The method of claim 16, wherein the confirmation message comprises an indication of a security attribute of the device component.
38. The method of claim 16, further comprising receiving a decision message from the PVM in response to the confirmation message.
39. The method of claim 38, wherein the decision message comprises an indication of network privileges associated with the device.
40. The method of claim 16, further comprising a Trusted Resource (TR) performing the integrity check.
41. The method of claim 16, further comprising a Trusted Resource (TR) performing the secure boot check.
42. The method of claim 16, further comprising a Trusted Resource (TR) generating the confirmation message.
43. The method of claim 38, further comprising a Trusted Resource (TR) receiving the decision message from the PVM.
44. The method of claim 23, wherein the FBC deletes or uninstalls a portion of normal code and restarts the device for reconfirmation.
45. A Platform Validation Entity (PVE) for facilitating Platform Validation and Management (PVM), comprising:
the PVE is configured to receive a PVM token in response to a confirmation message from a device, the PVM token including at least verification information from the device;
the PVEs are configured to perform validation using predetermined information from the PVM token;
the PVEs are configured to send a failure report to a Device Management System (DMS) in response to a failed component to initiate a revision and reconfirmation; and
the PVEs are configured to send a modified PVM token with a validation result.
46. The PVE of claim 45, wherein the verification information comprises at least a security policy attribute.
47. An apparatus for performing validation via Platform Validation and Management (PVM), the apparatus comprising:
a processor configured to perform an integrity check on at least one pre-specified component of the device and store an integrity check result in a memory;
The processor is configured to perform a secure boot check on the device and store a secure boot check result in the memory;
the processor is configured to generate a confirmation message based on the integrity check result and the secure launch check result; and
a transmitter to send the acknowledgement message to the PVM.
48. A Device Management System (DMS) for facilitating Platform Validation and Management (PVM), comprising:
the DMS is configured to receive at least one of a failure report and a PVM token from a Platform Validation Entity (PVE) in response to a validation message from a device to initiate a revision and reconfirmation in response to a failed component, the PVM token including at least authentication information from the device;
the DMS is configured to determine availability of updates for at least the failed component;
the DMS is configured to prepare wireless updates as available updates;
the DMS is configured to ensure that there is a trusted reference value for the available updates in a validation database;
the DMS is configured to send the modified PVM token and a reconfirmation indication to a security gateway (SeGW); and
the DMS is configured to send a re-acknowledgement trigger to the device.
HK12107351.4A 2009-03-06 2010-03-05 Platform validation and management of wireless devices HK1166911A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US61/158,242 2009-03-06
US61/173,457 2009-04-28
US61/222,067 2009-06-30
US61/235,793 2009-08-21

Publications (1)

Publication Number Publication Date
HK1166911A true HK1166911A (en) 2012-11-09
