

A proposed threat model for confidential computing

February 13, 2023

This article was contributed by Carlos Bilbao

The field of confidential computing is still in its infancy, to the point where it lacks a clear, agreed-upon, and established problem description. Elena Reshetova and Andi Kleen from Intel recently started the conversation by sharing their view of a potential threat model in the form of this document, which is specific to Intel Trust Domain Extensions (TDX) on Linux, but which is intended to be applicable to other confidential-computing solutions as well. The resulting conversation showed that there is some ground to cover before the community reaches a consensus on the model.

This security specification constitutes the first public draft of a concise threat model for confidential computing for the Linux kernel. The first few paragraphs of the threat model describe the key confidential-computing assumption: the guest kernel in virtualized environments cannot trust the hypervisor. This probably is no surprise to readers familiar with the subject, but Greg Kroah-Hartman expressed his reservations:

That is, frankly, a very funny threat model. How realistic is it really given all of the other ways that a hypervisor can mess with a guest?

So what do you actually trust here? The CPU? A device? Nothing?

The threat model provides an answer: the trusted computing base (TCB) for Intel TDX is limited to the Intel platform, the TDX module, and the software stack running inside the TDX guest. More generally, we can conclude that the confidential-computing TCB on any system comprises the platform hardware, the guest, and any intermediary that mediates communication between the two.

Hardening preexisting interfaces

Memory encryption and hardware attestation can help guarantee privacy from malicious software while verifying the integrity of both the guest memory and the trusted software accessing that memory. If the attestation process confirms that the confidential-computing system has not been tampered with, then it is guaranteed that the guest's private memory is unreadable to the host. These techniques offer strong security guarantees but are not enough to protect the guest from attacks that exploit the communication interfaces between the host and the guest. The threat model addresses this gap by defining a threat-mitigation matrix that lists potential interface entry points and their possible mitigations.
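The attestation check described above reduces, at its core, to comparing a measurement of the guest's initial state against a reference value computed in advance by the guest owner. The following is a minimal, hypothetical sketch of that comparison; the function names are illustrative and not any real TDX API, though SHA-384 is the digest actually used by TDX measurement registers:

```python
import hashlib

def measure(boot_image: bytes, config: bytes) -> str:
    """Hash the components that make up the guest's initial state."""
    h = hashlib.sha384()  # TDX measurement registers use SHA-384
    h.update(boot_image)
    h.update(config)
    return h.hexdigest()

def verify_attestation(reported: str, boot_image: bytes, config: bytes) -> bool:
    """Accept the guest only if the reported measurement matches the
    reference value; any tampering with the measured state changes the
    digest and causes verification to fail."""
    return reported == measure(boot_image, config)
```

In a real deployment the reported measurement is additionally signed by the hardware, so the host cannot simply forge it; the sketch omits that signature step.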

Non-robust device drivers are an example of a vulnerable interface that can be exploited to feed malicious input from the hypervisor side. Kroah-Hartman complained that the "hardening" terminology used in the threat model can be misleading; broken drivers should be fixed and not hardened, he said. Reshetova disagreed, stating that certain fixes apply to systems where the hardware is operating correctly, but where the hypervisor is malicious. The primary concern around this is that an untrusted host can use device interfaces to attack the guest, but the Linux device drivers were not developed with this potential threat in mind.

Regarding device drivers, the specification recommends the maintenance of a list of allowed devices. In practice, the virtualized guest needs little more than the virtio drivers. James Bottomley referred to this when he noted that virtio devices needed by a guest to boot are potentially the most dangerous. Christophe de Dinechin questioned just how much harm a malicious virtio device might cause to the guest kernel and whether such an attack could really disclose confidential information held by the guest. To date, this question remains open, but there have been efforts to mitigate virtio threats that have led to kernel patches.
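The allowlist idea can be sketched as a simple policy check: the guest refuses to probe any device whose driver is not explicitly authorized. This is an illustrative model only, not the actual kernel interface, and the driver names are assumptions:

```python
# Drivers a minimal confidential guest might authorize (illustrative).
ALLOWED_DRIVERS = {"virtio_net", "virtio_blk", "virtio_console"}

def authorize_device(driver_name: str) -> bool:
    """Return True only for drivers on the confidential-guest allowlist;
    everything else the host offers is refused before probing."""
    return driver_name in ALLOWED_DRIVERS
```

Under such a policy, a host-supplied legacy NIC like e1000 would simply never be probed, shrinking the attack surface to the handful of drivers the guest actually needs.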

The specification also explains that several subsystems could be used for fuzz-testing of the communication interfaces exposed to a malicious hypervisor. For example, the Intel-specific TDVMCALL hypercalls, which mediate communication between the guest and the TDX module, can be used for fuzz testing. Randomness inside the guest also needs extra precautions. A failed RDRAND or RDSEED instruction must trigger an infinite loop, preventing the guest from falling back to alternative entropy sources that the host can tamper with. The KVM clock (kvm-clock) also becomes untrusted and must be disabled inside the guest. Since ACPI tables are never mapped as shared with the host, a new interface is introduced to allow the host to obtain the operating regions declared in those tables. Finally, the confidential-computing guest must explicitly accept the private memory pages allocated by the host in order to be protected from attacks that target the guest's paging.
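The RDRAND policy above can be modeled as "retry, then stop hard, never degrade." The sketch below is illustrative pseudologic, not kernel code; the callback stands in for the instruction's carry-flag success protocol, and the bounded retry count plus exception stand in for the kernel's retry loop and hard stop:

```python
def get_trusted_random(rdrand, retries: int = 10) -> int:
    """Fetch randomness only from the trusted hardware source.

    `rdrand` models the RDRAND/RDSEED protocol: it returns a
    (success, value) pair, mirroring the instruction's carry flag.
    """
    for _ in range(retries):
        ok, value = rdrand()
        if ok:
            return value
    # In the real guest this is an infinite loop / hard stop; raising
    # here stands in for "never fall back to a source (such as a
    # host-controlled clock) that the hypervisor can influence".
    raise SystemError("hardware RNG failed; refusing untrusted fallback")
```

The key design point is the absence of any fallback path: an ordinary kernel might mix in timing jitter or other environmental noise when the hardware source fails, but in a confidential guest every such alternative is host-observable or host-controlled.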

The confidential-computing threat model also addresses the interesting problem of how to panic the guest kernel. The host controls inter-processor interrupts, which thus cannot be trusted to safely stop other CPUs. Furthermore, some driver notifiers perform tasks that may involve waiting for some host action. Reshetova mentioned that denial-of-service (DoS) attacks that trigger a guest crash (which can be preceded by multiple oopses) are out of scope in this model, but that one cannot assume that all crashes are safe. She further explained that certain crashes, such as those related to memory corruption, can be a starting point for further security attacks, leading to privilege escalation, information disclosure, and data corruption — the sorts of outcomes that confidential computing seeks to prevent.

The Linux TDX software stack uses dm-crypt with LUKS to protect the guest's storage devices by providing encryption and authentication for the storage volumes. However, Richard Weinberger noted that the cryptography used in LUKS is meant to safeguard data at rest, not data in transit. Reshetova responded that disk encryption already presumes that the attacker can observe all encrypted data on the disk — and the alterations that occur when a new block is written; she was therefore uncertain about the practicality of this type of attack.

Finally, a confidential-computing guest must be aware of transient-execution attacks that exploit speculative CPU optimizations. For example, the kernel running inside the guest should take extra precautions to prevent any potential Spectre vulnerabilities associated with the host-controlled interfaces described above. The specification proposes using static analyzers like Smatch to identify potential attack surfaces. Nothing can replace manual inspection of the identified code, but the review time can be reduced by filtering the results down to the drivers that the guest kernel actually depends on.
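The filtering step can be sketched as follows: keep only the analyzer findings whose file paths fall under drivers the confidential guest actually enables. The path prefixes and finding format here are illustrative assumptions, not Smatch's actual output format:

```python
# Paths of drivers a minimal confidential guest enables (illustrative).
GUEST_DRIVER_PATHS = {"drivers/virtio/", "drivers/char/virtio_console.c"}

def relevant_findings(findings):
    """Filter static-analyzer findings (each a dict with a 'file' key)
    down to those touching code the guest actually builds, so that
    manual review effort is spent only where it matters."""
    return [f for f in findings
            if any(f["file"].startswith(p) for p in GUEST_DRIVER_PATHS)]
```

A finding in, say, an e1000 driver file would be dropped from the review queue, while one in the virtio ring code would be kept.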

In conclusion

Agreeing on a particular threat model is one of the most pressing challenges for confidential computing in the cloud, and that agreement will shape how confidential computing integrates with the larger kernel development community. For example, with regard to the efforts to strengthen drivers, some developers argued that it would be easier to create confidential-computing-specific drivers than to rely on existing Linux drivers, which were not written with this threat model in mind. The fuzzing efforts by those working on the Linux TDX software stack have already laid the groundwork for several patches, though these are still awaiting review. Maintaining the hardening of the system over time, however, will require that the maintainers accept the model of what the system is being hardened against.

Index entries for this article
Security: Confidential computing
GuestArticles: Bilbao, Carlos



A proposed threat model for confidential computing

Posted Feb 13, 2023 22:04 UTC (Mon) by flussence (guest, #85566) [Link] (3 responses)

Any DRM system that relies on its observable universe passing a vibe check will fail as emulation advances, and when there's anything worth breaking the DRM for, that historically happens on the order of a few months. Has Intel itself been living under a rock for these past 20 years, or just the someone-else's-computer roommates they're selling this snake oil to?

A proposed threat model for confidential computing

Posted Feb 14, 2023 8:25 UTC (Tue) by smurf (subscriber, #17840) [Link] (2 responses)

I don't think it's snake oil, per se. As a guest you want to protect yourself against a compromised host.

Yes, in theory you can't prevent a compromise via a host that's broken from the start, because of emulation (as you noticed), but if you force them to emulate your whole startup task you can discover that, due to timing if nothing else.

Once the guest is up and running, the host can't do (much) more than crash the guest or randomly corrupt data; the easiest way to discover *that* might be to simply run two guests in parallel and verify that the results are identical.

A proposed threat model for confidential computing

Posted Feb 14, 2023 8:57 UTC (Tue) by mjg59 (subscriber, #23239) [Link] (1 responses)

Emulation is avoided by having the CPU attest to boot state with a certificate that chains back to the CPU manufacturer. The CPU knows whether it's running emulation or running enclave code, so you can't fake that without compromising the CPU itself.

A proposed threat model for confidential computing

Posted Feb 14, 2023 13:21 UTC (Tue) by paulj (subscriber, #341) [Link]

Computation is the product of 2 things: The code and the state.

Having the code be signed but act on unsigned state does not give us any strong security guarantees about the computation. Particularly given:

a) We do not know how to write fully secure code
b) We rely on ever increasing amounts of complex code

Signing the code stops the code being subverted, but many many - indeed perhaps /most/ - security attacks are *not* the result of subversion of the running code, but of the exploiting of bugs in the original (as signed, often) code by feeding it state. Those attacks will not be stopped by this.

Just to note.

A proposed threat model for confidential computing

Posted Feb 14, 2023 11:22 UTC (Tue) by jezuch (subscriber, #52988) [Link] (1 responses)

Shouldn't this be done, like *before* this was implemented and shipped to millions of users?...

A proposed threat model for confidential computing

Posted Feb 14, 2023 16:43 UTC (Tue) by smurf (subscriber, #17840) [Link]

As they say: that would be too easy.

A proposed threat model for confidential computing

Posted Feb 14, 2023 14:50 UTC (Tue) by dullfire (guest, #111432) [Link] (1 responses)

I don't see anything in Intel's threat table about SMT/side channels, or other side channels that are exploitable due to the host having control of scheduling.

Is Intel just ignoring that?

A proposed threat model for confidential computing

Posted Feb 14, 2023 14:57 UTC (Tue) by dullfire (guest, #111432) [Link]

To be clear by "ignoring" I mean "excluding from their threat model"


Copyright © 2023, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds