Credits: The Register
Five boffins from four US universities have explored AMD’s Secure Encrypted Virtualization (SEV) technology – and found its defenses can be, in certain circumstances, bypassed with a bit of effort.
In a paper [PDF] presented Tuesday at the ACM Asia Conference on Computer and Communications Security in Auckland, New Zealand, computer scientists Jan Werner (UNC Chapel Hill), Joshua Mason (University of Illinois), Manos Antonakakis (Georgia Tech), Michalis Polychronakis (Stony Brook University), and Fabian Monrose (UNC Chapel Hill) detail two novel attacks that can undo the privacy of protected processor enclaves.
The paper, “The SEVerESt Of Them All: Inference Attacks Against Secure Virtual Enclaves,” describes techniques that can be exploited by rogue cloud server administrators, or hypervisors hijacked by hackers, to figure out what applications are running within an SEV-protected guest virtual machine, even when its RAM is encrypted, and also extract or even inject data within those VMs.
This is possible, we’re told, by monitoring, and altering if necessary, the contents of the general-purpose registers of the SEV guest’s CPU cores, gradually revealing or messing with whatever workload the guest may be executing. The hypervisor can access the registers, which typically hold temporary variables of whatever software is running, by briefly pausing the guest and inspecting its saved state. Efforts by AMD to prevent this from happening, by hiding the context of a virtual machine while the hypervisor is active, can also, it is claimed, be potentially thwarted.
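To make the idea concrete, here is a minimal sketch of what a snoop on saved register state might look for. Everything below is illustrative and invented for this article, not taken from the paper: it assumes the attacker has already captured a guest vCPU's general-purpose register values (for instance, from the saved state of a paused VM), and simply scans them for runs of printable ASCII bytes, as one might when hunting for plaintext passing through the guest's registers.

```python
# Illustrative sketch only. The snapshot below is hypothetical; a real
# malicious hypervisor would read these values from the paused guest's
# saved CPU state.
import struct

def printable_fragments(registers, min_len=4):
    """Scan 64-bit register values for runs of printable ASCII bytes,
    as an attacker might when hunting for plaintext (e.g. data just
    read from disk, or decrypted TLS payloads) held in registers."""
    fragments = []
    for name, value in registers.items():
        raw = struct.pack("<Q", value)      # 8 bytes, little-endian
        run = b""
        for byte in raw:
            if 0x20 <= byte < 0x7f:         # printable ASCII
                run += bytes([byte])
            else:
                if len(run) >= min_len:
                    fragments.append((name, run.decode()))
                run = b""
        if len(run) >= min_len:
            fragments.append((name, run.decode()))
    return fragments

# Hypothetical snapshot: rsi happens to hold the bytes "PASSWORD"
snapshot = {
    "rax": 0x00007F3A12C04E10,              # looks like a pointer, ignored
    "rsi": int.from_bytes(b"PASSWORD", "little"),
}
print(printable_fragments(snapshot))        # → [('rsi', 'PASSWORD')]
```

In practice the researchers' attacks are far more systematic than this byte-scraping toy, but the underlying observation is the same: unencrypted register state leaks whatever the guest's code was holding in its hands at the moment it was paused.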
SEV is supposed to safeguard sensitive workloads, running in guest virtual machines, from the prying eyes and fingers of malware and rogue insiders on host servers, typically machines located off-premises or in the public cloud.
The techniques, specifically, undermine the data confidentiality model of guest virtual machines by enabling miscreants to “recover data transferred over TLS connections within the encrypted guest, retrieve the contents of sensitive data as it is being read from disk by the guest, and inject arbitrary data within the guest,” according to the study.
As a result, the paper calls into question the confidentiality promises of cloud service providers. Pulling off these techniques, in our view, is non-trivial, so if anyone does fancy exploiting these weaknesses in SEV in real-world scenarios, they’ll need to be determined and suitably resourced.
In 2016, AMD introduced two memory encryption capabilities to protect sensitive data in multi-tenant environments, Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV). The former protects memory against physical attacks like cold boot and direct memory access attacks. The latter mixes memory encryption and virtualization, allowing each virtual machine to be protected from other virtual machines and underlying hypervisors and their admins.
Other vendors have their own secure enclave systems, like Intel SGX, which offers a different set of potential attack paths.
SEV, says AMD, protects customers’ guest VMs from one another, and from software running on the underlying host and its administrators. Whatever happens in these virtual machines should be off limits to other customers as well as the host machine’s operating system, hypervisor, and admins. However, the researchers have demonstrated that this threat model fails to ward off register inference attacks and structural inference attacks by malicious hypervisors.
“By passively observing changes in the registers, an adversary can recover critical information about activities in the encrypted guest,” the researchers explain in their paper.
A variant technique even works against Secure Encrypted Virtualization Encrypted State (SEV-ES), an extended memory protection technique that not only encrypts RAM but encrypts the guest’s virtual machine control block: this is an area of memory that stores a virtual machine’s CPU register contents when it is forced to yield to the hypervisor. This encryption should thus stop the hypervisor from making any sense of the paused VM’s context, though its contents can still be inferred, we’re told.
“We show how one can use data provided by the Instruction Based Sampling (IBS) subsystem (e.g. to learn whether an executed instruction was a branch, load, or store) to identify the applications running within the VM,” the paper says. “Intuitively, one can collect performance data from the virtual machine and match the observed behavior to known signatures of running applications.”
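The matching step the researchers describe can be sketched as a nearest-neighbor lookup over instruction-mix profiles. The signature values below are made up for illustration, not drawn from the paper: each is a hypothetical (branch, load, store) fraction of sampled instructions for a known application.

```python
# Illustrative sketch of matching observed IBS behavior to known
# application signatures. All numbers are invented for this example.

def classify(observed, signatures):
    """Return the known application whose instruction-mix signature is
    closest (by squared Euclidean distance) to the observed mix."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda app: distance(observed, signatures[app]))

# Hypothetical (branch, load, store) fractions per application
signatures = {
    "nginx":   (0.22, 0.31, 0.12),
    "openssl": (0.15, 0.40, 0.20),
    "redis":   (0.28, 0.25, 0.10),
}

# Instruction mix sampled via IBS from the encrypted guest (made up)
observed = (0.16, 0.38, 0.19)
print(classify(observed, signatures))   # → openssl
```

The paper's actual classifiers are more sophisticated, but the point stands: IBS reports what kind of instruction ran even when the memory it touched is encrypted, and that side channel is enough to tell applications apart.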
To conduct their work, the boffins used a Silicon Mechanics aNU-12-304 server with dual AMD Epyc 7301 processors and 256GB of RAM, running Ubuntu 16.04 and a custom 64-bit Linux kernel v4.15. Guest VMs received a single vCPU with 2GB of RAM, running Ubuntu 16.04 with the same kernel as the host.
While the security implications of accessing encrypted data and injecting arbitrary data are obvious, even exposing what applications are running in a guest VM has potentially undesirable consequences. Service providers could use the technique to fingerprint applications and ban unwanted software; malicious individuals could conduct reconnaissance to target exploits, develop return-oriented programming (ROP) attacks, or undermine Address Space Layout Randomization (ASLR) defenses.
The researchers recommend the IBS subsystem be changed so that guest readings are discarded when secure encrypted virtualization is enabled.
The Register asked AMD for comment, and we’ve not heard back.