Exposure of Sensitive Information in Shared Microarchitectural Structures during Transient Execution

Status: Incomplete
Abstraction: Base
Structure: Simple
Description

A processor event may allow transient operations to access architecturally restricted data (for example, in another address space) in a shared microarchitectural structure (for example, a CPU cache), potentially exposing the data over a covert channel.

Extended Description

Many commodity processors have Instruction Set Architecture (ISA) features that protect software components from one another. These features can include memory segmentation, virtual memory, privilege rings, trusted execution environments, and virtual machines, among others. For example, virtual memory provides each process with its own address space, which prevents processes from accessing each other's private data. Many of these features can be used to form hardware-enforced security boundaries between software components.

Many commodity processors also share microarchitectural resources that cache (temporarily store) data, which may be confidential. These resources may be shared across processor contexts, including across SMT threads, privilege rings, or others.

When transient operations allow access to ISA-protected data in a shared microarchitectural resource, this might violate users' expectations of the ISA feature that is bypassed. For example, if transient operations can access a victim's private data in a shared microarchitectural resource, then the operations' microarchitectural side effects may correspond to the accessed data. If an attacker can trigger these transient operations and observe their side effects through a covert channel [REF-1400], then the attacker may be able to infer the victim's private data. Private data could include sensitive program data, OS/VMM data, page table data (such as memory addresses), system configuration data (see Demonstrative Example 3), or any other data that the attacker does not have the required privileges to access.

Common Consequences
Scope: Confidentiality

Impact: Read Memory


Detection Methods
Manual Analysis (Effectiveness: Moderate)
This weakness can be detected in hardware by manually inspecting processor specifications. Features that exhibit this weakness may include microarchitectural predictors, access control checks that occur out-of-order, or any other features that can allow operations to execute without committing to architectural state. Academic researchers have demonstrated that new hardware weaknesses can be discovered by examining publicly available patent filings, for example [REF-1405] and [REF-1406]. Hardware designers can also scrutinize aspects of the instruction set architecture that have undefined behavior; these can become a focal point when applying other detection methods.
Automated Analysis (Effectiveness: Moderate)
This weakness can be detected (pre-discovery) in hardware by employing static or dynamic taint analysis methods [REF-1401]. These methods can label data in one context (for example, kernel data) and perform information flow analysis (or a simulation, etc.) to determine whether tainted data can appear in another context (for example, user mode). Alternatively, stale or invalid data in shared microarchitectural resources can be marked as tainted, and the taint analysis framework can identify when transient operations encounter tainted data.
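
As a highly simplified illustration of this labeling idea, the sketch below is a toy model with hypothetical names, not a real taint-analysis framework (which would operate on RTL or a microarchitectural simulator): it tags each value with the context that produced it, propagates the most sensitive label through operations, and reports a violation when kernel-tagged data influences state observable from user mode.

Code Example:

Informative
C

#include <stdbool.h>
#include <stdio.h>

/* Toy two-context taint model (hypothetical): values carry a label naming
 * the context that produced them. Real frameworks track labels through an
 * RTL or microarchitectural simulation rather than through C structs. */
typedef enum { CTX_USER, CTX_KERNEL } context_t;

typedef struct {
    unsigned long value;
    context_t     label;   /* taint label: which context owns this data */
} tagged_t;

/* Propagation rule: a result takes the most sensitive label of its operands. */
static tagged_t combine(tagged_t a, tagged_t b)
{
    tagged_t r;
    r.value = a.value ^ b.value;
    r.label = (a.label == CTX_KERNEL || b.label == CTX_KERNEL) ? CTX_KERNEL
                                                               : CTX_USER;
    return r;
}

/* Sink check: flag kernel-tainted data reaching a user-observable structure
 * (for example, a cache index derived from the data). */
static bool reaches_user_observable_state(tagged_t v)
{
    return v.label == CTX_KERNEL;
}

int main(void)
{
    tagged_t kernel_secret = { 0x1234, CTX_KERNEL };
    tagged_t user_index    = { 0x0040, CTX_USER };
    tagged_t cache_index   = combine(kernel_secret, user_index);

    if (reaches_user_observable_state(cache_index))
        puts("taint violation: kernel data influences user-observable state");
    return 0;
}
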
Automated Analysis (Effectiveness: High)
Software vendors can release tools that detect the presence of known weaknesses (post-discovery) on a processor. For example, some of these tools can attempt to transiently execute a vulnerable code sequence and detect whether code successfully leaks data in a manner consistent with the weakness under test. Alternatively, some hardware vendors provide enumeration for the presence of a weakness (or lack of a weakness). These enumeration bits can be checked and reported by system software. For example, Linux supports these checks for many commodity processors:

$ cat /proc/cpuinfo | grep bugs | head -n 1
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds mmio_stale_data retbleed
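
In addition to /proc/cpuinfo, Linux exposes per-vulnerability status files under /sys/devices/system/cpu/vulnerabilities/. The sketch below, which assumes a Linux system that provides this sysfs directory, prints each reported vulnerability together with its mitigation status (for example, "Mitigation: PTI" or "Not affected").

Code Example:

Informative
C

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Print each file under the Linux sysfs vulnerabilities directory along
 * with its contents (e.g., "Mitigation: PTI" or "Not affected"). */
int main(void)
{
    const char *dirpath = "/sys/devices/system/cpu/vulnerabilities";
    DIR *dir = opendir(dirpath);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;   /* skip "." and ".." */

        char path[512], status[256] = "";
        snprintf(path, sizeof(path), "%s/%s", dirpath, entry->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(status, sizeof(status), f))
            status[strcspn(status, "\n")] = '\0';
        fclose(f);

        printf("%-24s %s\n", entry->d_name, status);
    }

    closedir(dir);
    return 0;
}
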
Fuzzing (Effectiveness: Opportunistic)
Academic researchers have demonstrated that this weakness can be detected in hardware using software fuzzing tools that treat the underlying hardware as a black box ([REF-1406], [REF-1430]).
Potential Mitigations
Phase: Architecture and Design
Hardware designers may choose to engineer the processor's pipeline to prevent architecturally restricted data from being used by operations that can execute transiently.

Effectiveness: High

Phase: Architecture and Design
Hardware designers may choose not to share microarchitectural resources that can contain sensitive data, such as fill buffers and store buffers.

Effectiveness: Moderate

Phase: Architecture and Design
Hardware designers may choose to sanitize specific microarchitectural state (for example, store buffers) when the processor transitions to a different context, such as whenever a system call is invoked. Alternatively, the hardware may expose instruction(s) that allow software to sanitize microarchitectural state according to the user or system administrator's threat model. These mitigation approaches are similar to those that address Sensitive Information in Resource Not Removed Before Reuse; however, sanitizing microarchitectural state may not be the optimal way to mitigate this weakness on every processor design.

Effectiveness: Moderate

Phase: Architecture and Design
The hardware designer can attempt to prevent transient execution from causing observable discrepancies in specific covert channels.

Effectiveness: Limited

Phase: Architecture and Design
Software architects may design software to enforce strong isolation between different contexts. For example, kernel page table isolation (KPTI) mitigates the Meltdown vulnerability [REF-1401] by separating user-mode page tables from kernel-mode page tables, which prevents user-mode processes from using Meltdown to transiently access kernel memory [REF-1404].

Effectiveness: Limited

Phase: Build and Compilation
If the weakness is exposed by a single instruction (or a small set of instructions), then the compiler (or JIT, etc.) can be configured to prevent the affected instruction(s) from being generated, and instead generate an alternate sequence of instructions that is not affected by the weakness.

Effectiveness: Limited

Phase: Build and Compilation
Use software techniques (including the use of serialization instructions) that are intended to reduce the number of instructions that can be executed transiently after a processor event or misprediction; a minimal sketch of this approach follows this mitigation.

Effectiveness: Incidental
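
The following is a minimal sketch of this approach, assuming an x86 target and a compiler that provides the _mm_lfence() intrinsic; the array names are illustrative. Placing a serializing fence after the bounds check prevents the dependent loads from executing transiently before the check resolves, a technique documented by processor vendors for bounds check bypass.

Code Example:

Good
C

#include <emmintrin.h>   /* _mm_lfence */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical data for illustration; array2 plays the role of the probe
 * array an attacker would otherwise use as a covert channel. */
static uint8_t array1[16] = { 1, 2, 3, 4 };
static size_t  array1_size = 16;
static uint8_t array2[256 * 4096];

static uint8_t read_element(size_t x)
{
    uint8_t y = 0;
    if (x < array1_size) {
        /* Serializing fence: younger instructions (the dependent loads
         * below) do not begin executing until the bounds check above has
         * resolved, so they cannot run transiently on a misprediction. */
        _mm_lfence();
        y = array2[array1[x] * 4096];
    }
    return y;
}

int main(void)
{
    printf("%d\n", read_element(2));
    return 0;
}
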

Phase: Implementation
System software can mitigate this weakness by invoking state-sanitizing operations when switching from one context to another, according to the hardware vendor's recommendations; a sketch of one such operation follows this mitigation.

Effectiveness: Limited
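
As one concrete example, Intel documents that on processors with the MD_CLEAR functionality, executing VERW with a memory operand overwrites the microarchitectural buffers affected by Microarchitectural Data Sampling. The sketch below shows that sequence from user space, assuming an x86-64 target and GCC/Clang inline assembly; in practice the OS or VMM issues this operation at privilege transitions, typically with a known-good writable selector, rather than relying on application code.

Code Example:

Good
C

#include <stdint.h>

/* Sketch only (assumption-laden): on x86 processors with the MD_CLEAR
 * functionality, VERW with a memory operand overwrites the buffers affected
 * by Microarchitectural Data Sampling, per Intel's published guidance. Here
 * the current DS selector is reused as the operand; kernels typically use a
 * known-good writable selector instead. */
static inline void clear_cpu_buffers(void)
{
    uint16_t sel;
    __asm__ volatile("mov %%ds, %0" : "=r"(sel));
    __asm__ volatile("verw %0" : : "m"(sel) : "cc", "memory");
}

int main(void)
{
    clear_cpu_buffers();   /* e.g., before returning data to a less trusted context */
    return 0;
}
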

Phase: System Configuration
Some systems may allow the user to disable (for example, in the BIOS) sharing of the affected resource.

Effectiveness: Limited

Phase: System Configuration
Some systems may allow the user to disable (for example, in the BIOS) microarchitectural features that allow transient access to architecturally restricted data.

Effectiveness: Limited

Phase: Patching and Maintenance
The hardware vendor may provide a patch to sanitize the affected shared microarchitectural state when the processor transitions to a different context.

Effectiveness: Moderate

Phase: Patching and Maintenance
This kind of patch may not be feasible or implementable for all processors or all weaknesses.

Effectiveness: Limited

Phase: Requirements
Processor designers, system software vendors, or other agents may choose to restrict the ability of unprivileged software to access high-resolution timers that are commonly used to monitor covert channels.

Effectiveness: Defense in Depth

Demonstrative Examples
Some processors may perform access control checks in parallel with memory read/write operations. For example, when a user-mode program attempts to read data from memory, the processor may also need to check whether the memory address is mapped into user space or kernel space. If the processor performs the access concurrently with the check, then the access may be able to transiently read kernel data before the check completes. This race condition is demonstrated in the following code snippet from [REF-1408], with additional annotations:

Code Example:

Bad
x86 Assembly

1 ; rcx = kernel address, rbx = probe array
2 xor rax, rax                # set rax to 0
3 retry:
4 mov al, byte [rcx]          # attempt to read kernel memory
5 shl rax, 0xc                # multiply result by page size (4KB)
6 jz retry                    # if the result is zero, try again
7 mov rbx, qword [rbx + rax]  # transmit result over a cache covert channel

Vulnerable processors may return kernel data from a shared microarchitectural resource in line 4, for example, from the processor's L1 data cache. Since this vulnerability involves a race condition, the mov in line 4 may not always return kernel data (that is, whenever the check "wins" the race), in which case this demonstration code re-attempts the access in line 6. The accessed data is multiplied by 4KB, a common page size, to make it easier to observe via a cache covert channel after the transmission in line 7. The use of cache covert channels to observe the side effects of transient execution has been described in [REF-1408].
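
The snippet above shows only the transmitter. As a rough illustration of the receiver side, a FLUSH+RELOAD style observer can flush every page of the probe array, wait for the transient transmission, and then time an access to each page; the page that loads markedly faster was cached by the transient access, and its index reveals the leaked byte. The sketch below is illustrative only: the probe array, the hit threshold, and the locally touched page that stands in for the transient access are all assumptions, and a real receiver must calibrate its timing threshold and handle noise.

Code Example:

Informative
C

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define PAGE_SIZE     4096
#define HIT_THRESHOLD 120   /* assumption: cycle threshold needs per-machine calibration */

static uint8_t probe_array[256 * PAGE_SIZE];   /* hypothetical probe array */

/* Time a single load; smaller values indicate a cache hit. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

/* Flush every probe page so that only a transiently touched page is cached. */
static void flush_probe_array(void)
{
    for (int v = 0; v < 256; v++)
        _mm_clflush(&probe_array[v * PAGE_SIZE]);
}

/* Return the byte value whose probe page loads fastest (a cache hit), i.e.,
 * the value presumed to have been accessed during transient execution. */
static int recover_byte(void)
{
    for (int v = 0; v < 256; v++)
        if (time_access(&probe_array[v * PAGE_SIZE]) < HIT_THRESHOLD)
            return v;
    return -1;   /* no clear cache hit observed */
}

int main(void)
{
    memset(probe_array, 1, sizeof(probe_array));   /* back each page with real memory */
    flush_probe_array();

    /* Stand-in for the transient access on line 7 of the snippet above:
     * touching one page marks it "hot" in the cache. */
    volatile uint8_t sink = probe_array['S' * PAGE_SIZE];
    (void)sink;

    printf("recovered byte: %d\n", recover_byte());
    return 0;
}
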
Many commodity processors share microarchitectural fill buffers between sibling hardware threads on simultaneous multithreaded (SMT) processors. Fill buffers can serve as temporary storage for data that passes to and from the processor's caches. Microarchitectural Fill Buffer Data Sampling (MFBDS) is a vulnerability that can allow a hardware thread to access its sibling's private data in a shared fill buffer. The access may be prohibited by the processor's ISA, but MFBDS can allow the access to occur during transient execution, in particular during a faulting operation or an operation that triggers a microcode assist. More information on MFBDS can be found in [REF-1405] and [REF-1409].
Some processors may allow access to system registers (for example, system coprocessor registers or model-specific registers) during transient execution. This scenario is depicted in the code snippet below. Under ordinary operating circumstances, code in exception level 0 (EL0) is not permitted to access registers that are restricted to EL1, such as TTBR0_EL1. However, on some processors an earlier misprediction can cause the MRS instruction to transiently read the value in an EL1 register. In this example, a conditional branch (line 2) can be mispredicted as "not taken" while waiting for a slow load (line 1). This allows MRS (line 3) to transiently read the value in the TTBR0_EL1 register. The subsequent memory access (line 6) can allow the restricted register's value to become observable, for example, over a cache covert channel. The code snippet is from [REF-1410]. See also [REF-1411].

Code Example:

Bad
ARM Assembly

1 LDR X1, [X2]       ; arranged to miss in the cache
2 CBZ X1, over       ; This will be taken
3 MRS X3, TTBR0_EL1
4 LSL X3, X3, #imm
5 AND X3, X3, #0xFC0
6 LDR X5, [X6,X3]    ; X6 is an EL0 base address
7 over

Observed Examples
CVE-2017-5754: A fault may allow transient user-mode operations to access kernel data cached in the L1D, potentially exposing the data over a covert channel.
CVE-2018-3615: A fault may allow transient non-enclave operations to access SGX enclave data cached in the L1D, potentially exposing the data over a covert channel.
CVE-2019-11135: A TSX Asynchronous Abort may allow transient operations to access architecturally restricted data, potentially exposing the data over a covert channel.
References
Page Table Isolation (PTI)
The kernel development community
30-01-2023
ID: REF-1404
RIDL: Rogue In-Flight Data Load
Stephan van Schaik, Alyssa Milburn, Sebastian Österlund, Pietro Frigo, Giorgi Maisuradze, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida
19-05-2019
ID: REF-1405
Downfall: Exploiting Speculative Data Gathering
Daniel Moghimi
09-08-2023
ID: REF-1406
Hardware Security Leak Detection by Symbolic Simulation
Neta Bar Kama and Roope Kaivola
11-2021
ID: REF-1401
Meltdown: Reading Kernel Memory from User Space
Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, and Mike Hamburg
21-05-2020
ID: REF-1408
Cache Speculation Side-channels
ARM
01-2018
ID: REF-1410
Rogue System Register Read/CVE-2018-3640/INTEL-SA-00115
Intel Corporation
01-05-2018
ID: REF-1411
You Cannot Always Win the Race: Analyzing the LFENCE/JMP Mitigation for Branch Target Injection
Alyssa Milburn, Ke Sun, and Henrique Kawakami
08-03-2022
ID: REF-1389
Medusa: Microarchitectural Data Leakage via Automated Attack Synthesis
Daniel Moghimi, Moritz Lipp, Berk Sunar, and Michael Schwarz
08-2020
ID: REF-1430
InvisiSpec: Making Speculative Execution Invisible in the Cache Hierarchy
Mengjia Yan, Jiho Choi, Dimitrios Skarlatos, Adam Morrison, Christopher W. Fletcher, and Josep Torrellas
05-2019
ID: REF-1417
Port Contention for Fun and Profit
Alejandro Cabrera Aldaya, Billy Bob Brumley, Sohaib ul Hassan, Cesar Pereida García, and Nicola Tuveri
05-2019
ID: REF-1418
Speculative Interference Attacks: Breaking Invisible Speculation Schemes
Mohammad Behnia, Prateek Sahu, Riccardo Paccagnella, Jiyong Yu, Zirui Zhao, Xiang Zou, Thomas Unterluggauer, Josep Torrellas, Carlos Rozas, Adam Morrison, Frank Mckeen, Fangfei Liu, Ron Gabor, Christopher W. Fletcher, Abhishek Basak, and Alaa Alameldeen
04-2021
ID: REF-1419
Spectre is here to stay: An analysis of side-channels and speculative execution
Ross Mcilroy, Jaroslav Sevcik, Tobias Tebbi, Ben L. Titzer, and Toon Verwaest
14-02-2019
ID: REF-1420
Applicable Platforms
Languages:
Not Language-Specific: Undetermined
Technologies:
Not Technology-Specific: Undetermined
Modes of Introduction
Architecture and Design
Implementation
System Configuration