Reverse-engineering last-level cache complex addressing 


Hardware virtualization

Virtualization is the decoupling of software services from the underlying hardware. This approach has gained traction with the rise of cloud computing. For users, the benefit is simplicity: the same services run on different hardware platforms without having to consider the specificities of the hardware. For cloud providers, the benefit is cost efficiency: several Virtual Machines (VMs), which can be owned by different tenants, run on the same physical machine. Virtualized environments have three main components: the hypervisor, which is the abstraction layer; a host operating system, which has privileged access to hardware; and guest operating systems, which are unprivileged virtual machines. There are two main types of hardware virtualization. Type 1 hypervisors, also called native or bare-metal, run directly on top of the hardware (see Figure 2.6). Type 2 hypervisors, also called hosted, run on top of a host operating system.
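
As a concrete illustration of the guest's view of this layering, the short C sketch below (not part of the thesis) reads the CPUID "hypervisor present" bit (leaf 1, ECX bit 31), which x86 hypervisors conventionally set for their guests. It assumes a GCC- or Clang-compatible compiler providing cpuid.h.

```c
#include <stdio.h>
#include <cpuid.h>

/* CPUID leaf 1 sets ECX bit 31 when the code runs under a hypervisor.
 * On bare metal the bit is clear, so a clear bit only means that no
 * hypervisor advertises itself to the guest. */
int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }
    printf("hypervisor present bit: %u\n", (ecx >> 31) & 1u);
    return 0;
}
```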

Attack surface of the virtualization layer

The virtualization layer itself is a target, given its privileged access to hardware. It is composed of the hypervisor, the host operating system, and an emulator. This layer is quite complex and forms a large trusted computing base. An attacker seeks to reach another guest by escaping the guest environment, targeting the hypervisor, and then performing the attack through the privileged access of the host.

Covert and side channels on shared resources

We now consider another type of attack, applicable more broadly to multi-tenant environments, for which virtualization is a natural use case. In the case of virtualization, these attacks do not rely on flaws in the virtualization layer such as those described in Section 2.2.4. Instead of directly compromising the other guest, the attacker is an unprivileged process that uses shared hardware as a medium to leak information. These attacks fall into the category of covert channels and side channels. Covert channels involve the cooperation of two attacker processes to actively exchange information. Side channels imply passive observation of a victim process by an attacker process, usually to extract a secret such as a cryptographic key.
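
Both kinds of channel ultimately rest on observing the state of a shared resource. A minimal sketch of such an observation primitive is shown below: it times a single memory access with rdtscp to distinguish a cache hit from a miss, using clflush to evict the line. It assumes an x86 CPU and a compiler providing x86intrin.h, and illustrates the principle only; it is not the sender or receiver of any particular attack cited here.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_mfence */

/* Approximate the latency (in cycles) of one load of *addr:
 * a short time indicates a cache hit, a long time a miss. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                 /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    static uint8_t buf[4096];
    volatile uint8_t *p = buf;

    (void)*p;                    /* warm up: bring the line into the cache */
    uint64_t hit = time_access(p);

    _mm_clflush(buf);            /* evict the line from the cache hierarchy */
    _mm_mfence();
    uint64_t miss = time_access(p);

    printf("hit: %llu cycles, miss: %llu cycles\n",
           (unsigned long long)hit, (unsigned long long)miss);
    return 0;
}
```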

Achieving and detecting co-residency

To perform covert or side channel attacks on shared hardware, the first step for an attacker is to be co-resident with the victim, i.e., to share a physical machine. In a native environment, the attacker has to run a program on the same operating system as its victim. In a virtualized environment, the attacker has to run a virtual machine on the same physical machine as the victim's virtual machine. We now review methods for an attacker to achieve and detect co-residency in a virtualized environment.

Ristenpart et al. [RTSS09] presented heuristics to achieve co-residency by exploiting instance placement on Amazon EC2. They started by mapping the IP ranges of the EC2 service, which correspond to different instance types and availability zones. The authors also showed that a brute-force strategy already achieves a reasonable success rate for a large set of victim instances. A more elaborate strategy abuses the placement algorithm, which tends to co-locate virtual machines launched within a short time interval. The attacker can also abuse the auto-scaling system that automatically creates new instances when demand increases. This forces the victim to launch a new instance, from which point the attacker can launch new instances until one is co-resident with the victim. Varadarajan et al. [VZRS15] re-evaluated the co-residency vulnerability in three major cloud providers after the adoption of Virtual Private Cloud (VPC). A VPC logically isolates networks, but it does not give physical isolation, i.e., virtual machines from different VPCs can share the same physical machine. Varadarajan et al. found that VPC makes prior attacks ineffective. However, Xu et al. [XWW15] demonstrated a new approach to attack instances that are inside a VPC. They exploit latency variations in routing between instances that are behind a VPC and instances that are not. This approach comes at a high cost, as an attacker needs to launch more than 1000 instances to achieve co-residency in a VPC. However, it shows that virtual network isolation does not completely solve the issue; attacks on co-resident virtual machines are thus still a real threat.
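
To give a rough idea of the latency heuristic exploited by Xu et al., the sketch below (our illustration, not their measurement setup) times a TCP connect() to two candidate instances and compares the round-trip times. The IP addresses and port are placeholders; a real experiment would repeat the measurements many times and apply proper statistics.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

/* Time a TCP connect() to ip:port in microseconds. Comparing such
 * round-trip times across candidate instances is one coarse heuristic
 * for guessing network proximity. Returns a negative value on error. */
static double connect_usec(const char *ip, int port)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1)
        return -1.0;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1.0;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    if (rc != 0)
        return -1.0;

    return (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
}

int main(void)
{
    /* 10.0.0.2 and 10.0.0.3 are placeholder addresses of two candidate
     * instances; replace them with actual targets under test. */
    printf("candidate A: %.1f us\n", connect_usec("10.0.0.2", 22));
    printf("candidate B: %.1f us\n", connect_usec("10.0.0.3", 22));
    return 0;
}
```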


Table of contents:

Contents
Résumé
Abstract
Acknowledgments
Foreword
1 Introduction 
1.1 Context
1.2 Problem statement
1.3 Contributions
1.4 Organization of the thesis
2 State of the art 
2.1 Architecture of an x86 system
2.1.1 CPU
2.1.1.1 Data cache
2.1.1.2 Instruction cache
2.1.1.3 Branch prediction unit
2.1.2 Memory
2.2 Hardware virtualization
2.2.1 CPU
2.2.2 Memory
2.2.3 I/O devices
2.2.4 Attack surface of the virtualization layer
2.2.4.1 Hypervisor
2.2.4.2 Device emulator
2.2.4.3 Direct device assignment
2.3 Covert and side channels on shared resources
2.3.1 Achieving and detecting co-residency
2.3.2 Attack surface of an x86 processor
2.3.2.1 Data and instruction cache
2.3.2.2 Branch prediction unit
2.3.2.3 Arithmetic logic unit
2.3.3 Attack surface of the memory and memory bus
2.3.3.1 Memory deduplication
2.3.3.2 Memory bus
2.4 The case of the data cache of x86 processors
2.4.1 Time-driven attacks
2.4.2 Trace-driven attacks
2.4.3 Access-driven attacks
2.4.4 Beyond cryptographic side channels
2.4.5 Evolutions of cache attacks
2.4.5.1 From single-core to multi-core CPUs
2.4.5.2 From native to virtualized environments
2.4.5.3 From shared to non-shared memory
2.4.6 Timing measurements
2.4.7 Countermeasures
2.4.7.1 Architecture or microarchitecture level
2.4.7.2 Operating system or hypervisor level
2.4.7.3 Application level
2.5 Information leakage on GPU memory
2.5.1 Architecture
2.5.2 Programming model
2.5.3 Offensive usage of GPUs
2.5.3.1 The GPU as the subject of attacks
2.5.3.2 The GPU as a medium for attacks
2.5.4 Defensive usage of GPUs
2.6 Conclusions
3 Bypassing cache complex addressing: C5 covert channel 
3.1 Introduction
3.2 The issue of addressing uncertainty
3.3 Overview of C5
3.4 Sender
3.5 Receiver
3.6 Evaluation
3.6.1 Testbed
3.6.2 Native environment
3.6.3 Virtualized environment
3.7 Discussion
3.8 Countermeasures
3.8.1 Hardware countermeasures
3.8.2 Software countermeasures
3.9 Related work
3.10 Conclusions and perspectives
4 Reverse-engineering last-level cache complex addressing 
4.1 Introduction
4.2 Hardware performance counters
4.3 Mapping physical addresses to cache slices
4.4 Building a compact addressing function
4.4.1 Problem statement
4.4.2 Manually reconstructing the function for Xeon E5-2609 v2
4.4.3 Reconstructing the function automatically
4.5 Applications
4.5.1 Building a faster covert channel
4.5.2 Exploiting the Rowhammer vulnerability in JavaScript
4.6 Discussion
4.6.1 Dealing with unknown physical addresses
4.6.2 Comparison to previously retrieved functions
4.7 Related work
4.8 Conclusions and perspectives
5 Information leakage on GPU memory 
5.1 Introduction
5.2 Attacker model
5.3 Impact of the GPU virtualization techniques on security
5.3.1 Emulation
5.3.2 Split driver model
5.3.3 Direct device assignment
5.3.4 Direct device assignment with SR-IOV
5.4 Experimental setup
5.5 Accessing memory through GPGPU runtime
5.5.1 Native environment
5.5.2 Virtualized environment
5.5.3 Cloud environment
5.6 Accessing memory through PCI configuration space
5.6.1 Native environment
5.6.2 Virtualized and cloud environment
5.7 Countermeasures
5.7.1 GPGPU runtimes
5.7.2 Hypervisors and cloud providers
5.7.3 Defensive programming
5.8 Related work
5.9 Conclusions and perspectives
6 Conclusions and future directions 
6.1 Contributions
6.2 Future directions
6.2.1 Attack techniques
6.2.2 Defense techniques
6.2.3 Expanding knowledge of CPU internals
Appendices
A Résumé en français 
A.1 Introduction
A.1.1 Contexte
A.1.2 Problématique
A.1.3 Contributions
A.1.4 Organisation de la thèse
A.2 Contourner l’adressage complexe : le canal caché C5
A.3 Rétro-ingénierie de l’adressage du dernier niveau de cache
A.4 Fuites d’informations sur la mémoire des GPUs
A.5 Travaux futurs
A.5.1 Nouvelles attaques
A.5.2 Contre-mesures
A.5.3 Élargir les connaissances du fonctionnement interne des CPUs
B Accurate timing measurements 
C MSR values for reverse-engineering the addressing function 
C.1 Xeon CPUs
C.1.1 Monitoring session
C.1.2 MSR addresses and values for Xeon Sandy Bridge CPUs
C.1.3 MSR addresses and values for Xeon Ivy Bridge CPUs
C.1.4 MSR addresses and values for Xeon Haswell CPUs
C.2 Core CPUs
C.2.1 Monitoring session
C.2.2 MSR addresses and values
List of Figures
List of Tables
Bibliography
