Security Vulnerabilities in Multi-Model Computing-in-Memory Systems
| Title | Security Vulnerabilities in Multi-Model Computing-in-Memory Systems |
|---|---|
| Summary | Discover and quantify timing side-channels and model-fingerprinting risks in multi-tenant CiM accelerators using open simulators and a custom runtime |
| Keywords | |
| TimeFrame | Spring 2026 (Jan–Jun) |
| References | Kim, Seah, et al. "Moca: Memory-centric, adaptive execution for multi-tenant deep neural networks." 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2023. |
| Prerequisites | 1. Programming in Python; experience with PyTorch and Linux. 2. Background in computer architecture and basic ML (CNNs/transformers). 3. Interest in hardware security or systems security (timing/throughput analysis). |
| Author | |
| Supervisor | Mahdi Fazeli |
| Level | Master |
| Status | Open |
Computing-in-Memory (CiM) accelerators increasingly host multiple neural networks on shared analog arrays to raise utilization. Throughput-oriented mechanisms such as tenant-level area allocation, operator splitting or duplication, and fine-grained inter-layer pipelining also increase interactions among co-resident models. This thesis evaluates whether these mechanisms create exploitable timing side-channels and model-specific execution signatures in realistic edge settings, without requiring physical access. We design two primary experiments and one exploratory study: (i) cross-model timing leakage, in which an unprivileged co-tenant infers binary properties of a victim's inputs using only its own per-inference latencies and queuing behavior; (ii) model fingerprinting, which identifies the victim's architecture family from contention-driven timing patterns; and (iii) parameter-structure inference on small fully connected layers.
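To make experiment (i) concrete, the cross-model timing-leakage attack can be sketched as a simple distinguisher: the attacker tenant records its own per-inference latencies while the victim processes inputs of one of two classes, then classifies an unknown trace by comparing mean latencies against reference traces. The sketch below uses synthetic Gaussian latencies as a stand-in for contention-induced CiM timing; the latency values and the `run_inference` callables are hypothetical placeholders, not measurements from any real accelerator.

```python
import random
import statistics

def collect_latencies(run_inference, n=200):
    """Record n per-inference latencies as seen by the attacker tenant."""
    return [run_inference() for _ in range(n)]

def infer_binary_property(trace_a, trace_b, observed):
    """Classify an observed trace by nearest mean latency to two
    reference traces (victim input class A vs. class B)."""
    mu_a = statistics.mean(trace_a)
    mu_b = statistics.mean(trace_b)
    mu_o = statistics.mean(observed)
    return "A" if abs(mu_o - mu_a) < abs(mu_o - mu_b) else "B"

# Synthetic stand-in: class-A victim inputs cause slightly more array
# contention, so the attacker observes higher latency (illustrative
# numbers only, in milliseconds).
random.seed(0)
lat_class_a = lambda: random.gauss(5.2, 0.3)
lat_class_b = lambda: random.gauss(4.8, 0.3)

trace_a = collect_latencies(lat_class_a)
trace_b = collect_latencies(lat_class_b)
unknown = collect_latencies(lat_class_a, n=50)

print(infer_binary_property(trace_a, trace_b, unknown))
```

In the actual thesis, the synthetic generators would be replaced by latency measurements from the simulated multi-tenant CiM runtime, and the mean-based distinguisher by stronger statistical tests where needed.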