Explainable GNNs for Security Verification of RISC-V Cores

Title Explainable GNNs for Security Verification of RISC-V Cores
Summary Develop an explainable graph-neural-network (GNN) workflow that localises security-relevant weaknesses in open-source RISC-V cores at the register-transfer level (RTL).
Keywords
TimeFrame
References Reimann, Lennart M., et al. "QTFlow: Quantitative Timing-Sensitive Information Flow for Security-Aware Hardware Design on RTL." 2024 International VLSI Symposium on Technology, Systems and Applications (VLSI TSA), IEEE, 2024.

Gosch, Lukas, et al. "Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks." Transactions on Machine Learning Research.

Prerequisites
Author
Supervisor Mahdi Fazeli
Level Master
Status Open


In this thesis, you will develop an explainable graph-neural-network (GNN) workflow that localises security-relevant weaknesses in open-source RISC-V cores at the register-transfer level (RTL). You will (i) extract circuit graphs from RTL, (ii) train or fine-tune a GNN with Jumping Knowledge connections to mitigate over-smoothing, and (iii) integrate explainable-AI methods (e.g., GNNExplainer) to produce "vulnerability heatmaps" that highlight suspect structures. As a validation case, you will introduce a small, data-dependent prefetcher-like feature into a RISC-V design and evaluate whether your pipeline flags and localises the risky structure. Illustrative sketches of the three steps are given below.
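
For step (i), one possible starting point is to let Yosys elaborate the RTL and emit a JSON netlist, then convert cells and net connectivity into a directed graph. This is a minimal sketch under stated assumptions, not the thesis method: the file name core.json, the use of networkx, and the cell-level granularity are illustrative choices, and top-level module ports are ignored for brevity.

```python
# Sketch: convert a Yosys JSON netlist into a cell-level directed graph.
# Assumes the netlist was produced with something like:
#   yosys -p "read_verilog core.v; prep -top core; write_json core.json"
import json
import networkx as nx

def netlist_to_graph(path: str) -> nx.DiGraph:
    with open(path) as f:
        design = json.load(f)
    g = nx.DiGraph()
    for mod_name, mod in design["modules"].items():
        drivers = {}   # net bit -> driving cell
        readers = {}   # net bit -> list of reading cells
        for cell_name, cell in mod["cells"].items():
            node = f"{mod_name}.{cell_name}"
            g.add_node(node, cell_type=cell["type"])
            dirs = cell.get("port_directions", {})
            for port, bits in cell["connections"].items():
                for bit in bits:
                    # constant bits appear as strings "0"/"1"/"x"; skip them
                    if not isinstance(bit, int):
                        continue
                    if dirs.get(port) == "output":
                        drivers[bit] = node
                    else:  # inputs, or ports with unknown direction
                        readers.setdefault(bit, []).append(node)
        # connect each driver to every reader of the same net bit
        for bit, src in drivers.items():
            for dst in readers.get(bit, []):
                g.add_edge(src, dst, net=bit)
    return g

g = netlist_to_graph("core.json")
print(g.number_of_nodes(), "cells,", g.number_of_edges(), "edges")
```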
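For step (ii), a minimal node-classification model with Jumping Knowledge aggregation could look as follows, assuming PyTorch Geometric; the GCN backbone, depth, and layer widths are illustrative assumptions, not prescribed by the proposal.

```python
# Sketch: a node-classification GNN with Jumping Knowledge aggregation.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, JumpingKnowledge

class JKGNN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int, num_layers: int = 4):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim if i == 0 else hidden, hidden) for i in range(num_layers)]
        )
        # "cat" concatenates every intermediate representation, so the
        # classifier still sees shallow, local views alongside deep ones
        self.jk = JumpingKnowledge(mode="cat")
        self.out = torch.nn.Linear(num_layers * hidden, num_classes)

    def forward(self, x, edge_index):
        xs = []
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            xs.append(x)
        return self.out(self.jk(xs))
```

Keeping each layer's view of the graph is what counteracts over-smoothing here: deeper stacks no longer collapse node representations into near-identical averages before classification.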
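For step (iii), PyTorch Geometric's Explainer interface can wrap GNNExplainer to produce per-node importance scores, which can then be mapped back to named RTL cells as a "vulnerability heatmap". The sketch below reuses the JKGNN class from the previous sketch and substitutes toy tensors for a real circuit graph; feature width, graph size, and epoch count are illustrative.

```python
# Sketch: explain a trained model's node predictions with GNNExplainer.
import torch
from torch_geometric.explain import Explainer, GNNExplainer

# toy stand-ins for the extracted circuit graph (illustrative only)
x = torch.randn(6, 16)                        # 6 cells, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])
model = JKGNN(in_dim=16, hidden=32, num_classes=2)

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type="model",
    node_mask_type="attributes",
    edge_mask_type="object",
    model_config=dict(mode="multiclass_classification",
                      task_level="node",
                      return_type="raw"),
)
explanation = explainer(x, edge_index)
# per-cell importance: one "pixel" of the vulnerability heatmap
node_importance = explanation.node_mask.sum(dim=-1)
print(node_importance)
```

Because the graph nodes carry their RTL cell names (as in the extraction sketch), these scores can be projected back onto the source hierarchy, which is what makes the localisation output actionable for a hardware designer.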