TY - GEN
T1 - Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks
AU - Nagarajan, Karthikeyan
AU - Li, Junde
AU - Ensan, Sina Sayyah
AU - Khan, Mohammad Nasim Imtiaz
AU - Kannan, Sachhidh
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2022 EDAA.
PY - 2022
Y1 - 2022
N2 - Spiking Neural Networks (SNN) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNN). In comparison to DNNs, SNNs are more computationally powerful and provide superior energy efficiency. SNNs, while exciting at first appearance, contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage change) that adversaries can exploit. We investigate global fault injection attacks by employing external power supplies and laser-induced local power glitches to corrupt crucial training parameters, such as spike amplitude and the neuron's membrane threshold potential, on SNNs developed using common analog neurons. We also evaluate the impact of power-based attacks on individual SNN layers for attack coverage ranging from 0% (i.e., no attack) to 100% (i.e., the whole layer under attack). We investigate the impact of the attacks on digit classification tasks and find that, in the worst-case scenario, classification accuracy is reduced by 85.65%. We also propose defenses, e.g., a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components to reduce/recover the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. We also present a dummy-neuron-based voltage fault injection detection system with 1% power and area overhead.
AB - Spiking Neural Networks (SNN) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNN). In comparison to DNNs, SNNs are more computationally powerful and provide superior energy efficiency. SNNs, while exciting at first appearance, contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage change) that adversaries can exploit. We investigate global fault injection attacks by employing external power supplies and laser-induced local power glitches to corrupt crucial training parameters, such as spike amplitude and the neuron's membrane threshold potential, on SNNs developed using common analog neurons. We also evaluate the impact of power-based attacks on individual SNN layers for attack coverage ranging from 0% (i.e., no attack) to 100% (i.e., the whole layer under attack). We investigate the impact of the attacks on digit classification tasks and find that, in the worst-case scenario, classification accuracy is reduced by 85.65%. We also propose defenses, e.g., a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components to reduce/recover the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. We also present a dummy-neuron-based voltage fault injection detection system with 1% power and area overhead.
UR - https://www.scopus.com/pages/publications/85130831233
U2 - 10.23919/DATE54114.2022.9774577
DO - 10.23919/DATE54114.2022.9774577
M3 - Conference contribution
AN - SCOPUS:85130831233
T3 - Proceedings of the 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
SP - 861
EP - 866
BT - Proceedings of the 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
A2 - Bolchini, Cristiana
A2 - Verbauwhede, Ingrid
A2 - Vatajelu, Ioana
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 Design, Automation and Test in Europe Conference and Exhibition, DATE 2022
Y2 - 14 March 2022 through 23 March 2022
ER -