TY - GEN
T1 - Special Session
T2 - 40th IEEE VLSI Test Symposium, VTS 2022
AU - Sadi, Mehdi
AU - He, Yi
AU - Li, Yanjing
AU - Alam, Mahabubul
AU - Kundu, Satwik
AU - Ghosh, Swaroop
AU - Bahrami, Javad
AU - Karimi, Naghmeh
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Neural Networks (NNs) are extensively used in critical applications such as aerospace, healthcare, autonomous driving, and the military. Limited precision of the underlying hardware platforms, permanent and transient faults injected unintentionally or maliciously, and voltage/temperature fluctuations can cause malfunctions in NNs, with consequences ranging from a substantial reduction in network accuracy to, in the worst case, jeopardizing the correctness of the network's predictions. To alleviate such reliability concerns, this paper discusses state-of-the-art reliability enhancement schemes tailored for deep learning accelerators. We discuss the errors associated with the hardware implementation of Deep Learning (DL) algorithms along with their corresponding countermeasures. An in-field self-test methodology with high test coverage is introduced, and an accurate high-level framework, called FIdelity, is proposed that enables designers to evaluate DL accelerators in the presence of such errors. Then, a state-of-the-art robustness-preserving training algorithm based on Hessian regularization is introduced; this algorithm mitigates perturbations at inference time with negligible degradation in network accuracy. Finally, Quantum Neural Networks (QNNs) are discussed, along with methods to make them resilient against a variety of vulnerabilities such as fault injection, spatial and temporal variations in qubits, and noise.
AB - Neural Networks (NNs) are extensively used in critical applications such as aerospace, healthcare, autonomous driving, and the military. Limited precision of the underlying hardware platforms, permanent and transient faults injected unintentionally or maliciously, and voltage/temperature fluctuations can cause malfunctions in NNs, with consequences ranging from a substantial reduction in network accuracy to, in the worst case, jeopardizing the correctness of the network's predictions. To alleviate such reliability concerns, this paper discusses state-of-the-art reliability enhancement schemes tailored for deep learning accelerators. We discuss the errors associated with the hardware implementation of Deep Learning (DL) algorithms along with their corresponding countermeasures. An in-field self-test methodology with high test coverage is introduced, and an accurate high-level framework, called FIdelity, is proposed that enables designers to evaluate DL accelerators in the presence of such errors. Then, a state-of-the-art robustness-preserving training algorithm based on Hessian regularization is introduced; this algorithm mitigates perturbations at inference time with negligible degradation in network accuracy. Finally, Quantum Neural Networks (QNNs) are discussed, along with methods to make them resilient against a variety of vulnerabilities such as fault injection, spatial and temporal variations in qubits, and noise.
UR - http://www.scopus.com/inward/record.url?scp=85132554583&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132554583&partnerID=8YFLogxK
U2 - 10.1109/VTS52500.2021.9794194
DO - 10.1109/VTS52500.2021.9794194
M3 - Conference contribution
AN - SCOPUS:85132554583
T3 - Proceedings of the IEEE VLSI Test Symposium
BT - Proceedings - 2022 IEEE 40th VLSI Test Symposium, VTS 2022
PB - IEEE Computer Society
Y2 - 25 April 2022 through 27 April 2022
ER -