TY - GEN
T1 - Invited Paper
T2 - 44th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2025
AU - Ghosh, Archisman
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Spiking Neural Networks (SNNs) are inspired by the event-driven and temporally sparse nature of biological neurons, enabling deployment in in-sensor computing systems. Because sensing and computation are tightly coupled in in-sensor devices, data processing is low-latency and energy-efficient. This paradigm introduces a new security frontier, exposing both neuromorphic hardware and temporally sparse spike encodings to a variety of emerging attack modalities. This survey offers a comprehensive examination of the security and robustness landscape for SNNs deployed in in-sensor computing environments. It begins by outlining the architectural and algorithmic characteristics that define in-sensor SNN pipelines, with particular focus on temporal coding, asynchronous processing, and hardware constraints. We then review pertinent threat models, including spike-level adversarial perturbations, sensor spoofing, electromagnetic interference, fault injection, and timing-based privacy leakage, considering both white-box and black-box attack scenarios that exploit spatiotemporal vulnerabilities. Existing defense mechanisms, spanning noise shaping, homeostatic control, adversarial training, secure spike encoding, and hardware-level protections, are systematically categorized and assessed in the context of resource-constrained, event-driven platforms. Finally, we highlight emerging research directions in secure neuromorphic learning, such as continual and federated SNN training under adversarial settings, establishing a foundation for advancing research in secure neuromorphic systems.
AB - Spiking Neural Networks (SNNs) are inspired by the event-driven and temporally sparse nature of biological neurons, enabling deployment in in-sensor computing systems. Because sensing and computation are tightly coupled in in-sensor devices, data processing is low-latency and energy-efficient. This paradigm introduces a new security frontier, exposing both neuromorphic hardware and temporally sparse spike encodings to a variety of emerging attack modalities. This survey offers a comprehensive examination of the security and robustness landscape for SNNs deployed in in-sensor computing environments. It begins by outlining the architectural and algorithmic characteristics that define in-sensor SNN pipelines, with particular focus on temporal coding, asynchronous processing, and hardware constraints. We then review pertinent threat models, including spike-level adversarial perturbations, sensor spoofing, electromagnetic interference, fault injection, and timing-based privacy leakage, considering both white-box and black-box attack scenarios that exploit spatiotemporal vulnerabilities. Existing defense mechanisms, spanning noise shaping, homeostatic control, adversarial training, secure spike encoding, and hardware-level protections, are systematically categorized and assessed in the context of resource-constrained, event-driven platforms. Finally, we highlight emerging research directions in secure neuromorphic learning, such as continual and federated SNN training under adversarial settings, establishing a foundation for advancing research in secure neuromorphic systems.
UR - https://www.scopus.com/pages/publications/105029396739
U2 - 10.1109/ICCAD66269.2025.11240674
DO - 10.1109/ICCAD66269.2025.11240674
M3 - Conference contribution
AN - SCOPUS:105029396739
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2025 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2025 - Conference Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 26 October 2025 through 30 October 2025
ER -