TY - GEN
T1 - Crossbar based Processing in Memory Accelerator Architecture for Graph Convolutional Networks
AU - Challapalle, Nagadastagiri
AU - Swaminathan, Karthik
AU - Chandramoorthy, Nandhini
AU - Narayanan, Vijaykrishnan
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Graph data structures are central to many applications such as social networks, citation networks, molecular interactions, and navigation systems. Graph Convolutional Networks (GCNs) are used to process and learn insights from graph data for tasks such as link prediction, node classification, and learning node embeddings. The compute and memory access characteristics of GCNs differ both from conventional graph analytics algorithms and from convolutional neural networks, rendering existing accelerators for graph analytics as well as deep learning inefficient. In this work, we propose PIM-GCN, a crossbar-based processing-in-memory (PIM) accelerator architecture for GCNs. PIM-GCN incorporates a node-stationary dataflow with support for both Compressed Sparse Row (CSR) and Compressed Sparse Column (CSC) graph data representations. We propose techniques for graph traversal in the compressed sparse domain, feature aggregation, and feature transformation operations in GCNs mapped to the in-situ analog compute functions of crossbar memory, and present the trade-offs in performance, energy, and scalability of the PIM-GCN architecture for the CSR and CSC graph data representations. PIM-GCN shows an average speedup of 3–16× and an average energy reduction of 4–12× compared to existing accelerator architectures.
AB - Graph data structures are central to many applications such as social networks, citation networks, molecular interactions, and navigation systems. Graph Convolutional Networks (GCNs) are used to process and learn insights from graph data for tasks such as link prediction, node classification, and learning node embeddings. The compute and memory access characteristics of GCNs differ both from conventional graph analytics algorithms and from convolutional neural networks, rendering existing accelerators for graph analytics as well as deep learning inefficient. In this work, we propose PIM-GCN, a crossbar-based processing-in-memory (PIM) accelerator architecture for GCNs. PIM-GCN incorporates a node-stationary dataflow with support for both Compressed Sparse Row (CSR) and Compressed Sparse Column (CSC) graph data representations. We propose techniques for graph traversal in the compressed sparse domain, feature aggregation, and feature transformation operations in GCNs mapped to the in-situ analog compute functions of crossbar memory, and present the trade-offs in performance, energy, and scalability of the PIM-GCN architecture for the CSR and CSC graph data representations. PIM-GCN shows an average speedup of 3–16× and an average energy reduction of 4–12× compared to existing accelerator architectures.
UR - http://www.scopus.com/inward/record.url?scp=85124127080&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124127080&partnerID=8YFLogxK
U2 - 10.1109/ICCAD51958.2021.9643465
DO - 10.1109/ICCAD51958.2021.9643465
M3 - Conference contribution
AN - SCOPUS:85124127080
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2021 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021
Y2 - 1 November 2021 through 4 November 2021
ER -