TY - GEN
T1 - Domain Wall Memory based Convolutional Neural Networks for Bit-width Extendability and Energy-Efficiency
AU - Chung, Jinil
AU - Park, Jongsun
AU - Ghosh, Swaroop
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/8/8
Y1 - 2016/8/8
N2 - In the hardware implementation of deep learning algorithms such as Convolutional Neural Networks (CNNs), vector-vector multiplications and memories for storing parameters account for a significant portion of area and power consumption. In this paper, we propose a Domain Wall Memory (DWM) based design of the CNN convolutional layer. In the proposed design, the resistive cell sensing mechanism is efficiently exploited to design low-cost DWM-based cell arrays for storing parameters. The unique serial access mechanism and small footprint of DWM are also used to reduce the area and power cost of the input registers for aligning inputs. Unlike the conventional implementation using a Memristor-Based Crossbar (MBC), the bit-width of the proposed CNN convolutional layer is extendable for high-resolution classification and training. Simulation results using a 65 nm CMOS process show that the proposed design achieves 34% energy savings compared to the conventional MBC-based design approach.
UR - http://www.scopus.com/inward/record.url?scp=84988007983&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84988007983&partnerID=8YFLogxK
U2 - 10.1145/2934583.2934602
DO - 10.1145/2934583.2934602
M3 - Conference contribution
AN - SCOPUS:84988007983
T3 - Proceedings of the International Symposium on Low Power Electronics and Design
SP - 332
EP - 337
BT - ISLPED 2016 - Proceedings of the 2016 International Symposium on Low Power Electronics and Design
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 21st IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2016
Y2 - 8 August 2016 through 10 August 2016
ER -