Deep convolutional neural networks (CNNs) have achieved success on a wide range of visual processing tasks and now underlie many real-world applications, including image search and speech recognition. However, despite their high classification accuracy, they demand significant computational resources. Over the past few years, conventional non-spiking deep CNN models have evolved toward more biologically realistic, event-driven spiking deep CNNs, and recent research has developed mechanisms to convert traditional non-spiking deep CNNs into spiking versions in which neurons communicate by means of spikes. However, few studies have quantified the specific power, area, and energy benefits that spiking deep CNNs offer over their non-spiking counterparts. We perform a comprehensive study of hardware implementations of spiking and non-spiking deep CNNs on the MNIST, CIFAR10, and SVHN datasets. To this end, we design AccelNN, a neural network accelerator, to execute neural network benchmarks, and we analyze the effects of circuit- and architecture-level techniques that harness event-driven computation. We present a comparative analysis of spiking and non-spiking deep CNNs by trading off recognition accuracy against the corresponding power, latency, and energy requirements.
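The abstract refers to converting non-spiking networks into spiking ones where neurons communicate via spikes. A common building block in such conversions (not necessarily the one used in this work) is the integrate-and-fire (IF) neuron, whose firing rate over time approximates the ReLU activation it replaces. The sketch below is a minimal, hypothetical illustration of this idea; the function name, threshold value, and reset scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def if_neuron_forward(weighted_inputs, threshold=1.0):
    """Simulate one integrate-and-fire (IF) neuron over discrete timesteps.

    weighted_inputs: 1-D array of length T, the input current per timestep.
    Returns a binary spike train of length T.
    """
    v = 0.0  # membrane potential
    spikes = np.zeros_like(weighted_inputs)
    for t, current in enumerate(weighted_inputs):
        v += current                 # integrate the incoming current
        if v >= threshold:           # fire once the potential crosses threshold
            spikes[t] = 1.0
            v -= threshold           # reset by subtraction, a scheme often used
                                     # in ANN-to-SNN conversion to reduce error
    return spikes

# A constant input of 0.3 per step with threshold 1.0 yields a firing rate
# of roughly 0.3 spikes/step, approximating the ReLU output it stands in for.
rate = if_neuron_forward(np.full(100, 0.3)).mean()
```

Because computation happens only when spikes occur, hardware that exploits this sparsity can skip work on silent timesteps, which is the event-driven benefit the study quantifies against the accuracy cost of the spiking approximation.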
IEEE Transactions on Multi-Scale Computing Systems
Published - Oct 1 2018
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Information Systems
- Hardware and Architecture