Explaining the Behavior of Neuron Activations in Deep Neural Networks

Longwei Wang, Chengfei Wang, Yupeng Li, Rui Wang

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Deep neural networks have shown superior performance in various applications, but they are often treated as black boxes in real-world deployments and are challenging to explain from a human viewpoint. Understanding the behavior of deep neural networks is important both for trusting their decisions and for improving their classification accuracy. In this study, information-theoretic analysis is used to investigate the layer-wise behavior of neurons in deep neural networks. The activation patterns of individual neurons in fully connected layers can provide insight into the performance of the network model. The behavior of neuron activation is investigated on state-of-the-art classification network models: we study and compare the layer-wise pattern of neuron activation in fully connected layers given the same image input, with experiments conducted on various data sets. We find that in a well-trained classification model, the randomness level of the neuron activation pattern decreases with the depth of the fully connected layers; that is, the neuron activation patterns of deep layers are more stable than those of shallow layers. The results of this study can also help answer the question of how many layers are needed to avoid overfitting in deep neural networks. Corresponding experiments are conducted to validate these assumptions.
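The layer-wise randomness measurement described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact method: it assumes neurons are binarized as active/inactive by a ReLU threshold and that "randomness" is estimated as the mean per-neuron binary Shannon entropy of activation frequency over a batch; the network shapes and random weights are hypothetical.

```python
import numpy as np

def layer_entropy(activations, thresh=0.0):
    """Mean per-neuron binary entropy of an activation pattern.

    activations: (batch, neurons) array of layer outputs.
    A neuron counts as 'active' when its output exceeds `thresh`; we
    estimate the probability p of being active across the batch and
    compute the binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p).
    Values near 1 mean near-random (p ~ 0.5) firing; values near 0
    mean a stable, deterministic activation pattern.
    """
    p = (activations > thresh).mean(axis=0)           # activation frequency per neuron
    p = np.clip(p, 1e-12, 1 - 1e-12)                  # guard against log(0)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # binary entropy per neuron
    return float(h.mean())

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 64))        # a batch of hypothetical inputs

# Two hypothetical fully connected ReLU layers (untrained, random weights).
w1 = rng.standard_normal((64, 128)) * 0.1
w2 = rng.standard_normal((128, 128)) * 0.1
h1 = np.maximum(x @ w1, 0.0)
h2 = np.maximum(h1 @ w2, 0.0)

print("layer 1 entropy:", layer_entropy(h1))
print("layer 2 entropy:", layer_entropy(h2))
```

For a trained classifier, the paper's finding would correspond to `layer_entropy` decreasing for deeper fully connected layers; with the random weights above, no such trend should be expected.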

Original language: English (US)
Article number: 102346
Journal: Ad Hoc Networks
Volume: 111
State: Published - Feb 1 2021

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
