TY - GEN
T1 - Layer-Wise Entropy Analysis and Visualization of Neurons Activation
AU - Wang, Longwei
AU - Chen, Peijie
AU - Wang, Chengfei
AU - Wang, Rui
N1 - Publisher Copyright:
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2020.
PY - 2020
Y1 - 2020
AB - Understanding the inner working mechanism of deep neural networks (DNNs) is essential for researchers who design and improve them. In this work, entropy analysis is leveraged to study the neuron activation behavior of the fully connected layers of DNNs. The entropy of the activation patterns of each layer provides an efficient performance metric for evaluating the accuracy of the network model. The study is conducted on a well-trained network model. The activation patterns of the shallow and deep fully connected layers are analyzed by feeding in images of a single class. It is found that for a well-trained deep neural network, the entropy of the neuron activation pattern decreases monotonically with layer depth; that is, the neuron activation patterns become increasingly stable in the deeper fully connected layers. The entropy pattern of the fully connected layers can also guide how many fully connected layers are needed to guarantee the accuracy of the model. This work provides a new perspective on the analysis of DNNs.
UR - http://www.scopus.com/inward/record.url?scp=85082119212&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082119212&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-41117-6_3
DO - 10.1007/978-3-030-41117-6_3
M3 - Conference contribution
AN - SCOPUS:85082119212
SN - 9783030411169
T3 - Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
SP - 29
EP - 36
BT - Communications and Networking - 14th EAI International Conference, ChinaCom 2019, Proceedings
A2 - Gao, Honghao
A2 - Feng, Zhiyong
A2 - Yu, Jun
A2 - Wu, Jun
PB - Springer
T2 - 14th EAI International Conference on Communications and Networking in China, ChinaCom 2019
Y2 - 29 November 2019 through 1 December 2019
ER -