CIF: Small: Interpretable Machine Learning based on Deep Neural Networks: A Source Coding Perspective

  • Li, Jia (PI)
  • Siegel, Jonathan W. (CoPI)

Project: Research project

Project Details

Description

Deep neural networks (DNNs) have become a core technology for building artificial intelligence (AI) systems, with numerous applications in critical domains such as manufacturing and medicine. Despite the phenomenal success of such networks, their adoption has met resistance in many mission-critical tasks because the prediction models they produce are difficult to explain. For example, in some applications in public health and medicine, because only interpretable models are acceptable, users have stuck with classic machine-learning methods, which are often not as accurate as DNN models. Moreover, increased transparency in AI systems makes human inspection possible, a prerequisite for studying social justice and equity in AI. This project aims to develop theory, methods, and applications to achieve interpretability for machine-learning models based on DNNs. Advances in this project will broaden the usage of DNNs in science, engineering, and industry, further unleashing their power. Besides developing fundamental methodologies, the investigators will advance the application area of automated emotion recognition. The ability to recognize and quantify emotion can help psychologists and clinical workers detect extreme distress and potential danger to self and others. Through this project, the research team will develop software packages for public access, graduate and undergraduate students will be trained to conduct interdisciplinary research, and the faculty members of the team will integrate the research results into their teaching activities.

Although various post-hoc methods have been developed to interpret the decisions of DNNs, the explanations are often unstable and highly localized by construction. More importantly, the explanation model exists separately from the prediction model, whose high complexity remains despite the explanation. In this project, inspired by source coding, the investigators will draw an analogy between explaining a complex model and transmitting signals over a channel of limited capacity. Just as vector quantization enables data transmission at an allowable rate, the prediction mapping is quantized so that the prediction can be described at a desired level of interpretability. To formalize the idea, the investigators propose a mixture of discriminative models, trained as an embedded part of a neural network. Function approximation theory will be developed for interpretable models based on neural networks. Besides testing and evaluating the proposed framework on benchmark datasets, such as images, videos, text, and biomedical data, the investigators will explore in greater depth the application to emotion recognition based on body movements and, in collaboration with psychology researchers, evaluate the insight gained from interpreting the prediction model.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
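To make the quantization idea concrete, the following is a minimal, hypothetical sketch of a mixture of simple discriminative models embedded in a neural network; the abstract does not specify the investigators' actual architecture, so the class name MixtureOfLinearExperts, the choice of linear experts, and the gating design are illustrative assumptions only. A shared encoder drives a gating network that softly assigns each input to one of a small number of linear experts, so the overall prediction mapping is "quantized" into a small codebook of interpretable models whose weights can be read off directly.

# Hypothetical illustration, not the project's actual method.
import torch
import torch.nn as nn

class MixtureOfLinearExperts(nn.Module):
    def __init__(self, in_dim, num_classes, num_experts=4, hidden_dim=64):
        super().__init__()
        # Shared feature encoder (the flexible, "black-box" part of the network).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Gating network: soft assignment of each input to an expert.
        self.gate = nn.Linear(hidden_dim, num_experts)
        # Interpretable experts: plain linear classifiers on the raw input,
        # so each expert's coefficients are directly inspectable.
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, num_classes) for _ in range(num_experts)]
        )

    def forward(self, x):
        h = self.encoder(x)
        weights = torch.softmax(self.gate(h), dim=-1)           # (B, K)
        logits = torch.stack([e(x) for e in self.experts], 1)   # (B, K, C)
        # Prediction is a convex combination of the experts' outputs.
        return (weights.unsqueeze(-1) * logits).sum(dim=1)      # (B, C)

# Usage example with synthetic data.
model = MixtureOfLinearExperts(in_dim=20, num_classes=3)
x = torch.randn(8, 20)
print(model(x).shape)  # torch.Size([8, 3])

In this sketch the number of experts plays the role of the "rate": fewer experts give a coarser quantization of the prediction mapping and a more compact explanation, at some cost in accuracy.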
Status: Active
Effective start/end date: 10/1/22 – 9/30/25

Funding

  • National Science Foundation: $600,000.00
