Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation; however, due to their highly non-linear inner transformations, DNNs are viewed by many as black boxes. In this panel, we will discuss the interpretability of, and statistical inference for, predictions made by DNN models, in the context of real clinical outcome prediction use cases. The panel will begin with a brief overview of deep neural networks and the latest research on interpreting DNN results. We will then introduce two explanation methods. We will also present methods for constructing confidence intervals and p-values for the impact scores, and validate them through simulations and data application. Finally, we will discuss the challenges and future work needed to improve the understanding and acceptance of DNN models by researchers and clinicians.

Learning Objectives: 1. Understand the challenges and needs of explaining deep neural network models in the context of clinical research and clinical decision support.
2. Describe state-of-the-art approaches to explaining deep neural network models.
3. Compare the explanations of deep neural network models with those of statistical regression models.


Qing Zeng (Presenter)
George Washington University

Yijun Shao (Presenter)
George Washington University

Cecilia Dao (Presenter)
Yale University

Orna Intrator (Presenter)
University of Rochester Medical Center

Joseph Goulet (Presenter)
West Haven VA Medical Center
