Description

Why-question answering (why-QA) is a useful NLP application that identifies documented causes or justifications in clinical text. This study adapted and evaluated a state-of-the-art language model, bidirectional encoder representations from transformers (BERT), for clinical why-QA. A large annotated clinical corpus, emrQA, was used as the base training data. The best model achieved accuracies of 0.70 and 0.75 under exact and partial match, respectively. When requiring a precision of ~0.8, recall remained at ~0.5.
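The abstract reports results under both exact- and partial-match criteria, which are standard ways of scoring extractive QA answers against gold spans. A minimal sketch of how such metrics are commonly computed is below; the abstract does not specify its exact definitions, so the normalization and the token-overlap notion of partial match here are illustrative assumptions, not the study's actual evaluation code.

```python
def _normalize(text: str) -> str:
    """Lowercase and collapse whitespace (a common, simple normalization)."""
    return " ".join(text.lower().split())

def exact_match(pred: str, gold: str) -> bool:
    """True if the predicted answer equals the gold answer after normalization."""
    return _normalize(pred) == _normalize(gold)

def partial_match(pred: str, gold: str) -> bool:
    """True if the prediction shares at least one token with the gold answer.

    This token-overlap criterion is one plausible reading of "partial match";
    the study may have used a different definition (e.g., span overlap).
    """
    pred_tokens = set(_normalize(pred).split())
    gold_tokens = set(_normalize(gold).split())
    return len(pred_tokens & gold_tokens) > 0

# Illustrative usage with made-up clinical answer strings:
print(exact_match("chest pain", "Chest  pain"))          # normalization makes these equal
print(partial_match("severe chest pain", "chest pain"))  # overlapping tokens
print(exact_match("fever", "chest pain"))                # no match
```

In practice, partial match always scores at or above exact match, which is consistent with the reported 0.75 versus 0.70.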

Learning Objectives:
- Appreciate the significance of why-question answering (why-QA) in the clinical domain
- Know the available resources for developing clinical why-QA solutions
- Understand what is achievable with up-to-date NLP techniques and the remaining challenges

Authors:

Jungwei Fan (Presenter)
Mayo Clinic
