Description

Natural language processing (NLP) is useful for extracting information from clinical narratives, and both traditional machine learning methods and more recent deep learning methods have been successful in various clinical NLP tasks. These methods often depend on word embeddings produced by language models (LMs). Recently, methods that directly fine-tune pre-trained LMs, such as Bidirectional Encoder Representations from Transformers (BERT), have achieved state-of-the-art performance on many NLP tasks. Despite their success in the open domain and on biomedical literature, these pre-trained LMs have not yet been applied to the clinical relation extraction (RE) task. In this study, we developed two different implementations of the BERT model for clinical RE tasks. Our results show that our fine-tuned LMs outperformed previous state-of-the-art RE systems in two shared tasks, which demonstrates the potential of LM-based methods for RE.
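The abstract does not detail the implementations, but a common way to feed a sentence with two candidate entities into a BERT-style relation classifier is to wrap each entity mention in marker tokens before tokenization. The sketch below illustrates that preprocessing step only; the marker names ([E1], [/E1], ...) and the example sentence are illustrative assumptions, not taken from the paper.

```python
def mark_entities(text, span1, span2):
    """Wrap two non-overlapping character spans in entity marker tokens.

    Spans are (start, end) character offsets; span1 is wrapped in
    [E1] ... [/E1] and span2 in [E2] ... [/E2]. The later span is
    inserted first so the earlier span's offsets remain valid.
    """
    pieces = sorted(
        [(span1, "[E1] ", " [/E1]"), (span2, "[E2] ", " [/E2]")],
        key=lambda p: p[0][0],
        reverse=True,  # process the rightmost span first
    )
    for (start, end), open_tok, close_tok in pieces:
        text = text[:start] + open_tok + text[start:end] + close_tok + text[end:]
    return text


# Hypothetical clinical example: a drug and its indication.
sent = "Patient was started on warfarin for atrial fibrillation."
s1 = sent.index("warfarin")
s2 = sent.index("atrial fibrillation")
marked = mark_entities(
    sent,
    (s1, s1 + len("warfarin")),
    (s2, s2 + len("atrial fibrillation")),
)
# marked: "Patient was started on [E1] warfarin [/E1] for
#          [E2] atrial fibrillation [/E2]."
```

The marked string would then be tokenized (with the marker tokens added to the vocabulary) and passed to the pre-trained encoder, with a classification head predicting the relation label for the entity pair.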

Learning Objectives:
1. Understand the current development and complexity of relation extraction.
2. Learn methods that extract relations from clinical narratives.

Authors:

Qiang Wei, The University of Texas Health Science Center
Zongcheng Ji, The University of Texas Health Science Center
Yuqi Si, The University of Texas Health Science Center
Jingcheng Du, The University of Texas Health Science Center
Jingqi Wang, The University of Texas Health Science Center
Firat Tiryaki, The University of Texas Health Science Center
Stephen Wu, The University of Texas Health Science Center
Cui Tao, The University of Texas Health Science Center
Kirk Roberts, The University of Texas Health Science Center
Hua Xu, The University of Texas Health Science Center
Xu Zuo (Presenter), The University of Texas Health Science Center
