Description

In evolving clinical environments, the accuracy of prediction models deteriorates over time. Guidance on the design of model updating policies is limited, and the impact of different policies on future performance across model types has received little exploration. We implemented a new data-driven updating strategy based on a nonparametric testing procedure and compared it to two baseline approaches in which models are either never updated or fully refit annually. The test-based strategy generally recommended intermittent recalibration and delivered better-calibrated predictions than either baseline. It also highlighted differences in updating requirements among logistic regression, L1-regularized logistic regression, random forest, and neural network models, in both the extent and the timing of updates. These findings underscore the potential of a data-driven maintenance approach, rather than a “one-size-fits-all” policy, to sustain more stable and accurate model performance over time.
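The abstract does not specify the exact nonparametric test, so the following is only a rough sketch of the general idea: on each new window of data, a resampling-based test of miscalibration decides whether the deployed model should be recalibrated. The bootstrap test of the calibration intercept, the drift simulation, and all names here are illustrative assumptions, not the authors' procedure.

```python
# Hypothetical sketch of test-based model maintenance (not the study's actual method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def calibration_intercept(y, p):
    """Calibration-in-the-large: mean observed outcome minus mean predicted risk."""
    return y.mean() - p.mean()

def needs_update(y, p, n_boot=2000, alpha=0.05):
    """Bootstrap the calibration intercept on recent data; recommend an update
    if the (1 - alpha) interval excludes zero (systematic miscalibration)."""
    n = len(y)
    stats = [calibration_intercept(y[idx], p[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return not (lo <= 0.0 <= hi)

# Simulated drift: the outcome becomes more common after the model is trained.
X_train = rng.normal(size=(5000, 3))
y_train = rng.binomial(1, 1 / (1 + np.exp(-(X_train @ [1.0, -0.5, 0.3] - 1.0))))
model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(2000, 3))
y_new = rng.binomial(1, 1 / (1 + np.exp(-(X_new @ [1.0, -0.5, 0.3] - 0.2))))
p_new = model.predict_proba(X_new)[:, 1]

if needs_update(y_new, p_new):
    # One possible update: slope/intercept recalibration on the model's log-odds.
    recal = LogisticRegression().fit(model.decision_function(X_new).reshape(-1, 1), y_new)
    print("Test recommends recalibration on recent data.")
else:
    print("No significant miscalibration detected; keep the current model.")
```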

Learning Objective: After participating in this session, the learner should be better able to:
--Discuss methods and strategies for maintaining clinical prediction model performance over time.
--Discuss aspects of models and data environments that may impact updating requirements and maintenance planning.

Authors:

Sharon Davis (Presenter)
Vanderbilt University School of Medicine

Robert Greevy, Vanderbilt University
Thomas Lasko, Vanderbilt University School of Medicine
Colin Walsh, Vanderbilt University School of Medicine
Michael Matheny, Vanderbilt University School of Medicine
