• Postgrad Med J · Mar 2024

    Explainable artificial intelligence model for mortality risk prediction in the intensive care unit: a derivation and validation study.

    • Chang Hu, Chao Gao, Tianlong Li, Chang Liu, and Zhiyong Peng.
    • Department of Critical Care Medicine, Zhongnan Hospital of Wuhan University, Wuhan 430071, Hubei, China.
    • Postgrad Med J. 2024 Mar 18; 100 (1182): 219-227.

    Background: A lack of transparency is a prevalent issue among the current machine-learning (ML) algorithms used for predicting mortality risk. Here, we aimed to improve transparency by applying a state-of-the-art explainable ML technique, SHapley Additive exPlanations (SHAP), to develop a predictive model for critically ill patients.

    Methods: We extracted data from the Medical Information Mart for Intensive Care IV database, encompassing all intensive care unit admissions. We employed nine different methods to develop the models. The most accurate model, i.e. the one with the highest area under the receiver operating characteristic curve (AUROC), was selected as the optimal model. Additionally, we used SHAP to explain the workings of the ML model.

    Results: The study included 21 395 critically ill patients, with a median age of 68 years (interquartile range, 56-79 years); most patients were male (56.9%). The cohort was randomly split into a training set (n = 16 046) and a validation set (n = 5349). Among the nine models developed, the Random Forest model had the highest accuracy (87.62%) and the best AUROC (0.89). SHAP summary analysis showed that Glasgow Coma Scale, urine output, and blood urea nitrogen were the top three risk factors for outcome prediction. Furthermore, SHAP dependence analysis and SHAP force analysis were used to interpret the Random Forest model at the feature level and the individual level, respectively.

    Conclusion: A transparent ML model for predicting outcomes in critically ill patients using the SHAP methodology is feasible and effective. SHAP values significantly improve the explainability of ML models.

    © The Author(s) 2024. Published by Oxford University Press on behalf of Fellowship of Postgraduate Medicine.
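    The SHAP values used in the study attribute a model's prediction to its input features via Shapley values from cooperative game theory. As a toy illustration of that principle (not the authors' Random Forest pipeline, which would use the `shap` library's tree explainer), the sketch below computes exact Shapley values for a hypothetical additive risk scorer over three of the paper's top features; the feature names and contribution weights are invented for illustration only.

    ```python
    from itertools import combinations
    from math import factorial

    # Hypothetical additive "risk model" over three features reported as
    # top predictors in the study (Glasgow Coma Scale, urine output,
    # blood urea nitrogen). The weights are made up for illustration.
    def risk_model(features):
        score = 0.0
        if "gcs" in features:           # abnormal Glasgow Coma Scale
            score += 0.30
        if "urine_output" in features:  # low urine output
            score += 0.20
        if "bun" in features:           # elevated blood urea nitrogen
            score += 0.10
        return score

    def shapley_values(model, players):
        """Exact Shapley values by enumerating every coalition of features."""
        n = len(players)
        values = {}
        for p in players:
            others = [q for q in players if q != p]
            total = 0.0
            for k in range(n):
                for coalition in combinations(others, k):
                    # Shapley weight for a coalition of size k
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    with_p = model(set(coalition) | {p})
                    without_p = model(set(coalition))
                    total += weight * (with_p - without_p)
            values[p] = total
        return values

    phi = shapley_values(risk_model, ["gcs", "urine_output", "bun"])
    print(phi)
    ```

    For an additive model each feature's Shapley value equals its own contribution, and by the efficiency property the values sum to the full model output; real SHAP explainers exploit model structure (e.g. trees) to avoid this exponential enumeration.
    
    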
