"UCL Hospitals uses AI to predict patient deterioration"

Explainable AI for Prediction of Patient Deterioration at University College London Hospitals NHS Foundation Trust, led by Julia Ive and Professor Hani Marcus. Nice to see CogStack AI being used to support the development of explainable predictive AI grounded in verifiable clinical knowledge. Out now in npj Digital Medicine (Nature Portfolio): https://lnkd.in/etciJzZp


Leslie Vásquez, a bit of a formal intro to the topic, but maybe of interest?

Prof James Teo Thank you for making this article accessible. This study used an NLP-driven approach to identify events in unstructured free text from EHRs, which is very helpful to know.


Explainability is the bridge between innovation and trust in healthcare AI. This work from UCLH, leveraging CogStack AI for interpretable predictions, highlights how thoughtful design can turn complex models into reliable tools for clinicians on the front lines.


An interesting idea to apply to clinical data!


Great to see this post, Prof James Teo. What a fantastic and impactful application of the CogStack platform. Julia Ive and Professor Hani Marcus have done truly impressive work here. Using NLP on unstructured notes to predict unplanned ICU admissions is a major leap forward for patient safety. The focus on XAI is the crucial part. When clinicians can see the 'why' behind a prediction, thanks to tools like SHAP, it breaks down the 'black box' problem. This directly addresses the single biggest barrier to getting these models into practice: clinician trust. I'm curious what the authors see as the primary challenges in scaling this—whether to other surgical fields or to hospitals with different EHR setups? A huge congratulations to the whole team on this fantastic work!
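For readers unfamiliar with how SHAP surfaces the 'why' behind a prediction: SHAP assigns each feature an additive contribution so that the contributions sum to the difference between the model's output for a patient and a baseline. Below is a minimal sketch using the closed form for a linear model, where the SHAP value of feature i is exactly w_i * (x_i - baseline_i). The feature names, weights, and values are entirely illustrative and are not taken from the paper.

```python
def linear_shap(weights, baseline, x):
    """For a linear model f(x) = b + sum_i w_i * x_i, the exact SHAP value
    of feature i is w_i * (x_i - baseline_i). The contributions sum to
    f(x) - f(baseline), which is the additivity property SHAP guarantees."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

# Illustrative vital-sign style features (invented numbers, not from the study)
weights  = {"resp_rate": 0.40, "lactate": 0.90, "news2": 0.25}  # model coefficients
baseline = {"resp_rate": 16.0, "lactate": 1.0,  "news2": 2.0}   # cohort-mean baseline
patient  = {"resp_rate": 24.0, "lactate": 3.5,  "news2": 7.0}   # one patient's values

contrib = linear_shap(weights, baseline, patient)
# Each value shows how far that feature pushed the risk score above baseline,
# e.g. resp_rate contributes 0.40 * (24.0 - 16.0) = 3.2
```

For tree ensembles or deep models the closed form above does not apply, and the `shap` library's explainers estimate the same additive quantities instead; the clinician-facing idea is unchanged.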

