Introducing COVID-Net Clinical ICU, an Explainable and Trustworthy AI for Predicting COVID-19 ICU Admission
By Dr. Audrey Chung, Technical Product Manager at DarwinAI
In this blog post, Dr. Audrey Chung shares new insights about COVID-Net Clinical ICU, the newest addition to the COVID-Net Open Source Initiative. COVID-Net Clinical ICU was built by DarwinAI team members Dr. Alexander Wong (Chief Scientist and Co-Founder), Dr. Audrey Chung, Dr. Mahmoud Famouri, and Andrew Hryniowski.
This new model, built with our Explainable AI technology, is now publicly available and can predict whether a COVID-19 patient will require ICU admission with an accuracy of 96.9%.
Read on to find out how the new model works and explore some of the real-world findings related to this crucially important project.
Leveraging XAI to predict ICU admissions within crowded hospitals
The COVID-19 pandemic continues to have a devastating impact on the health and well-being of the global population, with far-reaching social and economic effects. In particular, COVID-19 has placed a tremendous burden on struggling healthcare systems around the world, depleting already scarce resources. A critical component of the clinical workflow in fighting COVID-19 is accurate triaging and care planning, which enables patient-centric personalized care while simultaneously reducing the load on hospitals by only using the necessary resources for each patient. To that end, one crucial task within care planning is determining if a patient should be admitted to a hospital’s intensive care unit (ICU), especially given the ongoing shortage of available ICU space.
One promising avenue is to leverage AI to help predict ICU admissions by harnessing the wealth of clinical data collected for each patient (e.g., demographic information, vital signs, blood results, etc.). However, a key challenge with building and using such predictive models is the difficulty of understanding the rationale behind ICU admission predictions (i.e., the “how and why”), the factors most critical to ICU admission (i.e., the “what”), and the circumstances under which a given predictive model is dependable and trustworthy (i.e., the “when and where”).
We at DarwinAI are proud to be leveraging our unique Explainable AI (XAI) technology to explore the building of a predictive model for ICU admission for COVID-19 patients. Motivated by the need for transparent and trustworthy ICU admission clinical decision support, we introduce COVID-Net Clinical ICU, a publicly available reference model for ICU admission prediction based on clinical data and the newest addition to the COVID-Net Open Source Initiative. We hope that the public release of models such as COVID-Net Clinical ICU can motivate and enable researchers, clinical scientists, and citizen scientists to accelerate progress in the field of AI to support the fight against the pandemic.
How we built our new model and leveraged system-level insights to improve it
COVID-Net Clinical ICU is built using a clinical dataset from Hospital Sírio-Libanês comprising 1,925 COVID-19 records from 385 patients, including demographic information (e.g., age and gender), information about previous diseases (e.g., hypertension, immunocompromised status, etc.), blood results (e.g., platelet count, neutrophil count, etc.), and vital signs (e.g., body temperature, pulse rate, etc.). Based on this dataset, we built an initial version of the COVID-Net Clinical ICU model that was able to predict whether a COVID-19 patient would require ICU admission with an accuracy of 95.7%. We then applied our unique quantitative XAI system-level insight discovery to this initial model to study the decision-making impact of different clinical features and gain actionable insights for enhancing predictive performance, enabling better care planning for hospitals amidst the ongoing pandemic.
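As a rough illustration of this kind of model (not our actual architecture or training pipeline), the sketch below trains a minimal logistic-regression classifier on synthetic tabular features. The feature names echo clinical factors discussed in this post, but the data is randomly generated, not the Hospital Sírio-Libanês dataset.

```python
# Illustrative sketch only: a minimal logistic-regression classifier for a
# binary ICU-admission label, trained with plain batch gradient descent.
# All data here is synthetic -- NOT the real clinical dataset or model.
import numpy as np

rng = np.random.default_rng(42)
FEATURES = ["median_heart_rate", "mean_blood_sodium", "immunocompromised"]
n = 400

X = rng.normal(size=(n, len(FEATURES)))
X[:, 2] = rng.integers(0, 2, size=n)  # binary flag for "immunocompromised"
# Synthetic label: ICU admission loosely driven by the first and third features.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(len(FEATURES)), 0.0, 0.1
for _ in range(500):                   # gradient descent on the logistic loss
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.3f}")
```

A real clinical model would, of course, be trained and validated on held-out patient records rather than evaluated on its own training data.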
The illustration above shows the 15 most predictive (positive association) and the 15 least predictive (negative association) clinical factors used by the initial COVID-Net Clinical ICU model for ICU admission. The most predictive factors indicating whether a COVID-19-positive patient should be admitted to the ICU are their median heart rate, mean blood sodium level, and whether they are immunocompromised, as these have a very high quantitative impact on the predictive performance of the model. Conversely, the minimum blood D-dimer level, median partial thromboplastin time (TTPA), and average lymphocyte (linfocitos) count are the least predictive clinical factors. These insights highlight the importance of taking a system-level insight discovery strategy using a quantitative XAI approach to better understand which factors are most important for making clinical decisions pertaining to ICU admission for COVID-19 patients.
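Our quantitative XAI technology is proprietary, but the general idea of measuring a feature's decision-making impact can be illustrated with a generic stand-in technique, permutation importance: shuffle one feature's values to break its relationship with the label, and measure how much the model's accuracy drops. Everything below (feature names, data, and the fixed linear model) is a synthetic assumption for illustration.

```python
# Illustrative analogue of quantitative feature-impact analysis using
# permutation importance. Synthetic data and a toy model -- not the actual
# COVID-Net Clinical ICU model or DarwinAI's XAI method.
import numpy as np

rng = np.random.default_rng(0)
FEATURES = ["median_heart_rate", "mean_blood_sodium", "min_d_dimer"]
n = 500

X = rng.normal(size=(n, 3))
# Label driven by features 0 and 1 only; feature 2 is pure noise by design.
y = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int)

# A fixed "trained" linear model that matches the generating process.
w = np.array([1.0, 0.8, 0.0])
def predict(M):
    return (M @ w > 0).astype(int)

base_acc = np.mean(predict(X) == y)

impacts = {}
for j, name in enumerate(FEATURES):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
    impacts[name] = base_acc - np.mean(predict(Xp) == y)

for name, drop in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} impact = {drop:+.3f}")
```

Features that carry real signal show a large accuracy drop when shuffled, while uninformative features show essentially none, mirroring the positive/negative-association split described above.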
We can take advantage of these system-level insights to improve the initial COVID-Net Clinical ICU model by retraining it using only the clinical features identified as having a quantitatively positive impact on the decision-making process. The resulting final model predicts whether a COVID-19 patient will require ICU admission with an accuracy of 96.9% (noticeably higher than the initial model's 95.7%), while achieving lower model complexity with 10% fewer parameters.
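The feature-selection step described above amounts to a simple filter over impact scores. The scores below are hypothetical placeholders chosen to match the qualitative ordering discussed in this post, not the actual values produced by our XAI analysis:

```python
# Hedged sketch of the pruning step: keep only features whose measured
# impact is positive, then refit a smaller model on the reduced set.
impacts = {
    "median_heart_rate": 0.21,   # hypothetical impact scores, for
    "mean_blood_sodium": 0.14,   # illustration only
    "immunocompromised": 0.09,
    "min_d_dimer": -0.02,        # negative association -> dropped
    "median_ttpa": -0.01,        # negative association -> dropped
}
selected = [name for name, score in impacts.items() if score > 0]
print("features kept for the final model:", selected)
```

Dropping uninformative inputs both shrinks the model and can improve accuracy, which is consistent with the 96.9% final result reported above.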
Using DarwinAI’s GenSynth explainability features to reveal critical insights that empower clinicians
At DarwinAI, we value bringing to life AI that you can trust, which is particularly important for mission-critical scenarios such as healthcare applications. As such, we evaluate the trustworthiness of COVID-Net Clinical ICU using our quantitative XAI technology, and take a closer look at the demographic trust spectrum to identify potential bias and gaps in fairness. While no model is perfect, understanding these gaps in fairness helps us to improve the overall performance and consistency of the model, while revealing when and where COVID-Net Clinical ICU is dependable.
The final COVID-Net Clinical ICU network generally behaves fairly when making predictions across different demographics. The difference in trustworthiness between patients over the age of 65 and patients aged 65 and under is minimal, so the model is relatively fair across the two age groups. More interestingly, the model provides equally trustworthy predictions for female and male patients. This is particularly compelling given that there are more male patients in the dataset than female patients (1,215 male vs. 710 female). This insight brings to light that the overall balance of cases across demographic groups may not paint a complete picture of the decision-making behaviour of predictive models built on a dataset. Notably, our unique trust quantification process can be a powerful tool for improving trustworthiness and fairness if trust gaps are indeed identified.
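As a simplified illustration of this kind of per-group check (a stand-in for our trust quantification process, which is not shown here), the sketch below compares prediction accuracy between male and female groups using the group sizes mentioned above, with synthetic labels and predictions:

```python
# Illustrative per-group fairness check: compare prediction accuracy across
# demographic groups. Labels and predictions are synthetic; only the group
# sizes (1,215 male vs. 710 female) come from the post.
import numpy as np

rng = np.random.default_rng(1)
groups = np.array(["male"] * 1215 + ["female"] * 710)
y_true = rng.integers(0, 2, size=groups.size)

# Hypothetical model output: predictions are wrong ~5% of the time,
# independent of group membership.
flip = rng.random(groups.size) < 0.05
y_pred = np.where(flip, 1 - y_true, y_true)

per_group = {
    g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
    for g in ("male", "female")
}
gap = abs(per_group["male"] - per_group["female"])
print(per_group, f"gap = {gap:.3f}")
```

A small accuracy gap between groups, as in this toy example, is consistent with the "equally trustworthy predictions" behaviour described above, although trust quantification examines more than raw accuracy.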
By digging deeper into when and why predictive models make certain decisions, we can uncover key factors in decision making for critical tasks, such as ICU admission prediction, and identify the situations in which we can trust these models. We hope our work in creating an explainable and trustworthy model for predicting COVID-19 ICU admission can serve as an example of how leveraging XAI and trust quantification during model development is a step towards building and improving AI solutions you can trust. The COVID-Net Clinical ICU model is publicly available and can be found here.