
XAI in Healthcare

Jainy Varghese C
M3 NE
IDK21ITNE01

Guided By
Dr. K R Remesh Babu
1
Contents
• Objective
• Introduction
• Literature Review
• XAI in healthcare
• Conclusion
• References

2
Objective
• About XAI
• XAI in healthcare

3
Introduction
• Explain: make something clear to someone by describing it in more detail
• Interpret: explain the meaning of something
• Understand: perceive the intended meaning

Explainable artificial intelligence (XAI): a set of processes and methods that
allows human users to comprehend and trust the results and output of machine
learning algorithms.

4
Definitions
• “XAI will create a suite of machine learning techniques that enables human users
to understand, appropriately trust, and effectively manage the emerging generation
of artificially intelligent partners.”
By D. Gunning
• “Given a certain audience, explainability refers to the details and reasons a model
gives to make its functioning clear or easy to understand.”
By Cambridge
• “Given an audience, an explainable Artificial Intelligence is one that produces
details or reasons to make its functioning clear or easy to understand.”

5
Literature Review

6
How it works

7
XAI
• Artificial Intelligence (AI) lies at the core of many activity sectors, with
learning, reasoning and adaptation capabilities
• To make predictions, it often relies on:
 Black-box machine learning
 Deep neural networks

8
Black Box Model
• Uses a machine-learning algorithm to make predictions
• The explanation for that prediction remains unknowable and untraceable.
• Black-box Machine Learning (ML) are increasingly being employed to make
important predictions in critical contexts
• The decisions that are not justifiable, legitimate, or that simply do not allow
obtaining detailed explanations of their behaviour
• e.g. in precision medicine

9
• When developing a black-box ML model, considering interpretability as an
additional design driver can improve its implementability for 3 reasons:
 Interpretability helps to ensure impartiality in decision-making
 Interpretability facilitates the provision of robustness
 Interpretability can give meaningful output

10
Fig.: diagram of a “black-box” model

11
• Behavioral psychologists view the human brain as a black box
• A black box model receives inputs and produces outputs but its workings are
unknowable
• Black box models are increasingly used to drive decision-making in the financial
markets.
• The human mind responds to stimuli. In order to change behavior, the stimuli
must be changed, not the mind that reacts to the stimuli.
• e.g. marketers use it as a way to analyze the consumer decision-making process

12
• In order to avoid limiting the effectiveness of the current generation of AI
systems, eXplainable AI (XAI) proposes to:
1. Produce more explainable models while maintaining a high level of learning
performance (e.g., prediction accuracy)
2. Enable humans to understand, appropriately trust, and effectively manage the
emerging generation of artificially intelligent partners

13
Evolution of the total number of publications whose title, abstract and/or keywords
refer to the field of XAI in recent years.

14
XAI Application Domains
a: TRANSPORTATION
b: HEALTHCARE
c: LEGAL
d: FINANCE
e: MILITARY

15
Why do we need explanations?

16
Image Interpretation approaches

• Grad-CAM
• LIME
• Ensemble XAI

17
Grad-CAM
• A CNN-based interpretation method
• Gradient-weighted Class Activation Mapping
• Makes the model more transparent by visualizing input regions important to the
prediction, with high-resolution detail
• Can produce a separate visualization for every class present in the image

18
• Visualization
 Visualization of the final feature map A^k shows the discriminative regions
 The last convolutional layer's activations can be considered features of a
classification model
 The average gradient score is used as the weight for each feature map

19
Grad-CAM overview

20
• Process
I. Given an image and a class of interest (e.g. "tiger cat") as input
II. Propagate the image through the CNN part of the model
III. Then through task-specific computations to obtain a raw score for the category
IV. Gradients are set to 0 for all classes except the desired class
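After backpropagation, the weighting and combination step can be sketched in plain numpy. This is an illustrative reconstruction, not the original Grad-CAM code: `feature_maps` and `gradients` stand for the last convolutional layer's activations A^k and the backpropagated class-score gradients, both assumed to be already extracted from the network.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from the last conv layer.

    feature_maps: array of shape (K, H, W), the K feature maps A^k
    gradients:    array of shape (K, H, W), gradients of the class score w.r.t. A^k
    """
    # Global-average-pool the gradients: one importance weight per feature map.
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # Weighted sum of the feature maps.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only regions with a positive influence on the class of interest.
    cam = np.maximum(cam, 0.0)
    # Normalise to [0, 1] for visualisation.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting map has the (coarse) spatial resolution of the last convolutional layer and is usually upsampled onto the input image for display.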

21
• Limitation
 Failure to localize an object in the image when there are multiple occurrences
of the same object

22
LIME

• Local Interpretable Model-Agnostic Explanations (LIME)
• Explains individual predictions
• Approximates a complex model locally by an interpretable model
• Can explain the prediction for a particular instance of interest
• An implementation of local surrogate models
• LIME trains a local surrogate model to explain individual predictions by
building a sparse linear model

23
• The LIME procedure
I. Determine an interpretable representation of the instance of interest
II. Draw a sample by perturbing the interpretable representation
III. Apply the original model to the perturbed images and generate predictions
IV. Fit the interpretable model to the sampled images and the predictions from step III
V. Use the interpretable model to draw conclusions about the relevance of each
interpretable component
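Steps I to V can be sketched as a minimal numpy-only surrogate over binary on/off features. This is an illustrative sketch, not the LIME library: the exponential proximity kernel and the ridge penalty `lam` are assumed choices, and `predict_fn` stands for the black-box model.

```python
import numpy as np

def lime_explain(predict_fn, instance, n_samples=500, seed=0):
    """Local linear surrogate for one instance (binary interpretable features).

    predict_fn: black-box model mapping an (n, d) batch of perturbed instances
                to prediction scores.
    instance:   1-D feature vector to explain.
    Returns the per-feature weights of the local surrogate.
    """
    rng = np.random.default_rng(seed)
    d = instance.shape[0]
    # Step II: sample perturbations of the on/off interpretable representation.
    masks = rng.integers(0, 2, size=(n_samples, d))
    # Step III: apply the original model to the perturbed instances.
    perturbed = masks * instance              # switched-off features set to 0
    preds = predict_fn(perturbed)
    # Proximity kernel: samples closer to the original instance count more.
    proximity = np.exp(-(d - masks.sum(axis=1)) / d)
    # Step IV: fit a weighted, ridge-regularised linear model.
    X = np.hstack([masks, np.ones((n_samples, 1))])   # add an intercept column
    W = np.diag(proximity)
    lam = 1e-3
    coef = np.linalg.solve(X.T @ W @ X + lam * np.eye(d + 1), X.T @ W @ preds)
    return coef[:-1]                          # Step V: per-feature relevances
```

For image data, the binary features would correspond to superpixels that are kept or greyed out.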

24
• Limitations
 Very time-consuming
 Instability of the explanations

25
Ensemble XAI
• Based on Grad-CAM++ and SHAP, both widely applied in deep learning
• Normalized positive SHAP values are used to generate the mapping layer
identifying discriminative regions
• The authors term this method Ensemble XAI (Fig. 2)

26
Grad-CAM++
 Provides better visual explanations of CNN model predictions
SHAP
 SHapley Additive exPlanations
 Allows meaningful, local explanations of individual predictions
 Borrows the concept of Shapley values from cooperative game theory (a model of
game theory in which the participants, players or sets of players called
coalitions, compete but can still behave cooperatively, with agreements
enforceable by an external authority)
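The cooperative-game idea behind SHAP can be illustrated by computing exact Shapley values for a tiny game. Here `value_fn` is any payoff function over coalitions (in SHAP it would be the model's output on a subset of features); this brute-force version is exponential in the number of players and is for intuition only, not how the SHAP library computes values.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_players):
    """Exact Shapley values for a small cooperative game.

    value_fn: maps a tuple of player indices (a coalition) to its payoff.
    """
    phi = np.zeros(n_players)
    for i in range(n_players):
        others = [p for p in range(n_players) if p != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Shapley weight of coalition S for player i.
                w = (factorial(len(S)) * factorial(n_players - len(S) - 1)
                     / factorial(n_players))
                # Marginal contribution of player i to coalition S.
                phi[i] += w * (value_fn(S + (i,)) - value_fn(S))
    return phi
```

For an additive game the Shapley value of each player equals its individual contribution, and the values always sum to the payoff of the full coalition.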

27
28
Design
I. For each image, a pair of Grad-CAM++ and SHAP heat maps is generated by the
base model
II. Preprocessing is applied before the Kernel Ridge regression, as shown in Fig. 2a
III. First, as each image is annotated by three different radiologists, the
annotations are used to generate the target y for the Kernel Ridge regression
IV. The weighted sum of the three annotations is calculated to produce the target label
V. The target label shows three different intensity colors, with the darkest area
representing the concordance area of the three radiologists
VI. Lastly, the experiment data are split into three folds. For each iteration in Fig. 2b,
two folds of data are used in the Kernel Ridge regression
VII. After three iterations, Ensemble XAI for all folds is generated without information
leakage.

29
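A rough numpy-only sketch of the ensembling idea, under stated assumptions rather than the paper's actual pipeline: positive SHAP values are normalized, stacked with the normalized Grad-CAM++ map as per-pixel features, and a closed-form RBF kernel ridge fit maps them to the radiologist-weighted target map. The kernel choice and the `lam` and `gamma` values are illustrative.

```python
import numpy as np

def kernel_ridge_ensemble(gradcam_map, shap_map, target_map, lam=1.0, gamma=0.5):
    """Fit kernel ridge from per-pixel (Grad-CAM++, positive SHAP) features to a
    radiologist-weighted target map, then return the fitted ensemble map."""
    # Keep only positive SHAP values, then normalise both maps to [0, 1].
    shap_pos = np.maximum(shap_map, 0.0)
    norm = lambda m: m / m.max() if m.max() > 0 else m
    X = np.column_stack([norm(gradcam_map).ravel(), norm(shap_pos).ravel()])
    y = target_map.ravel()
    # RBF kernel matrix over all pixel feature pairs.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dists)
    # Closed-form kernel ridge solution: alpha = (K + lam*I)^(-1) y.
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return (K @ alpha).reshape(gradcam_map.shape)
```

In the paper's setting the fit would be done on two folds and the ensemble map predicted on the held-out fold, which is what prevents information leakage.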
Evaluation Metrics
1. Decision impact ratio
2. Confidence impact ratio

30
Decision impact ratio

• The percentage change in decisions as a result of omitting the critical area
identified by the interpretation method

 i : index of the original image
 h_c : the critical area
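A minimal sketch of the metric, assuming the "decisions" are hard class labels taken before and after masking out the critical area h_c:

```python
import numpy as np

def decision_impact_ratio(preds_original, preds_masked):
    """Fraction of images whose predicted decision changes when the critical
    area h_c identified by the interpretation method is removed.

    preds_original: class decisions on the original images
    preds_masked:   class decisions on the images with h_c masked out
    """
    preds_original = np.asarray(preds_original)
    preds_masked = np.asarray(preds_masked)
    return float(np.mean(preds_original != preds_masked))
```

A higher ratio means the highlighted area really drove the model's decisions.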

31
Confidence impact ratio

• The percentage drop in confidence as a result of omitting the critical area


identified by the interpretation method.
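A minimal sketch of this metric too; clipping negative drops at zero is an assumption here, since the slide does not say how confidence gains after masking are treated:

```python
import numpy as np

def confidence_impact_ratio(conf_original, conf_masked):
    """Average relative drop in model confidence when the critical area is
    removed. Confidence gains are clipped to zero (an assumed convention)."""
    conf_original = np.asarray(conf_original, dtype=float)
    conf_masked = np.asarray(conf_masked, dtype=float)
    drop = np.maximum(conf_original - conf_masked, 0.0) / conf_original
    return float(drop.mean())
```

As with the decision impact ratio, a larger value indicates that the interpretation method located a genuinely critical region.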

32
Fig. 3. Heat maps identified by 5 interpretation methods, with the mortality risk scores of the
original images in the first row; images in the absence of the critical area of the corresponding
interpretation method, with the new mortality risk scores, in the second row.

33
• Heat maps produced by the different interpretation methods
• Ensemble XAI had better performance than the other interpretation methods in
terms of localization effectiveness and radiologists' trust
• It achieved the best performance in the explainability evaluation

34
35
Conclusion

36
Conclusion
• XAI techniques can be utilised to make results from AI-based autonomous
systems explainable and traceable
• XAI provides frameworks and models that help in interpreting and understanding
the decisions these systems make

37
References
[1] A. Adadi, M. Berrada, "Peeking Inside the Black-Box: A Survey on XAI", IEEE Access, 2018.
[2] Lin Zou, Han Leong Goh, Charlene Jin Yee Liew, Jessica Lishan Quah, "Ensemble image explainable AI
(XAI) algorithm for severe community-acquired pneumonia and COVID-19 respiratory infections",
IEEE Transactions on Artificial Intelligence, 2022.
[3] Qinhua Hu, Francisco Nauber, Rafael Costa, M. Guizani, and S. Chan, "XAI based edge fuzzy images for
COVID-19 detection and identification", IEEE Transactions on Artificial Intelligence, 68.6 (2019),
pp. 5917–5927.
[4] Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, et al., "XAI: Concepts, Taxonomies, Opportunities
and Challenges".
[5] Urja Pawar, Donna O'Shea, Susan Rea, "Explainable AI in Healthcare", IEEE Transactions on Artificial
Intelligence, 2012, 11, 320–336.

38
THANK YOU

39
