{"id":12278,"date":"2023-06-27T10:09:16","date_gmt":"2023-06-27T14:09:16","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=12278"},"modified":"2024-05-29T12:17:36","modified_gmt":"2024-05-29T16:17:36","slug":"explainable-ai-xai-implement-explainability-in-your-work","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2023\/06\/27\/explainable-ai-xai-implement-explainability-in-your-work\/","title":{"rendered":"Explainable AI (XAI): Implement explainability in your work"},"content":{"rendered":"<h6><\/h6>\r\n<em>This post is from <a href=\"https:\/\/www.linkedin.com\/in\/ogemarques\/\">Oge Marques<\/a>, PhD and Professor of Engineering and Computer Science at FAU.<\/em>\r\n<h6><\/h6>\r\n<blockquote>This is the third post in a 3-post series on <em><strong>Explainable AI<\/strong> (XAI)<\/em>. In the <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2022\/12\/30\/what-is-explainable-ai\/\">first post<\/a>, we showed examples and offered practical advice on how and when to use \u00a0XAI techniques for computer vision tasks. In the <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2023\/05\/08\/explainable-ai-xai-are-we-there-yet\/\">second post<\/a>, we offered words of caution and discussed the limitations. In this post, we conclude the series by offering a practical guide for getting started with explainability, including tips and examples.<\/blockquote>\r\n<h6><\/h6>\r\nIn this blog post, we focus on image classification tasks and offer 4 practical tips, which help you make the most of Explainable AI techniques, for those of you ready to implement explainability in your work.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>TIP 1: Why is explainability important?<\/strong><\/p>\r\nBefore you dive into the numerous practical details related to using XAI techniques in your work, you should start by examining your reasons for using explainability. 
Explainability can help you better understand your model\u2019s predictions and reveal inaccuracies in your model and bias in your data.\r\n<h6><\/h6>\r\nIn the second blog post of this series, we commented on the use of post-hoc XAI techniques to assist in diagnosing potential blunders that the deep learning model might be making; that is, producing results that are seemingly correct but reveal that the model was \u201clooking at the wrong places.\u201d A classic example in the literature demonstrated that a <a href=\"https:\/\/arxiv.org\/abs\/1602.04938\"><em>husky vs. wolf<\/em>\u00a0image classification algorithm<\/a>\u00a0was, in fact, a \u201csnow detector.\u201d (Fig. 1).\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12281 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/husky_vs_wolf.png\" alt=\"Explainability method LIME reveals that the husky vs wolf classifier is detecting the presence of snow.\" width=\"459\" height=\"251\" \/>\r\n<h6><\/h6>\r\n<strong>Figure 1:<\/strong> Example of misclassification in a \u201chusky vs. wolf\u201d image classifier due to a spurious correlation between images of wolves and the presence of snow.\u00a0 The image on the right, which shows the result of the LIME post-hoc XAI technique, captures the classifier blunder. [<a href=\"https:\/\/arxiv.org\/pdf\/1602.04938.pdf\">Source<\/a>]\r\n<h6><\/h6>\r\nThese are examples where there is not much at stake. But what about high-stakes areas (such as healthcare) and sensitive topics in AI (such as bias and fairness)? In the field of radiology, there is <a href=\"https:\/\/journals.plos.org\/plosmedicine\/article?id=10.1371\/journal.pmed.1002683\">a famous example<\/a> where models designed to identify pneumonia in chest X-rays learned to recognize a metallic marker placed by radiology technicians in the corner of the image (Fig. 2). 
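This kind of shortcut learning can be reproduced in miniature. The following pure-Python sketch (the 4x4 "image", the bright corner "marker", and the classifier are all toy stand-ins, not the models from these studies) applies the occlusion idea behind many saliency methods: hide each pixel in turn and record how much the model's score drops.

```python
# Toy reproduction of a "shortcut feature" failure, using the occlusion
# idea behind saliency methods: occlude each pixel and measure the drop
# in the classifier's score. The classifier below secretly keys on a
# bright marker pixel at (0, 0), not on the "tissue" in the image.

def classify(image):
    """Toy 'pneumonia' score for a 4x4 grayscale image."""
    marker = image[0][0]                       # shortcut: corner marker
    tissue = sum(sum(row) for row in image) / 16
    return 0.9 * marker + 0.1 * tissue

def occlusion_map(image, model, baseline=0.0):
    """Per-pixel score drop when that pixel is occluded."""
    base = model(image)
    heat = [[0.0] * len(image[0]) for _ in image]
    for i in range(len(image)):
        for j in range(len(image[0])):
            occluded = [row[:] for row in image]
            occluded[i][j] = baseline
            heat[i][j] = base - model(occluded)
    return heat

image = [[1.0, 0.2, 0.2, 0.2],                 # marker at (0, 0)
         [0.2, 0.5, 0.5, 0.2],
         [0.2, 0.5, 0.5, 0.2],
         [0.2, 0.2, 0.2, 0.2]]

heat = occlusion_map(image, classify)
top = max((v, i, j) for i, row in enumerate(heat) for j, v in enumerate(row))
print("most important pixel:", (top[1], top[2]))  # the marker at (0, 0)
```

The occlusion map assigns almost all of the importance to the marker pixel, mirroring what the post-hoc XAI techniques revealed in the studies above.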
This marker is typically used to indicate the source hospital where the image was taken. As a result, the models performed effectively when analyzing images from the hospital they were trained on, but struggled when presented with images from other hospitals that had different markers. And most importantly, explainable AI revealed that the models were not diagnosing pneumonia but classifying the presence of metallic markers.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12287 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/incorrect_pneumonia_detection.png\" alt=\"Explainable AI techniques reveal that pneumonia classifier is classifying medical marker.\" width=\"396\" height=\"189\" \/>\r\n<h6><\/h6>\r\n<strong>Figure 2:<\/strong> A deep learning model for detecting pneumonia: the CNN has learned to detect a metal token that radiology technicians place on the patient in the corner of the image field of view at the time they capture the image. When these strong features are correlated with disease prevalence, models can leverage them to indirectly predict disease. 
[<a href=\"https:\/\/journals.plos.org\/plosmedicine\/article?id=10.1371\/journal.pmed.1002683\">Source<\/a>]\r\n<h6><\/h6>\r\n<span style=\"color: #c04c0b;\"><strong>Example<\/strong><\/span>\r\n<h6><\/h6>\r\n<a href=\"https:\/\/github.com\/ogemarques\/xai-matlab\">This example<\/a> shows MATLAB code to produce post-hoc explanations (using two popular post-hoc XAI techniques, <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/gradcam.html\">Grad-CAM<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/imagelime.html\">image LIME<\/a>) for a medical image classification task.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>TIP 2: Can you use an inherently explainable model?<\/strong><\/p>\r\nDeep learning models are often the first choice to consider, but should they be? For problems involving (alphanumerical) tabular data, there are numerous interpretable ML techniques to choose from, including decision trees, linear regression, logistic regression, Generalized Linear Models (GLMs), and Generalized Additive Models (GAMs). In computer vision, however, the prevalence of deep learning architectures such as convolutional neural networks (CNNs) and, more recently, vision transformers, makes it necessary to implement mechanisms for visualizing network predictions after the fact.\r\n<h6><\/h6>\r\nIn <a href=\"https:\/\/www.nature.com\/articles\/s42256-019-0048-x\">a landmark paper<\/a>, Duke University researcher and professor Cynthia Rudin made a strong claim in favor of interpretable models (rather than post-hoc XAI techniques applied to an opaque model). 
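To make concrete what "inherently interpretable" buys you, here is a minimal hand-rolled decision tree in Python; the features ("opacity", "fever") and thresholds are toy values chosen for illustration, not a real clinical model. The point is that the prediction comes bundled with the rules that produced it, so no post-hoc technique is needed.

```python
# Minimal hand-rolled decision tree: the prediction carries its own
# explanation, because the decision path can be read out directly.
# Toy features and thresholds, purely illustrative.

def predict_with_trace(sample):
    """Return (label, list of rules fired along the decision path)."""
    trace = []
    if sample["opacity"] > 0.6:
        trace.append("opacity > 0.6")
        if sample["fever"]:
            trace.append("fever present")
            return "pneumonia", trace
        trace.append("no fever")
        return "follow-up", trace
    trace.append("opacity <= 0.6")
    return "normal", trace

label, why = predict_with_trace({"opacity": 0.8, "fever": True})
print(label, "because", " and ".join(why))
# pneumonia because opacity > 0.6 and fever present
```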
Alas, prescribing the use of interpretable models and successfully building them are two dramatically different things; for example, an interpretable model from Rudin\u2019s research group, <a href=\"https:\/\/arxiv.org\/abs\/1806.10574\">ProtoPNet<\/a>, has achieved relatively modest success and popularity.\r\n<h6><\/h6>\r\nIn summary, from a pragmatic standpoint, you are better off using pretrained models such as the ones available <a href=\"https:\/\/github.com\/matlab-deep-learning\/MATLAB-Deep-Learning-Model-Hub\">here<\/a> and dealing with their opaqueness through judicious use of post-hoc XAI techniques than embarking on a time-consuming research project.\r\n<h6><\/h6>\r\n<span style=\"color: #c04c0b;\"><strong>Example<\/strong><\/span>\r\n<h6><\/h6>\r\n<a href=\"https:\/\/www.mathworks.com\/discovery\/interpretability.html\">This MATLAB page<\/a> provides a brief overview of interpretability and explainability, with links to many code examples.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>TIP 3: How to choose the right explainability technique?<\/strong><\/p>\r\nThere are many post-hoc XAI techniques to choose from \u2013 and several of them have become available as MATLAB library functions, including <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/gradcam.html\">Grad-CAM<\/a> and <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/imagelime.html\">LIME<\/a>. These are two of the most popular methods in an ever-growing field that has more than 30 techniques to choose from (<a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC9793854\/\">as of Dec 2022<\/a>). Consequently, selecting the best method can be intimidating at first. As with many other decisions in AI, I advise starting with the most popular, broadly available methods. 
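To see what a perturbation-based method such as LIME is doing under the hood, here is a simplified pure-Python sketch (the four-region segmentation and the toy model are assumptions for illustration): randomly hide and show image regions, query the model, and estimate each region's importance from the difference in mean score.

```python
import random

random.seed(0)  # reproducible sampling

def toy_model(mask):
    """Toy classifier score given which of 4 image regions are visible.
    Secretly, region 0 (say, snow in the background) drives the score."""
    weights = [0.8, 0.1, 0.05, 0.05]
    return sum(w * m for w, m in zip(weights, mask))

def region_importance(model, n_regions=4, n_samples=500):
    """LIME-flavored estimate: difference in the model's mean score when
    a region is visible vs. hidden, over random on/off masks."""
    on = [[] for _ in range(n_regions)]
    off = [[] for _ in range(n_regions)]
    for _ in range(n_samples):
        mask = [random.randint(0, 1) for _ in range(n_regions)]
        score = model(mask)
        for r in range(n_regions):
            (on if mask[r] else off)[r].append(score)
    return [sum(on[r]) / len(on[r]) - sum(off[r]) / len(off[r])
            for r in range(n_regions)]

imp = region_importance(toy_model)
print("most influential region:", imp.index(max(imp)))  # region 0
```

Real implementations such as MATLAB's imageLIME segment the image into superpixels and fit a weighted interpretable surrogate model, but the perturb-and-query loop is roughly the same idea.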
Later, if you collect enough evidence (for example, by running experiments with users of the AI solution) that certain techniques work best in certain contexts, you can test and adopt other methods.\r\n<h6><\/h6>\r\nIn the case of image classification, the perceived added value of an XAI technique can also be associated with the visual display of its results. Fig. 3 shows five visualizations of XAI results produced by different techniques. The visual results differ significantly among the techniques, so different users may prefer different methods.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12305 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/XAImethods.png\" alt=\"Deep learning visualization methods for image classification in MATLAB\" width=\"518\" height=\"795\" \/>\r\n<h6><\/h6>\r\n<strong>Figure 3:<\/strong> Examples of different post-hoc XAI techniques and associated visualization options. [<a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/deep-learning-visualization-methods.html\">Source<\/a>]\r\n<h6><\/h6>\r\n<span style=\"color: #c04c0b;\"><strong>Example<\/strong><\/span>\r\n<h6><\/h6>\r\nThe GUI-based <a href=\"https:\/\/github.com\/matlab-deep-learning\/Explore-Deep-Network-Explainability-Using-an-App\">UNPIC app<\/a> allows you to explore the predictions of an image classification model using several deep learning visualization and XAI techniques.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>TIP 4: Can you improve XAI results and make them more user-centric?<\/strong><\/p>\r\nYou can view explainable AI techniques as one of several options for interpreting the model\u2019s decisions (Fig. 4). 
For example, in medical image classification, an AI solution that predicts a medical condition from a patient\u2019s chest x-ray might use gradually increasing degrees of explainability: (1) <strong>no explainability information<\/strong>, just the outcome\/prediction; (2) adding <strong>output probabilities<\/strong> for most likely predictions, giving a measure of confidence associated with them; (3) adding <strong>visual saliency<\/strong> information describing areas of the image driving the prediction; (4) combining predictions with results from a medical case retrieval (MCR) system and indicating <strong>matched real cases<\/strong> that could have influenced the prediction; and (5) adding computer-generated <strong>semantic explanation<\/strong>.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12308 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/explainability_options.jpg\" alt=\"Explainability options that help interpret the AI model's output.\" width=\"624\" height=\"168\" \/>\r\n<h6><\/h6>\r\n<strong>Figure 4: <\/strong>XAI as a gradual approach: in addition to the model\u2019s prediction, different types of supporting information can be added to explain the decision. 
[<a href=\"https:\/\/pubs.rsna.org\/doi\/full\/10.1148\/ryai.2020190043\">Source<\/a>]\r\n<h6><\/h6>\r\n<span style=\"color: #c04c0b;\"><strong>Example<\/strong><\/span>\r\n<h6><\/h6>\r\n<a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ug\/visualize-image-classifications-using-maximal-and-minimal-activating-images.html\">This example<\/a> shows MATLAB code to produce post-hoc explanations (heat maps) and output probabilities for a food image classification task and demonstrates their usefulness in the evaluation of misclassification results.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 18px;\"><strong>A cheat sheet with practical suggestions, tips, and tricks<\/strong><\/p>\r\nUsing post-hoc XAI can help but it shouldn\u2019t be seen as a panacea. We hope the discussions, ideas, and suggestions in this blog series were useful to your professional needs. To conclude, we present a cheat sheet with some key suggestions for those who want to employ explainable AI in their work:\r\n<h6><\/h6>\r\n<table width=\"90%\">\r\n<tbody>\r\n<tr>\r\n<td style=\"padding: 10px; text-align: left; border-top: solid; border-left: solid; border-color: #616161;\" width=\"5%\"><span style=\"color: #616161; font-size: 24px; font-family: bradley hand, cursive;\"><strong>1<\/strong><\/span><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-top: solid; border-right: solid; border-color: #616161;\" width=\"85%\"><span style=\"font-family: bradley hand, cursive; color: #616161; font-size: 16px;\">Start with a clear understanding of the problem you are trying to solve and the specific reasons why you might want to use explainable AI models.<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; border-left: solid; border-color: #616161;\" width=\"5%\"><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-right: solid; border-color: #616161;\" width=\"85%\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12485 size-full\" 
src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/understandML-1.png\" alt=\"\" width=\"204\" height=\"102\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-left: solid; border-color: #616161;\" width=\"5%\"><span style=\"color: #616161; font-size: 24px; font-family: bradley hand, cursive;\"><strong>2<\/strong><\/span><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-right: solid; border-color: #616161;\" width=\"85%\"><span style=\"font-family: bradley hand, cursive; color: #616161; font-size: 16px;\">Whenever possible, use an inherently explainable model, such as a decision tree or a rule-based model. When you cannot, remember that CNNs are amenable to post-hoc XAI techniques based on gradients and weight values.<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; border-left: solid; border-color: #616161;\" width=\"5%\"><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-right: solid; border-color: #616161;\" width=\"85%\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12509 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/black_box-2.png\" alt=\"\" width=\"1274\" height=\"181\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-left: solid; border-color: #616161;\" width=\"5%\"><span style=\"color: #616161; font-size: 24px; font-family: bradley hand, cursive;\"><strong>3<\/strong><\/span><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-right: solid; border-color: #616161;\" width=\"85%\"><span style=\"font-family: bradley hand, cursive; color: #616161; font-size: 16px;\">Visualize and assess the model outputs. 
This can help you understand how the model is making decisions and identify any issues that may arise.<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; border-left: solid; border-color: #616161;\" width=\"5%\"><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-right: solid; border-color: #616161;\" width=\"85%\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12491 \" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/visualizations.png\" alt=\"\" width=\"352\" height=\"199\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-left: solid; border-color: #616161;\" width=\"5%\"><span style=\"color: #616161; font-size: 24px; font-family: bradley hand, cursive;\"><strong>4<\/strong><\/span><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-right: solid; border-color: #616161;\" width=\"85%\"><span style=\"font-family: bradley hand, cursive; color: #616161; font-size: 16px;\">Consider providing additional context around the decision-making process to end-users, such as feature importance or sensitivity analysis. 
This can help build trust in the model and increase transparency.<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; border-left: solid; border-color: #616161;\" width=\"5%\"><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-right: solid; border-color: #616161;\" width=\"85%\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12494 \" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/XAIplus.png\" alt=\"\" width=\"432\" height=\"150\" \/><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-left: solid; border-color: #616161;\" width=\"5%\"><span style=\"color: #616161; font-size: 24px; font-family: bradley hand, cursive;\"><strong>5<\/strong><\/span><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-top: dashed; border-right: solid; border-color: #616161;\" width=\"85%\"><span style=\"font-family: bradley hand, cursive; color: #616161; font-size: 16px;\">Finally, document the entire process, including the data used, the model architecture, and the methods used to evaluate the model's performance. 
This will ensure reproducibility and allow others to verify your results.<\/span><\/td>\r\n<\/tr>\r\n<tr>\r\n<td style=\"padding: 10px; border-left: solid; border-bottom: solid; border-color: #616161;\" width=\"5%\"><\/td>\r\n<td style=\"padding: 10px; text-align: left; border-right: solid; border-bottom: solid; border-color: #616161;\" width=\"85%\"><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-12512 size-full\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/AIsystem.png\" alt=\"\" width=\"901\" height=\"67\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 14px;\"><strong>Read more about it:<\/strong><\/p>\r\n\r\n<ul>\r\n \t<li>Christoph Molnar\u2019s book \u201cInterpretable Machine Learning\u201d (available <a href=\"https:\/\/christophm.github.io\/interpretable-ml-book\/\">here<\/a>) is an excellent reference to the vast topic of interpretable\/explainable AI.<\/li>\r\n \t<li><a href=\"https:\/\/par.nsf.gov\/servlets\/purl\/10326896\">This 2022 paper by Soltani, Kaufman, and Pazzani<\/a> provides an example of ongoing research on shifting the focus of XAI explanations toward user-centric (rather than developer-centric) explanations.<\/li>\r\n \t<li>The 2021 blog post <a href=\"https:\/\/thegradient.pub\/a-visual-history-of-interpretation-for-image-recognition\/\">A Visual History of Interpretation for Image Recognition<\/a>, by Ali Abdalla, provides a richly illustrated introduction to the most popular post-hoc XAI techniques and provides historical context for their development.<\/li>\r\n<\/ul>\r\n&nbsp;\r\n\r\n&nbsp;","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2023\/06\/explainability_options.jpg\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><p>\r\nThis post is from Oge 
Marques, PhD and Professor of Engineering and Computer Science at FAU.\r\n\r\nThis is the third post in a 3-post series on Explainable AI (XAI). In the first post, we showed... <a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2023\/06\/27\/explainable-ai-xai-implement-explainability-in-your-work\/\">read more >><\/a><\/p>","protected":false},"author":194,"featured_media":12308,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[54,9,66,12],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/12278"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/194"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=12278"}],"version-history":[{"count":78,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/12278\/revisions"}],"predecessor-version":[{"id":12611,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/12278\/revisions\/12611"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media\/12308"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=12278"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=12278"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=12278"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}