Workflow from Scientific Research


An ideal explainable deep learning system should produce several kinds of explanations to satisfy different user requirements. GALORE addresses this need by unifying attributive (center), deliberative (left), and counterfactual (right) explanations. Attributive explanations highlight the pixels responsible for predicting the label 'Cardinal' for the image shown. Deliberative explanations address the 'why' question, producing a set of insecurities: image regions deemed ambiguous, together with the classes that define the ambiguity. Counterfactual explanations address the 'why not' question, visualizing the input changes needed to elicit the prediction of a user-provided counter class ('Summer Tanager').
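
To give a concrete sense of what an attributive explanation is, the sketch below computes a generic gradient-based saliency map. This is only an illustration of the explanation type, not GALORE's actual method; the ResNet-18 backbone and the random placeholder input are assumptions made for the sake of a runnable example.

```python
# Minimal sketch of an attributive explanation via a gradient-based saliency map.
# Generic illustration only -- not GALORE's method; the ResNet-18 backbone and the
# random placeholder image are assumptions for the sake of a runnable example.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input standing in for the 'Cardinal' image (batch of 1, 3x224x224).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input pixels.
logits[0, predicted_class].backward()

# Pixel importance: gradient magnitude, taking the max over color channels.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

High values in `saliency` mark the pixels that most influenced the prediction, which is the kind of information an attributive heatmap such as the one in the center panel conveys.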
