An ideal explainable deep learning system should produce a variety of explanations to satisfy different user requirements. GALORE addresses this problem by unifying attributive (center), deliberative (left), and counterfactual (right) explanations. Attributive explanations highlight the pixels responsible for the prediction of the label 'Cardinal' for the image shown. Deliberative explanations address the 'why' question, producing a set of insecurities: image regions deemed ambiguous, together with the classes that define the ambiguity. Counterfactual explanations address the 'why not' question, visualizing the input changes needed to elicit the prediction of a user-provided counter class ('Summer Tanager').
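As a rough illustration of the attributive and counterfactual explanation types, the following PyTorch sketch computes a generic gradient-based saliency map (attributive) and a signed-gradient perturbation toward a counter class (counterfactual). These are standard stand-in techniques, not GALORE's actual algorithms; the model, image, class index, step size, and iteration count are all placeholders chosen for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Untrained stand-in model and random stand-in image; in practice these would
# be a trained classifier and a real 'Cardinal' photo.
model = resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# --- Attributive sketch: which pixels drive the predicted class? ---
logits = model(image)
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()
# Per-pixel importance: largest absolute gradient across color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)

# --- Counterfactual sketch: nudge the input toward a counter class ---
counter_class = torch.tensor([42])  # hypothetical index, e.g. 'Summer Tanager'
x = image.detach().clone().requires_grad_(True)
for _ in range(10):
    loss = F.cross_entropy(model(x), counter_class)
    loss.backward()
    with torch.no_grad():
        x -= 0.01 * x.grad.sign()  # small signed-gradient step toward the counter class
        x.grad = None
# The difference visualizes the input change needed to elicit the counter class.
delta = (x - image).detach()
```

A deliberative explanation has no comparably standard generic form, since the insecurities and their defining class pairs are specific to GALORE's formulation.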