The unsupervised model correctly predicts data instances as similar or anomalous, but does so using features that do not generalize well beyond the available data. The Clever Hans (CH) effect typically goes undetected under a classical validation scheme and manifests as prediction errors only after deployment. The problem is critical because the flaw can be inherited by potentially many downstream tasks. Our explainable AI approach allows CH effects to be detected directly in the unsupervised model and, in some cases, corrected. Pos, positive; pred, predicted; neg, negative. X-ray images reproduced from: left, middle, ref. 79 under a Creative Commons licence CC 1.0; right, ref. 90 under a Creative Commons licence CC BY 3.0.
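The mechanism described above can be illustrated with a minimal sketch (not the authors' method; all names and the synthetic "artifact feature" are hypothetical). An anomaly score that decomposes additively over input features can be explained exactly per feature, revealing that the score of a new instance is dominated by a spurious feature that was constant in the training data, even though all its genuinely informative features are normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: five informative features plus one spurious
# "acquisition artifact" feature (index 0) that happens to be
# constant across the training data but carries no real signal.
n, d = 500, 6
X_train = rng.normal(0.0, 1.0, size=(n, d))
X_train[:, 0] = 3.0  # artifact shared by all training instances

# A minimal unsupervised model: Mahalanobis-style anomaly score
# relative to the training distribution (diagonal covariance).
mu = X_train.mean(axis=0)
var = X_train.var(axis=0) + 1e-6  # floor avoids division by zero

def anomaly_score(x):
    return np.sum((x - mu) ** 2 / var)

def per_feature_attribution(x):
    # The score is a sum over features, so each term is an exact
    # explanation of that feature's contribution to the score.
    return (x - mu) ** 2 / var

# Test instance from a new site without the artifact: its content
# features match the training mean exactly, yet it is flagged.
x_new = np.copy(mu)
x_new[0] = 0.0  # artifact missing

contrib = per_feature_attribution(x_new)
print("anomaly score:", anomaly_score(x_new))
print("share from spurious feature:", contrib[0] / contrib.sum())
```

Inspecting the attribution shows essentially all of the anomaly score coming from the artifact feature, which is how a CH effect would surface in the explanation before any deployment error is observed.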