Towards knowledge-based explainability for deep neural networks
Abstract
As machine learning models gain traction in real-world applications, user demand for transparent results grows. The field of explainability (XAI) is meeting this challenge with remarkable speed; notable examples include SHAP and LIME, which are feature-based XAI methods. In this work we review a distinct category of XAI approaches, whose explanations are built from interpretable explanatory elements representing user knowledge rather than from raw input features. We categorize these methods according to the stage at which knowledge is integrated into the XAI pipeline. Furthermore, we survey the literature on the assessment of XAI methods, emphasizing the faithfulness of knowledge-based explanations, not only to the real world but also to the underlying model.
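To make the contrast with knowledge-based approaches concrete, the minimal sketch below shows what a feature-based explanation looks like in practice, using the SHAP library on a scikit-learn model. The dataset and model are illustrative assumptions, not taken from the paper; the point is that the resulting explanation is a vector of importance scores over raw input features, the vocabulary that knowledge-based methods replace with user-level explanatory elements.

```python
# Minimal sketch of a feature-based explanation with SHAP.
# Assumptions: a tabular dataset and a tree-ensemble model chosen for
# illustration only; they do not come from the surveyed paper.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles:
# one importance score per raw input feature, per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# The "explanation" is expressed entirely in terms of input features,
# with no reference to higher-level user knowledge or concepts.
print(shap_values)
```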
Domains
Computer Science [cs]