Towards knowledge-based explainability for deep neural networks
Abstract
As deep learning (DL) models gain traction in real-world applications, there is a growing demand for transparency in their results. The field of explainable AI (XAI) has swiftly responded to this challenge with notable advancements, such as feature-based methods like SHAP and LIME. However, a distinct category of XAI approaches is emerging which, instead of relying on raw input features, leverages explicit knowledge representations to produce explanations. By incorporating domain-specific knowledge before, during, or after training the model, these methods aim to provide interpretable insights into specific outcomes or into the overall functioning of the explained model. This paper reviews these approaches from the perspective of the level at which knowledge is incorporated into the DL/XAI pipeline, comparing methods and discussing the accompanying challenges and opportunities for enhancing model interpretability.
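To make the contrast concrete, the sketch below shows what a feature-based, post-hoc explanation looks like in practice with SHAP; the dataset, model, and feature semantics are illustrative assumptions, not examples from the paper. Knowledge-based approaches would instead ground the explanation in an explicit domain representation (for instance an ontology or rule base) rather than in attributions over raw input features.

```python
# Minimal sketch of feature-based attribution with SHAP (hypothetical data/model).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # hypothetical tabular inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical labels

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# SHAP assigns each raw input feature a contribution to a single prediction;
# the explanation lives entirely in feature space, with no domain knowledge.
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:5])
print(shap_values.values.shape)  # per-sample, per-feature attributions
```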
Domains
Computer Science [cs]