
How good is your explanation? Algorithmic stability measures to assess the quality of explanations for deep neural networks

Abstract: A plethora of methods have been proposed to explain how deep neural networks reach a decision, but comparatively little effort has been made to ensure that the explanations produced by these methods are objectively relevant. While desirable properties for a good explanation are easy to come by, objective measures have been harder to derive. Here, we propose two new measures to evaluate explanations, borrowed from the field of algorithmic stability: relative consistency (ReCo) and mean generalizability (MeGe). We conduct several experiments on multiple image datasets and network architectures to demonstrate the benefits of the proposed measures over representative methods. We show that popular fidelity measures are not sufficient to guarantee good explanations. Finally, we show empirically that 1-Lipschitz networks provide general and consistent explanations, regardless of the explanation method used, making them a relevant direction for explainability.
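The abstract only names the two measures without defining them. As a loose illustration of the underlying algorithmic-stability idea (comparing explanation maps produced by models trained on different data splits), here is a toy sketch; the function name, the cosine-similarity choice, and the aggregation are assumptions for illustration, not the paper's actual ReCo or MeGe definitions.

```python
import numpy as np

def explanation_consistency(explanations):
    """Toy stability-style score: mean pairwise cosine similarity
    between explanation maps from models trained on different data
    splits. Illustrative only; not the paper's ReCo/MeGe formulas."""
    # Flatten and L2-normalize each explanation map.
    flat = [e.ravel() / (np.linalg.norm(e) + 1e-12) for e in explanations]
    # Average cosine similarity over all distinct pairs.
    sims = [flat[i] @ flat[j]
            for i in range(len(flat))
            for j in range(i + 1, len(flat))]
    return float(np.mean(sims))

rng = np.random.default_rng(0)
same = [np.ones((4, 4))] * 3                      # identical maps
noisy = [rng.normal(size=(4, 4)) for _ in range(3)]  # unrelated maps

print(round(explanation_consistency(same), 3))    # → 1.0
print(explanation_consistency(noisy))             # low: maps disagree
```

A high score indicates that explanations agree across retrained models, which is the stability property the paper's measures formalize.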
Document type :
Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-02930949
Contributor: David Vigouroux
Submitted on: Thursday, July 1, 2021 - 4:01:24 PM
Last modified on: Wednesday, November 17, 2021 - 12:30:02 AM

Files

  • main.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-02930949, version 2
  • arXiv: 2009.04521

Citation

Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre. How good is your explanation? Algorithmic stability measures to assess the quality of explanations for deep neural networks. 2021. ⟨hal-02930949v2⟩


Metrics

  • Record views: 109
  • File downloads: 121