Coalitional Strategies for Efficient Individual Prediction Explanation

Abstract: As Machine Learning (ML) is now widely applied in many domains, in both research and industry, understanding what happens inside the black box is a growing demand, especially from non-experts of these models. Several approaches have thus been developed to provide clear insights into a model's prediction for a particular observation, but at the cost of long computation times or restrictive hypotheses that do not fully take interactions between attributes into account. This paper provides methods based on the detection of relevant groups of attributes, named coalitions, influencing a prediction, and compares them with the literature. Our results show that these coalitional methods are more efficient than existing ones such as SHapley Additive exPlanation (SHAP): computation time is shortened while an acceptable accuracy of individual prediction explanations is preserved. This enables wider practical use of explanation methods, increasing trust between developed ML models, end-users, and whoever is impacted by a decision in which these models played a role.
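The coalitional strategies above build on the Shapley value from cooperative game theory, which SHAP also approximates. As a minimal illustrative sketch (not the paper's method), the exact Shapley attribution of each attribute to a single prediction can be computed by averaging its marginal contribution over all coalitions of the other attributes, with absent attributes replaced by baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley attributions for one prediction.

    Attributes outside a coalition S are set to their baseline values;
    the payoff of S is the model's prediction on the mixed input.
    Exponential in the number of attributes: illustration only.
    """
    n = len(instance)

    def payoff(S):
        x = [instance[i] if i in S else baseline[i] for i in range(n)]
        return predict(x)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (payoff(S | {i}) - payoff(S))
    return phi

# Toy linear model: attributions equal coefficient * (value - baseline).
model = lambda x: 3 * x[0] + 2 * x[1] - x[2]
print(shapley_values(model, [1, 1, 1], [0, 0, 0]))  # [3.0, 2.0, -1.0]
```

The cost of enumerating all 2^n coalitions is exactly what motivates restricting attention to relevant coalitions of attributes, as the paper proposes.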

https://hal.archives-ouvertes.fr/hal-03259008
Contributor: Julien Aligon
Submitted on: Saturday, June 12, 2021 - 7:47:40 PM
Last modification on: Friday, June 18, 2021 - 3:47:45 AM
Long-term archiving on: Monday, September 13, 2021 - 6:11:14 PM

File

2104.00765.pdf
Files produced by the author(s)

Licence

Distributed under a Creative Commons Attribution 4.0 International License

Citation

Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, Chantal Soulé-Dupuy. Coalitional Strategies for Efficient Individual Prediction Explanation. Information Systems Frontiers, Springer Verlag, 2021, pp.1-31. ⟨10.1007/s10796-021-10141-9⟩. ⟨hal-03259008⟩
