# A Unified Approach to Interpreting Model Predictions
- **Authors**: Scott M. Lundberg, Su-In Lee
- **Year**: 2017
- **Summary**: This paper introduces SHAP (SHapley Additive exPlanations), a method for explaining individual predictions based on the game-theoretically optimal Shapley values. SHAP assigns each feature an importance value for a particular prediction, representing that feature's contribution to pushing the model's output from the base value to the final prediction. It also provides a unified framework that connects many other XAI methods. A short code sketch of this additive property follows the list below.
- **Link**: https://arxiv.org/abs/1705.07874
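
As a quick illustration of the local-accuracy (additivity) property described in the summary, the sketch below uses the `shap` package's `KernelExplainer`, which implements the model-agnostic Kernel SHAP method from the paper. The dataset, model, and background-sample size are arbitrary choices for the demo, not taken from the paper.

```python
# Minimal, illustrative sketch of SHAP's local-accuracy (additivity) property.
# The data and model below are arbitrary; any black-box predictor would do.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression problem and a black-box model.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Kernel SHAP needs a background dataset to define the base value E[f(x)];
# a small sample of the training data is a common choice.
explainer = shap.KernelExplainer(model.predict, X[:50])

# Shapley-value attributions for one prediction: one value per feature.
x = X[0:1]
phi = explainer.shap_values(x)[0]  # shape: (n_features,)

# Local accuracy: base value + sum of attributions == model output for x.
print(explainer.expected_value + phi.sum())  # ≈ model.predict(x)[0]
print(model.predict(x)[0])
```

For tree ensembles, `shap.TreeExplainer` computes the same attributions exactly and far faster, though it is based on the authors' later Tree SHAP work rather than this paper.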