- Moved 12 initial-setup documents from July–August to _archive/troubleshooting/2025_07-08_initial_setup/
- Moved book/300_architecture/390_human_in_the_loop_intent_learning.md to journey/research/intent_classification/ (development-journey document)
- Removed empty folders (journey/assets/*)
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
- Authors: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
- Year: 2016
- Summary: This paper introduces LIME (Local Interpretable Model-agnostic Explanations). The key idea is to explain the prediction of any complex, black-box classifier by learning a simpler, interpretable model (e.g., a linear model) in the local neighborhood of the prediction. It provides intuitive, human-understandable explanations for individual predictions, answering the question of why a specific decision was made.
- Link: https://arxiv.org/abs/1602.04938
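
The core LIME recipe — sample perturbations around the instance, query the black box, weight samples by proximity, and fit a simple linear surrogate — can be sketched in one dimension. This is an illustrative toy, not the paper's implementation (real LIME works on interpretable representations and uses sparse weighted regression via the `lime` package); the sigmoid black box, kernel width, and sample count here are all made-up assumptions:

```python
import math
import random

random.seed(0)

# Assumed black-box model: we can only query it, not inspect it.
def black_box(x):
    return 1.0 / (1.0 + math.exp(-(x - 2.0)))

x0 = 2.0      # the instance whose prediction we want to explain
sigma = 0.5   # illustrative kernel width: how "local" the neighborhood is

# 1. Sample perturbations around x0 and query the black box.
xs = [x0 + random.gauss(0.0, 1.0) for _ in range(500)]
ys = [black_box(x) for x in xs]

# 2. Weight each sample by its proximity to x0 (exponential kernel).
ws = [math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) for x in xs]

# 3. Fit an interpretable surrogate y ~ a + b*x by weighted least squares.
W = sum(ws)
mx = sum(w * x for w, x in zip(ws, xs)) / W
my = sum(w * y for w, y in zip(ws, ys)) / W
b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
     / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
a = my - b * mx

# The slope b is the local explanation: how sensitive the prediction is
# to this feature near x0 (the sigmoid's true derivative at x=2 is 0.25).
print(a, b)
```

The fitted slope approximates the model's local behavior even though the surrogate is valid only in the neighborhood defined by the kernel; shrinking `sigma` makes the explanation more local and the slope closer to the true derivative.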