"Why Should I Trust You?": Explaining the Predictions of Any Classifier

  • Authors: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
  • Year: 2016
  • Summary: This paper introduces LIME (Local Interpretable Model-agnostic Explanations). The key idea is to explain the prediction of any complex, black-box classifier by learning a simpler, interpretable model (e.g., a linear model) in the local neighborhood of the prediction. It provides intuitive, human-understandable explanations for individual predictions, answering the question of why a specific decision was made.
  • Link: https://arxiv.org/abs/1602.04938
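The local-surrogate idea described in the summary can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' reference implementation: the function name `lime_explain`, the Gaussian perturbation scale, and the exponential proximity kernel are all assumptions chosen for simplicity (the paper samples interpretable binary perturbations and fits a sparse model).

```python
import numpy as np

def lime_explain(predict_fn, x, num_samples=500, kernel_width=0.75, seed=0):
    """Approximate a black-box predict_fn near instance x with a
    proximity-weighted linear model (the core LIME idea, simplified)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample the local neighborhood of x with Gaussian perturbations.
    Z = x + rng.normal(scale=0.5, size=(num_samples, d))
    y = predict_fn(Z)
    # Proximity weights: closer perturbations count more (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # Weighted least squares: scale rows by sqrt(w), then solve.
    A = np.hstack([Z, np.ones((num_samples, 1))])  # append intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1], coef[-1]  # per-feature weights, intercept

# Usage: explain a nonlinear model's behavior around one point.
black_box = lambda X: X[:, 0] ** 2 + 3 * X[:, 1]
weights, intercept = lime_explain(black_box, np.array([1.0, 2.0]))
```

Near x = (1, 2) the surrogate's coefficients approximate the local gradient of the black box (about 2 for the quadratic feature, 3 for the linear one), which is exactly the kind of per-feature explanation LIME presents to a user.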