# Explanation in Artificial Intelligence: Insights from the Social Sciences

- **Author:** Tim Miller
- **Year:** 2019
- **Summary:** This influential paper argues that to be effective, XAI must be grounded in how humans actually explain things to one another. Drawing on philosophy, psychology, and cognitive science, Miller shows that good explanations are contrastive (explaining why P happened instead of Q), selective (citing a few key causes rather than an exhaustive list), and social (part of a conversation between explainer and explainee). These findings provide crucial principles for designing user-centric explanation interfaces.
- **Link:** https://arxiv.org/abs/1706.07269
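
The contrastive and selective properties can be made concrete with a small sketch. The function, data, and feature names below are hypothetical illustrations (not from Miller's paper): it phrases an explanation as "fact rather than foil" and keeps only the top-k contributing causes instead of an exhaustive list.

```python
from dataclasses import dataclass

@dataclass
class Cause:
    feature: str
    weight: float  # hypothetical contribution toward the actual outcome

def contrastive_explanation(fact: str, foil: str,
                            causes: list[Cause], k: int = 2) -> str:
    """Build a contrastive, selective explanation: why `fact`
    happened rather than `foil`, citing only the top-k causes."""
    top = sorted(causes, key=lambda c: c.weight, reverse=True)[:k]
    reasons = " and ".join(c.feature for c in top)
    return (f"The model predicted {fact!r} rather than {foil!r} "
            f"mainly because of {reasons}.")

# Illustrative (invented) loan-decision example: the low-weight
# "zip code" cause is dropped by the selectivity cutoff.
print(contrastive_explanation(
    "loan denied", "loan approved",
    [Cause("high debt-to-income ratio", 0.62),
     Cause("short credit history", 0.31),
     Cause("zip code", 0.04)],
))
```

The "social" property is not captured here; in a real interface the foil would come from the user's question (e.g. "why not approved?") rather than being hard-coded, making the explanation one turn in a conversation.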