# "Why Should I Trust You?": Explaining the Predictions of Any Classifier

- **Authors**: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
- **Year**: 2016
- **Summary**: This paper introduces LIME (Local Interpretable Model-agnostic Explanations). The key idea is to explain the prediction of any complex, black-box classifier by learning a simpler, interpretable model (e.g., a sparse linear model) in the local neighborhood of the instance being predicted. It provides intuitive, human-understandable explanations for individual predictions, answering the question of why a specific decision was made.
- **Link**: https://arxiv.org/abs/1602.04938
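
The local-surrogate idea in the summary can be sketched in a few lines: perturb the instance, query the black box, weight the samples by proximity, and fit a weighted linear model whose coefficients act as the explanation. This is a minimal numpy sketch for tabular data, not the authors' reference implementation; the Gaussian perturbations, kernel width, and plain (non-sparse) least-squares fit here are simplifying assumptions (the paper uses modality-specific sampling and a sparsity constraint such as Lasso/K-LASSO).

```python
import numpy as np

def lime_explain(black_box, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Toy LIME-style local surrogate (illustrative sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations in the neighborhood of the instance x.
    Z = x + rng.normal(scale=0.5, size=(num_samples, x.size))
    # 2. Query the black-box model at each perturbed sample.
    y = black_box(Z)
    # 3. Weight samples by an exponential kernel on distance to x,
    #    so nearby samples dominate the fit (local fidelity).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted least-squares linear model (closed form via lstsq);
    #    its per-feature coefficients are the local explanation.
    A = np.hstack([Z, np.ones((num_samples, 1))])  # append intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # drop the intercept

# Toy black box: nonlinear overall, but near the origin feature 0 dominates
# (tanh(3*z0) is locally steep) while feature 1 only enters quadratically.
black_box = lambda Z: np.tanh(3 * Z[:, 0]) + 0.1 * Z[:, 1] ** 2
weights = lime_explain(black_box, np.zeros(2))
```

Running this, the surrogate assigns a large positive weight to feature 0 and a near-zero weight to feature 1, matching the local behavior of the black box around the origin even though the model is globally nonlinear.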