LIME (Local Interpretable Model-agnostic Explanations) is an explainability method for inspecting machine learning models: it explains an individual prediction by perturbing the input and fitting an interpretable surrogate (such as a sparse linear model) to the black-box model's outputs in the neighborhood of that instance. Include a tag for the language (python, r, etc.) corresponding to the LIME implementation you are using.
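
As a brief illustration, here is a minimal sketch using the Python `lime` package; the dataset, model, and parameter choices are illustrative assumptions, not part of the tag description:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a black-box model to explain (illustrative choice)
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build a LIME explainer over the training data so it can learn
# realistic perturbation statistics for each feature
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: LIME perturbs the instance, queries the
# model, and fits a local linear model to the predicted probabilities
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions for this instance
```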