Data Science & ML · Intermediate
Explainable AI & Model Interpretability
As AI moves into finance, healthcare, and other high-stakes domains, the ability to explain model decisions is becoming a regulatory and commercial requirement. This course covers the leading interpretability techniques used in practice: SHAP values, LIME, and feature importance analysis, applied to real models.
Tools & Technologies
Python · SHAP · LIME · scikit-learn · matplotlib
Course Curriculum
1. Why Interpretability Matters
- Regulatory drivers: EU AI Act, FCA, GDPR
- Model trust, auditability, and debugging
- Global vs local explanations — what each answers
2. Global Interpretability
- Feature importance and permutation importance (sketch after this list)
- Partial dependence plots
- Accumulated Local Effects (ALE)
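Permutation importance and partial dependence both live in scikit-learn's inspection module; ALE does not, and is usually handled by a separate package such as alibi or PyALE. A minimal sketch of the first two (the dataset, model, and settings below are illustrative, not course materials):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative synthetic data; any tabular dataset works the same way.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Impurity-based feature importance, built into tree ensembles.
print("Impurity importances:", model.feature_importances_)

# Permutation importance: the drop in score when one feature is shuffled,
# measured on held-out data so it reflects generalisation, not training fit.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", perm.importances_mean)

# Partial dependence: average predicted response as one feature varies.
PartialDependenceDisplay.from_estimator(model, X_test, features=[0, 1])
plt.show()
```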
3. Local Explanations
- SHAP values — TreeSHAP and KernelSHAP
- LIME — explaining individual predictions
- Comparing SHAP and LIME in practice (see the sketch below)
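A compact sketch contrasting the two approaches on one model. TreeSHAP is used because the model is a tree ensemble (KernelSHAP is the slower, model-agnostic fallback); the data, model, and feature names are illustrative:

```python
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative synthetic data and model.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP: TreeSHAP computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (n_samples, n_features)
print("SHAP values, first row:", shap_values[0])
# Local additivity check: base value + contributions = model prediction.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X[:1])[0])

# LIME: fit a local linear surrogate around one instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
exp = lime_explainer.explain_instance(X[0], model.predict, num_features=5)
print(exp.as_list())  # (feature condition, weight) pairs for this prediction
```

The additivity line is the practical difference worth noticing: SHAP contributions sum exactly to the model's output, while LIME's weights come from a local surrogate model and carry no such guarantee.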
4. Communicating & Documenting
- Model cards — what to include (template sketch below)
- Presenting explanations to non-technical stakeholders
- Building interpretability into ML workflows
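As a concrete starting point for the model-card topic, a minimal skeleton in the spirit of Mitchell et al.'s "Model Cards for Model Reporting". Every field name and value here is a placeholder, not a prescribed schema:

```python
# Minimal model card skeleton; all names and values below are placeholders.
model_card = {
    "model_details": "credit_risk_rf v1.2, random forest classifier",
    "intended_use": "Pre-screening of loan applications; not for automated denial.",
    "training_data": "Description of the training set, date range, and known gaps.",
    "metrics": "Headline metrics with the evaluation data they were measured on.",
    "explainability": "Global: permutation importance; local: TreeSHAP per decision.",
    "limitations": "Known failure modes, under-represented groups, drift monitoring.",
}

# Render as markdown for a repo README or model registry entry.
for section, body in model_card.items():
    print(f"## {section.replace('_', ' ').title()}\n{body}\n")
```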
What's Included
Live instructor-led session
Small cohort
Course materials pack (slides, code, datasets)
Certificate of completion
14-day email support
Early bird: £895 per person (save £440 on the standard price of £1,335)
7 hours (1 day)
Next: Tue, 12 May 2026
8 of 12 seats remaining
Intermediate level