Understand the key aspects and challenges of machine learning interpretability, learn how to overcome them with interpretation methods, and leverage them to build fairer, safer, and more reliable models
Do you want to understand your models and mitigate the risks associated with poor predictions using practical machine learning (ML) interpretation? Interpretable Machine Learning with Python can help you work through these challenges, using interpretation methods to build fairer and safer ML models.
The first section of the book is a beginner’s guide to interpretability and starts by recognizing its relevance in business and exploring its key aspects and challenges. You’ll focus on how white-box models work, compare them to black-box and glass-box models, and examine the trade-offs. The second section will get you up to speed with interpretation methods and how to apply them to different use cases. In addition to the step-by-step code, there’s a strong focus on interpreting model outcomes in the context of each chapter’s example. In the third section, you’ll focus on tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. The methods you’ll explore here range from state-of-the-art feature selection and dataset debiasing methods to monotonic constraints and adversarial retraining.
By the end of this book, you'll understand ML models better and be able to enhance them through interpretability tuning.
This book is for data scientists, machine learning developers, and data stewards who have an increasingly critical responsibility to explain how the AI systems they develop work, what impact they have on decision making, and how they identify and manage bias. Working knowledge of machine learning and the Python programming language is expected.