Our society increasingly relies on intelligent machines. Algorithms decide which e-mails reach our inboxes, whether we’re approved for credit, and whom we get the opportunity to date.
An interpretable algorithm is one whose decisions you can explain. Such a model is easier to trust to be safe, accurate and useful, and an accurate model that is also interpretable can offer insights for changing real-world outcomes for the better.
But the most powerful approaches to machine intelligence, including random forests, gradient boosting and neural networks, can be uninterpretable black boxes.
In this webinar we’ll learn about recent research, new tools and commercial applications of machine learning interpretability.
This webinar is for a general audience and covers the technical content at a conceptual level.