Machine Learning and Interpretability

Our society increasingly relies on intelligent machines. Algorithms decide which e-mails reach our inboxes, whether we’re approved for credit, and whom we get the opportunity to date.

An interpretable algorithm is one whose decisions can be explained. Such a model is easier to trust to be safe, accurate, and useful. And an accurate model that is also interpretable can yield insights that help change real-world outcomes for the better.

But the most powerful approaches to machine intelligence, including random forests, gradient boosting and neural networks, can be uninterpretable black boxes.
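One way to peek inside such a black box is with model-agnostic probes. As an illustrative sketch (not taken from the webinar), the example below trains a gradient boosting classifier on synthetic data and uses scikit-learn's permutation importance to estimate how much each feature drives the model's predictions; the dataset and parameters are arbitrary choices for demonstration.

```python
# Sketch: probing a black-box model with permutation importance, which
# measures how much shuffling one feature degrades test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: 5 features, only 2 of which are informative.
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the mean drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Probes like this explain *what* a model attends to without opening up its internals, which is part of what makes interpretability research applicable to otherwise opaque methods.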

In this webinar we’ll learn about recent research, new tools and commercial applications of machine learning interpretability.

This webinar is for a general audience and covers the technical content at a conceptual level.

One comment

  1. This1That0 says:

    I wanted to leave a comment on the summary graph shown around the 16:00 mark of the presentation. The paper the graph comes from, “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission”, notes a peculiar phenomenon among patients with asthma. It actually makes sense that these patients, who suffer from a chronic disease (assuming that is what the study measured), would appear to have a different risk of death: they are already on medications that can alleviate some of the symptoms of pneumonia. These patients are likely to be taking inhaled corticosteroids. “Inhaled corticosteroids are the preferred medicine for long-term control of asthma. They’re the most effective option for long-term relief of the inflammation and swelling that makes your airways sensitive to certain inhaled substances.” The study should therefore be expanded to include the medications patients are on, since those medications have physiological effects that can influence death rates.

Comments are closed.