AI Black Box Horror Stories – When Transparency Was Needed More Than Ever
Arguably, one of the biggest debates in data science in 2019 is the need for AI explainability. The ability to interpret machine learning models is turning out to be a defining factor in the acceptance of statistical models for driving business decisions. Enterprise stakeholders are demanding transparency in... Read more
Interpretability and the Rise of Shapley Values
Interpretability is a hot topic in data science this year. Earlier this spring, I presented at ODSC East on the need for data scientists to use best practices like permutation-based importance, partial dependence, and explanations. When I first put together this talk, a lot of it was fairly new... Read more
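As a quick illustration of the techniques that talk covers, here is a minimal sketch of permutation-based importance and Shapley values, assuming scikit-learn and the shap library with a bundled example dataset (the model and dataset choices are arbitrary, and none of this code is from the talk itself):

```python
# Minimal sketch: permutation importance and Shapley values on a toy model.
# Assumes scikit-learn and the shap package; dataset choice is arbitrary.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Permutation-based importance: shuffle one feature at a time and
# measure how much the model's score degrades.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, perm.importances_mean):
    print(f"{name}: {score:.4f}")

# Shapley values: attribute each individual prediction to
# per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```

Permutation importance is model-agnostic, while shap's TreeExplainer exploits the tree structure to compute exact Shapley values efficiently; for non-tree models, a model-agnostic explainer such as shap.KernelExplainer would be used instead.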
Cracking the Box: Interpreting Black Box Machine Learning Models
Intro: To kick off this article, I’d like to explain the interpretability of a machine learning (ML) model. According to Merriam-Webster, interpretability describes the process of making something plain or understandable. In the context of ML, interpretability provides us with an understandable explanation of how a model behaves. Basically,... Read more
Not Always a Black Box: Machine Learning Approaches For Model Explainability
Editor’s Note: Violeta is speaking at ODSC Europe 2019; see her talk “Not Always a Black Box: Explainability Applications for a Real Estate Problem.” What is model explainability? Imagine that you have built a very precise machine learning model by using clever tricks and non-standard features. You are beyond... Read more
Innovators and Regulators Collaborate on Book Tackling AI’s Black Box Problem
AI’s Biggest Compliance Hurdle: If you’re in data science, machine learning, or AI, you’ve probably heard of the “black box” problem. In short, it refers to the regulatory and implementation barriers caused by the unexplainability of sophisticated AI. Why are sophisticated AI systems so hard to explain? Because, for the... Read more