

Training and Operationalizing Interpretable Machine Learning Models
Tags: Machine Learning, Modeling, Azure, East 2020, Microsoft
Posted by ODSC Community on February 21, 2020

AI offers companies a unique opportunity to transform their operations: from AI applications that can predict and schedule equipment maintenance, to intelligent R&D applications that can estimate the likelihood of success of future drugs. To seize this opportunity, however, companies have to learn how to successfully build, train, test, and push hundreds of machine learning models into production, in ways that are robust, explainable, and repeatable. During her upcoming session "Training and Operationalizing Interpretable Machine Learning Models" at ODSC East, Francesca will introduce common challenges of machine learning model deployment and discuss the following points to help you tackle them:
- How to select the right tools to succeed with model deployment.
- How to use automated machine learning to optimize your machine learning deployment flow.
- How model interpretability toolkits can be used to build machine learning pipelines that are robust, explainable, and repeatable.
Source: www.aka.ms/AzureMLservice
1. How to select the right tools to succeed with model deployment
Building, training, testing, and finally deploying machine learning models is often a tedious and slow process for companies looking to transform their operations with AI. In the first part of this session, you will learn guidelines for selecting the right tools to succeed with model deployment. Francesca will illustrate this workflow using Azure Machine Learning, but it can also be applied with any machine learning product of your choice.
The model deployment workflow should be based on the following three simple steps:
- Register the model – A registered model is a logical container for one or more files that make up your model. For example, if your model is stored across multiple files, you can register them as a single model in the workspace. After registration, you can download or deploy the registered model and receive all of the files that were registered.
- Prepare to deploy (specify assets, usage, compute target) – To deploy a model as a web service, you must create an inference configuration and a deployment configuration. Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data.
- Deploy the model to the compute target – The deployment configuration is specific to the compute target that will host the web service; for example, when deploying locally you must specify the port where the service accepts requests. With the inference and deployment configurations in place, you deploy the registered model to the chosen target (see the code sketch after this list).
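Concretely, with the Azure Machine Learning SDK for Python (v1), the three steps above map onto a handful of calls. The sketch below is a minimal illustration, assuming an existing workspace with a local `config.json`, a serialized model file, a `score.py` entry script, and a `conda.yml` environment spec (all placeholder names):

```python
# Minimal sketch of register -> prepare -> deploy with the Azure ML SDK (v1).
# File names, model name, and service name below are placeholders.
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # loads config.json for an existing workspace

# 1. Register the model: a logical container for one or more files.
model = Model.register(workspace=ws,
                       model_path="outputs/model.pkl",  # local file(s) to upload
                       model_name="demand-forecast")

# 2. Prepare to deploy: the inference configuration points at the scoring
#    script and the environment needed to run it.
env = Environment.from_conda_specification(name="inference-env",
                                           file_path="conda.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# The deployment configuration is specific to the compute target; here,
# Azure Container Instances with modest resources.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# 3. Deploy the registered model to the compute target as a web service.
service = Model.deploy(ws, "my-service", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

Once `wait_for_deployment` completes, the service exposes a REST endpoint at `scoring_uri` that you can call with production data for scoring.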
2. How to use automated machine learning to optimize your machine learning deployment flow
Automated machine learning, also referred to as automated ML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity, all while sustaining model quality. Automated ML is based on a breakthrough from the Microsoft Research division.
In this session, you will learn how to apply automated ML to train and tune a model for you, using the target metric you specify. The service then iterates through combinations of ML algorithms and feature selections, where each iteration produces a model with a training score. The higher the score, the better the model is considered to fit your data.
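As a rough illustration, an automated ML run is typically configured through an `AutoMLConfig` object and submitted as an experiment in the Azure ML SDK for Python (v1). The sketch below is a minimal, hypothetical setup; the registered dataset name, experiment name, and label column are placeholders:

```python
# Minimal sketch of an automated ML run with the Azure ML SDK (v1).
# Dataset name, experiment name, and label column are placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="my-training-data")  # assumed registered dataset

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",   # the target metric the service optimizes
    training_data=training_data,
    label_column_name="label",
    n_cross_validations=5,
    iterations=20)                   # algorithm/featurization combinations to try

experiment = Experiment(ws, "automl-demo")
run = experiment.submit(automl_config, show_output=True)

# Retrieve the iteration that scored best on the primary metric.
best_run, fitted_model = run.get_output()
```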
Data scientists, analysts, and developers across industries can use automated ML to:
- Implement machine learning solutions without extensive programming knowledge
- Save time and resources
- Leverage data science best practices
- Provide agile problem-solving
3. How model interpretability toolkits can be used for model training and deployment
Interpretability is critical for data scientists and business decision-makers alike to ensure compliance with company policies, industry standards, and government regulations:
- Data scientists need the ability to explain their models to executives and stakeholders, so they can understand the value and accuracy of their findings
- Business decision-makers need the peace of mind that comes from being able to provide transparency to end users, in order to gain and maintain their trust
In this session, you will learn how model interpretability concepts are implemented in the Azure Machine Learning SDK for Python. Using the classes and methods in the SDK, you will learn how to get the following (a short code sketch follows the list):
- Feature importance values for both raw and engineered features
- Interpretability on real-world datasets at scale, during training and inference
- Interactive visualizations to aid you in the discovery of patterns in data and explanations at training time
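As a rough sketch of how this looks in code, the example below trains a scikit-learn model and explains it with the `TabularExplainer` from the Azure interpretability packages (azureml-interpret / interpret-community); the model, data, and import path are illustrative and may vary with the installed package version:

```python
# Minimal sketch of global and local explanations with TabularExplainer.
# Model and data are illustrative; the import path assumes azureml-interpret.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.ext.blackbox import TabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# TabularExplainer selects an appropriate SHAP-based explainer for the model.
explainer = TabularExplainer(model, X_train,
                             features=list(data.feature_names),
                             classes=list(data.target_names))

# Global explanation: overall feature importance on evaluation data.
global_explanation = explainer.explain_global(X_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: per-feature contributions for a single prediction.
local_explanation = explainer.explain_local(X_test[:1])
print(local_explanation.local_importance_values)
```

The same explanation objects can be uploaded to an Azure ML run and explored in interactive visualizations at training time.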
Conclusion
If you want to learn more about automated machine learning, model interpretability, and model deployment, and how to leverage these capabilities on Azure to build and push hundreds of machine learning models into production, join Francesca at ODSC East!
About the speaker/author:
Francesca Lazzeri, PhD, is an experienced scientist and machine learning practitioner with over 10 years of academic and industry experience. She is the author of a number of publications, including technology journal articles, conference papers, and books. She currently leads an international team of cloud advocates, developers, and data scientists at Microsoft. Before joining Microsoft, she was a research fellow at Harvard University in the Technology and Operations Management Unit.