Developing Credit Scoring Models for Banking and Beyond

Editor’s note: Aric LaBarr is a speaker for ODSC East this April 23-25. Be sure to check out his talk, “Developing Credit Scoring Models for Banking and Beyond,” there!

The interpretability of machine learning models is paramount in many industries. From creditworthiness to insurance claims, and from anti-money laundering to hospital readmission, getting end users to understand the output of analytical models can be a challenge. Why is interpretability so important? Interpretability promotes understanding, and understanding, in turn, promotes responsible use. Let’s look at how we can achieve this through a couple of examples.


First, imagine you work for a large insurance company and head up the new analytical division responsible for aiding in workers’ compensation claims. The investigation team at the insurance company is made up of current or former law enforcement officers. They have years of experience in investigations and have succeeded in their careers without data science or AI. It is your task to lead the modeling efforts that will aid these investigators. You know that walking in and talking about random forests, gradient boosting, area under the ROC curve, and other data science concepts is a nonstarter. Even talking about the nuances of how interpretable a logistic regression model is compared to other machine learning models probably won’t convince them. They are experts at what they do, but not necessarily experts at data science. Remembering that data science is meant to be a tool to aid people’s work, not a replacement for it, leads you to build a scorecard like the one below:

Now you walk into your investigation team with some interpretable and meaningful results. You tell them Client A has a score of 585, and that high scores like this are a sign of fraud. They ask why you think this client is worth an investigation, and you mention that the client has a rather high coverage limit for their income (a ratio above 7). The client also got paid out in under 10 days, which is quite fast for someone with a large coverage limit, especially with information from a doctor’s report still pending. They thank you very much and start their investigation! You didn’t mention that the scorecard uses decision trees to strategically bin the variables, that you calculated weight of evidence values for each of those bins, and that you fed all of that into a high-performing logistic regression model. You didn’t need to, because you built something interpretable and usable for them.
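The weight of evidence step mentioned above can be sketched in a few lines. This is a minimal illustration, not the pipeline from the talk: the claims data, column names, and bin labels are all made up, and in practice the bins would come from a decision tree rather than a hand-picked category.

```python
import numpy as np
import pandas as pd

def weight_of_evidence(df: pd.DataFrame, feature: str, target: str) -> pd.Series:
    """Weight of evidence per bin: ln(% of goods / % of bads) in that bin.

    Assumes `target` is 1 for the event (fraud, default, etc.) and 0 otherwise.
    """
    counts = df.groupby(feature)[target].agg(total="count", bad="sum")
    counts["good"] = counts["total"] - counts["bad"]
    dist_good = counts["good"] / counts["good"].sum()
    dist_bad = counts["bad"] / counts["bad"].sum()
    return np.log(dist_good / dist_bad)

# Toy claims data: the "fast" payout bin holds most of the fraud events
claims = pd.DataFrame({
    "payout_speed": ["fast"] * 100 + ["slow"] * 100,
    "fraud":        [1] * 80 + [0] * 20 + [1] * 20 + [0] * 80,
})
woe = weight_of_evidence(claims, "payout_speed", "fraud")
# "fast" bin holds 20% of goods and 80% of bads -> WoE = ln(0.2/0.8) ≈ -1.386
```

The resulting WoE values replace the raw bins as inputs to the logistic regression, which is what keeps the final model both well-performing and explainable bin by bin.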

Imagine you now work for a large healthcare organization. You are put in charge of developing a modeling approach to help address hospital readmission. You have a team of doctors and nurses who will use your model’s output to identify at-risk patients before they are released from the hospital. Again, you know that walking in and talking about random forests, gradient boosting, area under the ROC curve, and other data science concepts is a nonstarter. Even talking about the nuances of how interpretable a logistic regression model is compared to other machine learning models probably won’t convince them to use what you have created. As before, they are experts at what they do, but probably not experts in data science. Remembering your experience at the insurance company, you build a scorecard like the one below:

Now you walk into your team of doctors and nurses with some interpretable and meaningful results. You tell them Patient A has a score of 590, and that high scores like this signal risk of readmission to the hospital. They ask why you think this patient is at risk, and you mention that the patient stayed at the hospital for an extended time (longer than 5 days). The patient was also sent to the ICU while at the hospital and had over 15 different ICD codes assigned to their condition. They thank you very much, recheck with the patient to make sure they are ready for discharge, and follow up with them frequently afterward! You didn’t mention that the scorecard uses decision trees to strategically bin the variables, that you calculated weight of evidence values for each of those bins, and that you fed all of that into a high-performing logistic regression model. You didn’t need to, because you built something interpretable and usable for them.
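Scores like 585 and 590 come from linearly rescaling the logistic regression’s log-odds onto a points scale. A common scheme fixes an anchor score and a “points to double the odds” (PDO) value. The constants below (an anchor of 500 at even odds, a PDO of 50) are illustrative assumptions, not the article’s calibration; here, higher log-odds of the event (fraud, readmission) means a higher score, matching the scorecards described above.

```python
import math

def log_odds_to_score(log_odds_event: float,
                      anchor_score: float = 500.0,
                      pdo: float = 50.0) -> int:
    """Rescale a logistic model's log-odds onto a scorecard scale.

    Anchored so that even odds (log-odds of 0) score `anchor_score`,
    and every `pdo` points doubles the odds of the event. Both
    constants are illustrative choices, not from the article.
    """
    factor = pdo / math.log(2)  # points per unit of log-odds
    return round(anchor_score + factor * log_odds_event)

# Doubling the odds of the event adds exactly `pdo` points:
even = log_odds_to_score(0.0)                # 500
doubled = log_odds_to_score(math.log(2))     # 550
quadrupled = log_odds_to_score(math.log(4))  # 600
```

The points printed in a scorecard table per attribute come from pushing each bin’s WoE-times-coefficient contribution through the same scaling, so adding up a client’s rows reproduces their total score.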


You may think it is strange that a blog post about credit scoring at a bank hasn’t mentioned banking once so far! That is because the scorecard models banks typically use to make probability-of-default decisions can be applied in any field that needs interpretable modeling. The training I will provide at ODSC East will walk you through how to build successful credit scoring models in both R and Python. It will also teach you to layer the interpretable scorecard on top of these models for ease of implementation, interpretation, and decision-making. After this training, you too will be able to build more complete models that are ready to be deployed and used for better decisions by executives. Come to ODSC East and enjoy learning how to build these kinds of models yourself!

Article by Aric LaBarr, PhD, Associate Professor of Analytics at the Institute for Advanced Analytics at NC State University
