MIT’s New Technique Empowers Machine Learning Models to Address Confidence in Predictions
AI and Data Science News | MIT | posted by ODSC Team, February 23, 2023
Machine learning models are becoming increasingly popular for tasks such as image classification, natural language processing, and even medical diagnosis. These models work by analyzing large amounts of data and learning patterns in that data to make predictions on new, unseen data. However, one challenge with machine learning models is that they are not always accurate in their predictions, and it’s not always clear how confident the model is in its predictions.
It’s an issue that becomes increasingly important as more devices, from automated machinery to automobiles, depend on correct predictions. To address it, researchers at MIT and the MIT-IBM Watson AI Lab have developed a new technique that enables a machine learning model to quantify its confidence in its predictions. The technique performs more effective uncertainty quantification while remaining flexible and using fewer computing resources than other methods.
In uncertainty quantification, a machine learning model assigns a numerical score to each output to indicate how confident it is that the prediction is accurate. However, implementing uncertainty quantification techniques is often both resource- and time-intensive, because it requires building a new model from scratch or retraining an existing one, which can be impractical given the sheer amount of data and costly computation required.
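To make the idea of a per-prediction confidence score concrete, here is a minimal sketch, not the MIT method, that uses a common baseline: the softmax probability of the predicted class as the score. The logits are made-up values for illustration.

```python
import numpy as np

def softmax(logits):
    """Convert raw model outputs (logits) into a probability distribution."""
    z = logits - np.max(logits)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_with_confidence(logits):
    """Return the predicted class and a confidence score in (0, 1)."""
    probs = softmax(np.asarray(logits, dtype=float))
    return int(np.argmax(probs)), float(np.max(probs))

# Hypothetical logits from a 3-class model
label, conf = predict_with_confidence([2.0, 0.5, 0.1])
```

Note that raw softmax scores are often poorly calibrated, which is exactly the gap that dedicated uncertainty quantification techniques like the one described here aim to close.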
What’s more, traditional approaches can sometimes degrade the quality of the model’s predictions, which is why this new technique is exciting. As Maohao Shen, an electrical engineering and computer science graduate student and lead author of a paper on the technique, put it: “Uncertainty quantification is essential for both developers and users of machine-learning models. Developers can utilize uncertainty measurements to help develop more robust models, while for users, it can add another layer of trust and reliability when deploying models in the real world. Our work leads to a more flexible and practical solution for uncertainty quantification.”
So how did they do it? The team developed a smaller model, called the metamodel, that attaches to the larger pre-trained model and uses the latter’s learned features to make confidence judgments. In building the metamodel that generates the uncertainty quantification output, the researchers incorporated both model uncertainty and data uncertainty. Data corruption and mislabeling are the primary sources of data uncertainty, which can only be addressed by collecting new data or correcting the existing data.
On the other hand, model uncertainty arises when the model is uncertain about how to interpret newly observed data, leading to inaccurate predictions, typically due to a lack of sufficient training examples that are similar to the new data. This challenge is particularly daunting when models are deployed and exposed to real-world data that differs from the training set. As with most models, there is still a need for a user to confirm the uncertainty quantification score’s accuracy.
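The metamodel idea described above can be sketched as follows. This is an illustrative toy, not the authors’ implementation: the “pre-trained model” is a fixed random feature extractor plus a classifier head, and the metamodel is a small logistic head that reads the frozen features and emits a confidence score. All weights here are random stand-ins; in practice the metamodel would be trained (e.g., to predict whether the base model’s prediction is correct) while the base model stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained model: a fixed feature extractor
# and a linear classifier head (hypothetical, for illustration only).
W_feat = rng.normal(size=(8, 16))   # input dim 8 -> feature dim 16
W_clf = rng.normal(size=(16, 3))    # 3-class classifier head

def base_features(x):
    """Learned features of the frozen base model (never retrained)."""
    return np.tanh(x @ W_feat)

def base_predict(x):
    """Class predictions from the frozen base model."""
    return np.argmax(base_features(x) @ W_clf, axis=-1)

# The metamodel: a small logistic head attached to the base model's
# features; its output serves as the uncertainty quantification score.
W_meta = rng.normal(scale=0.1, size=(16,))

def metamodel_confidence(x):
    """Confidence score in (0, 1) computed from the frozen features."""
    return 1.0 / (1.0 + np.exp(-(base_features(x) @ W_meta)))

x = rng.normal(size=(4, 8))         # a batch of 4 hypothetical inputs
labels = base_predict(x)            # predictions from the frozen model
scores = metamodel_confidence(x)    # one confidence score per input
```

Because the metamodel only reads features the base model already computes, it avoids retraining the large model from scratch, which is the source of the cost savings the article describes.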
For predictive modeling and machine learning more broadly, this technique can improve the reliability and trustworthiness of models, making them better suited to safety-critical applications, a need that will only grow as AI continues to scale across industries.
If you’re interested in the world of machine learning or just want to upskill, then check out ODSC East’s Machine Learning Tracks, featuring expert-led discussions, hands-on training, and networking opportunities with those shaping the world of data science.