With the rapid rise of machine learning across industries, questions of safety and security have followed. As the technology becomes further integrated into our lives, what is being done to protect individuals, families, companies, and nations from bad actors? Well, according to our research, there is quite a lot going on in the world of machine learning safety and security. Here are a few areas that are raising eyebrows.
Explainable AI:
As the name suggests, the purpose of explainable AI is to explain, clearly and transparently, why a machine learning model came to a specific decision. In short, we want to make sure that whatever a model outputs is both trusted and makes sense. There are a number of methods to accomplish this. First are two techniques that tackle the question of why predictions are made.
For starters, both LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) explain individual predictions by providing feature importance values: LIME fits a simple interpretable model that approximates the black-box model locally, around the prediction being explained, while SHAP distributes the prediction across features using Shapley values.
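The LIME idea can be sketched in a few lines: perturb the instance, query the black box, and fit a proximity-weighted linear surrogate whose coefficients act as local importances. This is a minimal illustration, not the `lime` library's API; the black-box function, kernel width, and sample count below are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: a nonlinear model we want to explain locally.
def black_box(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

x0 = np.array([0.1, 0.4])  # the instance whose prediction we explain

# 1. Sample the neighbourhood of x0.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x0 (RBF kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.1 ** 2))

# 3. Fit a weighted linear surrogate; its coefficients are the
#    local feature-importance values.
A = np.column_stack([np.ones(len(Z)), Z])
W = np.diag(w)
coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)

print("local importances:", coef[1:])  # one weight per feature
```

Here the surrogate's weights approximate the black box's local gradient at `x0`, which is exactly the kind of "why this prediction" signal LIME reports.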
There are also visual techniques that are used to help understand a model’s decision-making process. Think about how a model classifies medical images. By using a heatmap to overlay images, it can indicate which regions the model is using to make its classification. This would allow a doctor to understand which parts of the image the model is using to make its prediction, and potentially identify any errors or biases in the model’s decision-making.
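One simple way to build such a heatmap is occlusion sensitivity: slide a blanking patch across the image and record how much the model's score drops at each position. The sketch below uses a toy stand-in "model" whose score is just the mean brightness of the image centre, so the centre should light up; everything here is illustrative.

```python
import numpy as np

# Toy stand-in for an image classifier: its "confidence" is the mean
# brightness of the centre region, so occluding the centre hurts most.
def model_score(img):
    return img[3:5, 3:5].mean()

img = np.ones((8, 8))
base = model_score(img)

# Slide a 2x2 occluding patch over the image; the score drop at each
# position indicates how much the model relies on that region.
heat = np.zeros((7, 7))
for i in range(7):
    for j in range(7):
        occluded = img.copy()
        occluded[i:i + 2, j:j + 2] = 0.0
        heat[i, j] = base - model_score(occluded)

print(heat.round(2))  # largest drop where the patch covers the centre
```

Overlaying `heat` on the original image (e.g. with a colormap) gives exactly the kind of region-level explanation described above.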
Adversarial machine learning:
This kind of machine learning security will be more familiar to those who have a grasp of cyber security or work within the field. With adversarial machine learning, the purpose is to protect your model from being influenced by malicious attacks whose sole goal is to cause the model to produce either harmful or incorrect decisions.
This sub-field of machine learning safety is one of the most active because it's critically important for emerging technologies such as self-driving cars, financial fraud detection, and medical diagnosis. Attacks can come in the form of tampered training data, which causes the model to make incorrect predictions and degrades future performance, or in the form of adversarial examples: inputs crafted specifically to cause a model to misclassify them.
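An adversarial example can be demonstrated in miniature with the fast gradient sign method (FGSM): nudge the input in the direction that increases the model's loss. The hand-written logistic classifier, weights, and perturbation size below are fabricated for illustration.

```python
import numpy as np

# Hand-written logistic "model": class 1 when w.x + b > 0.
w = np.array([1.0, -2.0])
b = 0.0
predict = lambda x: int(w @ x + b > 0)

x = np.array([0.5, 0.1])  # correctly classified as class 1

# FGSM: step in the sign of the input gradient of the loss. For
# logistic loss with true label 1, the gradient is (sigmoid(z) - 1) * w,
# so its sign is sign(-w).
eps = 0.4
x_adv = x + eps * np.sign(-w)

print(predict(x), predict(x_adv))  # the small perturbation flips the class
```

The perturbation is tiny relative to the input, yet it flips the prediction, which is what makes such attacks dangerous for deployed systems.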
Fairness and bias:
Fairness and bias in machine learning is the basic idea that models should not perpetuate or amplify societal biases in their decisions or predictions. It's another important sub-field because these models are often used to make consequential decisions. In some cities across the United States, machine learning models already assist municipal departments with housing, law enforcement, fraud detection, and other issues.
If a model is unfairly biased against an individual or group, it can trigger a negative cascade that amplifies into something worse. The problem usually traces back to training data: if a training dataset isn't fairly representative of the population the model serves, the model's decisions can end up unfairly biased.
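One common audit for this is the demographic-parity difference: compare a model's approval rates across groups and flag large gaps. The group labels and decisions below are fabricated illustration data, not from any real system.

```python
import numpy as np

# Hypothetical audit data: group membership and a model's yes/no decisions.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
decision = np.array([1, 1, 1, 0, 1, 0, 0, 0])

rate_a = decision[group == 0].mean()  # approval rate, group 0
rate_b = decision[group == 1].mean()  # approval rate, group 1
parity_gap = abs(rate_a - rate_b)     # demographic-parity difference

print(rate_a, rate_b, parity_gap)  # → 0.75 0.25 0.5
```

A gap this large (50 percentage points) would be a strong signal to re-examine the training data and the model before deployment.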
Robustness:
This is where the outside environment plays a key role in machine learning safety and security. Robustness in machine learning describes a model's ability to keep operating under changing conditions, primarily outside of its sandbox and in the real world. The point of this field is to reduce the effect of the errors and noise that can distort predictions.
Since our world is dynamic rather than static, this is an important sub-field in machine learning, as models are often deployed in uncertain environments. It's up to robustness techniques and research to improve a model's reliability and trustworthiness outside of the sandbox and in the real world.
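A basic robustness probe is to re-evaluate a model on inputs perturbed with increasing amounts of noise and watch accuracy degrade. The classifier and data below are a toy sketch; real robustness evaluations use held-out data and realistic corruptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier: predict class 1 when the feature sum is positive.
def classify(X):
    return (X.sum(axis=1) > 0).astype(int)

# A clean test set where class 1 is easy: both features centred at +3.
X = rng.normal(loc=3.0, size=(200, 2))
y = np.ones(200, dtype=int)

# Re-evaluate under increasing input noise; accuracy should fall.
accs = []
for sigma in (0.0, 1.0, 4.0):
    noisy = X + rng.normal(scale=sigma, size=X.shape)
    accs.append((classify(noisy) == y).mean())
    print(f"noise sigma={sigma}: accuracy={accs[-1]:.2f}")
```

Tracking a curve like this across noise levels (or real-world corruptions like blur and weather for images) is how robustness is typically measured before deployment.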
Privacy-preserving machine learning:
As the name suggests, privacy-preserving machine learning is all about protecting the privacy of the individuals whose data is used to train or operate machine learning models. These models require large datasets, which often cannot be randomly generated. This is data created by individual people using the internet, or drawn from other sources, which is collected, cleaned, and used to train machine learning models.
So it's no wonder that this is another hot trend in machine learning security and safety. If people can't trust that datasets protect their privacy, they'll be less inclined to share their data, which in turn reduces the data available to train models. There are a few ways to address this.
First is a technique called differential privacy, which adds carefully calibrated noise to the data, or to the results computed from it, so that it becomes very difficult to infer anything about an individual data point, protecting privacy while still allowing models to learn useful patterns from the data. Another technique is homomorphic encryption, which allows computations to be performed directly on encrypted data, without decrypting it first. Thus, privacy is protected without losing the valuable information a model can use.
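The differential-privacy idea can be sketched with the classic Laplace mechanism: release a query answer plus Laplace noise scaled to the query's sensitivity divided by the privacy budget epsilon. The dataset and epsilon below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

ages = np.array([34, 45, 29, 61, 50, 38])  # hypothetical private records

# Laplace mechanism: a count changes by at most 1 if any single
# person's record changes, so its sensitivity is 1.
epsilon = 1.0
sensitivity = 1.0
true_count = int((ages > 40).sum())
noisy_count = true_count + rng.laplace(scale=sensitivity / epsilon)

print("true:", true_count, "released:", round(noisy_count, 2))
```

The released count is close to the truth on average, but the added noise means no attacker can confidently tell whether any one individual is in the dataset, which is the guarantee differential privacy formalizes.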
There's some wild and amazing work going on in the background to ensure machine learning models are accurate in their predictions, able to withstand the dynamic nature of our world, and respectful of the privacy of the individuals who populate training datasets. There's a lot more happening in the world of machine learning security and safety, and one place you won't want to miss is ODSC East 2023, May 9th-11th. There you'll enjoy an entire focus area dedicated to machine learning safety and security. And if you're looking for more on-demand flexibility, you'll love Ai+ Training, featuring the best training from leading experts in machine learning, deep learning, AI, and more.