Facial classification is one of the most promising and controversial machine learning use cases. The technology has considerable potential in areas like security, but it also carries substantial privacy and bias concerns. Relying on facial recognition models that aren’t accurate can lead to severe consequences.
Facial recognition has come a long way in its relatively short history, but many people still question its efficacy. Here’s a closer look at the history and current state of the technology and where it can go from here.
Early Facial Recognition
Facial classification technology had its roots in the 1960s when researcher Woody Bledsoe taught a computer to recognize facial features and measure the distance between them. While this system was crude and slow by today’s standards, the core concept is the same as facial recognition today.
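Bledsoe’s core idea — measuring distances between hand-marked facial landmarks and comparing the resulting signatures — can be sketched in a few lines of Python. The landmark coordinates and the scoring function below are invented for illustration, not taken from Bledsoe’s actual system:

```python
import math

# Hypothetical hand-marked landmark coordinates (x, y in pixels) for two faces.
# In Bledsoe's 1960s system, a human operator marked features like the eyes,
# nose, and mouth; the computer then compared the distances between them.
face_a = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
face_b = {"left_eye": (32, 41), "right_eye": (69, 39), "nose": (50, 61), "mouth": (51, 79)}

def feature_vector(face):
    """Pairwise distances between landmarks form a simple face signature."""
    keys = sorted(face)
    return [math.dist(face[a], face[b])
            for i, a in enumerate(keys) for b in keys[i + 1:]]

def dissimilarity(f1, f2):
    """Mean absolute difference between two signatures (lower = more alike)."""
    v1, v2 = feature_vector(f1), feature_vector(f2)
    return sum(abs(a - b) for a, b in zip(v1, v2)) / len(v1)

score = dissimilarity(face_a, face_b)  # small score suggests the same person
```

Modern systems replace hand-marked landmarks with learned embeddings, but the comparison step — distance between feature vectors — remains conceptually similar.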
As computing power improved, these models became better at recognizing and distinguishing facial features. A study in 2006 revealed that algorithms had become 10 times more accurate than in 2002 and 100 times more accurate than in 1995.
As online platforms like Google and Facebook grew, so did image databases, helping spur improvements. By 2011, facial recognition models helped confirm Osama Bin Laden’s identity. In 2014, the technology had become accurate enough for Facebook to suggest people to tag in photos.
Facial Classification Accuracy Today
Many facial classification models today still require human input to be decisive, but they’re far more accurate than they were just a few years ago. A leading algorithm in 2018 made 20 times fewer mistakes than one from 2013. Some of the most advanced models today boast an error rate of just 0.45%, matching the ability of most humans.
Those figures are impressive, but most facial recognition algorithms today don’t reach that level. Only highly advanced models with sufficient human input can achieve that 0.45% error rate. More notably, facial classification still struggles to reach the same accuracy with every face.
Some models are up to 10 times less accurate when identifying Black faces than white ones. This discrepancy may arise from training these models mostly on white faces, and it raises some big questions about the fairness of using these algorithms in a legal setting.
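This kind of disparity is straightforward to measure when evaluation results are labeled by demographic group: compute the error rate per group rather than one aggregate number. A minimal sketch, with entirely invented results and group names:

```python
# Hypothetical evaluation results: (group, was_prediction_correct) per test image.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def error_rate_by_group(results):
    """Return each group's error rate; aggregate accuracy can hide disparities."""
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

rates = error_rate_by_group(results)
# Here group_b's error rate is 3x group_a's, even though overall
# accuracy (50%) would report a single, misleading number.
```

Per-group evaluation like this is the first step toward catching the disparities the paragraph above describes before a model is deployed.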
Deepfakes Bring New Complications
One of the most significant challenges facing facial classification today is the emergence of deepfakes. These AI-manipulated videos increased by 330% between 2019 and 2020, and facial recognition models often can’t distinguish them from reality. Microsoft’s Azure Cognitive Services could not identify 78% of deepfakes as falsified.
Blurred lines around faces’ edges and strange motions may make deepfakes seem off to humans, but machines have a harder time telling the difference. That could present a massive security issue given the world’s current reliance on facial recognition technologies.
There are already 15 million identity theft cases each year in the U.S. alone. Deepfakes could make that worse. Criminals could use them to fool biometric security stops in financial apps, getting past two-factor authentication to access people’s bank accounts or personal information.
Creating More Accurate Facial Recognition Models
These shortcomings are concerning, especially considering facial recognition’s general accuracy apart from them. End-users may see how accurate these models are — not counting racial disparities or deepfakes — and assume they’re safe to use in sensitive applications. That could lead to substantial security and equality gaps without improvements.
One of the most important steps is to use high-quality images in training datasets. Poor lighting, occlusion, noise, and unusual angles can make it harder for machines to recognize facial features. Similarly, training images should ideally have a blank background to stop algorithms from keying on non-facial features.
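One common way to compensate for uneven lighting is min-max contrast normalization, which stretches pixel values across the full 0–255 range so underexposed or washed-out images look more consistent to a model. A minimal sketch on a toy grayscale image (the pixel values are invented):

```python
# Toy 4x4 grayscale "image" with a narrow brightness range (underexposed).
image = [
    [60, 70, 80, 90],
    [65, 75, 85, 95],
    [70, 80, 90, 100],
    [75, 85, 95, 105],
]

def normalize_contrast(img):
    """Min-max stretch pixel values to the full 0-255 range."""
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    scale = 255 / (hi - lo) if hi > lo else 0
    return [[round((p - lo) * scale) for p in row] for row in img]

normalized = normalize_contrast(image)
```

In practice, libraries such as OpenCV offer more robust techniques (e.g., histogram equalization), but the goal is the same: reduce lighting variation so the model learns facial features rather than exposure artifacts.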
Training facial recognition models on a diverse range of faces is also crucial. If these models learn from mostly white faces, they’ll lack sufficient accuracy when analyzing photos of people of color.
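A basic safeguard is auditing the training set’s demographic composition before training begins. The labels and the 10% threshold in this sketch are purely illustrative — appropriate thresholds depend on the deployment context:

```python
from collections import Counter

# Hypothetical demographic labels for a toy training set.
labels = ["group_a"] * 8 + ["group_b"] * 3 + ["group_c"] * 1

def underrepresented_groups(labels, min_share=0.10):
    """Return groups whose share of the dataset falls below min_share."""
    counts = Counter(labels)
    total = len(labels)
    return sorted(g for g, c in counts.items() if c / total < min_share)

flagged = underrepresented_groups(labels)
# group_c makes up only 1/12 of the data, so it gets flagged for
# additional data collection before training proceeds.
```

An audit like this doesn’t guarantee fairness on its own, but it catches the most obvious cause of the accuracy gaps described above: a model that simply never saw enough examples of some faces.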
Given the rising prevalence of deepfakes, it’s also important to use machine learning tools to find manipulated content. Deepfakes create inconsistencies in image resolution that deep learning models could detect and highlight, suggesting a video could be falsified. Compiling a database of deepfakes could help train models to catch them, pairing with traditional facial recognition to reduce false positives.
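A crude version of the resolution-inconsistency idea is to compare local sharpness between the face region and the rest of the frame, since a spliced-in face often has different high-frequency statistics than its surroundings. This sketch uses a simple neighbor-difference measure on toy pixel data — all values, thresholds, and region boundaries are invented, and real detectors use learned features rather than this heuristic:

```python
def sharpness(patch):
    """Mean absolute difference between horizontally adjacent pixels —
    a crude proxy for local high-frequency detail."""
    diffs = [abs(row[i + 1] - row[i]) for row in patch for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

# Toy patches: the background has strong pixel-to-pixel variation, while
# the (hypothetically spliced) face region is unusually smooth.
background = [[10, 200, 30, 220], [15, 190, 25, 210]]
face_region = [[100, 102, 101, 103], [101, 103, 102, 104]]

def looks_spliced(face, rest, ratio=4.0):
    """Flag the frame if the two regions' sharpness differs by more than
    `ratio` — an arbitrary illustrative threshold."""
    s_face, s_rest = sharpness(face), sharpness(rest)
    return max(s_face, s_rest) / max(min(s_face, s_rest), 1e-9) > ratio

suspicious = looks_spliced(face_region, background)
```

Running such a check alongside a conventional recognition model is one way to pair deepfake detection with facial recognition, as the paragraph above suggests.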
Facial Recognition Has Improved but Is Far From Perfect
Facial classification technology can boast impressive accuracy in some circumstances today, but users must understand its shortcomings. Ethnic discrepancies and the threat of deepfakes still pose a challenge for today’s models.
Developers who understand these risks can address them to build more accurate models. With further development and honest acknowledgment of these flaws, facial recognition technology could come close to perfect in the near future.