Facial recognition is an important topic that affects many facets of modern society. Over the past several years it has influenced the trajectory of policing, government, law, and the broader tech industry in the United States and around the world.
The recent protests against racially biased policing in the US, in particular, have shone a spotlight on the role of facial recognition in policing, prompting several prominent companies to revise their policies on the development and use of this tech. Microsoft and Amazon have pledged not to sell their facial recognition technology to police departments, until regulations are in place and for one year, respectively. IBM, meanwhile, announced that it would shut down further development of its facial recognition technology. Additionally, two major US cities, San Francisco and Boston, have banned the use of this technology.
One of the major issues with the currently available technology is that it is markedly less accurate at identifying non-white, non-male faces. Unfortunately, some of the recent efforts to address this issue have illustrated how difficult it is to completely remove bias from the technology.
Recently, a group of researchers tried to make facial recognition technology more inclusive by creating two databases: one that is “racially balanced” and includes a segment of the LGBTQ community, and one that is “gender-inclusive.”
Although the researchers’ intentions may have been altruistic, the manner in which they classified gender for these datasets is itself informed by underlying biases about how male, female, and non-binary faces should look. These datasets once again illustrate how difficult it is to remove bias completely from facial recognition technology.
Even with the controversy surrounding facial recognition, there is hope around the world that the technology can help solve problems in many industries, such as healthcare. In countries where identifying patients can be very difficult, for example, facial recognition technology can be used to prevent the misidentification of patients and ensure that doctors have the correct medical records.
Additionally, the data science community, as well as society in general, has taken several promising steps in addressing and rectifying the current issues plaguing the technology. Gabriel Bianconi, founder of Scalar Research, has identified three areas of progress in particular:
- Reducing Bias: Facial recognition (much like many other areas of ML) has historically suffered from unintentional biases (e.g. racial, gender) arising from the data models are trained on. The ML community has taken note of this issue and is actively developing solutions (e.g. better data, better algorithms) to tackle it.
- Privacy: There has been work toward developing privacy-preserving methods; for example, federated learning allows models to learn and make predictions without sensitive data ever leaving the user’s device.
- Regulation for Surveillance: Congress is exploring regulation of facial recognition use by government agencies. Like many technologies, facial recognition can be misused or put to positive ends, so proper regulation of government use would hopefully lead to better, more positive adoption.
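To make the first point concrete, one common way the ML community quantifies bias is by comparing a model’s accuracy across demographic groups. The sketch below computes a per-group accuracy gap for binary match/no-match predictions; the labels, predictions, and group names are entirely hypothetical, for illustration only.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy per demographic group for binary predictions."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical ground truth and model predictions for two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 1.0, 'B': 0.5}
print(gap)    # 0.5
```

A large gap like this (perfect accuracy on group A, 50% on group B) is exactly the kind of disparity that audits of commercial face-recognition systems have surfaced, and that better data and algorithms aim to close.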
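The federated learning idea mentioned above can be sketched in a few lines: each device computes a model update on its own data, and only the updates (never the raw data) are sent to a server, which averages them. This is a minimal toy version of federated averaging (FedAvg) for a one-parameter linear model; the per-device data is hypothetical.

```python
def local_update(w, local_data, lr=0.1):
    """One on-device gradient step for the model y = w * x (squared loss)."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(updates):
    """Server side: average the weights returned by participating devices."""
    return sum(updates) / len(updates)

# Hypothetical data that never leaves each of three devices;
# all of it happens to follow y = 2x.
device_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(1.5, 3.0), (3.0, 6.0)],
    [(0.5, 1.0), (2.5, 5.0)],
]

w = 0.0  # shared global weight
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in device_data]
    w = federated_average(updates)
print(round(w, 2))  # 2.0
```

The server only ever sees the averaged weight, not any device’s data points, which is the privacy property that makes this approach attractive for sensitive domains like faces or medical records. Production systems add secure aggregation and differential privacy on top of this basic loop.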
If you are interested in learning more about the ethics of AI technology and how to mitigate bias in machine learning models, ODSC Europe is hosting several sessions on these topics, including:
- Ethical Issues for Data Science, Machine Learning and Artificial Intelligence │ Brendan Tierney │ Architect │ Oralytics
- Explain Machine Learning Models │ Margriet Groenendijk, PhD │ Data & AI Developer Advocate │ IBM
- Ensuring Ethical Practice in AI │ Sray Agarwal │ Manager, Data Science │ Publicis Sapient
- Removing Unfair Bias in Machine Learning │ Margriet Groenendijk, PhD │ Data & AI Developer Advocate │ IBM
For more information on ODSC Europe and featured talks and speakers, check out the website here.