In her talk at ODSC East 2018, Dr. Catherine (Cathy) O’Neil, author of New York Times bestseller Weapons of Math Destruction, described the pitfalls of placing blind trust in algorithms. With a series of examples, Dr. O’Neil showed how algorithms – often seen as sacrosanct when shrouded in the aura of complex math – can enable decisions that violate people’s legal, human, and constitutional rights.
- Algorithms, useful as they may be in helping us make important decisions for business, policy, and civic life, can enable decisions that violate people’s rights.
- People tend to place blind trust in algorithms, particularly those shrouded in the thick cloud of complex mathematics.
- Individuals involved in building and using algorithms need to consider the ethical implications of their use.
- At the federal level, we need an “FDA for Algorithms” capable of assessing the potential risks and benefits of algorithms used at mass scales.
In one case O’Neil highlighted, teachers were being fired as a result of the ‘Education Value-Added Assessment System’ (EVAAS) employed by the Houston Independent School District in Texas. When teachers began to question the system’s methodology – which turned out to consist of algorithms used to rate and rank teachers’ performance – they were told that the EVAAS was proprietary and confidential.
On further investigation, however, it was found that the algorithms were firing people almost at random, and were in fact encouraging teachers to cheat on classroom reports to raise their scores. When the case went to trial, a federal judge ruled that the due process rights of six fired teachers had been violated as a result of the EVAAS.
Algorithms can also cause problems when used in hiring processes, as O’Neil showed with the case of Kyle Behm. An engineering student with bipolar disorder, Behm failed a number of pre-employment personality tests that, it turned out, had been specifically designed to screen out individuals with mental illnesses. Investigators discovered that, although the designers of the personality test and its underlying algorithm had warned clients against using it for that purpose, the test was nonetheless being used in such screenings.
Read more by Cathy O’Neil at her blog mathbabe.org
Following these examples, Dr. O’Neil warned about the risks of using algorithms for ‘predictive policing’ or predicting criminal recidivism. Policing algorithms have been shown to lead to disproportionate arrests among minority groups for crimes committed just as frequently by the general population. Likewise, recidivism algorithms – used to predict whether individuals recently released from prison will reoffend – often end up “creating their own reality.”
In a kind of feedback loop, prisoners are consistently re-incarcerated, distancing them further from society, and increasing the likelihood that they’ll end up back behind bars.
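The feedback loop described above can be sketched as a toy simulation. This is purely illustrative – the variable names, probabilities, and score increments are assumptions for the sake of the sketch, not figures from the talk:

```python
import random

def simulate_feedback_loop(years=10, seed=42):
    """Toy model of O'Neil's feedback loop: a risk score drives
    re-incarceration, and each re-incarceration in turn raises the
    risk score the algorithm consumes."""
    random.seed(seed)
    risk_score = 0.3          # hypothetical starting risk estimate
    incarcerations = 0
    history = []
    for year in range(years):
        # The algorithm flags the person once the score crosses a threshold.
        flagged = risk_score > 0.5
        # Being flagged sharply raises the chance of re-incarceration
        # (more surveillance, stricter parole conditions, etc.).
        p_incarcerated = 0.6 if flagged else 0.1
        incarcerated = random.random() < p_incarcerated
        if incarcerated:
            incarcerations += 1
            # The new incarceration feeds back into the score itself.
            risk_score = min(1.0, risk_score + 0.2)
        history.append((year, round(risk_score, 2), incarcerated))
    return risk_score, incarcerations, history
```

Because incarceration only ever pushes the score upward, the model can ratchet: once flagged, a person is more likely to be re-incarcerated, which raises the score, which keeps them flagged – the algorithm “creates its own reality.”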
O’Neil offered several solutions to the problems created by our increasing reliance on algorithms. As individuals, Dr. O’Neil said, “I want everyone to become an ethicist.” Data scientists, computer scientists, and business professionals who use algorithms to make important decisions ought to consider how someone’s legal, human, or constitutional rights could be violated through their use. Yet even if the world of algorithms were to move in this direction, Dr. O’Neil suggests, it wouldn’t be enough. She believes we need an “FDA for Algorithms” to examine the algorithms being deployed at mass scale with no oversight. In the same way that the FDA weighs the potential harm of certain products, this agency would consider all of the relevant stakeholders and their concerns, asking, “for whom does this system fail?”
To take a deeper dive into how algorithms affect various aspects of society, check out Dr. O’Neil’s full talk from ODSC East 2018 below.
ODSC East 2018 kicks off a series of conferences, webinars, and meetups that the international data science community can participate in. Learn where the future of AI gathers by attending the upcoming conferences in London and San Francisco. Interested in in-depth talks on various aspects of data science? Check out ODSC’s Learn AI platform to hone your skills.