AI Ethics: Avoiding Our Big Questions

A lot of the discussion around AI consists of hyperbolic comparisons to the Terminator. While that isn’t helpful, addressing ethics concerns is a necessary part of development as bigger, faster AI appears on the horizon. The greater danger may not be willful ethics violations but ignoring the potential negative side effects of our quest for AI integration. We may never take an active role in an ethical violation, but we still have to examine what the race to deploy could mean. Let’s take a look at a few ways we could be giving up our ethical responsibility by outsourcing certain actions to AI.

[Related article: European AI Strategies, Compared]

Military Operations

The goals of military operations are to accomplish the objective and spare the lives of soldiers and innocent civilians. One way we’re exploring AI in the military is by removing soldiers from the direct violence of the battlefield and putting them behind unmanned drones. Going further, replacing those drones’ operators with autonomous weapons systems, as seen in the demilitarized zone between South and North Korea, removes the human factor altogether. Unfortunately, as Peter Asaro suggests, deciding to target a weapon and pull the trigger is an innately moral act. We determine whether the action is moral, whether following orders is the right thing, and whether we wish to live with the consequences. In some cases, that dilemma leads soldiers to defy immoral orders. Removing the human element altogether could prove a dangerous proposition. We save soldiers, yes, but there’s a downside. What AI brings to the table is sheer efficiency. Humans aren’t so efficient, and that’s what makes AI a great partner for tasks like data labeling or customer service chatbots. When we’re talking about firing a weapon, though, we must ask whether efficiency is really our most important objective.

Employment

We’ve joked for generations about hating our jobs, built entire sitcoms around the humor of job boredom, and wondered if there’s more to life than work. Our dreams of utopia feature groups of humans lazing around in flowy clothing, with no sign of a bored, burnt-out worker anywhere. So the introduction of AI into some of our most menial jobs seems like a good thing; except when it isn’t. Case in point: large corporations are saving money by switching to automatic checkout lines, automated teller machines, and automated ordering kiosks. In theory, this is great, because workers can move into higher-paying jobs that require higher-order thinking skills and innovation, and the company can afford those positions with the money saved through automation. Unfortunately, this may not be the reality. Institutional barriers to education could be keeping the brightest talent from ever entering the workforce. Automation has pushed response times to lightning speed, a standard we freely impose on workers in email and other communication, so that workers never really clock out. And some corporations could simply pocket the profits from automation instead of investing in human infrastructure.

We also have a strange relationship with work. The idea that humans could work less fills us with fear because we believe, philosophically, that those who can’t or won’t work themselves to death deserve what they get: nothing. So how do we change that perception and frame the move from hustle and grind to the four-day workweek as a good thing? There are no easy answers here, but humans have been through automation before. We’ll have to address how AI automation will alter the landscape of work and what that means for a civil society.

AI Rights

Building exponentially complex systems helps us deploy AI where we need it most. Image recognition, for example, requires thought processes like a human’s. Self-driving cars require learning the way humans learn. Reading sentiment requires processing emotions the way humans do. See where this is going? Building human-like machines requires that we consider what we’ll do when machines finally become human-like. Movies and post-apocalyptic novels aside, we must carefully consider the kinds of rights we bestow on machines once they become like us. These questions aren’t new. We’ve applied them to different groups of people throughout recorded history, and some of the considerations sound frighteningly like how we’ve classified people in the past. If we’re moving forward and choosing not to replicate our own biases and prejudices within our neural networks, the time may come when we have to decide how to treat our human-like machines, too.

[Related article: Emphasizing Humanity in a World of AI]

Examining Both Sides of AI Ethics

Divesting ourselves of these questions won’t fix our issues. We don’t want to decide whether to pull the trigger, so we make a machine do it. We ignore what it means to be employed so that we can laud a company’s profits and forget the human loss. We still refuse to have the conversation about what makes rights inalienable, so we press forward, creating machines in our image without thinking of the consequences. AI discussions can trend towards hysteria on both sides: the Terminator camp worries about machines coming for us, while the utopia camp sees nothing but progress. In the middle are questions with no easy answers. We’ve reached the point of no return. We can’t put AI back in the bag, and we shouldn’t have to. What must happen is something very human: a real philosophical reckoning.

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/
