How Will AI Nudging Affect Our Privacy?

When you want to influence behavior without restricting freedom of choice, you might take advantage of something behavioral science calls “nudging.” While most of us believe that humans behave rationally, nudge theory holds that we are often irrational, making decisions based on influence and priming.

The truth is, we often act against what we know to be in our best interest, making decisions based on situated models of cognition, meaning we’re actually highly influenced by what’s going on around us.

With AI transforming almost every aspect of our lives, it only makes sense that AI is also helping “nudge” us toward certain decisions. Philosopher Dr. Karina Vold sheds light on how AI nudging differs from traditional human nudging and what that means for us.

[Related Article: Explainable AI: From Prediction to Understanding]

How Do We Make Decisions?

Cognitive science tends to follow a dual-process model of cognition. We have the automatic system, which is fast and unconscious, and the reflective system, which is conscious and more analytic. The automatic system handles much of our daily activity because it requires less energy.

Built-in biases speed up decision making, and nudging can take advantage of that to influence decisions. Telling smokers that smoking can cause cancer may not deter them. What does help is showing pictures of diseased lungs, which tap into those cognitive biases.

Thaler and Sunstein argued that nudges intervene in people’s behavior but never remove freedom of choice; they only make you more likely to choose one particular thing over another. For example, placing healthy food at eye level can encourage people to buy healthier foods, while putting candy in smaller packages discourages us from buying too much sugar.

What Is AI Nudging?

AI is a big part of nudging now, given our constant exposure to technology. For example, Google’s suggested searches can be a nudge: as Google autofills, you may be gently encouraged to click on certain choices.

Netflix engages in a type of nudging as well by setting episodes to begin automatically. Stopping the video takes minimal effort, but we are more likely to move on to the next episode because of that gentle influence. YouTube does this too, with the added element of an algorithm choosing which video plays next.

AI nudging, then, is any aspect of the digital architecture that influences someone’s behavior. It’s the same principle as traditional nudging, but it uses the digital choice architecture to perform the nudge.
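The pull of the autoplay default described above can be illustrated with a toy simulation. This is purely a sketch: the continuation probabilities are made-up assumptions, not measured values, and the function names are hypothetical.

```python
import random

# Toy model of the autoplay nudge: with autoplay on, continuing is the
# zero-effort default and stopping requires an action; with autoplay off,
# continuing requires pressing play. The probabilities below are
# illustrative assumptions only.
P_CONTINUE_DEFAULT_ON = 0.85   # viewer does nothing, next episode plays
P_CONTINUE_DEFAULT_OFF = 0.50  # viewer must actively choose to continue

def episodes_watched(autoplay_on: bool, trials: int = 100_000, seed: int = 0) -> float:
    """Average episodes watched per session under each default setting."""
    rng = random.Random(seed)
    p = P_CONTINUE_DEFAULT_ON if autoplay_on else P_CONTINUE_DEFAULT_OFF
    total = 0
    for _ in range(trials):
        count = 1  # the first episode is always watched
        while rng.random() < p:  # continue to the next episode?
            count += 1
        total += count
    return total / trials
```

Under these assumed numbers, flipping the default alone more than triples average session length, even though the viewer’s freedom of choice is untouched at every step.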


What’s Different About AI Nudging?

While AI nudging differs slightly from the way traditional nudging takes advantage of natural cognitive biases, in many ways it is implemented the same way. Here are a few respects in which Dr. Vold says AI nudging isn’t truly different:

  • Asymmetry: there’s an asymmetry in the power dynamic whether it’s AI or humans doing the nudging.
  • Methods of persuasion: we can actually influence the algorithm in turn, and a human nudger can be just as unwilling to change.
  • Privacy: nudgers can collect information without consent, and humans have always collected information about each other.
  • Modalities: even traditional nudges come in many different forms, and while these differ from digital modalities, they aren’t necessarily narrower.
  • Embedded nudges: we could never avoid nondigital nudges as much as we’d like to think. In fact, very few real-world choice architectures aren’t designed.

So is there anything different about AI nudging? While some of the methods still reflect traditional nudging, Dr. Vold believes the introduction of AI brings key differences:

  • Big data: computers can access far larger amounts of data, but more importantly, how the data is used is different. Computers can put that information to use in a targeted way.
  • Personal data: this extends to personal data. The availability of your personal data can give a clearer picture of who you are than even the people closest to you have.
  • Micro-targeting: in the past, nudging worked on a larger scale, but now, with personal data, nudges can be customized in a way that’s more effective specifically for you.
  • Feedback loop: the system can continually recalibrate on new data to deliver the most effective nudge.
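The micro-targeting and feedback-loop points above can be sketched as a simple epsilon-greedy bandit that keeps recalibrating which nudge variant to show based on observed responses. The variant names and response rates here are hypothetical, and real systems are far more sophisticated; this only illustrates the loop itself.

```python
import random

# Hypothetical nudge variants and their true response rates, which the
# system does not know in advance and must learn from feedback.
VARIANTS = ["autoplay_countdown", "top_of_feed", "push_notification"]
TRUE_RESPONSE_RATE = {"autoplay_countdown": 0.30,
                      "top_of_feed": 0.10,
                      "push_notification": 0.05}

def run_feedback_loop(rounds: int = 20_000, epsilon: float = 0.1, seed: int = 1):
    """Epsilon-greedy loop: mostly show the best-performing nudge so far,
    occasionally explore, and recalibrate after every response."""
    rng = random.Random(seed)
    shows = {v: 0 for v in VARIANTS}
    clicks = {v: 0 for v in VARIANTS}
    for _ in range(rounds):
        if rng.random() < epsilon:
            v = rng.choice(VARIANTS)  # explore a random variant
        else:
            # exploit: pick the variant with the best observed click rate
            v = max(VARIANTS,
                    key=lambda x: clicks[x] / shows[x] if shows[x] else 0.0)
        shows[v] += 1
        if rng.random() < TRUE_RESPONSE_RATE[v]:  # simulated user response
            clicks[v] += 1  # feedback recalibrates future choices
    return shows, clicks
```

After enough rounds, the loop concentrates almost all impressions on whichever nudge works best, which is exactly the self-reinforcing quality that makes AI nudging more potent than its one-size-fits-all predecessors.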

There is no neutral interface, no neutral feed. Ethically, what can we do about this and how should we be thinking about these issues? To determine when nudging might be acceptable or warranted, we have to consider context.

Issues with AI nudging will affect this context in the following ways:

  • Privacy in the context of AI is linked to autonomy rather than hiding.
  • Manipulation applies to both groups and individuals within AI nudging: you can be targeted indirectly based on information about your group, and this targeting can be highly divisive.

How Can We Nudge Back?

We can manage algorithmic outputs by working within the algorithm’s boundaries. You aren’t changing the code; you’re using a workaround to bypass the algorithm itself. For example, Uber drivers often use workarounds to optimize earnings by turning the algorithm to their advantage (parking between two drivers to keep collecting the hourly wage while on break, for example).

[Related Article: The 5 Biggest Debates in Data Science Today]

Philosophers are working on the ethics of AI nudging in light of these differences. It’s important to consider how AI nudging can be used ethically, since Dr. Vold believes it is no longer possible to refuse nudging. Everyone around us is being nudged, so it’s less about refusing to be nudged and more about knowing when and how the nudging takes place.

We can’t address AI nudging in exactly the ways we address traditional nudging. Moving forward, we’ll have to keep these differences in mind to maintain our autonomy and reduce harmful nudging through the long reach of AI.

Watch the full talk here!

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/