The Bright Side of Technology in Black Mirror

The technology in Black Mirror isn’t actually all about death and nihilism.

The British anthology sci-fi show Black Mirror (created by Charlie Brooker and Annabel Jones) tackles technology in almost all of its episodes. With varying degrees of realism, implied moral takeaways, and success, the show seems to cast technology as the villain, pushing viewers to question their relationship with and dependence on these developing technologies.

However, this is only one reading of the show. To me, the technology in Black Mirror encourages viewers to analyze the intrinsic human characteristics that make technology vulnerable to misuse. Instead of casting tech as the bad guy, Black Mirror shows humans finding infinite ways to abuse programming that’s just doing its job. In this article, we’ll look at a few episodes that flip the common trope of “technology is evil,” along with some other details within the series that point to this humans-as-villains theory, and a speculative look at how Black Mirror could be shaping the future. Warning: spoilers from the first four seasons below!

Hated in the Nation

“Hated in the Nation” follows a cop investigating a series of mysterious deaths. It turns out that people are being killed by robotic, artificially intelligent bees that drill into their skulls. It’s eventually revealed that someone has hacked the bees and that they’re targeting people based on trending hashtags on social media. What’s worse, when the cops find a way to shut down the system, they discover the bees have been reprogrammed to attack anyone who used the hashtag.

[Related Article: Adversarial Attacks on Deep Neural Networks]

Many movies and TV shows that feature artificial intelligence like to put the AI in the villain position—AI that has pushed past its programmed capabilities and formed an evil consciousness. In this case, however, the AI does only what it’s programmed to do; it just happens to have been compromised and reprogrammed by people who want to harm the world. The AI isn’t the one doing wrong—it’s the human beings who think they can play god. It’s quite the script change from most AI media.

Men Against Fire

In this storyline, the army has developed a mask or implant to help soldiers better perform their missions. It’s eventually revealed to be an interactive, implanted, automatic DeepFake, designed to mask what the soldiers are actually seeing, hearing, smelling, or feeling, so they don’t realize that they’re tasked with killing ordinary people.

Currently, DeepFake technology is limited to the digital sphere rather than implemented in a VR-type mask. But the components of this technology are there, just waiting to be combined. And while watching the main character realize what he was doing to innocent people was mildly terrifying, it’s not the technology’s fault in this episode. The DeepFake and VR were used to reduce the trauma soldiers felt—a real issue that is being tested with the use of drones to remove people from the situation (it’s not working). The real horror was the military and the murderous tactics they used the technology for—the human characteristics of being power-hungry and exclusionary. These are technologies that are regularly attacked in real life for their ability to create fake realities and abuse real people. But instead of reinforcing that idea, the villain is the military’s—the people’s—use of the tech to wipe out a race of people.

Arkangel

This episode, “Arkangel,” follows an overprotective but well-meaning mom who implants a tracking system and intra-eye video camera into her young daughter. The mother is able to see where her daughter is on a map, read her vitals, see what she sees, and even “parental-lock” her sight. As the daughter grows, the mother becomes more and more protective, and more and more willing to use her intrusive powers, until she finally calls it quits when her daughter starts showing violent tendencies at school. During the daughter’s teen years, however, the mother gets worried and starts to intrude again. When the daughter finds out, she becomes so angry that she attacks her mother, but she can’t see the damage she’s causing because of the parental controls.

Nothing about the individual capabilities of this technology is outlandish; each already exists in the form of heart monitors, small video cameras, or trackers. And while the invasive, intra-eye versions of these technologies are a stretch from reality, nothing in the episode implies that the technology is the villain. From her characterization to the points at which the mother imposes her technology, the mother and her desire to control her daughter—to control the world—is positioned as the villain. We as the audience know that she means well, and yet we see the flaws in her good intentions and end up taking the daughter’s side. So in the end, it’s not the technology we’re mad at; it’s the mother.

This episode has taken a version of rudimentary technology and shown what happens when humans become control-hungry and overbearing. The series could easily have made the technology take control of the daughter, or manipulate the mother; instead, it’s just a simple case of worry-to-a-fault, and that’s something we can all relate to.

Nosedive

This episode follows a woman as she tries to build her reputation and gain social media-like merit points. It’s a world in which everyone has a score hovering next to their head, and you’re rated by others with each interaction throughout the day. The episode follows our main character as her point status plummets, all while she attempts to raise it. Her entire life depends on her status—so much so that she’s willing to do anything to raise her merits. But when she finally hits rock bottom and is thrown in jail, she has her device removed, and she’s shown in “madness” in her cell, which really just seems to be inner freedom.

Similar to “Arkangel,” the technology and psychology here aren’t a huge stretch. Social media has been shown to be addictive and to cause bouts of anxiety, depression, or paranoia. We rate much of the service industry based on our interactions with it (gig economy ratings and tipping in general). Likewise, there have already been implementations of point-based systems attached to people via face and pattern recognition to decide things like insurance rates or to track crime.

Despite the plot of the episode, Black Mirror doesn’t attack the idea of social media; instead, it pushes us to consider the implications of our willingness to judge others and all the ways humans will find to do so. We’ve gotten used to social media and have, for the most part, become unwilling to question its issues. Once again, the show doesn’t demonize the technology itself, but rather the people who are so willing to ruin a person’s life in a world that depends on what others think of you.

Metalhead

“Metalhead” follows a woman as she runs from murderous, artificially intelligent dogs after trying to retrieve a stuffed animal for someone who is dying. We watch as the dogs show off the different tools they’ve been programmed with in order to kill humans: solar charging, projectile tracking devices that implant in people’s skin with complementary GPS tracking, audio and visual processing, and hacking (one takes over and drives a car at one point), among many other capabilities.

“Metalhead” is the only episode of Black Mirror that doesn’t have a heavily implied moral lesson, nor does it give any explanation for the situation we find the characters in. And yet it’s one of the most realistic portrayals of technology in the series, with the dogs directly inspired by Boston Dynamics’ robotic dogs. Sure, the ones in the show are murderous and Boston Dynamics’ aren’t (though this one almost looks the part). But with a few modifications in programming, they’d be the same. Which brings us back to our idea that Black Mirror doesn’t believe technology is evil. Despite the obvious portrayal of the dogs as murderous, at no point does a dog show emotions or ethics-based decision-making. It’s just doing what it’s been programmed to do—programmed by some unmentioned person. The fear in this episode is that AI only knows what it’s been taught, so even as we sympathize with the main character, the robo-dog will only carry on the way it’s been programmed. This episode does, for once, set the technology up as the villain, but it’s still not the technology’s fault.

[Related Article: What do Popular Movies About AI Get Wrong?]

Other Details

On top of all the plot lines and technology in Black Mirror that push us to question who the true villain is, there are some other details that encourage this understanding as well.

Good Outcome

Only two episodes give viewers happy endings. The first, “San Junipero,” follows two women who meet in the titular town—a digital reality to supplement the afterlife. They fall in love in the VR and get married in real life as old women, deciding to live the rest of their existence in the virtual-reality afterlife rather than let their consciousnesses die with them.

The second, “Hang the DJ,” doesn’t feel like a happy ending until the very last seconds of the episode. We watch a couple get paired over and over again with random people who don’t make them happy, told that it’s all so they can one day be matched with “the one.” It feels pretty glum for most of the episode as we watch these characters’ faith in the system deteriorate while they spend their lives with horrible, mismatched partners they have no control over. In the last minute, however, the couple rebels against the system, only for us to learn that they were a simulation inside the “real” matching of that same couple. The episode ends with them being paired together as a perfect match, and they’re happy.

These are the only happy endings we as viewers get, and they’re made possible only by the stored-consciousness virtual reality and the reinforcement learning-type simulations. Nowhere in these happy endings do the technologies become the villains, as they could have if the show were, in fact, trying to warn against the perils of technology. They’re happy endings—ones the creators didn’t have to include, and that don’t really fit the rest of the series—orchestrated entirely by technology.

The Title and Opening Screen

The name Black Mirror is beautifully and symbolically mimicked by the title screen—a black screen, with white lettering, that cracks like glass just before the episode starts. You watch the mirror crack (unless you skip the intro, as Netflix offers), and if you’re in the right position—say, in front of a large TV or a laptop—you see yourself in the screen as it goes black. You see yourself in the mirror, and you see your image crack. More than anything else in the show, this literal cracked black mirror tells viewers that they’re the ones in the looking glass, being analyzed.

The Future

In many ways, all of which sound sort-of conspiracy-theory-esque, Black Mirror could be changing our future. Some believe the show is one single predictive timeline, showing the progression of humans through time via the technology they develop. Others hold to the belief that it’s a warning sign of technology we shouldn’t create or let ourselves use any further. And with Black Mirror’s groundbreaking interactive episode, “Bandersnatch,” some are wondering about the negative implications behind the data that Netflix stores with your decisions in the episode. Could Netflix see that you chose to kill the character’s father without hesitation, and then feed you violent shows? And countless steps after that, could police gather that data and use it as evidence against you for committing a violent crime? Theoretically, it’s a possibility—even if Netflix says your data is secure. But that would be a pretty Black Mirror-esque outcome, don’t you think?

Ava Burcham, ODSC

Ava is a content assistant at ODSC. She's a senior at Emerson College, getting a BA in a degree program she created herself called Writing and Publishing on Inequality, and will be attending a graduate program on Inequality Studies in England in the fall of 2020. She's the co-founder and content director of an employment resource (EthicalEmployment.co, IG @ethicalemployment) designed to help young adults navigate their transition into the working world. Personal Instagram: @MissBurcham LinkedIn.com/in/aburcham Portfolio Website: MissBurcham.com
