Google Announces New AI Ethos Following Project Maven Debacle

Google will cease work on the US Defense Department’s Project Maven when its contract expires next year, ending a controversial partnership between the tech giant and the military. The controversy has captivated data scientists and the public alike, embroiling the company in the ongoing debate over the ethics of artificial intelligence and the technology’s destructive potential. A closer look reveals there’s more to the situation than meets the eye. Below, we explore Google’s new AI ethos.

What is Project Maven?

The word “maven,” popularized in Malcolm Gladwell’s bestseller The Tipping Point, connotes an individual who is first to become aware of emerging trends. That makes it a (reasonably) apt title for the Pentagon’s multi-billion-dollar initiative to integrate and consolidate the US military’s artificial intelligence capabilities across its many units and departments.

Announced in April 2017, Maven aims specifically to “turn the enormous volume of data available to DoD into actionable intelligence and insights at speed,” in recognition of “increasingly capable adversaries and competitors” in the areas of AI and big data.

Google’s contract with the Pentagon involved aiding with the initial phase of Project Maven: building enhanced computer vision algorithms for object detection and classification using copious amounts of full-motion video (FMV) from the military’s automated assets, such as its MQ-1 Predator drones. Image recognition in drones has already enabled new methods for intelligence, surveillance, reconnaissance, and, to an unclear extent, targeting.
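For the curious, the sketch below shows what frame-level object detection on video looks like in code. It runs a generic, COCO-pretrained detector (torchvision’s Faster R-CNN) over each frame of a video file; the model, libraries, and confidence threshold here are our own illustrative choices, not a description of what Google actually built for Maven.

```python
# Minimal sketch: frame-by-frame object detection on a video file.
# The detector (torchvision's COCO-pretrained Faster R-CNN) and the 0.8
# score threshold are illustrative assumptions, not Google's Maven system.
import cv2  # video decoding (pip install opencv-python)
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_objects(video_path: str, score_threshold: float = 0.8):
    """Yield (frame_index, boxes, labels, scores) for confident detections."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame_bgr = capture.read()
        if not ok:  # end of video
            break
        # OpenCV decodes frames as BGR; the detector expects RGB in [0, 1].
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            (output,) = model([to_tensor(frame_rgb)])
        keep = output["scores"] >= score_threshold
        yield (frame_index, output["boxes"][keep],
               output["labels"][keep], output["scores"][keep])
        frame_index += 1
    capture.release()
```

A production pipeline would batch frames onto GPUs and track detections across frames, but the core task, localizing and classifying what the camera sees, is the same.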

Discord

Some Googlers saw trouble in the Maven contract almost immediately. Dr. Fei-Fei Li, head of Stanford’s AI Lab and chief AI scientist for Google Cloud, was quick to warn of the potential PR damage such a contract could cause. “Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues, according to the New York Times. “Weaponized AI is probably one of the most sensitized topics of AI – if not THE most. This is red meat to the media to find all ways to damage Google.” What Dr. Li and other Google leaders didn’t seem to foresee, however, was that the backlash would erupt most forcefully from within.

Dr. Fei-Fei Li, chief AI scientist at Google Cloud, speaking at the AI for Good Global Summit in 2017. (Image source: Wikimedia Commons)

Soon after the contract was announced this past March, thousands of Google employees signed a petition to CEO Sundar Pichai, asking that the project be terminated. “We believe that Google should not be in the business of war,” the letter began. “Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Stars, Stripes, and Silicon

Google’s work on Maven is by no means the first time big tech has worked with the military. In fact, the link between the US military and Silicon Valley dates back to the 1940s and 1950s: the relationship took root after the Second World War, when Stanford University researchers began winning government contracts to assist the NSA, the CIA, and the armed services on top-secret Cold War tech research.

It was only when Stanford students began venturing beyond campus, starting tech ventures of their own, that the nation’s greatest hub of innovation came into full existence. “In the period starting with the close of WWII to the late 70s, the U.S. government created ideal economic conditions for technology innovation and commercialization to thrive in Silicon Valley,” writes Vitaly Golomb in TechCrunch.

In addition to ongoing contracts for projects like Maven, the Defense Department’s DARPA (Defense Advanced Research Projects Agency) continues to maintain close ties with tech firms across the country, with a heavy presence in Silicon Valley. Notably, former DARPA chief Regina Dugan left the agency in 2012 to work for Google and later Facebook.

The Dilemma

Signers of the Google employee petition are not the only ones worried about the use of AI in warfare. An open letter published in 2015 by Max Tegmark’s Future of Life Institute, calling for a worldwide ban on autonomous weaponry, has since garnered thousands of signatures.

The letter discusses the dark possibility of a global AI arms race, which it calls “virtually inevitable” if any military power pushes ahead in AI weapons development. “[Such weapons] will become ubiquitous and cheap,” the letter suggests. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” Notable signatories include Stephen Hawking, Elon Musk, and a number of prominent Googlers, among them Demis Hassabis, co-founder and CEO of Google DeepMind.

Fears of autonomous weaponry are compounded by the military’s existing use of AI in warfare. Drone strikes have been responsible for hundreds of civilian deaths over the years. Although the actual pulling of the trigger is supposedly still performed remotely by a human, the drone provides the “eyes and arms” for the operator’s brain.

 

The banner image of Max Tegmark’s Future of Life Institute website. (Image source: futureoflife.org)

On the flip side, critics of Google’s decision to end the Maven partnership suggest that enhanced AI would actually decrease the risk of civilian casualties. Others argue that the company’s involvement could steer the project away from severe moral transgressions, injecting a measure of civility and ethical decision-making into the enterprise of national defense. “Helping to defend the U.S. is nothing to be ashamed of,” wrote Michael Bloomberg in a critical response to Google’s decision.

Compounding the ethical quandary is the fact that it’s difficult to separate “warfare technology,” as described in the Google petition, from the multipurpose tech of everyday life. Radar and GPS, for example, were both originally military inventions, conceived during the Second World War and the Cold War, respectively. And the internet – Google’s lifeblood – grew out of ARPANET, a military initiative aimed at streamlining communications across military branches.

Finally, there’s the “inevitability” argument, suggesting that regardless of the US military’s actions regarding AI, adversaries like China and Russia will continue to pursue autonomous weaponry programs. In this view, walking into an AI arms race may be an unfortunate necessity.

Seven Principles of the New AI Ethos

On June 7th, in response to the Maven debacle, Sundar Pichai published a blog post detailing seven objectives that he says will guide Google’s AI endeavors going forward. The objectives, which include “Be Socially Beneficial” and “Avoid Bias,” seek to address recent criticisms of AI, such as findings that certain machine learning algorithms appear to exhibit racial bias in tasks like predictive policing.

More pertinent to Maven, Pichai’s blog post describes AI applications that Google says it won’t pursue, including “technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.” Google will, however, continue its work for governments and militaries, so long as that work does not contradict these principles, says Pichai.

While the post should go some distance toward placating relevant parties, Pichai and Google will inevitably face accusations of equivocation. After all, none of these principles represents a significant departure from what the company would have claimed as its AI ethos before taking the Maven contract.

Some observers were surprised that the announcement didn’t include a new Google division dedicated to ethical AI, comparable to the one DeepMind established last year. Still, Bloomberg News called the post “a watershed moment for Google and AI,” having published Michael Bloomberg’s criticism just a few days earlier.

Google-owned DeepMind established an Ethics & Society division in 2017. Advisors include economist Jeffrey Sachs of Columbia University and Oxford professor Nick Bostrom. (Image source: deepmind.com)

All in all, the situation represents another high-profile controversy surrounding the ethical use of data and artificial intelligence, issues that are coming ever more fully into the public imagination.

The optimistic data scientist can hope that, in spite of the tumult, Google’s newfound recognition and standards of accountability will have a net positive impact on the development of beneficial AI. In the meantime, Project Maven will continue, and there’s a good chance that less scrutinized AI vendors will take Google’s place. Time will tell whether the military’s grand AI vision comes to fruition.


Alex Amari

I’m a graduate student at Oxford University pursuing an MSc in Social Data Science with the ultimate goal of working in tech entrepreneurship.
