

An AI Winter in the Past and Present-Day Worry
Posted by Alex Amari, July 30, 2018

“The AI winter is well on its way,” says Filip Piekniewski, a computer vision and AI expert whose recent viral blog post forewarns an imminent period of reduced funding and interest in artificial intelligence. The term ‘AI winter’ first appeared in the mid-’80s to refer to the quickly diminishing confidence in ‘expert systems’ and a general decline in interest surrounding AI.
Piekniewski believes we’re on the precipice of such a period due to the overhype of AI’s current capabilities and potential, propagated by well-known experts such as Andrew Ng. In his post, Piekniewski references Gary Marcus, a psychology professor at NYU and former director of Uber’s AI lab. Marcus has written extensively about the major limitations of neural networks, criticizing what he views as the AI community’s myopic gravitation toward deep learning. Inflated confidence in the technology, Marcus suggests, inevitably creates “fresh risk for seriously dashed expectations.”
Gary Marcus published a popular and controversial trilogy of articles, beginning with ‘Deep Learning: A Critical Appraisal’ in January 2018. (Image source: YouTube).
Is an AI winter truly coming? As debates on Twitter and Facebook reflect, the seasonal forecast of AI fluctuates. Various articles weigh in on the debate and take a side, but few mention the factors and circumstances that led up to the past two AI winters.
So, before we consider Piekniewski’s case for a forthcoming AI winter, let’s take a look back at the circumstances surrounding the first two AI winters on record.
The Origins of AI
For many, the history of artificial intelligence begins with the famous Dartmouth Workshop of 1956, when mathematicians and scientists spent eight summer weeks at Dartmouth College discussing the prospects for artificial intelligence. In fact, the term ‘artificial intelligence’ is generally attributed to the conference’s funding proposal made by John McCarthy – then a young Dartmouth math professor – to the Rockefeller Foundation:
“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
A number of the workshop’s participants are now recognized as founding fathers of AI, including McCarthy, Claude Shannon, and Marvin Minsky.
‘Look Ma, No Hands!’
The decade following the Dartmouth Workshop brought a string of early AI breakthroughs and growing interest in the subject, a period McCarthy later dubbed the ‘Look Ma, No Hands!’ era.
During this time, early advancements demonstrated the potential of the field. Arthur Samuel’s checkers program became the first system capable of beating humans at a board game. Newell, Shaw, and Simon’s famous Logic Theorist went on to develop new proofs for mathematical theorems, a feat previously thought impossible for a machine. And machine translation projects, built on the promise of automatic, instant translation of Russian documents and scientific reports, attracted extensive military funding during the Cold War.
Inroads like these gained the attention of the recently established Advanced Research Projects Agency (known as DARPA today, and ARPA back then), which went on to channel millions of dollars toward AI research programs led by John McCarthy, Marvin Minsky, and others. But the financial support received by these early AI projects had an unforeseen expiration date.
Attendees of the Dartmouth workshop of 1956. (Image source: Lords AI Committee/Twitter)
After pouring millions of dollars into language processing research, the government became concerned in the 1960s that promised breakthroughs were failing to materialize. Problems that had seemed surmountable at face value proved to be far more difficult than anticipated, and the work was further bogged down by constraints of memory and processing power. In a famous 1966 report, the Automatic Language Processing Advisory Committee (ALPAC) concluded that machine translation was too costly, unreliable, and slow to warrant further government investment.
Throughout the following years, more reports emerged criticizing the failure of AI to achieve the “grandiose objectives” laid out by some of its champions. DARPA began to pull funding, and by the mid-’70s the world of AI had slid into its first wintery mire. In a domino effect, governments across the globe scaled back funding for academic research, forcing progress in a field as interdisciplinary as AI to virtually grind to a halt.
Expert Systems and the Fifth Generation
Spring, as it turned out, was just around the corner. The ’80s saw a rise in the popularity of expert systems, pioneered by programs such as DENDRAL and MYCIN. These programs sought to automate decision-making by applying deterministic if-then rules to large bodies of specialist knowledge.
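To make the if-then style concrete, here is a minimal sketch of forward-chaining inference, the basic loop at the heart of rule-based expert systems. It is an illustrative toy in Python; the rules, facts, and forward_chain helper are hypothetical examples, not actual rules or code from DENDRAL or MYCIN.

# Toy forward-chaining rule engine illustrating the deterministic
# if-then reasoning used by expert systems. The rules and facts are
# hypothetical examples, not taken from DENDRAL or MYCIN.

rules = [
    # (conditions that must all hold, conclusion to add)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection"}, "recommend_antibiotics"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are satisfied, adding its
    conclusion to the known facts, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck", "gram_negative"}, rules))
# The derived facts include 'recommend_antibiotics'.

Systems like MYCIN layered hundreds of such rules, plus certainty factors and explanation facilities, on top of a loop like this; the value lay in the encoded expert knowledge rather than in the inference machinery itself.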
With the success of expert systems, many businesses began to see promise in AI. More organizations started investing heavily in LISP machines, so named because they were built to run the Lisp programming language invented by John McCarthy. LISP machines were effective because they “had the aim of supporting complete abstractions,” doing away with some of the implementation details that burdened other programming languages. Costly though they were, stories emerged of LISP machines saving companies millions of dollars by automating tasks previously reserved for human experts.
CADR – The Lisp Machine, late 1970s, MIT Museum. (Image source: Wikimedia).
The expert system boom coincided with the 1981 announcement of the Fifth Generation Project by the Japanese Ministry of International Trade and Industry. At the peak of its postwar ‘economic miracle,’ the Japanese government set aside $850 million for “a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems.” While not solely dedicated to AI, the project set ambitious goals in machine translation, human-like reasoning in computers, and other tasks now associated with artificial intelligence. Much of the software that arose from the project, however, was ultimately rejected by the Japanese tech industry, and many of its goals never came to fruition.
Despite advancements in expert systems and the increased adoption of AI in business practices, flurries began to fall on the field once again by the mid-’80s.
At a 1984 meeting of the American Association for Artificial Intelligence, the term ‘AI winter’ came up as a topic for debate, leading Marvin Minsky and prominent AI theorist Roger Schank to warn that inflated expectations surrounding AI’s capabilities had spiraled out of control. Japan echoed this solemn attitude; government support for the Fifth Generation Project had fizzled out amid its unfulfilled aspirations. The excitement flickered out, public and private investment dried up once again, and another AI winter hit hard.
An Imminent Return?
Skip ahead a few decades – past a revolution in personal computing, the rise of the Internet, and AI triumphs like IBM’s Deep Blue defeating Garry Kasparov – and AI’s gradual thawing from its last winter has turned into a meteoric rise.
Billions of dollars are again pouring into machine learning and artificial intelligence from militaries and governments alike. This trend is most notable in China, which in 2017 announced its intention to become the world’s leading AI superpower by 2030. In parallel with government support and investment, the private sector, now more than ever, is contributing significant innovation to the field. Venture capitalists devote billions to projects, while key advances emerge from private AI labs where top researchers earn eye-popping salaries.
Audiences watched with awe as Deep Blue defeated Garry Kasparov in 1997. (Image source: Scroll.in)
Those aligned with Filip Piekniewski and Gary Marcus would view such an ecosystem as unsustainable, a disequilibrium resting on overconfidence in deep learning and disingenuous claims about AI’s capabilities.
Piekniewski believes that self-driving cars represent the most relevant example: continually touted as just a few years away, yet, he argues, less capable than we have been led to believe. As in the lead-ups to the two past AI winters, this view suggests, marketing has raced ahead of engineering, and it’s only a matter of time before the snow starts falling.
Of course, others in the AI community are more sanguine about the future. Some argue that even if AI’s capabilities have been overestimated (which is certainly up for debate), it doesn’t necessarily follow that an AI winter is around the corner.
“We haven’t yet solved even 10% of the problems we could solve with existing AI/ML techniques,” writes François Chollet, one of Google’s top neural network experts. “Even if new research were to deliver nothing from now on, there still wouldn’t be another AI winter.” Others have criticized Piekniewski for harping too narrowly on Uber’s recent spate of self-driving crashes while neglecting to mention what many consider the more consistent progress of competitors like Cruise Automation and Alphabet-owned Waymo. Finally, there’s the fact that AI research is more global than ever before. An AI winter in one part of the world need not trigger a planetary ice age.