

A Look at Gary Marcus’s Deep Learning: A Critical Appraisal
posted by Caspar Wylie, ODSC, March 14, 2018

There are many mixed opinions about the future of deep learning. Gary Marcus's paper, "Deep Learning: A Critical Appraisal," surveys both the social and the technical concerns with deep learning and examines the possibility that the field is simply hitting a wall. Because deep learning has only reached mainstream, production-grade use in the last five years, it is difficult to evaluate its place in the future, or how it needs to evolve first.
Gary Marcus’s Deep Learning Paper
The paper first looks at what deep learning does well, such as face and speech recognition. Impressive as these systems are, Marcus questions how much practical value they actually deliver.
Apple's Face ID, for example, is not a functional revolution; it makes almost no difference to our day-to-day lives, even though the technology behind it required a major redesign of the iPhone. Another example is chatbots, which some consider a failure and a disappointment. Deep learning techniques are very good at translating voice to text, but the failure comes after that: turning text into action. How often does Siri actually help? In practice, it is only consistent for the most trivial tasks.
In contrast, there is an impressive conviction among data scientists that deep learning will continue to produce magical results. Andrew Ng is quoted as saying, "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."
In section 3.5 of Marcus's paper, deep learning's transparency problem is discussed. This is the famous idea that neural networks are black boxes: the thousands of optimized weight values that somehow map questions to answers, and collectively define the model, are meaningless to us. As a result, we cannot really ask a neural network why it produced a particular output, a capability that will become more essential as deep learning grows more sophisticated. Debugging deep learning algorithms is simply not as concrete as debugging non-heuristic ones; the why and the how remain too mysterious. However, the paper does allow that this could change.
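The black-box point is easy to see concretely. Here is a minimal sketch (not from Marcus's paper; the network size and training settings are arbitrary choices for illustration): a tiny network learns XOR, and then we print its weights. The network answers correctly, but the learned weight matrix is just a grid of numbers that offers no human-readable explanation of *why*.

```python
# Hypothetical illustration: train a small 2 -> 8 -> 1 sigmoid network
# on XOR by plain gradient descent, then inspect the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized parameters (sizes chosen arbitrarily).
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for squared-error loss.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel()))  # the network's answers to the four inputs
print(W1)                     # the "explanation": 16 opaque numbers
```

Even for this toy problem, nothing in `W1` reads as "this unit detects that exactly one input is on"; scaled up to millions of weights, the opacity Marcus describes only deepens.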
Every year deep learning is pushed onto more widespread, difficult problems, and there may come a point where the data required to solve them is unknown. We still need to supply data to deep learning models, but knowing what prior knowledge a problem requires can be very difficult. All in all, Gary Marcus is excited to see what happens next, with little doubt of further success, while still acknowledging the more pessimistic side of deep learning's future.