A Look at Gary Marcus’s Deep Learning: A Critical Appraisal

There are many mixed opinions regarding the future of deep learning. Gary Marcus’s paper, “Deep Learning: A Critical Appraisal,” surveys both the social and the technical concerns with deep learning and examines the possibility that the field will simply hit a wall. Deep learning only reached mainstream, production-quality use in the last five years, so it is difficult to evaluate its place in the future, or how it needs to evolve first.

Gary Marcus’s Deep Learning Paper

The paper first looks at what deep learning does well, such as face and speech recognition. Impressive as these achievements are, their practical utility is open to question.

Apple’s Face ID is not functionally a revolution; it makes almost no difference to our day-to-day lives, even though the technology behind it required a major redesign of the iPhone. Chatbots are another example, and some consider them a failure and a disappointment. Deep learning techniques are very good at translating voice to text, but the failure comes after that: text to function. How often does Siri actually help? Really, it is only reliable for the most trivial tasks.

In contrast, there is an impressive conviction among data scientists that deep learning will continue to produce remarkable results. Andrew Ng is quoted as saying, “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

Section 3.5 of Marcus’s paper discusses deep learning’s transparency problem: the famous idea that neural networks are a black box. Specifically, the thousands of optimized weight values that somehow map questions to answers, and collectively encode a problem, are meaningless to us. We cannot really ask a neural network why it produced a particular output, a question that will only become more pressing as deep learning grows more sophisticated. Debugging deep learning systems is simply not as concrete as debugging non-heuristic ones; the why and the how are still too mysterious. However, the paper does acknowledge this could change.
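The black-box point is easy to see even at a toy scale. The sketch below (a minimal illustration assuming NumPy, not code from Marcus's paper) trains a tiny 2-4-1 network on XOR; the trained weights solve the task, yet inspecting them reveals almost nothing about why any particular answer was produced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 sigmoid network and the classic XOR dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output
    return h, out

_, out0 = forward(X)
loss_before = float(np.mean((out0 - y) ** 2))

# Plain gradient descent with backprop on a squared-error loss.
lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

_, out1 = forward(X)
loss_after = float(np.mean((out1 - y) ** 2))

# The optimized parameters are just arrays of floats: they encode the
# solution, but carry no human-readable explanation of it.
print("W1 =", np.round(W1, 2))
print("loss before/after:", round(loss_before, 3), round(loss_after, 3))
```

Even here, where the whole model fits on one screen, the only way to understand the network's behavior is to probe its inputs and outputs; the weights themselves do not explain anything. Scaling to millions of parameters only makes this worse.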

Every year deep learning is pushed onto more widespread, difficult problems, and there may come a point where the data required to solve them is unknown. We still need to supply data to deep learning models, but knowing what prior knowledge is needed can be very difficult. All in all, Gary Marcus is excited to see what happens, with no doubt of further success, while still acknowledging the more pessimistic side of deep learning’s future.

Caspar Wylie, ODSC

My name is Caspar Wylie, and I have been passionately computer programming for as long as I can remember. I am currently 17, and taught myself to write code with initial help from an employee at Google in Mountain View, California, who truly motivated me. I program every day and am always putting new ideas into perspective. I try to keep a good balance between jobs and personal projects in order to advance my research and understanding. My interest in computers started with very basic electronic engineering when I was only 6, before I moved on to software development at about age 8. Since then, I have experimented with many different areas of computing, from web security to computer vision.
