The century of two Singularities
Posted by Calum Chace, August 15, 2017
Editor’s Note: Calum will speak on AI at ODSC’s Europe-based Summit on Accelerating Businesses with AI this October 2017. Information can be found at ai.odsc.com/.
I believe the 21st century is the most interesting time to be alive – and the most important. If humanity survives it intact, our future is glorious. But that is a big “if”.
In maths and physics, a singularity is a point in a process where a variable becomes infinite, and the normal rules break down. At the centre of a black hole, the gravitational field becomes infinite, and the laws of physics cease to apply. So the word “singularity” is a superlative for disruption and transformation. I believe that in this century we will go through two of them.
The most profound one is known as the technological singularity, which will happen if and when we create an artificial general intelligence (AGI) – an AI with all the cognitive abilities of an adult human. Because AIs can be expanded and enhanced in ways that our brains cannot, the AGI will quickly become a superintelligence.
We don’t know for sure that we will be able to do this, and the task is immense. The human brain operates at the exaflop scale or above – a billion billion calculations per second. Our most powerful supercomputers are only just approaching this level now. But there seems to be no reason in principle why it cannot be done, and most AI researchers think it is a question of when rather than if. Timescales vary from a couple of decades to a hundred years or more, but they cluster in the second half of this century.
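To put rough numbers on that gap, here is a minimal back-of-the-envelope sketch (not from the article): it assumes the one-exaflop brain-scale figure quoted above, a roughly 93-petaflop top supercomputer (about the fastest machine in 2017), and an 18-month doubling in compute.

```python
import math

# Rough, illustrative figures (assumptions, not from the article):
BRAIN_SCALE_FLOPS = 1e18          # one exaflop: a billion billion calculations per second
TOP_SUPERCOMPUTER_FLOPS = 9.3e16  # assume ~93 petaflops, roughly the fastest machine in 2017
DOUBLING_PERIOD_YEARS = 1.5       # assume compute doubles every 18 months

gap = BRAIN_SCALE_FLOPS / TOP_SUPERCOMPUTER_FLOPS
doublings = math.log2(gap)
years = doublings * DOUBLING_PERIOD_YEARS

print(f"Gap to the exaflop scale: about {gap:.0f}x")
print(f"Doublings needed: about {doublings:.1f}")
print(f"At an 18-month doubling rate: roughly {years:.0f} years")
```

On those assumptions the gap closes within a handful of years, which is consistent with the claim that our most powerful machines are only just approaching this level.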
Until recently, very few people took the notion of near-term superintelligence seriously, but in 2015 we had the “three wise men” moment when Stephen Hawking, Elon Musk and Bill Gates all talked about the promise and peril of advanced AI. For time-starved journalists, “if it bleeds it leads”, so these comments were widely misrepresented as doomsaying, and almost every article about AI carried a picture of the Terminator. Then there was a backlash against the backlash, with AI researchers and others lining up to warn us not to throw the baby of AI out with the bathwater of unfriendly superintelligence.
They were right. Homo sapiens owes its dominant position on this planet to intelligence. If we create an entity which becomes a million times smarter than Albert Einstein, it could probably solve all our major problems – like poverty, war, unhappiness, and even death. DeepMind, a team of elite AI researchers in London now owned by Google, has a two-step mission statement: solve AI, then use that to solve everything else.
First there is the little matter of making sure that the first superintelligence really likes and understands humans. This is not a trivial task, but we probably have a couple of generations to complete it. We must succeed, for if we fail, extinction is not the worst possible outcome.
Before we reach the technological singularity, we will face another stiff challenge. My book “The Economic Singularity” argues that in the next few decades most humans will become unemployable because machines (AI systems plus their peripherals, the robots) will be able to do anything that we can do for money cheaper, faster and better. And unlike us, their capabilities will be improving all the time. At an exponential rate, if not faster.
Most economists say this is the Luddite Fallacy, named after the 19th-century gangs who smashed machines in England during the early industrial revolution. The economists are right to point out that so far, automation has not caused lasting unemployment. Instead it has made production processes cheaper and more efficient, creating more wealth, and therefore more jobs. They assume the new wave of automation by AIs will do the same.
Maybe they are right: the truth is, we just don’t know yet. But it seems unlikely: machines are now able to recognise and classify faces better than humans. They are catching up fast in speech recognition and they are also making rapid progress with natural language processing. These capabilities are what most people rely on to earn their daily bread – service industries now comprise by far the largest part of most developed economies. Robots are also improving quickly, and it is hard to see how most manual jobs in factories, warehouses and elsewhere will still be done by humans a few decades from now.
If you doubt this, consider the power of exponential growth. Moore’s Law is the observation that computers improve exponentially – they get twice as powerful every 18 months. To illustrate what this means, imagine taking 30 steps: you will travel about 30 metres. If you could take 30 exponential steps you would travel to the moon. To be more precise, your 29th step would take you to the moon: your 30th step would bring you all the way home. Exponential growth is incredibly powerful, and it is back-loaded. AI is impressive today, but we have seen none of its real potential yet.
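As a quick sanity check on that illustration, here is a minimal sketch of the arithmetic, assuming the first step is one metre and each subsequent step doubles in length (the figures are illustrative, not the author’s):

```python
MOON_DISTANCE_KM = 384_400  # average Earth-Moon distance (assumed figure for illustration)

step_m = 1.0    # first step: one metre
total_m = 0.0
for _ in range(30):
    total_m += step_m
    step_m *= 2  # each step doubles in length

print(f"30 linear steps:   30 metres")
print(f"30 doubling steps: {total_m / 1000:,.0f} km")
print(f"Moon and back:     {2 * MOON_DISTANCE_KM:,.0f} km")
```

Thirty ordinary steps cover about 30 metres; thirty doubling steps cover on the order of a million kilometres, the same ballpark as a round trip to the moon, which is the point of the illustration.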
Economists argue that we will work ever more closely with these immensely capable machines, contributing uniquely human characteristics like creativity and empathy. Unfortunately, most of us are not exceptionally creative in our daily jobs, and it is not true that computers cannot be creative. Creativity is the combining of two or more ideas in a novel way, and you do not have to be conscious, or human, to do that. The DeepMind system which taught itself to play Atari video games displayed creativity when it invented a new way to win at Breakout.
Machines don’t have empathy, and probably won’t unless and until we create an AGI and it turns out to be conscious. But they can fake empathy very well, and in many situations where we think we want empathy, the pretence of it is just fine. Machine therapists are proving surprisingly effective in many contexts, and robotic carers like the fake baby seal Paro are adored by many of their “patients”.
Economists argue that even if machines do take all our existing jobs, we will invent new ones which we cannot currently imagine. Virtual reality landscaper, anyone? This has happened before: the farm worker who moved to the city in the 19th century in search of work in a factory could not have imagined that his granddaughter would become a website designer.
Unfortunately, an analysis of US labour statistics reveals that 80% of the jobs done by people today existed back in 1900. More importantly, 90% of the people working in the US today are doing jobs which existed in 1900. Maybe we will invent scores of new jobs which only humans can do, but neither common sense nor past experience suggests that.
A world of widespread unemployability does not have to be a bad thing – in fact it could be wonderful. A world in which robots do all the (mostly boring) jobs could be one where humans are free to get on with the important business of life: playing, socialising, exploring, learning, and having fun. Is it really the pinnacle of human aspiration to be an actuary, or a delivery girl for Amazon?
All we have to do now is figure out how to provide an income for those who aren’t working – and a good income, not subsistence-level welfare. That may well mean we are going to need a new type of economy. Working out what that looks like, and how to get from here to there, is a serious challenge, so we had better start taking it seriously. Economists, please note.
© ODSC 2017

Calum Chace is a best-selling author of fiction and non-fiction books and articles focusing on artificial intelligence. His books include Surviving AI, a non-fiction book about the promise and the challenges of AI, and Pandora's Brain, a techno-thriller about the first superintelligence. His latest book, The Economic Singularity, addresses the prospect of widespread technological unemployment. He is a regular speaker on artificial intelligence and related technologies and runs a blog on the subject at www.pandoras-brain.com. Before becoming a full-time writer, Calum had a 30-year career in journalism and business, in which he was a marketer, a strategy consultant and a CEO. He maintains his interest in business by serving as chairman and coach for a selection of growing companies. In 2000 he co-wrote The Internet Startup Bible, a business best-seller published by Random House. A long time ago, Calum studied philosophy at Oxford University, where he discovered that the science fiction he had been reading since boyhood was actually philosophy in fancy dress.