

DeepMind Co-Founder Suggested an Interesting Way of Measuring Human-like Intelligence For AI
AI and Data Science News | posted by ODSC Team, June 22, 2023

A co-founder of DeepMind has come up with a novel way of measuring human-like intelligence in AI chatbots like ChatGPT. According to Bloomberg, Mustafa Suleyman, formerly head of applied AI at DeepMind and currently CEO of Inflection AI, dismissed the use of the Turing test.
In his view, AI chatbots should instead be tested on how well they can turn $100,000 into a million dollars. The proposal appears in his new book, “The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.” He wrote, “We don’t just care about what a machine can say; we also care about what it can do.”
He dismissed the Turing test because it’s “unclear whether this is a meaningful milestone or not.” He went on to add, “It doesn’t tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence.”
Instead, a chatbot would be given $100,000 in seed money to invest and measured on whether it can turn that seed into a million dollars. As part of the test, the bot would also research an e-commerce business idea, develop a plan for the product, and find a manufacturer. Finally, it would, of course, have to sell the item it came up with.
Though no AI can manage this today, Mustafa Suleyman believes the technology will reach this ability within the next two years.
This might sound far-fetched right now, but on reflection, it really isn’t. Over the last year, LLMs such as ChatGPT have passed graduate-level MBA courses, the bar exam, and other high-level tests. So it stands to reason that as these AI programs improve their ability to apply logic through human interactions, an AI bot devising a business concept that works is possible.
Though this might seem far off, AI chatbots and other programs are already being trained to do such work. Many users are turning to programs such as ChatGPT to write books, other computer programs, and even investment theses. With that said, would this sort of test truly be able to measure human-like intelligence?