

New Statement Signed by the Likes of OpenAI’s Sam Altman Warns of AI’s Extinction Risk
AI and Data Science News | Posted by ODSC Team, May 30, 2023

A new statement by the Center for AI Safety, a San Francisco-based not-for-profit, warns of the existential risks associated with AI. Some who signed on to this statement include OpenAI’s Sam Altman, DeepMind CEO Demis Hassabis, MIT’s Max Tegmark, Microsoft CTO Kevin Scott, and many other notable names.
In part, the statement points to the risks associated with AI and equates the potential harm to that of a nuclear apocalypse. It calls on policymakers to focus on AI and to mitigate any risks that could harm humans. This isn’t the first time scientists and other AI leaders have gone public with their concerns about AI, but it is the first time such strong language has been used.
Back in March, more than 1,000 tech leaders, scientists, and thought leaders called for a pause in the development of large language models more powerful than GPT-4. Their concerns centered on the unforeseen impacts the technology could have before society was ready to handle it.
The statement from the Center for AI Safety used much stronger language, saying in part, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
These words are strong and will, for many, invoke some of sci-fi’s most popular tropes, such as AI gone wrong and killer robots. But the Center for AI Safety appears to have this in mind and is worried that “AI’s most severe risks” could get lost in the broader noise surrounding AI.
To the group, the risks associated with AI require an “open…discussion.” For their part, Sam Altman and OpenAI announced on Twitter a new program aimed at making AI more “democratic” through ten $100,000 grants.
grants for ideas about how to democratically decide on the behavior of AI systems: https://t.co/XIARyDvI7f
— Sam Altman (@sama) May 26, 2023
Overall, it seems that OpenAI’s Sam Altman and the other signatories are not alone. Though slowly, multiple nations, including the United States, China, and members of the EU, are taking action on AI regulation. Each is at a different stage of development and seems to take aim at different issues, ranging from privacy to fraud and beyond.
It’s clear that nations are taking reports such as Goldman Sachs’ AI report seriously, as AI continues to enter multiple industries at a rapid pace. For now, the calls for policymakers to take on AI regulation continue to grow.