In an open letter, over 1,000 technology leaders and researchers are calling for a pause in AI development, citing “profound risks to society and humanity.” The letter calls for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months. Signatories include Apple co-founder Steve Wozniak, entrepreneur Andrew Yang, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, among others.
Citing a possible lack of control and accountability over future systems, the letter warns, “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” It goes on to argue that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
This request for a voluntary pause isn’t aimed at all AI systems. The group points specifically to systems caught in a “dangerous race to ever-larger unpredictable black-box models with emergent capabilities.” And if labs and researchers choose not to step back from their research, the letter’s authors ask that governments intervene and impose a pause for the safety of society. This all comes shortly after the release of GPT-4, which has already been shown to far exceed its predecessor, GPT-3.
In one week alone, users took to Twitter to show off what they have been able to do with the new LLM, with examples spanning coding, personas, and more. Also notable is that other researchers have claimed the new model shows early signs of artificial general intelligence. Additional signatories of the open letter include engineers from major tech companies such as Amazon, Google, DeepMind, Meta, and Microsoft.
In closing, the letter states that AI systems have great potential to benefit humanity’s future, with rewards that could be felt for generations. But realizing that future means not rushing into AI without first understanding its possible consequences. The authors close by saying, “We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”