The Frontier Model Forum is a new organization created by OpenAI, Microsoft, Anthropic, and Google to focus on the “safe and responsible” development of frontier AI models. This new industry body brings together four of the most influential tech giants developing AI.
According to the group, its focus will be on so-called frontier AI models, or AI models that are more advanced than what is currently available. Brad Smith, the president of Microsoft, said of the group's motivation, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. … This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Founding members of the forum have stated that their goal is to promote research in AI safety. Part of that involves developing standards for the evaluation of models, encouraging responsible development of advanced AI models, discussing trust and safety risks with policy leaders and academics, and helping develop positive uses for AI.
Those positive uses include areas such as healthcare and climate science. Membership is open to any organization that meets specific criteria. According to a blog post from Microsoft, to be a member of the Frontier Model Forum, an organization must meet the following criteria:
- Develop and deploy frontier models (as defined by the Forum).
- Demonstrate a strong commitment to frontier model safety, including through technical and institutional approaches.
- Are willing to contribute to advancing the Frontier Model Forum’s efforts including by participating in joint initiatives and supporting the development and functioning of the initiative.
This comes as calls for AI regulation grow on both sides of the Atlantic, in the United States and Europe. China is already rolling out new generative AI regulatory frameworks, with South Korea and Japan following suit with plans of their own.
A few days ago, industry leaders spoke at the White House on this topic, a sign of growing interest in ensuring that AI development is done with minimal harm. This past March, the US Chamber of Commerce, the largest business trade group in the United States, asked policymakers to get on board with AI regulation.
That came around the time Goldman Sachs released reports estimating that AI would affect hundreds of millions of jobs while also boosting global GDP. All of this comes as major tech companies pledge greater AI safety commitments, such as watermarking AI-generated content.
Editor’s Note: Responsible AI is becoming a critical topic in AI development, and if you want to stay on the frontlines of the latest developments, then you need to hear from the industry leaders driving the charge. You’ll get that at the ODSC West 2023 Machine Learning Safety & Security Track. Save your seat and register today.