OpenAI and DeepMind Employees Raise Alarms of AI Risks

A group of current and former employees from AI giants, including Microsoft-backed OpenAI and Alphabet’s Google DeepMind, have raised alarms over the potential risks posed by emerging AI technologies. In an open letter published on Tuesday, as reported by Reuters, the group highlighted the financial motives of AI companies as a significant obstacle to effective oversight.

The letter, signed by 11 current and former employees of OpenAI and two from Google DeepMind, expresses concerns that existing corporate governance structures are inadequate for addressing these issues.

The letter states, “We do not believe bespoke structures of corporate governance are sufficient to change this.” It goes on to underscore several risks associated with unregulated AI, ranging from the spread of misinformation to the loss of control over autonomous AI systems and the exacerbation of existing inequalities.

It starkly warns that these risks could potentially lead to “human extinction.” This echoes a similar claim made last year, when more than a thousand AI professionals stated in an open letter that the risks associated with advanced AI needed to be better understood.

One specific concern raised by the researchers involves image generators from companies like OpenAI and Microsoft, which have been found producing photos containing voting-related disinformation, despite existing policies against such content.

Another critical point in the letter is the “weak obligations” AI companies have to share information about the capabilities and limitations of their systems with governments. The authors argue that these firms cannot be relied upon to voluntarily disclose such crucial information.

This lack of transparency and accountability is seen as a significant hurdle in ensuring the safe and ethical development and deployment of AI technologies. Due to this, the open letter calls for AI firms to establish processes that allow current and former employees to raise concerns about risks without fear of retribution.

Specifically, it urges companies not to enforce confidentiality agreements that prevent employees from speaking out about potential issues. This move is seen as a necessary step to improve internal oversight and accountability within the industry.

All of this comes after OpenAI announced that it had disrupted five covert influence operations attempting to use its AI models for “deceptive activity” across the internet. It’s no surprise that black-hat actors are hard at work using AI to enhance their own illicit capabilities.

As mentioned, the concerns raised in this open letter aren’t new. Over the past two-plus years, professionals, tech leaders, and others have welcomed AI into multiple domains while also urging caution, since AI’s effects on labor and other areas aren’t yet fully understood.

As of this reporting, OpenAI and DeepMind have not commented on the letter.

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
