Google Pledges to Fix Gemini Calling Responses “Unacceptable”

It’s been a rough week for Google: after the launch of Gemini, users found major issues with the large language model. In many cases, hallucinations in both text and image responses showed clear bias and historical inaccuracies.

According to Reuters, CEO Sundar Pichai told employees in a note that the model was producing “biased” and “completely unacceptable” responses. On social media, particularly on platforms such as X, users mocked the model’s wildly inaccurate responses.

In short, the CEO said that the tool’s responses had offended users and showed a clear bias. Pichai continued in the note, “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement in a wide range of prompts… And we’ll review what happened and make sure we fix it at scale.”


Due to the required fixes, Google is planning to relaunch Gemini AI in the next few weeks. For now, several features, such as image generation, have been disabled, as generated content being shared online was seen as inappropriate for a model that claims to be free of bias.

This is a clear blow for a company that has been racing over the last year to play catch-up with OpenAI’s chatbot, ChatGPT. In other areas, Google has found success with AI integrations, such as those across its Google suite of products: Gmail, Docs, and Sheets.

But Gemini’s failure this week is a nasty blow to the tech giant. It’s clear that the company likely needs to revisit its red-teaming and other in-house bias-monitoring SOPs, as the model found it difficult to produce historically accurate images.

Images were only one issue, though. The model was also found to have produced offensive text for users who asked basic questions about current and historical figures. All in all, this debacle from Google is a clear signal to other AI firms of the importance of red-teaming and engineer-focused testing to ensure models behave as expected.

If you’re interested in responsible AI, ODSC East 2024 has an entire track dedicated to providing practical and ethical frameworks for developing AI technology.

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
