
Google Pauses Gemini’s Image Generation of People to Fix Historical Inaccuracies


UPDATE 2/22: Early Thursday morning, Google said it had disabled Gemini’s ability to generate any images of people. A quick PCMag test of Gemini on a Mac using the Chrome browser today delivered the following message when Gemini was asked to create an image of a person, historical or otherwise: “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”


Original story 2/21:
Do AI-generated images have to be historically accurate, down to the racial identity of the characters created? Some users of Google’s generative AI tool Gemini think so, and have taken to social media platforms like X and Reddit to complain.

Google Senior Director of Product Jack Krawczyk, who’s overseeing Gemini’s development, wrote Wednesday that the Gemini team is working to tweak the AI model so that it generates more historically accurate results.

“We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Krawczyk said.

The product director emphasized in the same post that Gemini's image generation was designed to "reflect our global user base," and that Google takes "representation and bias seriously," suggesting that the results may have been generated as part of the AI's effort to be racially inclusive.

Some Gemini users posted screenshots claiming that Gemini depicted a Native American man and an Indian woman as an 1820s-era German couple, generated an African American Founding Father, portrayed Asian and Indigenous soldiers as members of the 1929 German military, and produced diverse depictions of a "medieval king of England," among other examples.

“Historical contexts have more nuance to them and we will further tune to accommodate that,” Krawczyk said, adding that non-historical requests will continue to generate “universal” results.

But if Gemini is altered to enforce stricter historical realism, it could no longer be used to create historical re-imaginings.


Generative AI tools more broadly are designed to create content within certain parameters, drawing on specific data sets. That data can be flawed or simply incorrect. AI models are also known to "hallucinate," meaning they may fabricate information just to provide a response to users. If AI is used as more than a creative tool (for educational or work purposes, for example), hallucinations and inaccuracies pose a valid concern.

Since generative AI tools like OpenAI’s ChatGPT launched in 2022, artists, journalists, and university researchers have found that AI models can display inherent racist, sexist, or otherwise discriminatory biases with the images they create. Google has explicitly acknowledged this problem in its AI principles, and says it’s striving as a company to avoid replicating any “unfair biases” with its AI tools.

Gemini isn’t the only AI tool that’s given users unexpected results this week. ChatGPT reportedly went a bit off the rails Wednesday, providing nonsensical responses to some user queries. OpenAI says it’s since “remediated” the issue.
