
Google nerfs Gemini’s image generation capabilities following blunders from AI model



Google will not let you generate images of people in Gemini for the time being while it works on fixing some glaring issues.


Key Takeaways

  • Google’s Gemini AI chatbot faces backlash over inaccurate and potentially offensive image generation capabilities.
  • The company is temporarily nerfing the feature to address bias and sensitivity issues discovered in its use.
  • Google plans to re-enable image generation of people in Gemini after fixing the problems, but warns of ongoing imperfections in AI models.


It’s clear that the AI chatbot market is getting saturated as tech companies flood the space with multi-purpose conversational interfaces powered by large language models (LLMs). A few weeks ago, Google announced that it was rebranding Bard as Gemini and offering an advanced version of the model via a paid subscription. Gemini also packs image-generation capabilities based on the Imagen 2 AI model, the same model behind the ImageFX utility, but Google has now decided to nerf access to this feature following widespread reports of inaccurate and potentially offensive imagery being generated by the service.


What’s the issue with Gemini’s image generation?

Google has strongly emphasized that Gemini is a separate product that isn’t tied directly to Search, its other AI models, or its other services. When it built the app, it manually tuned the underlying model so that it would generate diverse images rather than showing bias towards a particular group or ethnicity. This was done to ensure that users from all over the globe could use the app without feeling underrepresented.


However, Gemini didn’t account for use cases where users specifically wanted a certain ethnicity to be present in an image, which led to prompts like “Black teacher in a classroom” generating inaccurate or offensive images as the model tried to counteract a characteristic the prompt had deliberately specified. In some cases, it even refused to generate images for harmless prompts, marking them as sensitive.

What’s next for Gemini?

Following these reports, Google has decided to shut off the ability to generate images of people in Gemini. The company says it will re-enable the feature once it has fixed the issues in its implementation and performed extensive testing. That said, it has cautioned users that AI models are not perfect due to their tendency to hallucinate, so issues will likely still pop up from time to time even after a fix is deployed.



