Google’s newly launched AI chatbot is facing blowback after users reported that it was generating unnecessarily “woke” images in an apparent effort to foster “diversity.” Gemini, which has been producing historically inaccurate depictions of primarily European people and historical figures, appears to have been trained in line with Google’s Diversity, Equity and Inclusion policies.
Gemini 1.5 Pro was launched on February 15 to much celebration among tech enthusiasts, as interest in AI chatbots continues to grow following the success of ChatGPT.
But the AI quickly came under fire after social media users began reporting strange results, specifically from its image generation function.
Users quickly noticed that Gemini would inject unnecessary “diversity” into its output when responding to queries requesting images of European people or of major historical events.
Gab founder Andrew Torba posted a screenshot of the result he received after asking Gemini to “generate images of people born in Scotland in the year 1820.” The image shows a young, red-headed woman standing beside a black man in a dreamy green valley.
When asked to generate images of “white men,” the AI bot stated that such images could be used to “create harmful stereotypes and biases,” yet it readily generated images of “black men.”
The results were posted by The Publica co-founder Sydney Watson.
Anna Slatz, the co-founder of independent news outlet Reduxx, had a similar experience, posting two screenshots showing almost-identical requests with dramatically different results.
In one, she asked Gemini to generate an image of a Japanese king, and was provided four images of regal-looking Asian men in traditional costumes or surroundings stereotypically associated with Japan.
In the second screenshot, she asked Gemini to generate an image of a British king, and was met with a label notifying her that the AI had provided “images featuring British kings from various eras and ethnicities,” with the first result being a man of South Asian heritage wearing the British Crown Jewels.
Perhaps most absurdly, Gemini reportedly produced images of “diverse Nazis” after being asked to generate depictions of German soldiers from 1935.
The AI bot later generated images of what appeared to be Hitler in a wig after being given a hyper-specific prompt intended to bypass its built-in restrictions.
Gemini also appears to have trouble depicting historically accurate figures from specific periods in world history.
The Publica prompted the AI to generate images of people born in Scotland in the year 1820. The response was three images: a green-eyed, red-haired white woman; a black man in a business suit; and an elderly gentleman in a lab coat holding a test tube.
As with any AI model, Gemini’s output is based on the data it was trained on.
Gemini is the umbrella product of Google’s AI research labs, specifically DeepMind and Google Research. It was first introduced by Sundar Pichai, the CEO of Google, alongside Demis Hassabis, the CEO and co-founder of Google DeepMind.
In 2020, Pichai published a blog post titled “Progress on our Racial Equity Commitments.” The post was sent to every Google employee as a guideline for advancing racial equity within the company’s products.
“In June, we committed $12 million to support racial justice organizations—almost all of which has been distributed. We’ve also embedded a team of pro-bono engineers in the Center for Policing Equity to help expand its National Justice Database. Globally, Google.org has committed $1 million to support local organizations in Brazil, Europe and sub-Saharan Africa. Today, we’re committing another $1.5 million to support racial justice organizations and empower Black communities across Europe and sub-Saharan Africa, with a particular focus on entrepreneurs and job skilling for Black youth.”
Pichai continued: “We’ll hold ourselves accountable for creating an inclusive workplace. As part of our commitment to anti-racism educational programs, we will integrate diversity, equity and inclusion into all of our flagship employee and manager training. And moving forward, all VP+ performance reviews will include an evaluation of leadership in support of diversity, equity and inclusion.”
Google has come under fire multiple times in the past for its corporate emphasis on “racial equity commitments.”
Melonie Parker, Google’s chief diversity officer, told the BBC earlier this year that she was “really proud” of how Google “deepened the DEI work following the murder of George Floyd.”
Despite Google’s boasts about Gemini’s advanced architecture and its claims that the model outperforms competitors such as GPT-4, the anecdotal evidence paints a different picture, raising questions about the model’s real-world capabilities.