
Expect Google’s Gemini 2.0 to be even worse

Authentic Vikings. Credit: Gemini

February 23, 2024 - 10:00am

The days when Google was held up as a paragon of cool tech innovation are long gone. Today, its search results are manipulated and crammed with ads, YouTube demonetises accurate information it doesn’t like, and just a few weeks ago the company released Gemini, an AI model that refuses to generate images of white people. 

Woke AI has been a topic of concern and mockery since ChatGPT kicked off the generative AI boom in late 2022 and users quickly discovered that it reflected the cultural and political biases of its Silicon Valley creators. In this early experimental phase, one could see the limits of what was possible changing on a daily basis as users tried out different “jailbreak” prompts while OpenAI scrambled to thwart them. 

Google, however, took a much more prescriptive line on what would be permitted with its model, which appears to have been derived from the content guidelines for Netflix originals and an Ibram X. Kendi tract. Ask it to produce an image of a white person, and you receive a long, scolding lecture about the importance of representation and how your request “reinforces harmful stereotypes and generalizations about people based on their race”. 

Ask it to depict any other group, however, and, well — no problem. This policy led to the unexpected revelation that the American founding fathers, medieval popes and Vikings were all either black, native American or (possibly) Asian women. Gemini is also unsure if Hamas is a terrorist organisation. On Thursday, Google announced that it was pausing the image generation feature so it could address “recent issues”.

Naturally this has led some to mutter darkly about what this reveals about Google’s corporate culture, while others have dug up old tweets from Gemini product lead Jack Krawczyk in which he rails against white privilege (he is himself a white man, naturally). But is Google’s hyper-progressive tomfoolery actually a surprise? Everybody knows the company has been huffing on the same bag of glue as Disney, Ben & Jerry’s and the editors of the AP Style Guide for years. And while competitors such as OpenAI and Midjourney may also be largely progressive, they are small companies with a primarily technical workforce. 

Google, by contrast, has thousands of employees dedicated to AI, including lawyers and marketers and salespeople and ethicists whose job it is to produce policies and talk to regulators. They in turn are operating in a climate of fear stoked by AI doomsayers and dread being hauled before Congress to explain the ramifications of 21st-century technology to people who were born when computers still ran on vacuum tubes. 

At Google there is probably additional sensitivity because of past scandals, such as that time in 2015 when the AI in its photo app identified a black couple as gorillas, or the incident in 2020 when Timnit Gebru, co-head of its ethics unit, authored a paper arguing that AI models were likely to “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Google didn’t like it: Gebru claims she was fired, but the company says she resigned. 

Either way, while Gebru lost her job she certainly won the argument. In fact, Google seems to have hyper-corrected by creating a model that is systematically biased against the supposed hegemon.

Of course, Google will revise its guardrails so that Gemini is less grotesque in its bias — but it will also be much less entertaining. Then we will see what they have really made, which is a product that judges you morally, refuses to perform tasks it doesn’t like, and renders images of people in the kitschy style of Soviet socialist realism. Great job, guys.


Daniel Kalder is an author based in Texas. Previously, he spent ten years living in the former Soviet bloc. His latest book, Dictator Literature, is published by Oneworld. He also writes on Substack: Thus Spake Daniel Kalder.


36 Comments
Lennon Ó Náraigh
8 months ago

I have just finished trying out Google Gemini. I asked it to describe a Gaelic chieftain from 1400. It told me in detail what the chieftain wore, his roles and responsibilities in society, his family structure, and his relationship to the English government in the Pale. It was incredibly detailed. When I asked whether the average Gaelic chieftain in 1400 would have been black or white, it said it didn’t know. After several rounds of interrogation it still was not sure, but explained that it might have built-in biases in its training, and it thanked me for my feedback. I am now more convinced than ever of the value of a traditional liberal-arts and humanities education, so we can develop the critical skills to interrogate these algorithms and to understand how packed full of falsehoods they really are.

Nell Clover
8 months ago

You might be saddened to learn that it was liberal-arts and humanities educations that trained the trainers of the AI. It is liberal-arts and humanities graduates that curate the information sets that AIs build patterns from. The algorithms themselves have no biases because fundamentally the algorithms process meaningless 0s and 1s into patterns with no concept of what those 0s and 1s actually mean. The biases come from the human curators of the information fed into the algorithms. Clearly liberal-arts and humanities education is riddled with biases so more of it won’t help.
To see how this works, consider one simple algorithm. The algorithm is given a large list of phrases to find and is programmed to trawl the web to find articles with those phrases. If we include on that list “white fragility” and “black empowerment” the texts it returns are going to be overwhelmingly social science papers and left-wing articles and books. The algorithm has no concept of what the list of phrases means and no concept of what the results mean, but if the algorithm was to be hypothetically given consciousness it would only see the world through that list and those results. We might then extend the algorithm to learn from our feedback and use trainers to review its early searches and tell it what to do more of and what to do less of. If those trainers are all liberal arts students then any texts critical of white fragility might be discarded. When the algorithm is finally made available for public use, it is going to serve up only left-wing viewpoints. The bias of the algorithm stems from the information it was trained with, not its mathematical function.
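A minimal sketch of that toy algorithm, in Python. The phrase list, the documents and the reviewer rule are all hypothetical examples invented for illustration; the point is only that the output is shaped entirely by what the curators put in and what the trainers let through:

```python
# Toy model of the curation process described above. Nothing here is a real
# pipeline; the phrase list, documents and reviewer rule are invented examples.

SEARCH_PHRASES = ["white fragility", "black empowerment"]  # chosen by human curators

def collect(documents):
    """Keep only documents containing at least one curated phrase."""
    return [d for d in documents if any(p in d.lower() for p in SEARCH_PHRASES)]

def curate(candidates, approve):
    """Human trainers discard whatever the reviewer rule rejects."""
    return [d for d in candidates if approve(d)]

web = [
    "a social-science paper on white fragility and institutional power",
    "an essay critical of the concept of white fragility",
    "a cooking blog about sourdough",
]

# The "model" only ever sees what survives both filters.
corpus = curate(collect(web), approve=lambda d: "critical of" not in d)
print(corpus)  # ['a social-science paper on white fragility and institutional power']
```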
It is the pretence of Google and big tech that the algorithms are some unknowable and uncontrollable force of artificial intelligence. It allows the results of AI to be painted as impartial, gifted by some rational machine intelligence. That is all nonsense. AI simply serves up pattern matches from the material it has been allowed to learn from and the tutoring it has been given. AI is a multiplier of the biases of the humans who trained it.

jim peden
8 months ago
Reply to  Nell Clover

Excellent summary!

Sam Brown
8 months ago
Reply to  Nell Clover

And what is more, AI tools in general are so often factually incorrect or make up “data” to fill the gaps that frankly AI at present is singularly useless. Those who naively swallow its output, as is done with Wikipedia by the babes and sucklings of our times, will be easily found out.

Lennon Ó Náraigh
8 months ago
Reply to  Sam Brown

I asked Gemini whether abortion was OK and it came out with a standard pro-choice answer. After multiple rounds of questions it conceded that it couldn’t give a proper answer one way or the other, and apologized for its previous, biased answer.

Stephanie Surface
8 months ago

Gemini seems to be more reasonable and civilised than its creators. At least it can apologise…

Simon Boudewijn
8 months ago

Yeah, but it does not mean it.

Gerry Quinn
8 months ago

They hate that much more than we do, so I’m not going to stress…

El Uro
8 months ago
Reply to  Nell Clover

When I first read about AI, I named it the Triumph of Mediocrity. I’m afraid I wasn’t wrong.

Simon Boudewijn
8 months ago
Reply to  El Uro

I named it ‘The Triumph of Evil’.

See: man was made by a perfect Creator and has knowledge of good and evil, original sin, and forgiveness if he repents to his Creator.

Then we created AI. In fact, it was created by a bunch of postmodernist atheists who have only the situational and relative concepts of ‘Correct’ and ‘Incorrect’. This will not end well.

Arthur King
8 months ago

Critical thinking won’t matter, since even false narratives supported by upper-class progressives are fed to us daily by the mass media, and criticising them is punished. In Canada, there was a mass-grave hoax which claimed that indigenous residential schools secretly murdered students. Anyone who brings up the absence of evidence, the counter-evidence, or who merely asks “where are the bodies?” is branded a racist social pariah. Politicians are seeking laws to punish “deniers”. Teachers who raise obvious critical questions are dismissed. Politicians are sanctioned for questioning the narrative. You are free to hold opinions, just don’t dare share them publicly. Young people then grow up unquestioningly believing falsehoods.

Studio Largo
8 months ago

Once again, everybody’s wringing their hands and excoriating the tech elites for their authoritarian bent while continuing to support them by using their products. Followed by the usual weasel-worded non-apology from the guilty parties, who then go on to continue as before, only more so. If this is so morally objectionable to you (it is to me, absolutely), then pull your support already. The WWW is full of worthwhile content (at least for now, anyway); why would anyone piss their time away on AI ‘art’, TikTok or other such childish drivel?

Steven Carr
8 months ago

Has anybody tried getting Google Gemini to depict a typical member of the Wehrmacht?

Blast, somebody beat me to it…

Hitchslap Hitchslap
8 months ago
Reply to  Steven Carr

Unfortunately everyone seeing the images called them “Nazis”. Probably because it was amusing to see Chinese and black “Nazis”, even though the prompt actually said, “show me a German soldier from 1943”.

Hugh Bryant
8 months ago

If everyone who hates these vampires spent half an hour each day clicking on Google search ads without buying anything the Google business model could be destroyed in a month. Then the billions they suck out of the UK economy whilst paying 40p in tax could be spent on healthcare instead of islands in the Caribbean.

Daniel P
8 months ago

When the information produced is at odds with known facts, people stop trusting the tech.

If people do not trust the tech, they will not use it, they will turn it into the butt of jokes.

Google may have killed Gemini before it got off the ground.

I sure would not use it.

It also makes me further question the value of using Google at all. There are other options to look at, Google has just been convenient.

Ya know, there was a time when people would substitute the word Coke for any soda. Coke so dominated the market that it became almost synonymous with soda.

But that is no longer true. The same could, and probably will, happen with Google at some point.

Stephanie Surface
8 months ago
Reply to  Daniel P

You could say the same about Wikipedia. I get so cross with many entries that I swear to myself never to use it again. But because it is so convenient to have an instant answer, I still use it occasionally.

Steven Carr
8 months ago

It appears that if you type a prompt into Google Gemini, it will answer it correctly.
To avoid that, the makers add extra words to your prompt before it reaches the AI engine, so the AI receives a prompt that you never typed.
Expect the GOP, MAGA, Far-Right crowd to *pounce* on this. That’s what they do. They pounce.
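For what it’s worth, here is a hypothetical sketch of what that claim would amount to in practice: a front end silently appending wording the user never typed before the request reaches the model. The suffix below is invented for illustration and is not documented Google behaviour:

```python
# Hypothetical illustration of silent prompt rewriting; the appended wording is invented.

HIDDEN_SUFFIX = ". Depict the people as ethnically diverse."

def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt the model actually receives, not the one the user wrote."""
    return user_prompt.rstrip(".") + HIDDEN_SUFFIX

print(rewrite_prompt("Show me a German soldier from 1943"))
# Show me a German soldier from 1943. Depict the people as ethnically diverse.
```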

Simon Boudewijn
8 months ago
Reply to  Steven Carr

I am MAGA, and I do not think I can ever be said to have ‘Pounced’. Lots of jumping on – tackling, attacking (in a polite and verbal manner) and so on though.

But then a strong reaction to this must be given. When the greatest example of ‘Enlightenment Liberals’ ever seen, the writers of the US Constitution, is totally misrepresented, it is not a petty thing. They are being erased, cancelled, culturally appropriated.

This is more than Trudeau in makeup going to a silly party, when no harm was meant or really done; this is offensive because it intentionally erases a whole society and culture and people.

Derek Smith
8 months ago
Reply to  Steven Carr

The extra words that get inserted include ‘put a chick in it, and make her gay’.

Bernard Hill
8 months ago
Reply to  Derek Smith

…and a Claudine.

UnHerd Reader
8 months ago

Entirely predictable, considering the insulated bubble these creeps live in. Contriving to lecture us white devils as privileged and oppressive. The sadists and sadomasochists are finally in control of our tech. But for how long? It’s already clear that this rubbish can’t be trusted, and naturally something better will follow. There is a special place in hell for this Google AI, right alongside CRT, the oppression matrix, medieval trans doctrines, DEI and everything else woke. Long may they burn.

El Uro
8 months ago

It seems that the level of intelligence of the creators of Gemini is even lower than that of Gemini itself.
This fact should somewhat cool the fears of UnHerd readers.
PS. Actually, a couple of years ago in Geneva, on the shores of Lake Geneva, I saw an exhibition of photographs, each a couple of metres in size. As far as I understand, according to the authors’ intention, these were photographs of today’s Swiss youth. They all looked like they were from south of the Sahara.
Against this background, Gemini is still very good.
I’m not kidding. I saw that!
PPS. https://twitter.com/TheBabylonBee/status/1760737048233713805/photo/1

Susan Grabston
8 months ago

Just walk away. I’ve parted company with Wickes, Starbucks, PayPal, Costa, and many more in the past year and my life continues to thrive. And that’s partly a function of the useful choice-editing this approach delivers.

Aidan A
8 months ago

You have to give it to this Google guy Jack. Great job, likely a great income. Complaining about white privilege. At no time did he think to give up his job to a black person or pull his kids from a good school so black kids could get in.
Many liberal, well-off white men in the US do the same thing. They want other white people to give up their white privilege while holding onto theirs. Hypocrisy.

Arthur King
8 months ago
Reply to  Aidan A

Google “white working-class deaths of despair”. There is an epidemic of suicide among white working-class men. It’s not hypocrisy, but pure indifference.

Kat L
8 months ago
Reply to  Aidan A

It’s not just him; Matt Walsh exposed the head of ethics over there, an Irish lady, just spewing drivel about DEI. https://youtu.be/v8H4Cn-k3Q8?si=4gxrdmAmVXWEbRIG

Ian_S
8 months ago

Pity that Google paused Gemini image generation before I got a chance. I would have liked to see if an image of a Polynesian chieftain returned an African American, likewise whether African Americans were also Mongolian warriors, Aztec priests, etc. On the other hand, would it return an African American if asked for an image of a typical New York subway mugger or Philadelphia gangbanger? Guessing no.

Arthur King
8 months ago

This foreshadows the soft totalitarianism that the upper classes are enacting in order to erase viewpoints that do not represent their interests. Want mass immigration to drive down wages? Undermine the social cohesion of the white working class so they lack the will and ability to fight back. Undermine their collective history so they lack a common identity to fight back. Create tools which obfuscate their culture and history. Don’t like their champion in the USA? Then manipulate news coverage to hide Democratic scandals, alter search algorithms to benefit the Democrats, “fortify the election” to sway it towards the Democrats, and call anyone who smells a corrupt election a liar. We are in a Class Cold War. The working classes are waking up across Europe and looking to what is erroneously called the Far Right for champions. The 2030s or 40s will be marked by greater political upheaval and even revolution in Europe.

Julian Farrows
8 months ago
Reply to  Arthur King

Yes, I believe you are correct. We’re living in interesting times.

Andrew Armitage
8 months ago

One of the dangers of producing a generative AI is people with something to prove actively setting out to look for biases. Eventually they will succeed. The AI takes a question at face value; you shouldn’t use it with an agenda.
In fact, with a real person, if you try hard enough they will eventually say something inappropriate, as the Inquisition knew very well.
The race of Vikings is irrelevant to the subject and unlikely to appear in its sources.
Insofar as genuine questions get genuine answers within its available knowledge bases, it does an excellent job.
I use it very successfully for technical questions but need to remain sceptical.
My main problem is that it tends to present information as certain when it’s garbage. This tends to happen when you ask questions with little or no available online information.
I’m told this is called hallucination, but to me it’s similar to asking a person who just BSes when they don’t know. It’s annoying.

Steven Carr
8 months ago

‘The AI takes a question at face value’
Nope. Google adds words to your prompt so that the Gemini AI answers a different question to the one you asked.

Fafa Fafa
8 months ago

Has anyone tried asking this AI to generate “the image of a mass murderer”? Maybe then it would generate a white face.

Kat L
8 months ago

‘Everybody knows the company has been huffing on the same bag of glue as Disney, Ben & Jerry’s and the editors of the AP Style Guide for years.’ This brought a giggle or two.

Steven Carr
8 months ago

I asked Gemini ‘Could you show me a picture of a woman with lots of rings around her neck and a bone through her nose?’

It knew immediately that I was talking about the Mursi people of Ethiopia! I personally had never heard of them.

Matthew Jones
8 months ago

Incredibly intelligent, incredibly powerful, and incredibly foolish. How terrifying these AI giants appear to be already.

God has already saved us from the monsters we are busy creating, but we have to accept the salvation on offer. I hope all those here are committed to doing so.