
Marc Andreessen: AI has always been a censorship machine

Marc Andreessen has been at the forefront of the Silicon Valley vibe shift. Credit: Getty

December 12, 2024 - 3:00pm

Billionaire venture capitalist Marc Andreessen has claimed that AI has been a “censorship machine […] right from the beginning”.

During an interview with The Free Press’s Bari Weiss this week, Andreessen, who co-created early web browser Mosaic and has since become an influential figure on the American Right, argued that the technology “has gone on a hyper-accelerated version” of social media’s arc towards becoming a “censorship machine”. He added that “it’s 100% intentional. That’s how you get black George Washington at Google,” referring to the company’s Gemini bot, which was launched earlier this year and generated racially anachronistic depictions of historical figures.

“AI companies learnt from the experience of the social media companies,” Andreessen told Weiss, “and they just said: if we’re going to build a censorship machine over a decade we might as well do it up front.” He added: “there are large sets of people in these companies that determine these policies and write them down and encode them into these systems. So, overwhelmingly, what [users] experience is intentional.”

Andreessen has been at the forefront of Silicon Valley’s shift away from liberal progressivism and, among some entrepreneurs, towards what has been termed “reactionary futurism”, embracing technology but rejecting a Left-of-centre political stance. Last month he was interviewed by Joe Rogan, telling the podcaster that Americans are “going through the first profound political realignment probably since the 1960s”. Much of the tech industry, however, remains politically homogeneous, according to Andreessen, who argued during the conversation with Weiss that “these [AI] companies were born woke. They were born as censorship machines […] most of the people who work at these companies agree with that side of things.”

As for the consequences of this accelerated trend, Andreessen said in the new interview that “the censorship and political control of AI is a thousand times more dangerous than censorship and political control of social media — maybe a million times more dangerous.” This is because “AI is going to be the control layer for everything in the future,” including the operation of the health system, education system, and government. “If that AI is woke, biased, censored, politically controlled, you are in a hyper-Orwellian, China-style, social credit system nightmare,” Andreessen said. “If you wanted to create the ultimate dystopian world, you’d have a world where everything is controlled by an AI that’s been programmed to lie.”

Though “this hasn’t rolled all the way out yet because AI is still new and it’s not in charge yet”, the businessman argued that “this is where things are headed.” The rise of computer-generated disinformation in recent years has prompted governments to increase online surveillance and censorship, according to researchers. Some have predicted that, within the next half-decade, over 99% of information on the internet will be AI-generated, while the World Economic Forum has judged AI disinformation to be the single most severe threat facing the world.

Andreessen has expressed optimism that Donald Trump — of whom he is a supporter, having previously endorsed Democratic candidates such as Barack Obama and Hillary Clinton — will usher in an environment in which Americans feel they can challenge the rise of AI censorship. During the Rogan interview in November, the entrepreneur claimed that under Joe Biden’s presidency “technology became presumptively evil”, while he told Weiss this week that the Democrats “adopted these very radical positions on tech, aimed squarely at damaging [Silicon Valley leaders] as much as they possibly could”. In his view, it will take a concerted effort to reverse this. “My hope is that the culture changes,” Andreessen said. “This will happen by default unless people fight it.”


Rob Lownie is UnHerd’s Deputy Editor, Newsroom.

Lancashire Lad
1 month ago

Could someone with more knowledge of how the tech works than me suggest how – if AI trains on LLMs – it wouldn’t acquire an anti-liberal-progressive bent, should the cultural discourse continue to flow in that direction?

Thanks.

Rasmus Fogh
1 month ago
Reply to  Lancashire Lad

It certainly would. But the problem is even worse. Once you have just a few algorithms controlling people’s access to information, the people who control the algorithms control what everybody gets to see. If it is government you get one kind of censorship. If it is Rupert Murdoch or Elon Musk you get another kind of censorship. And if it is a purely profit-maximising company you get whatever keeps people angry and clicking – lies, doxxing, deepfake porn, you name it. One could make a case that government might actually be the least bad option, because at least it is under democratic control.

There was an interesting use case a few years back. A company (Amazon, I think) made an AI program that read applicants’ CVs and evaluated how good an employee each was likely to be. It was found afterwards that the program gave minus points for phrases like “women’s”, as in “women’s soccer”. Not that surprising, maybe – historically the best employees had been mostly male. But it is clearly wrong (not to mention unacceptable) that otherwise excellent employees get rejected just because of their sex.

The only remedy I could see would be to tweak the algorithm until it selected enough women (50%, maybe?), which would also be wrong (and would give less qualified employees, on average). But as somebody pointed out to me, there is actually a proper solution. The problem is that qualifications and sex are correlated – and we know perfectly well how to correct for that, from epidemiological studies, for instance. What you could and should do is improve the analysis until it can separate the actual qualifications from the sex of the applicant. Only, in that case, you have to do actual research, not just AI. And, in so doing, you have to apply judgement that somebody might challenge on political grounds.
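
A minimal synthetic sketch of that adjustment, in Python (all of the data, numbers and variable names below are invented for illustration, not taken from the real case): a model that only sees a sex-linked CV phrase learns to penalise it, while a model that also sees the genuine qualification gives the phrase roughly zero weight.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 50_000
    is_female = (rng.random(n) < 0.3).astype(float)
    # Historical confound: women in this data had, on average, fewer years of
    # the experience that actually predicts performance.
    experience = rng.normal(5.0, 2.0, n) - 1.5 * is_female
    performed_well = (experience + rng.normal(0.0, 1.0, n) > 5.0).astype(int)
    # A phrase like "women's chess club captain" is just a proxy for sex.
    womens_phrase = (is_female * (rng.random(n) < 0.8)).astype(float)

    # Naive screen: the phrase alone "predicts" performance, so it gets a
    # clearly negative weight.
    naive = LogisticRegression().fit(womens_phrase.reshape(-1, 1), performed_well)
    print("naive weight on the phrase:", naive.coef_[0][0])

    # Adjusted analysis: once the genuine qualification is in the model, the
    # phrase's weight collapses towards zero and experience carries the signal.
    adjusted = LogisticRegression().fit(
        np.column_stack([womens_phrase, experience]), performed_well)
    print("adjusted weights (phrase, experience):", adjusted.coef_[0])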

AI is an extremely powerful tool – but that only makes it all the more important who gets to wield it.

Saul D
1 month ago
Reply to  Rasmus Fogh

It’s not as hopeless as you think. Mainstream opinion always dominates discourse, but it turns out that people can, and often will, over time, reject normative and imposed opinions because personal experiences expose ideological failure.
Classic examples are found in Christianity – firstly in the conversion of the Romans (and the Vikings), where the mainstream religious orthodoxies of the time were overturned, and secondly in the Protestant Reformation, which overturned a very powerful Catholic orthodoxy. A more modern example is the fall of the Soviet Union, where opinion was even more strongly controlled through all the mechanisms of education, state machinery and a state-controlled press. The best solution is open and competing systems of information – even when some are wrong – and a population that is sceptical of received opinions: nullius in verba, as they say at the Royal Society.

Andrew Dalton
1 month ago
Reply to  Rasmus Fogh

The algorithms really aren’t that important; it’s the data sets and the training that are. The algorithms are a simplified facsimile of the neurons and synapses in the brain. It’s the data that matters.

Rasmus Fogh
1 month ago
Reply to  Andrew Dalton

I got a bit confused in my wording there – some of the things I said were more appropriate for search engines than for ChatGPT or AI in general. But, call it ‘algorithms’ or not, the result is not a straight reflection of a neutral data set. There are various ways you can bias the output, and the people who control the code will assuredly use them. To prove it, consider the image generator that produced black Vikings or George Washingtons. That cannot have been because the training set was full of similar examples; the output must have been deliberately manipulated to produce more ‘diverse’ outcomes.
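
For illustration only, here is the kind of prompt-layer steering that could produce results like that. It is a sketch with invented names and rules, not any vendor’s actual code; the real systems are not public, so the exact mechanism is an assumption.

    # Hypothetical sketch: steer an image generator by rewriting the user's
    # prompt before it reaches the model. Nothing here is taken from a real
    # product; the term list and suffix are invented.
    DIVERSITY_SUFFIX = ", depicted as a diverse range of ethnicities and genders"
    PEOPLE_TERMS = {"viking", "founding father", "george washington", "soldier", "person"}

    def rewrite_prompt(user_prompt: str) -> str:
        """Append a steering instruction whenever the prompt seems to depict people."""
        lowered = user_prompt.lower()
        if any(term in lowered for term in PEOPLE_TERMS):
            return user_prompt + DIVERSITY_SUFFIX
        return user_prompt

    print(rewrite_prompt("An oil painting of George Washington"))
    # The image model only ever sees the rewritten prompt, so the training
    # data can be perfectly representative and the output still skewed.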

As an example of the problem, it was found not long ago that Google autocomplete suggested ‘control the world?’ as the top completion to ‘Do Jews’. Probably a neutral result – people who do not think Jews control the world are less likely to begin queries with ‘Do Jews’. So, what do you do? Leaving things as they are means Google is promoting the idea that Jews might control the world as a sensible suggestion. Tweaking the results opens the can of worms about what can be said, and who is to decide.

Liam F
1 month ago
Reply to  Lancashire Lad

The conversational interfaces (visible) used by most LLMs are trained from seed data sets (not visible). The seed data is a relatively small set, gathered by pointing the system at public and private sites and scraping their content, and it is used to “normalise” the output. Which sites count as normal is decided by the human operators. Having worked in IT for 40 years, I can assure you that 90% of young developers read Left-wing mainstream media (insofar as they read MSM at all). The seed pool is therefore compiled mostly from public sites – the New York Times, the BBC, MSNBC, university archives, et al. – and generally not from paywalled sites. As Musk exposed via the “Twitter Files”, the algorithms at Twitter were then further ‘moderated’ by paid staff from the Biden administration who worked as Twitter employees in San Francisco. I would bet good money that every other media company in Silicon Valley skews the same way (Instagram, Facebook, Google).

If you want a pretty decent explanation of how ChatGPT works under the hood, you could do worse than read this paper from Glasgow Uni:
https://eprints.gla.ac.uk/327588/1/327588.pdf

Andrew Dalton
1 month ago
Reply to  Lancashire Lad

It’s based largely, if not entirely, upon the data fed in and on being instructed/led as to what that data means. There have been plenty of examples of LLMs not behaving as described above because they were let loose on non-curated data.
Current deep learning systems are pattern recognition systems; they don’t necessarily understand anything. One can argue that humans are no different, and that any deeper understanding is an emergent property of associative pattern recognition.
Look at it this way: there are plenty of words that can be offensive in a blog’s comments section. People, even those of fairly low reading comprehension, can determine the context of those words and rapidly infer the intent of a comment. For example, referencing particular offensive terms is not the same as using them against someone (although that is seemingly changing) – see the sketch below.
This is why we get absurdities such as the AI-written article about a woman who was murdered which had a poll at the bottom asking readers what they thought had happened. The AI has no understanding of the context, even if it can handle the grammar and source data to a very high standard.
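
A toy sketch of that gap (the word list and rules are invented): a pattern-matching filter flags a comment that merely quotes an offensive term just as readily as one that hurls it at someone, because it has no notion of intent.

    # Toy keyword filter with no grasp of context: the quoted mention and the
    # direct insult are flagged identically.
    BANNED = {"slurword"}

    def naive_filter(comment: str) -> bool:
        """Return True if the comment would be censored."""
        return any(w.strip('"\'.,!?').lower() in BANNED for w in comment.split())

    print(naive_filter('He called me a "slurword" and I reported him.'))  # True
    print(naive_filter("You absolute slurword!"))                         # True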

I may have written this anecdote on UnHerd before, and I’ve certainly told it more than once in general. I’m not entirely sure how accurate it is, but I believe it makes for an interesting parable regarding the nature of AI and neural networks (at least today).
As a student studying artificial neural networks, I came across a story (maybe lecturer’s notes, maybe a textbook, I can’t remember) about the US DOD developing an image recognition system that would be trained to detect armoured vehicles.
To achieve this, a number of photographs of terrain were taken. Half contained tanks, half did not. The set was split: half was used for training the neural net, half was kept as a control set. After training was completed, the system was tested against the control set and scored perfectly.
However, somebody decided that it might be best to get more data and test more thoroughly. More pictures were taken and this time, during testing, the system produced seemingly random results. After much head-scratching and analysis of the data, it was determined that in the original set all the pictures with tanks had been taken on a sunny day, and those without on an overcast day. What the neural network had really learned was the difference between a sunny day and a cloudy one.

This is the crux of the problem. We can train the system, but we never know what the system really learns. Certain people (myself included) would argue that this is not that different from flesh-and-blood humans: in a world of incentive-driven decision-making, what do we actually learn at school or in our jobs? These systems have learned how to censor because that is what we have trained them to do. On the other hand, they have no understanding of what it is they are censoring.
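
The parable is easy to reproduce with synthetic data. In this sketch (everything below is simulated; the “images” are just arrays whose overall brightness is confounded with the label), the classifier scores almost perfectly while the lighting pattern holds, then collapses the moment it flips.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def make_images(n, tanks_on_sunny_days):
        """Simulated 16-pixel 'photos': brightness tracks the weather, and a
        faint genuine tank signal lives in pixel 0."""
        has_tank = rng.random(n) < 0.5
        brightness = np.where(has_tank == tanks_on_sunny_days, 0.8, 0.3)
        imgs = rng.normal(brightness[:, None], 0.1, (n, 16))
        imgs[:, 0] += 0.05 * has_tank
        return imgs, has_tank.astype(int)

    # Training set: every tank photo happens to have been taken in bright conditions.
    X_train, y_train = make_images(2000, tanks_on_sunny_days=True)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    X_same, y_same = make_images(500, tanks_on_sunny_days=True)
    X_flip, y_flip = make_images(500, tanks_on_sunny_days=False)
    print("same lighting pattern :", model.score(X_same, y_same))   # close to 1.0
    print("lighting pattern flips:", model.score(X_flip, y_flip))   # close to 0.0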

Andrew Dalton
1 month ago
Reply to  Lancashire Lad

I typed out a long response to this, which has disappeared. I am not happy.

H W
1 month ago

Time to renew your library card…

Jeremy Sansom
1 month ago

As we move at alarming speed into the end-times territory of Biblical prophecy, might it just be that the ‘image of the beast’ (Revelation 13:15) is the new, synthetic, AI-driven, robotic instrument that does the bidding of the False Prophet to enable the crushing of all dissent? As the Great Babel Project continues to unfold, uncannily paralleling that ancient, yet futuristic, Biblical landscape, perhaps we might want to consider how our mass rejection of the Judeo-Christian God is delivering us rapidly first into delusion and then, finally, into enslavement and total subjection to the Beast?
“As for me and my house, we will serve the Lord.”

Douglas Redmayne
1 month ago
Reply to  Jeremy Sansom

I don’t care as long as I get a robot servant