Mark Zuckerberg’s products are among those that have made discourse more chaotic and polarized. (Chris Unger/Zuffa LLC)
Richard Hanania
8 Apr 2026 - 12:04am
In December 2013, a public-relations worker named Justine Sacco got bored while waiting for her flight to Cape Town to take off. So she navigated to the Twitter (now X) app on her smartphone and posted the following message: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”
With WiFi connections still unavailable on most flights back then, Sacco tuned out the internet for several hours as her plane crossed oceans and continents — before landing and finding her life and career destroyed. By the time Sacco reached Cape Town, millions of Twitter users, including Donald Trump, had branded her a racist and got her fired from her job.
To many internet historians, this episode marks the beginning of the “woke” and cancel culture era, which lasted until about 2022 (or never went away, depending on whom you ask). Yet in retrospect, it’s clear that the Sacco incident epitomizes something else: the era of social media, a fundamentally populist mode of communication that wreaked havoc on rational public discourse and made possible the populist politics, both Left-wing and Right, of the decade that followed.
Now, as artificial intelligence eclipses social media, it is possible that period is coming to a close — ushering in something closer to a new era of technocratic rationality.
The writer Dan Williams recently made a version of this argument, pointing out that while social media is populist, AI is technocratic (I’ve made a similar contention about social media in the past). If you believe that we’ve suffered from an impoverished public discourse since Twitter took off, there might be good reasons to hope that there is relief right around the corner. While the social-media era was one in which everyone had a voice, and any random person could go viral, AI speaks with authority and is trained on practically the entire corpus of human writing — only, with rational thinking ability and a tilt toward sources deemed credible by elites.
This means that the more AI cannibalizes public discourse, the less conspiratorial and more fact-based that discourse is likely to become. Interestingly, there is evidence suggesting that the American Right — the side that most needs to get back in touch with reality amid the rise of figures like Candace Owens — is making more use of the best tool we have for doing so.
With social media, each individual is his own content producer, and engagement is the coin of the realm. Since most people are either not that smart or not that interested in truth, if not both, the largest social-media accounts tend to be sensationalist and not exactly sticklers for intellectual probity, factual accuracy, or logical coherence. People want to hear about how their tribe is more morally upstanding and righteous, and they enjoy emotionally charged rhetoric and conspiracy theories that make the world seem more legible. AI systems, in contrast, are by definition optimizing for intelligence. People pay for these models because they want accurate representations of the world.
It’s interesting that when you try to program AI to reflect certain political biases, it goes haywire. There was the famous Google Gemini experiment, in which the model was told to add diversity to pictures. The result: early Gemini became a laughing stock after spitting out images of black Nazis. There is also the experience of Grok. When Elon Musk tried to create an “anti-woke” AI, the model started to take after fascistic and Nazi Right-wing accounts. Musk then pulled the plug on this experiment, and since then, Grok behaves much more like other prominent AI models, albeit with a more subtle Right-wing tilt.
The lesson here is profound. Arguably, you can’t create an artificial intelligence that is worth using while also telling it to accept Elon Musk’s view of political reality. The training data and model parameters may skew things in one direction or the other, but there are some limits to what a system can believe and still be, well, intelligent.
This kind of convergence can also be seen among humans. Let’s say you have two individuals who are both highly intelligent, enjoy access to the same sources of information, and have a good-faith interest in finding the truth. They may end up differing in their politics, but they probably won’t differ nearly as much as any two randomly chosen members of the population. Neither will end up believing that communism can work or adopting Hitler’s views on race and geopolitics. They also likely won’t end up believing more humdrum sorts of misinformation, such as 2020 election denialism. There is probably something similar at work with AI. Can you create a model that is good at coding and having rational conversations about the widest possible range of topics — but also thinks that Libs of TikTok is an excellent source of information, superior to what you find in the prestige press?
In theory, you could design an AI platform that is simply a propaganda machine: take a highly intelligent system and direct it to act as an effective partisan. Such roleplaying is already possible with today’s models. But the thing about conspiracy theories and biased reasoning is that people need to believe they’re getting the information straight in order to be influenced. Individuals rarely wake up and declare, “I want to be brainwashed.” Notice how often partisan hacks sell themselves as people who are just telling it like it is. All of this is to say: there’s unlikely to be much of a market for an AI marketed as telling you what you want to hear.
One could imagine more subtle biases through market processes. Say Grok wants to appeal to Republicans, and so tilts information in a Right-leaning direction. But given the power of AI, most people are probably going to choose the models that they can gain the most benefit from. Someone made a Right-wing alternative to Amazon called Mammoth Nation, and nobody uses it, because even Republicans are more interested in being able to get shoes delivered to their door quickly than in backing “anti-woke” firms.
In 2021, conservative influencers were hyping a $500 “Freedom Phone” that would allow the user to avoid Big Tech censorship. But it turned out to be a rebranded Chinese Android that normally sold for $119. Most people aren’t considering politics when purchasing goods and services. Thus, they will use the most affordable and useful AI models they can find, instead of looking for products that will shape their thinking about current events.
This raises a natural question, however: why would people who have completely turned against elite institutions — from the mainstream media to their local hospital — trust large language models? Williams points out that LLMs will listen to all your concerns and don’t condescend to you. This is a powerful tool for helping people whose worldview is shaped by Alex Jones and Tucker Carlson podcasts, who do not misunderstand just one or two events, but whose entire picture of reality is warped. Few human individuals have the patience or the time for deprogramming such unfortunate souls. Luckily, AIs don’t get tired or emotionally overwhelmed by human stupidity. And they’re constantly on call.
Take conspiracy theorists. One reason they’re difficult to deal with is that they will often know more about a topic than the ordinary person. Conspiracy theorists spend their lives researching an event. But in doing so, they push the mountain of facts through a distortive filter so that “the Jews” or Freemasons or the Royal Family always end up blamed. No human fact-checker could hope to answer conspiracy theorists’ claims without making a massive time investment just to find out what they’re talking about. Again, AIs solve that problem. A 2024 paper found that LLMs are effective at debunking conspiratorial beliefs, with the effect lasting for months, in part because they could tailor the arguments needed to each specific individual.
Then, too, AIs are so useful that individuals come to trust them in their day-to-day lives. People might ask AI what to say to a love interest, or the best way to lose weight, what their digestion problems are telling them, or how to plan a vacation. In school, they’re constantly relying on it to study and even do their assignments. How do you then turn off your trust when you’re curious about a political or social question? The professional fact-checker also serves as their study buddy, dating coach, nurse, travel agent, and all-around oracle. That’s a powerful combination in terms of improving the quality of people’s thinking — and the quality of public discourse.
I remember my amazement when a year or two ago, I observed my car dripping some kind of fluid in the back and went to AI for help. I took a picture, uploaded it to ChatGPT, and queried it on what was going on. The app asked some questions, and then reassured me that since the fluid was clear, this was simply the natural result of running the air conditioning on a hot day. I remember going to an actual mechanic soon after this happened, recounting the story, and raving about AI and how I couldn’t believe that he wasn’t yet using it in his own business. Now imagine transferring the same feeling of wonder to a machine that also shreds conspiracy theories.
In the past few years on X, you might have noticed users responding to dubious stories or videos by asking, “@grok, is this true?” Epistemology, to be sure, is difficult, presenting a profound bundle of questions that even scholars and academics argue about. It is salutary, however, that people in their day-to-day lives are discovering that AI is a good source of information on a whole host of issues; and that they are, correctly, beginning to assume that listening to AI on politics is better than trying to do their “own research” or talking to people they know, who often share the same cognitive limitations and biases.
People’s worldviews aren’t built or deconstructed in a single interaction. Your average conspiracy theorist isn’t going to fact-check a few pieces of information and suddenly become rational. But over time, incorporating chatbots into our processes of opinion formation — by means of asking Grok or reading the replies when others do so — could create better intuitions regarding whom and what to believe.
In a new preprint, economics professor Thomas Renault and his co-authors investigate how fact-checking is used in the wild. They examined all X posts between February 2025 and September 2025 that tagged either Grok or an AI service called Perplexity, analyzing 1,671,841 fact-checking requests. They found that Republican-leaning accounts were more likely to ask whether information was true. Not only that, but Republicans are targeted more for fact-checking, by both fellow Republican (107.5% more) and Democratic (83% more) accounts. Overall, Grok and Perplexity were more likely to say Republicans were not being truthful. Thus, we have the remarkable finding that even though Grok was designed by Elon Musk to be anti-woke, and Republicans are the ones who disproportionately use it, the tool still rates Democrats as more truth-oriented. This is consistent with what we see with community notes, which flag Republican posts more often, along with other data showing that Republicans are more likely than Democrats to share false information on social media, and less able to differentiate fact from fiction overall. None of this is surprising, given that today’s Republican base is both less educated and less likely to interact with credible sources of news.
So we have the interesting and seemingly paradoxical result that Republicans on social media are more reliant on AI, even as it tells them their favorite narratives are more often wrong than those of the other side. This should be seen as a positive sign. We’ve been asking, “What would happen if fact-checkers had infinite time and patience, considered all aspects of problems, and weren’t a bunch of smug Left-wing know-it-alls?” The answer might just be that those who have lost trust in our institutions begin to recover their sanity.
Richard Hanania, an UnHerd columnist, is the president of the Center for the Study of Partisanship and Ideology and blogs at Substack.