He's telling you to worry about fake news, so that he can control you.


March 17, 2022

Just before the Russian invasion of Ukraine consumed the media, New York Times columnist Jay Caspian Kang and Substacker Matthew Yglesias published near-simultaneous critiques of the notions of “disinformation” and “misinformation”. This convergence among prominent liberals was significant. These and related concepts like “fake news” have shaped press coverage of a range of issues since the presidential contest of 2016 and have legitimised a new, censorious speech regime on tech platforms. But they usually go unquestioned on the Left.

Kang and Yglesias both consider the possibility that “misinformation” and “disinformation” are misleading frameworks for making sense of the world today. Indeed, Yglesias argues that the “misinformation panic could, over time, make discerning the actual truth harder”. This is because “misinformation” talk seems to lead inexorably to the suppression and censorship of dissent.

But Yglesias’s title — “The ‘misinformation problem’ seems like misinformation” — hints at a more paradoxical possibility: what if these concepts are the result of a deliberate and coordinated attempt to mislead the public?

In an earlier critique of the “misinformation” and “disinformation” framework, cited by Kang, tech journalist Joe Bernstein argued that the broad acceptance of these ideas reflects the rising influence of what he calls “Big Disinfo”: “a new field of knowledge production that emerged during the Trump years at the juncture of media, academia and policy research.” Its ostensibly neutral agenda offers ideological cover for centrist and liberal politicians by aligning them with values like objectivity, science, and truth, while defaming their opponents as conspiracy theorists.

Bernstein argues that Big Disinfo covertly serves the interests of the major tech platforms themselves, whose profit model relies on digital ads. This might seem counterintuitive, since the misinformation panic helped generate the “techlash” that tarnished Silicon Valley’s previously benign reputation among liberals and centrists. But the notion that online content is highly effective at changing people’s views is crucial to the sales pitch companies such as Meta (formerly Facebook) make to advertisers and investors. Hence, for Bernstein, the tech industry’s embrace of Big Disinfo’s claims is “a superficial strategy to avoid deeper questions” — and also valorises tech platforms as guardians of the information ecosystem.

Alongside journalists like Bernstein, Yglesias, and Kang, some academics are beginning to question the prevalent account of misinformation. Social Engineering, a new book by communications scholars Robert Gehl and Sean Lawson, helpfully reorients the discussion about these issues by offering deeper historical context and a new conceptual framework.

Terms like “misinformation”, “disinformation”, and “fake news”, Gehl and Lawson argue, fail “to grasp the complexity of manipulative communication” because they “reduce everything to a stark true/false binary” and thus disregard “the blurry lines between fact and falsehood”. Moreover, these terms imply a radical discontinuity between pre- and post-internet worlds: they cast the former as a halcyon realm of clear, accurate, truthful communications, overseen by benevolent eminences like Walter Cronkite, while depicting the latter as a cesspit of lies and delusions. Bernstein parodies this view: “In the beginning, there were ABC, NBC, and CBS, and they were good”. This short-sighted perspective disregards the widespread concerns about propaganda that prevailed when network TV was at the height of its influence, which recent anxieties often echo.

The alternative terminology Gehl and Lawson propose is “social engineering”, a term that, as they show, has a two-stage history. Its first widespread use came in the early 20th century, when Progressive reformers began to envision employing mass communications technologies to reshape thought and behaviour on a vast scale. Their vision informed the coevolution of state propaganda and private-sector public relations, advertising, and marketing. Initially an optimistic project of benevolent technocratic rule, mass social engineering fell into intellectual disrepute by the late 20th century, although industries such as advertising and PR never abandoned its basic premises.

In the Seventies and Eighties — the same era when the older, top-down project of social engineering was being discredited as elitist and paternalistic — a new, bottom-up understanding of the same concept took hold among a loose cadre of hackers and “phone phreaks”. As Gehl and Lawson document, these communications outlaws developed an array of personalised techniques, such as impersonating clients and obtaining data under false pretences, to gain illicit access to technological systems belonging to phone companies, banks, government agencies, and other entities.

Applying the term “social engineering” to these sorts of tricks may seem grandiose, but Gehl and Lawson argue that they are continuous with the older technocratic enterprise: both types of social engineers “hide their true purposes, use sociotechnical knowledge to control others, and seek to manipulate others into doing things”.

The “hacker social engineers” of the past few decades have an easier time proving the efficacy of their techniques than mass social engineers, not least because their aims are typically more modest and practical. Consider an infamous incident from the 2016 election, part of the larger sequence of events that prompted the misinformation panic. The phishing scheme targeting Hillary Clinton’s campaign chairman, John Podesta — a classic act of hacker social engineering — was a success in that it achieved the limited practical goal of gaining access to his email account. By contrast, the attempts by the Trump campaign and its allies at mass social engineering (including via the publication of Podesta’s hacked emails) had no clear effect on the outcome of the 2016 election. There were too many other causal factors at work.

It’s not surprising, given the demonstrable successes of hacker social engineers at manipulating thought and behaviour, that larger entities have attempted to scale up their personalised techniques. This is how Gehl and Lawson recontextualise two of the most notorious alleged cases of “misinformation” from the 2016 period: the political consulting firm Cambridge Analytica and Russia’s Internet Research Agency. The first claimed, dubiously, to be able to perform “psychographic microtargeting” of voters based on data obtained under false pretences; the second deployed hacker techniques (like phishing) as well as paid trolls and fake accounts. Both “demonstrated the same ambitions of the older mass social engineers, but… also used the more targeted interpersonal techniques developed by hackers”.

Gehl and Lawson coin a term for these contemporary efforts that fuse mass social engineering with more personalised hacker methods: “masspersonal social engineering”. Although the aim of their book is to document the emergence of masspersonal social engineering, they concede that it’s unclear whether it has influenced thought and behaviour enough to, for instance, alter the results of elections. However, they caution that “future masspersonal social engineering may better implement the successful model of interpersonal hacker social engineering on a large scale”.

But they follow this warning with a more intriguing observation: many “sociotechnical developments that did not have particularly great immediate effects… are now recognised as having been vitally important despite (or perhaps because of) their failures.” One way to interpret this is that while the direct impacts of Russian hackers, Cambridge Analytica, and “fake news” have been modest, they have had a major indirect effect. They furnished the pretext for the misinformation panic, which offered embattled corporate media outlets, centrist politicians, and tech platforms a way of restoring their reputations and asserting more direct control over the information sphere.

As Gehl and Lawson note, “those in power are the ones in a position to wield the… capacities of social engineers”. This is why, historically, “publicity, propaganda, public relations, and consent engineering all have deep ties to the national security state”. In the face of inchoate populist tumult, declaring war against “misinformation” has enabled establishment institutions to shift legitimate anxieties about the manipulative use of media technologies towards upstart entities that, however unscrupulous, cannot claim anything like the influence of state-aligned media organisations or, for that matter, the tech platforms themselves.

Behind all talk of “misinformation” and “disinformation” is the tacit assumption that these terms designate exceptions to an otherwise pristine media landscape. Gehl and Lawson’s alternative framework points to the messier reality: a contest between “social engineers” (many of them only aspirational) operating at a variety of scales and targeting populations, individuals, and everything in between. The misinformation panic, which has obscured this reality over the past half decade, is itself one of the most effective social engineering campaigns of our time.


Geoff Shullenberger is a writer and academic. He blogs at outsidertheory.com.
