He's telling you to worry about fake news, so that he can control you.

Just before the Russian invasion of Ukraine consumed the media, New York Times columnist Jay Caspian Kang and Substacker Matthew Yglesias published near-simultaneous critiques of the notions of 'disinformation' and 'misinformation'. This convergence among prominent liberals was significant. These and related concepts like 'fake news' have shaped press coverage of a range of issues since the presidential contest of 2016 and have legitimised a new, censorious speech regime on tech platforms. But they usually go unquestioned on the Left.
Kang and Yglesias both consider the possibility that 'misinformation' and 'disinformation' are misleading frameworks for making sense of the world today. Indeed, Yglesias argues that the 'misinformation panic could, over time, make discerning the actual truth harder'. This is because 'misinformation' talk seems to lead inexorably to the suppression and censoring of dissent.
But Yglesias's title — 'The "misinformation problem" seems like misinformation' — hints at a more paradoxical possibility: what if these concepts are the result of a deliberate and coordinated attempt to mislead the public?
In an earlier critique of the 'misinformation' and 'disinformation' framework, cited by Kang, tech journalist Joe Bernstein argued that the broad acceptance of these ideas reflects the rising influence of what he calls 'Big Disinfo': 'a new field of knowledge production that emerged during the Trump years at the juncture of media, academia and policy research.' Its ostensibly neutral agenda offers ideological cover for centrist and liberal politicians by aligning them with values like objectivity, science, and truth, while defaming their opponents as conspiracy theorists.
Bernstein argues that Big Disinfo covertly serves the interests of the major tech platforms themselves, whose profit model relies on digital ads. This might seem counterintuitive, since the misinformation panic helped generate the 'techlash' that tarnished Silicon Valley's previously benign reputation among liberals and centrists. But the notion that online content is highly effective at changing people's views is crucial to the sales pitch companies such as Meta (formerly Facebook) make to advertisers and investors. Hence, for Bernstein, the tech industry's embrace of Big Disinfo's claims is 'a superficial strategy to avoid deeper questions' — and also valorises tech platforms as guardians of the information ecosystem.
Alongside journalists like Bernstein, Yglesias, and Kang, some academics are beginning to question the prevalent account of misinformation. Social Engineering, a new book by communications scholars Robert Gehl and Sean Lawson, helpfully reorients the discussion about these issues by offering deeper historical context and a new conceptual framework.
Terms like 'misinformation', 'disinformation', and 'fake news', Gehl and Lawson argue, fail 'to grasp the complexity of manipulative communication' because they 'reduce everything to a stark true/false binary' and thus disregard 'the blurry lines between fact and falsehood'. Moreover, these terms imply a radical discontinuity between pre- and post-internet worlds: they cast the former as a halcyon realm of clear, accurate, truthful communications, overseen by benevolent eminences like Walter Cronkite, while depicting the latter as a cesspit of lies and delusions. Bernstein parodies this view: 'In the beginning, there were ABC, NBC, and CBS, and they were good'. This short-sighted perspective disregards the widespread concerns about propaganda that prevailed when network TV was at the height of its influence — concerns that recent anxieties often echo.
The alternative terminology Gehl and Lawson propose is 'social engineering', a term that, as they show, has a two-stage history. The phrase first came into widespread use in the early 20th century, when Progressive reformers began to envision the possibility of employing mass communications technologies to reshape thought and behaviour on a vast scale. Their vision informed the coevolution of state propaganda and private-sector public relations, advertising, and marketing. Initially an optimistic project of benevolent technocratic rule, mass social engineering fell into intellectual disrepute by the late 20th century, although industries such as advertising and PR never abandoned its basic premises.
In the Seventies and Eighties — the same era when the older, top-down project of social engineering was being discredited as elitist and paternalistic — a new, bottom-up understanding of the same concept took hold among a loose cadre of hackers and 'phone phreaks'. As Gehl and Lawson document, these communications outlaws developed an array of personalised techniques, such as impersonating clients and obtaining data under false pretences, to gain illicit access to technological systems belonging to phone companies, banks, government agencies, and other entities.
Applying the term 'social engineering' to these sorts of tricks may seem grandiose, but Gehl and Lawson argue that they are continuous with the older technocratic enterprise: both types of social engineers 'hide their true purposes, use sociotechnical knowledge to control others, and seek to manipulate others into doing things'.
The 'hacker social engineers' of the past few decades have an easier time proving the efficacy of their techniques than mass social engineers, not least because their aims are typically more modest and practical. Consider an infamous incident from the 2016 election, part of the larger sequence of events that prompted the misinformation panic. The phishing scheme targeting Hillary Clinton's campaign chairman, John Podesta — a classic act of hacker social engineering — was a success in that it achieved the limited practical goal of gaining access to his email account. Conversely, the attempts by the Trump campaign and its allies at mass social engineering (including via the publication of Podesta's hacked emails) had no clear effect on the outcome of the 2016 election. There were too many other causal factors at work.
It's not surprising, given the demonstrable successes of hacker social engineers at manipulating thought and behaviour, that larger entities have attempted to scale up their personalised techniques. This is how Gehl and Lawson recontextualise two of the most notorious alleged cases of 'misinformation' from the 2016 period: the political consulting firm Cambridge Analytica and Russia's Internet Research Agency. The first claimed, dubiously, to be able to perform 'psychographic microtargeting' of voters based on data obtained under false pretences; the second deployed hacker techniques (like phishing) as well as paid trolls and fake accounts. Both 'demonstrated the same ambitions of the older mass social engineers, but… also used the more targeted interpersonal techniques developed by hackers'.
Gehl and Lawson coin a term for these contemporary efforts that fuse mass social engineering with more personalised hacker methods: 'masspersonal social engineering'. Although the aim of their book is to document the emergence of masspersonal social engineering, they concede that it's unclear whether it has influenced thought and behaviour enough to, for instance, alter the results of elections. However, they caution that 'future masspersonal social engineering may better implement the successful model of interpersonal hacker social engineering on a large scale'.
But they follow this warning with a more intriguing observation: many 'sociotechnical developments that did not have particularly great immediate effects… are now recognised as having been vitally important despite (or perhaps because of) their failures.' One way to interpret this is that while the direct impacts of Russian hackers, Cambridge Analytica, and 'fake news' have been modest, they have had a major indirect effect. They furnished the pretext for the misinformation panic, which offered embattled corporate media outlets, centrist politicians, and tech platforms a way of restoring their reputations and asserting more direct control over the information sphere.
As Gehl and Lawson note, 'those in power are the ones in a position to wield the… capacities of social engineers'. This is why, historically, 'publicity, propaganda, public relations, and consent engineering all have deep ties to the national security state'. In the face of inchoate populist tumult, declaring war against 'misinformation' has enabled establishment institutions to shift legitimate anxieties about the manipulative use of media technologies towards upstart entities that, however unscrupulous, cannot claim anything like the influence of state-aligned media organisations or, for that matter, the tech platforms themselves.
Behind all talk of 'misinformation' and 'disinformation' is the tacit assumption that these terms designate exceptions to an otherwise pristine media landscape. Gehl and Lawson's alternative framework points to the messier reality: a contest between 'social engineers' (many of them only aspirational) operating at a variety of different scales and targeting populations, individuals, and everything in between. The misinformation panic, which has obscured this reality over the past half decade, is itself one of the most effective social engineering campaigns of our time.