The truth about the ‘techlash’

Credit: Jack Taylor/Getty Images

January 30, 2019 · 5 mins

Big tech is catching heat. Countless exposés, scandals and critiques have seen the firms dragged in front of parliamentary select committees and Congress. On almost every topic – from how they handle hate, fake news, Russia and cyber-bullying on their platforms, to who they ban and what they amplify – they seem to get it wrong. This phenomenon even has its own name: the ‘techlash’.

As I’ve watched this fall from grace, there has been something I’ve struggled to reconcile. Over the last decade, I’ve grown to know people working within all of the tech companies. And rather to my disappointment, I’ve never met the crazed, Ayn Rand-reading digital libertarian of our collective nightmares. None of them want to destroy governments or states. None of them wish the laws and ethics of normal life to dissolve away.

At least from my experience, the vast majority of them are thoughtful, reasonable people who are genuinely trying to do the right thing and act with a responsibility equal to the power that they know they hold. They worry about the harms their platforms have created, and genuinely try to do the right things to prevent and mitigate them.

So why are they unable to convince us that they are making good-faith, reasonably unbiased decisions about difficult, complex and new problems? Why do we never give them the benefit of the doubt?

By their own admission, some of the decisions the tech companies have made have been dramatically out of step with their consumers and society. Facebook apologised for the “breach of trust” exposed by Cambridge Analytica. Apple (kind of) apologised for deliberately slowing down iPhones as they aged. Twitter first said they wouldn’t ban conspiracy theorist Alex Jones because he hadn’t breached their rules; then, in the face of rising criticism, they promptly changed their position and banned him. Google apologised for running ads next to videos espousing terrorism (yet not for the videos themselves).

The list goes on. Google has been fined more than $7 billion by the European Commission over eight years for anti-competitive behaviour, and social media platforms are currently being criticised (again) over the self-harm and pro-suicide content on their sites. And, most serious of all in my eyes, Mark Zuckerberg apologised for ridiculing the idea that Russia had used Facebook to influence the 2016 American election.

The rise of these platforms has disrupted everything: from the role of the press, to what privacy means, to the rules around speech and harm. The sweep of responsibilities that these companies have taken on, often against their wishes, is vast, as is the number of problems that they’re currently grappling with.

Listing the flaws of big tech, however, isn’t the point of this article. The techlash would be happening regardless of the specific decisions they’ve made. It exists because of something deeper: an important, incredibly dangerous contradiction at the heart of the digital revolution that has to do not with the decisions themselves, but with how they are made.

Whether it’s Google, Facebook, Twitter, Reddit or Patreon, we know that they are private services. But we feel that they are public commons. The way that we act, campaign, raise money, argue and unite on these platforms screams public space. And public spaces, we feel, are subject to rules that are publicly made.

Free speech, inclusion, hate, bullying, privacy, democratic participation and so on are all issues that are dissected and discussed in the open. We may frequently disagree with these rules – we may sometimes hate them – but we see that they arise out of the messy business of a democratic polity.

Except online, of course, they do not. The tech giants may try to consult with experts, or talk to communities, but the actual decision-making process itself is – as with almost all commercial entities – totally opaque. It happens among policy officers bound by non-disclosure agreements. It happens in boardrooms, away from the press.

To all of us, online environments are shaped by processes that are completely mysterious. We have no role in them. We can’t even see that they’re happening. So while these decisions are shaping democracy and public assembly, and indeed are often made in the name of democracy and freedom, they are not made democratically.

This problem is dangerously systemic. It is far more serious than the wrong person being banned, or the wrong image being left up on a site: this is about the basic way that online services are created, owned and served up to us. Put simply, private companies have never before had to make the kinds of decisions that the tech giants are routinely faced with.

Trust in tech fell by between 10% and 20% around the world in 2018, according to the Edelman Trust Barometer, and one of the biggest falls (around 15%) was in the belief that tech companies are adequately transparent in how they operate. The good news for the tech giants is that it’s not too late: almost two thirds of Americans, for example, believe they are “more good than bad” for society. But there’s no doubting that trust is on a downward trend.

So how can that trend be reversed? A new approach is needed, not just for tech but also for how the rules and policies are made that govern it. And there are a number of options for what this might look like.

First, the tech giants could develop digital democracies for deciding the policies that govern them, allowing their users to discuss and shape their rules. As I’ve argued previously for UnHerd, Taiwan already shows how this could work in practice. A new online process called ‘vTaiwan’ includes tens of thousands of people in discussion and decision-making. It’s a rare case of a government becoming more visionary and energetic than big tech itself.

Option two is the Wikipedia model. Wikipedia is alone among the big platforms in that its rules are written not by the employees who run the organisation, but by the community that animates it. It’s a constant, rolling debate that can sometimes turn nasty, but no-one can accuse Wikipedia’s rules of being anything less than transparent – anyone, including you or me, can have a say if they want to.

Option three is an idea probably unfamiliar to most: ‘multi-stakeholderism’. Every year, the Internet Governance Forum (IGF) is held somewhere in the world. Convened by the United Nations, it is different from any other diplomatic event: anyone can turn up. It’s not just for officialdom, elites, CEOs or celebrities, but for all the weird and varied tribes that have a stake in how the internet works, from fleece-wearing engineers to Icelandic human rights activists, suited spies to big tech. A strange, raucous kind of talking shop, it has one huge advantage – everyone can have a seat at the table. Like the internet itself, the way it is governed is an open network that anyone can join.

Each of these models has its own downsides and frustrations. Nothing is actually decided at the IGF – so for the cynic, it’s a talking shop, a PR exercise, a sideshow. Wikipedia presents another kind of problem. Although anyone can edit its pages, the number of people who actively do so has been in long decline, and about 90% of them are male, leading to bias in the kinds of knowledge that they create. And vTaiwan has to confront the same problem as any attempt to engage people in politics: most people are simply too busy to commit to time-consuming deliberations on most political decisions.

But even with these flaws, these models offer a way forward, because the challenge for big tech isn’t technological; it’s organisational. How do they genuinely give more power to their users to shape the online spaces that they inhabit? Inventing new technology is easy; creating new kinds of organisations with novel governance arrangements is much more difficult. But the tech giants have little choice. They need to change how they make decisions, because without more transparent, participatory governance of online spaces, the ‘techlash’ will only worsen.


Carl is the co-founder and Research Director of the Centre for the Analysis of Social Media at Demos and author of The Death of the Gods: The New Global Power Grab, out on 23rd August in hardback from William Heinemann. You can read more of Carl’s work at www.carlmiller.co.

