

February 6, 2019

As corporate mottos go, “Don’t be evil” is hard to beat. If your company’s ambition is to change the world, having those three little words in the back of every employee’s mind is sure to lead to good outcomes – isn’t it? That’s why Google adopted it after a meeting about corporate values in 2000 or 2001 (the history is hazy), where it was suggested by Paul Buchheit, who also went on to create Gmail. “Don’t be evil!” Seriously, how hard can that be to follow?

Yet, as the novelist Stephen King points out, nobody considers themselves the “bad guy”. People start with good intentions and then, somehow, bad things happen. That’s what has happened to the lofty goals of the big Silicon Valley companies. They started with a raw-ingredient mix of idealism, social networks, mobile phones and software. And we cooked it into a stew of partisanship, hate, abuse and even murder.

Take Google’s video-sharing site YouTube. It is funded by advertising, so its owners want to maximise the length of time people stay glued to the screen. It’s definitely not evil to develop an algorithm that tries to make people watch more and more videos. It’s definitely not evil to note which videos tend to lead to people spending more time on the site, or to show those to people who are watching other things, because the videos you’ve picked are proven to increase dwell time.
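Stripped of the machine-learning machinery, the underlying incentive is simple enough to sketch. The toy Python fragment below is purely illustrative – the video list, field names and scoring are invented, not YouTube’s actual code – but it shows the shape of the problem: when the only signal the ranking function sees is dwell time, nothing in the system ever asks whether a video is true or harmful.

```python
# A toy illustration of a watch-time-maximising recommender.
# All data and field names here are invented; this is not YouTube's
# algorithm, only a sketch of the incentive it is paid to follow.

candidate_videos = [
    {"title": "Calm explainer",       "avg_extra_watch_minutes": 3.1},
    {"title": "Outrage compilation",  "avg_extra_watch_minutes": 9.4},
    {"title": "Conspiracy deep-dive", "avg_extra_watch_minutes": 11.7},
]

def score(video):
    # The only signal is dwell time: nothing here asks whether the
    # video is accurate, healthy or radicalising.
    return video["avg_extra_watch_minutes"]

# Rank purely by how much extra viewing each video has historically produced.
for video in sorted(candidate_videos, key=score, reverse=True):
    print(video["title"], "-", video["avg_extra_watch_minutes"], "mins")
```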

And yet one day you’ll wake up to find that your video site is being blamed for everything from radicalising people towards Islamic or far-right extremism, to encouraging flat-earth and anti-vaccine conspiracy theories. How did that happen? You weren’t being evil: the evil just sort of… happened.

Google has also had to defend itself in front of US politicians against charges that it rigs search results in favour of liberal voices over conservative ones, and against charges that its search algorithm favours Holocaust deniers; deniers of all stripes, in fact, because it promotes pages that are popular, not those that are accurate.

Facebook has been cited in a United Nations report as being a contributing factor in genocide in Myanmar, and separately has accepted that Russians used it to influence American voters in the 2016 presidential elections, and that a British company used its data to influence the Brexit vote. In India the WhatsApp chat service, which Facebook owns, has been blamed for the spread of false claims about attacks which have, in turn, led to people being killed. At the beginning of February the fact-checking website Snopes said it was no longer checking articles or news stories that appeared on Facebook. Though it was vague about why it was ending the deal, it seemed uncomfortable with the effects on its reputation.

And Twitter… ah, Twitter. The “free speech wing of the free speech party”, as it once styled itself, looked like the hero when it acted as the channel for resistance during the 2011 Arab Spring; but in the years since, from bullying scandals to Donald Trump’s unhinged megaphone provocations, the world has come to realise that it’s not a good idea to let anyone say anything without holding them to account.

You don’t have to look far to find people who used to, or still do, work in the big tech companies and think that there is some evil – or at least bad – happening. Guillaume Chaslot, an ex-Googler who is now an adviser at the Center for Humane Technology, remarked on Twitter recently about a Pew Research Center survey, which found that teenagers in the US have “mixed views” on the impact of social media on their lives. Why? “Because we didn’t design it to improve their lives. We designed it to make them hooked to the platform. It worked.”

In the same vein, François Chollet, who works on “deep learning” (machine-driven artificial intelligence, which learns how to produce outputs by analysing huge amounts of input data), pulls no punches on what he considers the most insidious company of them all:

“Everything Facebook does, big or small, reflects a complete lack of ethics and an intent to game their way into maximum profits no matter the damage inflicted on the world. Facebook is [the tobacco company] Philip Morris combined with Lockheed Martin, but bigger. The most frustrating part is that their awfulness is singlehandedly destroying the reputation of the tech industry. Please do not compare Facebook to Microsoft, or Amazon, or Google, or Apple. They are in a league of their own. No one else is even 10% as bad.”

Chollet emphasises that this is purely a personal view – which, given that he works at Google, seems wise. Still, he sees a silver lining: Facebook’s actions mean there will be more regulation, he thinks, “and more pressure on everyone else to act ethically. Not a bad thing.”

But the big companies continue to blur ethical boundaries, so when exactly is that pressure going to take effect? Google’s Boston Dynamics subsidiary, acquired in 2013, developed walking robot dogs whose purpose was clearly military. Then Google itself became involved in Project Maven, a US Pentagon project to “explore the potential of artificial intelligence, big data and deep learning” in pursuit of the Pentagon’s main task of killing people and winning wars.

Most recently came word of “Dragonfly”, a Google project to create a censored version of its search engine for China. This was the project most at odds with the firm’s original ethos: Google’s co-founders, and particularly Sergey Brin, had pulled Google out of China in 2010 in outrage after state-sponsored hackers were discovered trying to break into the Gmail system. Yet now, with Brin abstracted to a higher level in the corporate structure, the imperative of making money from ads seemed to have overcome those objections.

In January, TechCrunch revealed that Facebook had been paying people $20 per month to run an intrusive app that monitored everything they did, so that it could figure out what services and even companies to buy next. The app wasn’t on Apple’s App Store, but was instead distributed using an internal “certificate” meant to be used only by employees for developing apps. (As soon as Apple discovered this, it yanked the certificate – throwing Facebook’s entire development process into chaos. Within hours, Google was found to be doing the same with an app of its own: Apple yanked its certificate too. Both were restored once Google and Facebook promised not to do it again. But it was a colossal shot across the bows of two companies that had seemed indifferent to any sort of disapproval.)

Then there’s facial recognition: Amazon, Google, Microsoft and Facebook have all developed systems. Amazon, which is already testing its system – Rekognition – with the FBI, has indicated that it would work with the US Immigration and Customs Enforcement (ICE) agency. Just imagine the technology being used by governments that found it expedient to be oppressive – just briefly, you understand – to track “dangerous” opponents. Or you, and your friends.

So where did it all go wrong? “Hubris,” says Carl Miller, co-founder of the Centre for Analysis of Social Media at Demos, UnHerd columnist and author of The Death of the Gods, published last year, about how technology is altering economic models and changing where power lies.

Miller explains: “It really comes down to the fact that – at least for their decision-making – there’s this huge gap between what they are as ‘profit-maximising companies’ and ‘what we want them to be’.” In other words, we think of Facebook as a kind-hearted organisation that wants to get us chatting with all our friends. But it’s not, says Miller. “We miss the implications of their real fiduciary duties [to their shareholders]. It becomes most stark whenever profit motives conflict with areas we think are morally significant.”

He points to two examples: banning malicious accounts, and hiring moderators. Both are problems that Facebook and Twitter grapple with. “Taking down malicious accounts technically reduces the number of active users on the platform, a key metric that goes out to investors,” says Miller. “And how many moderators is enough? That’s a direct cost to companies – and pretty much every tech company is trying to get an idea of the right balance.”

The solution? The tech companies – and perhaps the users – won’t like it. Create friction in the process (rather than between users), suggests Miller. “Say, a delay on a tweet, or more information [being required] before you sign up to Reddit,” he says. “By creating friction [you] might reduce problematic behaviour; but you also would decrease user signup or activity.”
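As a rough sketch of what that friction might look like in code – the function names, thresholds and required fields below are hypothetical, not drawn from any real platform – both measures are trivially cheap to build; the cost is that each one visibly depresses the sign-up and activity numbers the companies report to investors.

```python
# A hypothetical sketch of "friction in the process": a cooling-off delay
# before a post goes live, and extra information demanded at sign-up.
# Names and thresholds are invented for illustration only.

import time

POST_DELAY_SECONDS = 30                      # pause between "send" and "live"
REQUIRED_SIGNUP_FIELDS = {"email", "phone", "stated_interest"}

def can_sign_up(profile: dict) -> bool:
    # Ask for more up front; some would-be users will never finish the form.
    return REQUIRED_SIGNUP_FIELDS.issubset(profile)

def publish(post: str) -> None:
    # A deliberate delay gives tempers time to cool, at the cost of
    # making the product feel slower and less "engaging".
    time.sleep(POST_DELAY_SECONDS)
    print("published:", post)

print(can_sign_up({"email": "a@example.com", "phone": "555-0100"}))  # False: a field is missing
```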

That might not be enough, though. Cass Sunstein, who worked in the Obama White House on information and regulatory affairs, and is now a professor at Harvard Law School, has shown that when you put groups of “mostly like-minded” people into an online forum, their aggregate view trends towards the most extreme of them. (It’s like the process by which neighbourhoods gentrify or decay: people move in and others move out because of what they find there.) Those radical members are often the busiest on the forums, too. Becoming extremists seems to be in our nature.

And that’s even before we think about other unintended consequences. When, for example, Uber enters a city and the average take-home pay of the licensed taxi drivers plummets, should the company have foreseen – and be held responsible for – the concomitant fall in the value of the taxi licence? Or when scooter companies put their two-wheelers on pavements for anyone with a smartphone and a credit card to ride, who should pay for the extra load they put on hospital A&Es?

The creeping realisation that even with the best intentions you can get extraordinarily evil results – boosting Holocaust denial, enabling social media bullying, and in extremis facilitating genocide – suggests that we’ve not grasped how radically the internet’s ability to put everyone in each other’s face, and to let extremists and obsessives determine agendas, is changing us.

Yet there may be signs of hope, and often they come from inside the companies themselves – typically, from the staff rather than the executives. Internal objections at Google led to it selling off Boston Dynamics in June 2017, and retreating from Project Maven after thousands of staff complained about Google’s role in the project to the chief executive, Sundar Pichai. As for Dragonfly, internal objections seem to have squashed it. And finally, there were walkouts by Google staff in November 2018 over the company’s approach to sexual harassment claims, which seemed to have been hushed up in favour of men accused of misconduct.

As for Rekognition, hundreds of Amazon staff wrote to Jeff Bezos to protest at its role in the ICE facial recognition project. In January, 90 human and civil rights groups wrote to Amazon, Google and Microsoft asking them to promise not to sell or license the technology to governments because of the clear potential for abuse. So far, Microsoft and Google have both indicated that they will comply.

What next? Nice though it would be to have a pat answer, nobody has one. Break up Facebook? (Into what?) Make it harder to tweet? (People will migrate to other services.) Make companies responsible for what happens? (How do you measure that?) We’re 25 years into widespread public use of the internet. Let’s hope we get smarter about it in the second quarter-century.


Charles Arthur is the author of Digital Wars: Apple, Google, Microsoft and the Battle for the Internet, published by Kogan Page. From 2005 to 2014 he was technology editor at The Guardian newspaper.
