In January, TechCrunch revealed that Facebook had been paying people $20 per month to run an intrusive app that monitored everything they did, so that it could figure out what services and even companies to buy next. The app wasn’t on Apple’s App Store, but instead distributed using an internal “certificate” meant to be used only by employees for developing apps. (As soon as Apple discovered this, it yanked the certificate – throwing Facebook’s entire development process into chaos. Within hours, Google was found to be doing much the same with an app of its own; Apple yanked its certificate too. Both were restored once Google and Facebook promised not to do it again. But it was a colossal shot across the bows of two companies that had seemed indifferent to any sort of disapproval.)
Then there’s facial recognition: Amazon, Google, Microsoft and Facebook have all developed systems. Amazon, which is already testing its system – Rekognition – with the FBI, indicated that it would work with the US Immigration and Customs Enforcement (ICE) agency. Just imagine the technology being used by governments that found it expedient to be oppressive – just briefly, you understand – to track “dangerous” opponents. Or you, and your friends.
So where did it all go wrong? “Hubris,” says Carl Miller, co-founder of the Centre for Analysis of Social Media at Demos, UnHerd columnist and author of The Death of the Gods, published last year, about how technology is altering economic models and changing where power lies.
Miller explains: “It really comes down to the fact that – at least for their decision-making – there’s this huge gap between what they are as ‘profit-maximising companies’ and ‘what we want them to be’.” In other words, we think of Facebook as a kind-hearted organisation that wants to get us chatting with all our friends. But it’s not, says Miller. “We miss the implications of their real fiduciary duties [to their shareholders]. It becomes most stark whenever profit motives conflict with areas we think are morally significant.”
He points to two examples: banning malicious accounts; and hiring moderators. Both are problems that Facebook and Twitter grapple with. “Taking down malicious accounts technically reduces the number of active users on the platform, a key metric that goes out to investors,” says Miller. “And how many moderators is enough? That’s a direct cost to companies – and pretty much every tech company is trying to get an idea of the right balance.”
The solution? The tech companies – and perhaps the users – won’t like it. Create friction in the process (rather than between users), suggests Miller. “Say, a delay on a tweet, or more information [being required] before you sign up to Reddit,” he says. “By creating friction [you] might reduce problematic behaviour; but you also would decrease user signup or activity.”
That might not be enough, though. Cass Sunstein, who worked in the Obama White House on information and regulatory affairs, and is now a professor at Harvard Law School, has shown that when you put groups of “mostly like-minded” people into an online forum, their aggregate view trends towards the most extreme of them. (It’s like the process by which neighbourhoods gentrify or decay: people move in and others move out because of what they find there.) Those radical members are often the busiest on the forums, too. Becoming extremists seems to be in our nature.
And that’s even before we think about other unintended consequences. When, for example, Uber enters a city and the average take-home pay of the licensed taxi drivers plummets, should the company have foreseen – and be held responsible for – the concomitant fall in the value of the taxi licence? Or when scooter companies put their two-wheelers on pavements for anyone with a smartphone and a credit card to ride, who should pay for the extra load they put on hospital A&Es?
The creeping realisation that even with the best intentions you can get extraordinarily evil results – boosting Holocaust denial, enabling social media bullying, and in extremis facilitating genocide – suggests that we’ve not grasped how radically the internet’s ability to put everyone in each other’s face, and to let extremists and obsessives determine agendas, is changing us.
Yet there may be signs of hope, and often they come from inside the companies themselves – typically, from the staff rather than the executives. Internal objections at Google led to it selling off Boston Dynamics in June 2017, and retreating from Project Maven after thousands of staff complained to the chief executive, Sundar Pichai, about Google’s role in the project. As for Dragonfly, internal objections seem to have squashed it. And finally, there were walkouts by Google staff in November 2018 over the company’s approach to sexual harassment claims, which seemed to have been hushed up in favour of men accused of misconduct.
As for Rekognition, hundreds of Amazon staff wrote to Jeff Bezos to protest at its role in the ICE facial recognition project. In January, 90 human and civil rights groups wrote to Amazon, Google and Microsoft asking them to promise not to sell or license the technology to governments because of the clear potential for abuse. So far, Microsoft and Google have both indicated that they will comply.
What next? Nice though it would be to have a pat answer, nobody has one. Break up Facebook? (Into what?) Make it harder to tweet? (People will migrate to other services.) Make companies responsible for what happens? (How do you measure that?) We’re 25 years into widespread public use of the internet. Let’s hope we get smarter about it in the second quarter-century.