

April 30, 2018

Recent months have seen a welter of embarrassing failures at big Silicon Valley companies. In early March, a design firm reported that Chinese counterfeiters had been selling knockoffs of its product through Amazon, and lambasted the retailer for lax enforcement.

Two weeks later, it was discovered that Google’s YouTube Kids app, touted as a safer destination than the main site, was displaying disturbing conspiracy videos to children. That same day, Facebook admitted that Cambridge Analytica had improperly harvested the data of what turned out to be millions of its users, and an Uber self-driving car struck and killed a 49-year-old woman in Tempe, Arizona.

Facebook, Amazon and similar companies reach more customers with more personalised experiences, while using fewer people and far less physical infrastructure to do it

These seemingly unrelated incidents share a root cause. In each case, an automated system, ultimately designed to help generate huge profits by displacing human workers, failed in a crucial way and caused significant harm. In that common failure, they all point to cracks in the foundational conceit of the digital economy: the dream of ‘scaling’.

As far back as the mid-nineties, technologists including Bill Gates himself were predicting that artificial intelligence and “big data” would enable digital companies to grow in unprecedented ways. Those dreams are coming to fruition, as Facebook, Amazon and similar companies reach more customers with more personalised experiences than any companies that came before them, while using fewer people and far less physical infrastructure to do it. This vision has made digital companies immensely valuable and forced the old guard to play catch-up, while genuinely making life better for millions of customers, users – and above all, shareholders.

But the instances where these systems fail – often what engineers refer to as “edge cases”  – have proven deeply troubling to the public and regulators. And technological solutions may not be achievable before the broader idea of digital scaling itself is undermined, along with a global economy whose future is increasingly premised on it.

Facebook, according to the stock market, is nearly five times more valuable than Exxon per employee

Henry Ford, building on Frederick Winslow Taylor’s early 20th-century push for scientific management, founded a dynasty on industrial scale. By producing a uniform product using semi-skilled workers doing repetitive tasks, he flooded the world with affordable cars. Scaling in the digital era is something almost entirely different – it requires no factory floor, and almost no workers, because its products are largely intangible and its distribution systems are automated.

The most exemplary companies, in fact, produce no physical product at all, acting as mere intermediaries between buyers and sellers, drivers and passengers, content creators and eager eyeballs. Adding another customer or user costs a digital company nearly nothing – just a bit more bandwidth and data storage. In exchange, each user is a new chance to harvest data, which in turn is used to better serve (read: more precisely target) other users. Growth also means ‘network effects’, such as the compounding benefit to social media users of being on the same platform as everyone else. Those effects can ultimately add up to de facto, winner-take-all monopolies, in which the harvesting of gargantuan profits is nearly effortless.

It can take years for this model to come to fruition, though, which is why tech companies are often valued at huge multiples of their present profits – and with little connection to the size of their workforce. Instagram had only 13 employees when Facebook acquired it for $1 billion in 2012. Facebook itself, in December of last year, had a reported 25,105 employees and a $550 billion market value, while Exxon Mobil at around the same time had 69,000 employees and a market cap of $319 billion.

Of the 10 most valuable public companies in the world today, at least six depend substantially on data-driven, automated scaling

The gap is even more striking when comparing companies’ physical footprints. In a slightly more recent snapshot, WalMart had $160 billion in hard assets against a $300 billion market valuation, while Facebook had just $9 billion in hard assets at a $500 billion valuation.

That makes Facebook, according to the stock market, nearly five times more valuable than Exxon per employee, and nearly 30 times more valuable than WalMart per dollar of machinery, real estate, and physical product. Such numbers have for years provoked intense anxiety about how society will cope when all the jobs are automated away, but for investors, they’re deeply compelling. Of the ten most valuable public companies in the world today, at least six depend substantially on data-driven, automated scaling.
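
As a back-of-the-envelope check, those ratios follow directly from the figures quoted above. Here is a minimal sketch; the precise numbers depend on which market snapshot you use:

```python
# Rough sanity check of the valuation ratios quoted above,
# using the approximate late-2017 figures cited in this article.

facebook_value, facebook_employees = 550e9, 25_105
exxon_value, exxon_employees = 319e9, 69_000

fb_per_employee = facebook_value / facebook_employees   # ~$21.9M per employee
xom_per_employee = exxon_value / exxon_employees        # ~$4.6M per employee
print(fb_per_employee / xom_per_employee)               # ~4.7 -> "nearly five times"

walmart_value, walmart_assets = 300e9, 160e9
facebook_value_2, facebook_assets = 500e9, 9e9

wm_per_asset_dollar = walmart_value / walmart_assets    # ~1.9
fb_per_asset_dollar = facebook_value_2 / facebook_assets  # ~55.6
print(fb_per_asset_dollar / wm_per_asset_dollar)        # ~29.6 -> "nearly 30 times"
```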

These are no longer just speculative, long-term bets. Facebook and Google have become like malfunctioning ATMs, spewing advertising profits into the hands of giddy investors. Amazon, long a punchline of unprofitability, has now made money every quarter since mid-2015, including a gobsmacking $1.9 billion profit in the last quarter of 2017.

But those long bets – including the ones that already seem to be paying off – could blow up dramatically if the dream of automated scaling itself goes south. This has already happened in individual cases, such as when the startup Theranos couldn’t deliver on its promise to automate and streamline medical testing. After providing unreliable test results that may have endangered patients, the company, valued at as much as $9 billion, imploded.

A similar fate could easily await Uber, which has been operating at a massive financial loss for years. Investors have continued to feed it cash largely on the premise that the company will become as wildly profitable as Amazon once it masters the technology of self-driving cars and eliminates the cost of drivers. Founder Travis Kalanick has described autonomous driving as an “existential” priority for the company.

Uber will become as wildly profitable as Amazon once it eliminates the cost of drivers

But success in that effort is looking increasingly distant. After the Arizona crash, Uber chose to suspend its testing program, and new CEO Dara Khosrowshahi canceled a planned visit to the testing site. Then the Arizona government, which had taken a hands-off approach to regulating autonomous vehicles, itself suspended Uber’s right to test there.

But both Uber and Theranos are individual companies premised on solving major, novel technical challenges. More fundamentally worrying is the prospect that automated systems that seem to already be working may have major hidden flaws.

That possibility is illustrated, above all, by Facebook’s current catastrophe. The site has kept users and advertisers happy for more than a decade, but after the Cambridge Analytica revelations, governments around the world called for investigations, users called on one another to #deletefacebook, and the company’s market value sank by as much as $95 billion – more than the entire value of Starbucks.

The outrage was driven by a laundry list of perceived fouls – the fact that user data was leaked, the discovery of just how much of it was collected, the fact that it might have helped elect Donald Trump, and perhaps the very idea that political actors could use highly detailed personal data to sway public opinion. But Cambridge Analytica also may have simply been the straw that broke the camel’s back, after months of reports of the insidious spread of ‘fake news’ and Russian manipulation through Facebook.

Content screeners have to make subtle judgments, for instance between someone articulating a fantasy (acceptable), and someone actually trying to buy or sell sex (illegal)

Those problems hinge on Facebook’s various uses of automation, including news feeds and ad sales processes that only inconsistently involve actual human judgment. Content-suggestion algorithms, which place a premium on user attention, have been shown to systematically amplify shocking and extreme material, including fabricated or misleading news. Facebook’s lightly screened ad sales process allowed Russia-linked buyers to spend as much as $150,000 trying to influence US politics, and made it possible to run racially discriminatory housing ads – both illegal.

Facebook has taken some steps to address these issues, including identity checks for political ads and fact-checking for news. Those efforts, which haven’t yet proven successful, depend on adding much more human judgment to Facebook’s operation. That necessity could prove very costly for Facebook, as shown by how it and other social media companies already screen for violent, obscene, or illegal content. In 2014, it was estimated that there were over 100,000 content screeners working for various companies worldwide – many times the number of Facebook employees. Those content screeners have to make sometimes subtle judgments, for instance between someone articulating a fantasy (acceptable) and someone actually trying to buy or sell sex (illegal).

Sharing platforms such as Facebook use humans to screen extreme content precisely because automated systems are bad at making those types of judgments. One way of understanding those limitations is by dividing artificial intelligence systems into two categories – ‘limited’ and ‘general’. A limited AI works in a closed, numerically readable environment, such as a chess board. Computers excel in these contexts, as they’re able to crunch numbers at a scale humans can’t touch. Business applications of limited AI are so widespread we may not even think of them as AI – Expedia’s algorithm for collecting and ranking available flights, for instance. Uber’s routing software, which essentially moves cars around a fixed map like game pieces, also fits the definition of a ‘limited’ AI.
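
To see why such closed environments suit computers, consider a toy game that can be solved outright by brute-force search. This is a minimal sketch of the principle, not a description of any production system:

```python
# Exhaustive minimax for tic-tac-toe: a toy example of "limited" AI.
# The environment is closed and fully numerically readable, so brute-force
# search over every reachable position yields perfect play.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximises, 'O' minimises."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        results.append((score, m))
    return max(results) if player == 'X' else min(results)

score, move = minimax([' '] * 9, 'X')
print(score, move)  # score 0: with perfect play, the game is a draw
```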

Platforms such as Facebook and Amazon have worked to turn more complex challenges into this sort of quantifiable and contained problem. Digital ad targeting systems, for instance, reduce individuals to as few as 50 data points about demographics or taste – a process called matrix factorisation, which simplifies the complexity of human motivation into the barest sketch.
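
For the curious, here is a minimal sketch of that idea in code – a toy matrix factorisation in NumPy, with sizes and training details chosen purely for illustration rather than taken from any platform’s actual system:

```python
import numpy as np

# Toy matrix factorisation: compress a users-by-items interaction matrix
# into short "taste" vectors. Real ad systems are vastly larger, but the
# principle is the same: each user collapses to k latent numbers.

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 40, 5          # k latent factors per user/item

# Fake observed interactions (e.g. clicks), generated from hidden tastes.
true_users = rng.normal(size=(n_users, k))
true_items = rng.normal(size=(n_items, k))
ratings = true_users @ true_items.T + 0.1 * rng.normal(size=(n_users, n_items))

# Learn the factors by gradient descent on squared reconstruction error.
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
lr = 0.002
for _ in range(2000):
    err = U @ V.T - ratings               # reconstruction error
    U -= lr * err @ V                     # gradient step for user factors
    V -= lr * err.T @ U                   # gradient step for item factors

print("RMSE:", np.sqrt(((U @ V.T - ratings) ** 2).mean()))
print("User 0, reduced to", k, "numbers:", np.round(U[0], 2))
```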

But these companies haven’t been nearly as successful in automating the detection of malicious actors, fake news stories, hate speech, or terrifyingly violent children’s cartoons. Successfully making such judgments involves variables, such as tone and context, that aren’t currently easy to reduce to machine-readable numbers.
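
One crude way to see the problem: a detector that relies on surface features alone – a deliberately naive sketch, not how real moderation systems work – cannot tell a genuine threat from a report quoting one:

```python
from collections import Counter

# A deliberately naive moderation "detector": flag text by keyword counts
# alone. It has no access to tone or context, so a genuine threat and a
# message reporting that threat look identical to it.

THREAT_WORDS = {"kill", "hurt", "destroy"}

def surface_features(text: str) -> Counter:
    words = text.lower().replace('"', " ").replace(".", " ").split()
    return Counter(w for w in words if w in THREAT_WORDS)

threat = "I will kill you."
report = 'He messaged me "I will kill you" and I am scared.'

print(surface_features(threat))  # Counter({'kill': 1})
print(surface_features(report))  # Counter({'kill': 1}) -- indistinguishable
```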

Algorithmic systems, for instance, can quickly recognise a nude image or sexual language, but they can’t always distinguish between pornography and a discussion of gay culture. In media, making these subtler distinctions has long been the province of human intermediaries, such as newspaper editors. Facebook and Google would love for machines to take over those roles, not only to keep profit margins high but to avoid regulation as media outlets. That would require a quantum technological leap: the creation of something at least approaching what’s known as a ‘general’ artificial intelligence.

General AI is the stuff of sci-fi dreams, from C-3PO to Skynet, capable of performing any intellectual task a human can and then some. If regulators or the public successfully pressure YouTube to begin drawing distinctions between Alex Jones’ feverish conspiracies and distraught Yemeni refugees lambasting Saudi intervention, automated systems to effectively do so would require a command of human geopolitics, emotions, standards of fact, humour, and moral values, just for a start.

Companies pursuing a path to automation at scale are willing to use real people, and even entire societies, as guinea pigs
-

Some of these capabilities are under development, including emotional judgment. But there is wide disagreement about the potential for linking those discrete capabilities into something capable of actual human-like decision-making. It could be many decades away. It might even be outright impossible – after all, we still only vaguely understand how our own minds work.

Efforts are now underway to automate not just driving, media distribution, and medical diagnosis, but also legal aid, elder care, and many other tasks. But while some core aspects may be easy to automate, all of these endeavours include elements that are well beyond the reach of current or near-future technology – elements that can arise unexpectedly and demand quick reactions.

The last few months have shown that companies pursuing a path to automation at scale are willing to use real people, and even entire societies, as guinea pigs as they test the limits of their technology. Their experiments are becoming riskier, not less risky, as their reach expands. More regulation like Europe’s new data privacy act may be on the horizon, raising the prospect of rules that keep platforms from reaping the greatest possible profit. But if those profits are premised on stripping away human supervision and waiting for the machines to fail, that might be for the best.


David Z. Morris is a writer and researcher who regularly covers business and technology for Fortune. His work has also appeared in The Atlantic, Pacific Standard, Slate, and other outlets. He is a former social scientist with a focus on media, and has served as a research fellow with the Japan Society for the Promotion of Science and the University of South Florida.
