A similar fate could easily await Uber, which has been operating at a massive financial loss for years. Investors have continued to feed it cash largely on the premise that the company will become as wildly profitable as Amazon once it masters the technology of self-driving cars and eliminates the cost of drivers. Founder Travis Kalanick has described autonomous driving as an “existential” priority for the company.
But success in that effort is looking increasingly distant. After the Arizona crash, Uber chose to suspend its testing program, and new CEO Dara Khosrowshahi canceled a planned visit to the testing site. Then the Arizona government, which had taken a hands-off approach to regulating autonomous vehicles, itself suspended Uber’s right to test there.
But both Uber and Theranos are individual companies premised on solving major, novel technical challenges. More fundamentally worrying is the prospect that automated systems that seem to already be working may have major hidden flaws.
That possibility is illustrated, above all, by Facebook’s current catastrophe. The site has kept users and advertisers happy for more than a decade, but after the Cambridge Analytica revelations, governments around the world called for investigations, users called on one another to #deletefacebook, and the company’s market value sank by as much as $95 billion – more than the entire value of Starbucks.
The outrage was driven by a laundry list of perceived fouls – the fact that user data was leaked, the discovery of just how much of it was collected, the fact that it might have helped elect Donald Trump, and perhaps the very idea that political actors could use highly detailed personal data to sway public opinion. But Cambridge Analytica also may have simply been the straw that broke the camel’s back, after months of reports of the insidious spread of ‘fake news’ and Russian manipulation through Facebook.
Those problems hinge on Facebook’s various uses of automation, including news feeds and ad sales processes that only inconsistently involve actual human judgment. Content-suggestion algorithms, which place a premium on user attention, have been shown to systematically amplify shocking and extreme material, including fabricated or misleading news. Facebook’s lightly screened ad sales process allowed Russia-linked buyers to spend as much as $150,000 to try to influence US politics, and made it possible to run racially discriminatory housing ads – both illegal.
Facebook has taken some steps to address these issues, including identity checks for political ads and fact-checking for news. Those efforts, which haven’t yet proven successful, depend on adding much more human judgment to Facebook’s operation. That necessity could prove very costly for Facebook, as shown by how it and other social media companies already screen for violent, obscene, or illegal content. In 2014, it was estimated that there were over 100,000 content screeners working for various companies worldwide – many times the number of Facebook employees. Those content screeners have to make sometimes subtle judgments, for instance between someone articulating a fantasy (acceptable) and someone actually trying to buy or sell sex (illegal).
Sharing platforms such as Facebook use humans to screen extreme content precisely because automated systems are bad at making those types of judgments. One way of understanding those limitations is by dividing artificial intelligence systems into two categories – ‘limited’ and ‘general’. A limited AI works in a closed, often numerically-readable environment, such as a chess board. Computers excel in these contexts, as they’re able to crunch numbers at a scale humans can’t touch. Business applications of limited AI are so widespread we may not even think of them as AI – Expedia’s algorithm for collecting and ranking available flights, for instance. Uber’s routing software, which essentially moves cars around a fixed map like game pieces, also fits the definition of a ‘limited’ AI.
Platforms such as Facebook and Amazon have worked to turn more complex challenges into this sort of quantifiable and contained problem. Digital ad targeting systems, for instance, reduce individuals to as few as 50 data points about demographics or taste – a process called matrix factorisation, which simplifies the complexity of human motivation into the barest sketch.
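The mechanics are easier to see in code. Below is a minimal sketch of the idea, not any platform’s actual system: a toy user-item interaction matrix (entirely made up here) is factored with a truncated SVD so that each user and each item is compressed into a handful of latent numbers, and the product of those factors is used to score items a user never touched.

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users,
# columns are items (1 = clicked/liked, 0 = no interaction).
interactions = np.array([
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0],
], dtype=float)

# Truncated SVD: keep only k latent factors per user and per item.
k = 2
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
user_factors = U[:, :k] * s[:k]   # each user reduced to k numbers
item_factors = Vt[:k, :]          # each item reduced to k numbers

# The low-rank product approximates the original matrix, which is
# what lets the system predict preferences it never observed.
scores = user_factors @ item_factors
```

Real systems use far larger matrices and more elaborate factorisation methods, but the compression is the same in spirit: a person becomes a short vector of numbers, and everything the system “knows” about them flows from that sketch.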
But these companies haven’t been nearly as successful in automating the detection of malicious actors, fake news stories, hate speech, or terrifyingly violent children’s cartoons. Successfully making such judgments involves variables, such as tone and context, that aren’t currently easy to reduce to machine-readable numbers.
Algorithmic systems, for instance, can quickly recognise a nude image or sexual language, but they can’t always distinguish between pornography and a discussion of gay culture. In media, making these subtler distinctions has long been the province of human intermediaries, such as newspaper editors. Facebook and Google would love for machines to take over those roles, not only to keep profit margins high but to avoid regulation as media outlets. That would require a quantum technological leap: the creation of something at least approaching what’s known as a ‘general’ artificial intelligence.
General AI is the stuff of sci-fi dreams, from C-3PO to Skynet, capable of performing any intellectual task a human can and then some. If regulators or the public successfully pressure YouTube to begin drawing distinctions between Alex Jones’ feverish conspiracies and distraught Yemeni refugees lambasting Saudi intervention, automated systems to effectively do so would require a command of human geopolitics, emotions, standards of fact, humor, and moral values, just for a start.
Some of these capabilities are under development, including emotional judgment. But there is wide disagreement about the potential for linking those discrete capabilities into something capable of actual human-like decision-making. It could be many decades away. It might even be outright impossible – after all, we still only vaguely understand how our own minds work.
Efforts are now underway to automate not just driving, media distribution, and medical diagnosis, but also legal aid, elder care, and many other tasks. But while it may be easy to automate some core aspects, these endeavours all include elements that are well beyond the ability of current or near-future technology to tackle, and which can arise unexpectedly and demand quick reaction.
The last few months have shown that companies pursuing a path to automation at scale are willing to use real people, and even entire societies, as guinea pigs as they test the limits of their technology. Their experiments are becoming more risky, not less, as their reach expands. More regulations like Europe’s new data privacy act might be on the horizon, introducing the possibility of constricting rules that prevent platforms from reaping the greatest possible profit. But if those profits are premised on stripping away human supervision and waiting for the machines to fail, that might be for the best.