I know where you are. I know what you’re doing. I know who you’re with. I know how you’re feeling.
Only joking, of course I don’t – who do you think I am? Mark Zuckerberg?
But within a decade, maybe two, I’ll know all those things about you. What’s more, you’ll know all those things about me. And just about anyone else.
How can I be so sure? Well, it’s a matter of looking at the technology we’ve already got and extrapolating a little. To this end, I’d recommend two seemingly unrelated articles from the Economist. The first article is about sensors:
“The word ‘smart’ is ubiquitous these days. If you believe the hype, smart farms will all employ sensors to report soil conditions, crop growth or the health of livestock. Smart cities will monitor the levels of pollution and noise on every street corner. And goods in smart warehouses will tell robots where to store them, and how. Getting this to work, however, requires figuring out how to get thousands of sensors to transmit data reliably across hundreds of metres.”
Led by Shyam Gollakota, a team of researchers at the University of Washington is working on a solution – an adaptation of LoRa technology, in which chips use lower-frequency radio waves to transmit data:
“Dr Gollakota reckons that such chips can be made for less than 20 cents apiece. The signals they generate can be detected at ranges of hundreds of metres. Yet with a power consumption of just 20 millionths of a watt, a standard watch battery should keep them going a decade or more. In fact, it might be possible to power them from ambient energy.”
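That battery claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming a typical CR2032 watch battery – the 220 mAh at 3 V capacity figure is my assumption, not from the article:

```python
# Sanity-check of the quoted power budget: how long would a standard
# watch battery run a sensor drawing 20 millionths of a watt?
# Assumed figure: a CR2032 coin cell stores roughly 0.66 watt-hours
# (220 mAh at 3 V) -- a typical, not exact, value.

BATTERY_WH = 0.220 * 3.0        # ~0.66 Wh of stored energy (assumed)
DRAW_W = 20e-6                  # 20 microwatts, as quoted

hours = BATTERY_WH / DRAW_W     # running time at continuous draw
years = hours / (24 * 365)

print(f"{years:.1f} years")     # roughly 3.8 years
```

Continuous draw at 20 µW gives only about four years, so a decade-plus lifetime implies the chips spend most of their time asleep – or, as the article suggests, sip ambient energy.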
This technology isn’t designed to transmit data at great speed, but that doesn’t matter. If the chips are cheap and likely to get cheaper, then they provide the basis for the ubiquitous deployment of sensors. The individual dribbles of data might be modest in volume, but together they could be combined into a constantly updated digital model of the world about us. Imagine something like a live version of Google Street View, but in immensely more detail – and without people’s faces being blurred out.
Which brings us to the second article, which is about facial recognition technology:
“Technology is rapidly catching up with the human ability to read faces. In America facial recognition is used by churches to track worshippers’ attendance; in Britain, by retailers to spot past shoplifters. This year Welsh police used it to arrest a suspect outside a football game. In China it verifies the identities of ride-hailing drivers, permits tourists to enter attractions and lets people pay for things with a smile.”
Assuming one has access to the necessary data (as gathered by CCTV and other sensory systems), facial recognition can be used to track the location of an identified individual in real time. And if that isn’t creepy enough, there’s more:
“The face is not just a name-tag. It displays a lot of other information—and machines can read that, too… Researchers at Stanford University have demonstrated that, when shown pictures of one gay man, and one straight man, the algorithm could attribute their sexuality correctly 81% of the time. Humans managed only 61%. In countries where homosexuality is a crime, software which promises to infer sexuality from a face is an alarming prospect.”
Our faces can also be ‘read’ to reveal detailed, and hitherto private, information about medical conditions, emotional states and behavioural patterns. Even the ‘benign’ applications of such a system have sinister implications – for instance, predicting criminal activity before it happens or drawing an emotional map of a city to see what kind of surroundings make people happiest.
Then there’s the question of who gets to use this all-seeing eye. Access to existing CCTV networks is generally controlled by private businesses or public bodies (and ultimately by the security services). However, the ultra-cheap (and therefore widely owned) internet-connected sensors of the future may operate on a different model – feeding information into a shared system in much the same way that we currently feed our photographs into Facebook and Instagram.
Whichever website does the best job of integrating data from the ubiquitous sensors of the near future will become the dominant digital platform of the 21st century.