
Why you’re right to worry about Apple’s new surveillance

August 9, 2021 - 5:25pm

Not planning to store child porn or send nudes to 11-year-olds? Good. But you should still care about Apple’s new initiative.

Two new initiatives, in fact.

One is AI software to detect sexually explicit photographs in the Messages app. Parents will be able to activate it on a child’s iPhone in a Family Sharing account. When activated, it will ask any user under 18 whether they really want to view or send the image. Under-13s will be warned that, if they go ahead, their parent will be notified. So much for end-to-end encryption guaranteeing that nobody but the sender and recipient knows what is sent.
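
To make that flow concrete, here is a minimal sketch of how such a client-side check might hang together, assuming a hypothetical on-device classifier whose verdict is already available. The account type, field names and notification call are illustrative stand-ins, not Apple’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parental_alerts_enabled: bool  # switch a parent sets via Family Sharing

def notify_parent(child: ChildAccount) -> None:
    # Stand-in: in the real system the parent account receives an alert;
    # here we simply record that it happened.
    print("parent notified")

def handle_flagged_image(child: ChildAccount, is_explicit: bool,
                         child_confirms: bool) -> str:
    """Decide what happens once the (hypothetical) on-device classifier
    has already scored an incoming or outgoing image."""
    if not is_explicit:
        return "shown"                      # nothing flagged, nothing reported
    if not child_confirms:
        return "blocked"                    # the child heeds the warning
    if child.age < 13 and child.parental_alerts_enabled:
        notify_parent(child)                # under-13s: the parent learns it happened
    return "shown after warning"

# A 12-year-old who taps through the warning triggers a parental alert:
print(handle_flagged_image(ChildAccount(12, True), is_explicit=True, child_confirms=True))
```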

The other innovation will compare photographs uploaded to iCloud storage with a database of known Child Sexual Abuse Material (CSAM). Using a process called hashing, which converts each image into a reference number, the new system can check for matches without decrypting photos. If enough images from one iPhone match CSAM images in the reference file, however, a human will check the flagged images. Obviously, doing this means Apple will decrypt the images in question.
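
The matching step itself is easy to sketch. Two caveats: Apple’s system uses a perceptual hash (NeuralHash) that tolerates resizing and re-encoding, plus cryptographic blinding so the device never learns which images matched, whereas the stand-in below uses a plain SHA-256 digest and an illustrative threshold.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in: SHA-256 only matches byte-identical files; a perceptual
    # hash would also match resized or re-encoded copies of the same image.
    return hashlib.sha256(image_bytes).hexdigest()

MATCH_THRESHOLD = 30  # illustrative: several matches required before anyone looks

def count_matches(uploads: list[bytes], known_csam_hashes: set[str]) -> int:
    """How many uploaded images hash to an entry in the reference file."""
    return sum(image_hash(img) in known_csam_hashes for img in uploads)

def needs_human_review(uploads: list[bytes], known_csam_hashes: set[str]) -> bool:
    # Only when the count passes the threshold are the matched images
    # decrypted and passed to a human reviewer.
    return count_matches(uploads, known_csam_hashes) >= MATCH_THRESHOLD
```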

In fact, Apple can already access photos stored in iCloud, which means they can, if legally required to do so, hand the keys to law enforcement agencies. And they do, thousands of times every year. Apple sell themselves as a more privacy-friendly tech option, but several years ago they decided against offering end-to-end encryption for iCloud backups, choosing instead to keep a spare set of keys for themselves.

Their new plans to introduce client-side scanning add a back door that could be used for other purposes.

The Global Internet Forum to Counter Terrorism (GIFCT), for example, already has a database of hashes identifying terrorist material. Shared with member organisations including Twitter, Facebook, YouTube and Microsoft, the database enables a similar matching process for rapid or automated removal of content.
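
Mechanically, such a shared blocklist is just a set of hashes distributed to every member platform; whatever is added to the list propagates to all of them. A rough, hypothetical sketch (the list contents and names below are invented):

```python
import hashlib

# Hash values distributed to member platforms; the underlying content is never
# shared, and a platform cannot tell from a hash alone why an item was listed.
SHARED_HASH_LIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # example entry
}

def should_auto_remove(upload: bytes) -> bool:
    """Membership test behind rapid or automated takedown."""
    return hashlib.sha256(upload).hexdigest() in SHARED_HASH_LIST
```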

The GIFCT reference database has no external oversight and often removes content that should, by most free speech standards, be freely available. Examples cited by the Electronic Frontier Foundation (EFF) include a satirical post mocking the anti-LGBT stance of Hezbollah, and evidence of human rights abuse in Syria, Yemen and Ukraine.

Many countries have passed laws against publishing ‘misinformation’ or ‘fake news’ which are, in effect, a licence to censor journalism and online content. For all Apple’s promises that client-side scanning will not be used for anything except protecting children from exploitation, it’s hard to see how they could resist local legislation that required them to check messages or photographs for forbidden material.

Closer to home, a tech company that sells itself on protecting its customers’ privacy now tells us that there is no such thing as a truly private communication. “Nothing to hide, nothing to fear” has never been a good argument. Our private exchanges should not be subject to routine scrutiny, even by machines.


Timandra Harkness presents the BBC Radio 4 series, FutureProofing and How To Disagree. Her book, Technology is Not the Problem, is published by Harper Collins.


Join the discussion


J Bryant
3 years ago

“Many countries have passed laws against publishing ‘misinformation’ or ‘fake news’ which are, in effect, a licence to censor journalism and online content.”
This is the real threat, imo. Most recently in the US, the organization that represents state medical licensing boards has issued a communication suggesting that any licensed medical practitioner who promotes misinformation about covid vaccines might be subject to disciplinary action up to revocation of their license. Who decides what’s ‘misinformation’ is the obvious question.

Andrew D
3 years ago

Not sure that I’m bothered. Anyone with an iPhone has already surrendered his privacy. Just don’t have one!

Galeti Tavas
3 years ago
Reply to  Andrew D

I get where you are coming from. Like a meth-head complaining his dealer is infringing on some minor issue concerning his meth supply.

It’s like, Hey – you’re an addict, you are going to keep on getting your fix 24 hours a day, why sweat the tiny details? Just get high and quit whining.

A Spetzari
3 years ago
Reply to  Andrew D

Agree to a point.
It is, however, increasingly prohibitive not to have a smartphone. Want to go out for a meal/drink – please check in using Track and Trace. No? OK, sorry, you cannot.
Similar situations apply with online banking, parking, etc.
More traditional methods for these still remain, of course – but usually only in the name of “accessibility”.
Also, Apple are perhaps worse than most for privacy issues, but other companies and operating systems are close behind.
Kudos to you if you remain stoic in the face of all this, but you have to be particularly bloody-minded on the matter these days to avoid it.

Edward H
3 years ago

I am not an Apple fanboy, but I have a philosophical/ethical interest in the limits of privacy in the digital space (by analogy, ponder whether I should be allowed to maintain a locked basement into which no-one, not even police executing a lawfully obtained warrant, can physically gain entry, and the key to which I am not required to provide).
In the interests of accuracy, there are multiple mistakes in this article about how Apple’s solutions to child abuse and the proliferation of CSAM work.
Firstly, it is not correct that the mechanism for notifying a child’s parent that an inappropriate image has been viewed or sent breaks encryption. If enabled by the parent account, after the child’s device has identified a suspect image and the child has ignored two warnings (in child-comprehensible language) and gone on to view/send the image, the parent account is merely notified that their child has viewed or sent an image that the child’s device thinks is inappropriate. The parent is not notified of what the image was – they need to go talk to their kid to find that out – and the image analysis is done on the child’s device, not on Apple’s servers. Encryption is left intact. See here https://www.apple.com/child-safety/
Secondly, the CSAM detection happens on-device at the point at which the user’s device uploads images to iCloud Photos. The device creates a hash value for each image being archived, then compares that hash, on the device, against the NCMEC database of known CSAM hashes. If there is a match, a voucher is generated and sent along to Apple’s servers. Once enough vouchers have been generated – indicating that the user has a collection of known CSAM – the contents of those vouchers are decrypted for human review and, if necessary, account suspension/referral to law enforcement. The only images available for human review are those that match known CSAM content. At no point does Apple see the images you store in iCloud Photos generally. See here https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf.
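
The “enough vouchers” step can be illustrated with threshold secret sharing: each matching image contributes one share of a decryption key, and the server can reconstruct that key only once it holds at least the threshold number of shares. The toy Shamir-style sketch below is not Apple’s actual construction (which also involves NeuralHash, blinded hashes and private set intersection); every name and number in it is illustrative.

```python
import random

PRIME = 2**127 - 1  # field modulus for the toy scheme (a Mersenne prime)

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into n_shares points on a random polynomial of degree
    threshold-1; any `threshold` points recover it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj == xi:
                continue
            num = num * -xj % PRIME
            den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Each flagged upload would carry one share inside its "safety voucher".
key = 123456789
shares = make_shares(key, threshold=5, n_shares=12)
assert reconstruct(shares[:5]) == key   # 5 vouchers: the server can decrypt
assert reconstruct(shares[:4]) != key   # 4 vouchers: (almost surely) it cannot
```
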
Thirdly, this initiative is only being rolled out in the USA (for now?).
Finally, the privacy absolutist can avoid all scrutiny by backing up their iPhone photos (with encryption) to a computer (if they need to back up at all) and opting out of iCloud Photos backups.
I agree that we should be incredibly wary of governmental overreach in this space, and of Apple acquiescing to pressure in particular jurisdictions. At first blush, however, Apple’s proposed solutions to child grooming/abuse and CSAM seem to strike a better balance between the legitimate and urgent need to reduce child abuse and users’ privacy than other services do, many of which routinely scan customer images whether or not their hash values match NCMEC’s list of known CSAM.

Jeffrey Chongsathien
3 years ago

The problem is not the technology or the application but simply that both are within the purview of a corporation, as opposed to government and a legal system?
