Apple recently announced a plan to perform client-side scanning of photos on its platform to prevent its services from being used to share child pornography. It then delayed those plans after a backlash from technologists and security researchers. But why all the fuss when the stated goal is so laudable? Let’s find out.
In early August, Apple outlined the changes it planned to make to its operating systems, which would bake new “protections for children” into iCloud and iMessage. The proposed framework has two mechanisms: one concerning iMessage and the other concerning iCloud photos.
iCloud photo scanning
The first mechanism concerns iCloud. Apple provides all of its users with a certain amount of server space to store things like photos, videos, and documents, making them accessible from any of the user’s devices over the internet. That’s iCloud.
With the proposed changes, whenever a user uploads a photo to their iCloud account, the photo is scanned locally on their device before it’s uploaded to Apple’s servers. The purpose of the scan is, of course, to see if it matches any photo in a database of known Child Sexual Abuse Material (CSAM). That database is maintained by the National Center for Missing & Exploited Children (NCMEC).
This scanning is achieved using digital fingerprints of the photos, called hashes, rather than the photos themselves. The CSAM database contains hashes, and your phone derives hashes from your photos and compares them against the database to find matches. Apple can only tell that matches were found once the number of matching photos reaches a preset (and undisclosed) threshold. Once that threshold is reached, the photos in question are sent to Apple for human review. If the human reviewer confirms the matches, the photos are sent to NCMEC, and the user’s account is disabled.
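To make the mechanism concrete, here’s a minimal sketch of hash-matching with a reporting threshold. It is not Apple’s actual implementation – the real system relies on a perceptual hash and additional cryptographic protocols whose details Apple controls – and the database contents, the SHA-256 stand-in for the fingerprint, and the threshold value below are all placeholders for illustration.

```python
import hashlib

# Hypothetical stand-ins: in the real system the database holds perceptual
# hashes of known CSAM, and the reporting threshold is undisclosed.
KNOWN_HASH_DB = {hashlib.sha256(b"bytes of a known image").hexdigest()}
MATCH_THRESHOLD = 30  # placeholder; Apple has not published the real value


def hash_photo(photo_bytes: bytes) -> str:
    """Derive the photo's fingerprint. A real deployment uses a perceptual
    hash so near-duplicates still match; SHA-256 only matches identical files."""
    return hashlib.sha256(photo_bytes).hexdigest()


def should_escalate(photos_to_upload: list[bytes]) -> bool:
    """Runs on the device before upload: count how many photos match the
    blocklist, and escalate for human review only at the threshold."""
    matches = sum(1 for p in photos_to_upload if hash_photo(p) in KNOWN_HASH_DB)
    return matches >= MATCH_THRESHOLD
```

The detail worth noticing is where this runs: the matching happens on the device itself, against a list of fingerprints the user can neither see nor audit.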
iMessage scanning & parental notifications
The second mechanism is related to the iMessage messaging app. iMessage is somewhat different from regular text messages in that it works over the internet and is end-to-end encrypted. With end-to-end encryption, your message (and any attachments it includes) is encrypted on your device before it’s sent out over the internet. That way, only you and the intended recipient are able to read the message. If any third party were to intercept the message in transit, all they would see is gibberish. Even if the end-to-end encrypted message is stored on a server somewhere before the message’s recipient can download it, it’s already been encrypted. So again, anyone attempting to read the message would see nothing but a nonsensical string of random characters.
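For readers who like to see the moving parts, here’s a minimal sketch of the end-to-end idea using the PyNaCl library. This is not iMessage’s actual protocol, which Apple does not publish in code form; it simply shows that encryption and decryption happen only at the endpoints, and that anything in between sees ciphertext.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"See you at 7?")

# Anyone intercepting the ciphertext in transit, or reading it off a relay
# server, sees only random-looking bytes.
print(ciphertext.hex())

# Only Bob, holding his private key, can turn it back into the message.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))  # b'See you at 7?'
```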
The proposed iMessage mechanism would work as follows:
When iMessage users under the age of 13 share photos with one another, those photos are scanned using a machine learning algorithm. If the image is deemed to be “sexually explicit” material, a prompt is displayed to the user, offering them a choice to either:
- not send or receive the photo, or
- send or receive the picture anyway.
If the user chooses the first option, nothing happens. If the user chooses to send or receive the photo anyway, the parent account is notified, as configured in the Family Sharing plan. Parents can disable these notifications from their account.
The proposed system would also scan the photos sent over iMessage by users between 13 and 17 years old. In these cases, a warning about sending or receiving a “sexually explicit” image is displayed without sending a notification to their parents.
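Putting the two age brackets together, the decision logic described above looks roughly like the sketch below. The `ChildAccount` type and the return strings are hypothetical, and the on-device classifier itself is not public, so the sketch starts from the point where a photo has already been flagged as “sexually explicit”.

```python
from dataclasses import dataclass


@dataclass
class ChildAccount:
    age: int
    parental_notifications_enabled: bool  # configured via Family Sharing


def handle_flagged_photo(user: ChildAccount, user_confirms: bool) -> str:
    """What happens once the on-device classifier has flagged a photo;
    the classifier itself is left out of this sketch."""
    if not user_confirms:
        return "photo not sent or viewed; nothing happens"
    if user.age < 13 and user.parental_notifications_enabled:
        return "photo sent or viewed; parent account notified"
    # 13- to 17-year-olds see the warning, but no notification is sent.
    return "photo sent or viewed after warning; parents not notified"


print(handle_flagged_photo(ChildAccount(age=12, parental_notifications_enabled=True), True))
# -> photo sent or viewed; parent account notified
```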
So what’s wrong?
A New York Times op-ed, penned by security researcher Matthew D. Green and Alex Stamos, a security researcher and former chief security officer at Facebook, eloquently outlines the crux of the problem with Apple’s proposed scheme:
“The technology involved in this plan is fundamentally new. While Facebook and Google have long scanned the photos people share on their platforms, their systems do not process files on your own computer or phone. Because Apple’s new tools do have the power to process files stored on your phone, they pose a novel threat to privacy.”
What are those novel threats to privacy, exactly?
Scanning of iCloud uploads
Many companies already scan content uploaded to their servers, and Apple’s plan looks a lot like Microsoft’s PhotoDNA scheme. However, there is a significant difference in the way the two systems work. Microsoft’s PhotoDNA scanning happens on Microsoft’s servers, after the photo has been uploaded. Apple’s scanning would happen right on your device, before the photo is uploaded to Apple’s servers. Apple’s approach raises a number of issues.
- The CSAM database, which would be included in the operating system, is unauditable. That raises the question as to who really owns the iPhone. Your phone automatically scans your content and checks it against an opaque database to which you have no access.
- Your device will not notify you if a match has been found.
- Even if users were notified, the processed images are reduced to hashes, so they would not be able to identify which photo(s) had been flagged.
Another regularly cited threat to privacy is that Apple’s iCloud photo scanning would cover every single photo uploaded to iCloud – not just photos belonging to users under 13, and not just “sexually explicit” photos, but every photo users choose to upload.
One could point out that every photo uploaded to Microsoft’s servers is also scanned, and they’d be correct. However, that scanning doesn’t happen on your device; it happens on Microsoft’s servers. Microsoft’s approach is less invasive in that it doesn’t turn your device against you. It doesn’t load a virtual moral police officer onto your phone. As the Electronic Frontier Foundation (EFF), a non-profit organization dedicated to defending digital privacy, free speech, and innovation, states,
“Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.”
And again, nobody is against the goal itself. Fighting child sexual exploitation is laudable and important, and it is a very real problem. But it’s also crucial that the solutions we choose are proportionate and don’t create new problems that affect all users.
Breaking the promise of end-to-end encryption
As I mentioned above, Apple’s messaging app, iMessage, is end-to-end encrypted. And the promise end-to-end encryption makes is that only you and your intended recipient can view the contents of the messages you send each other.
Apple’s proposal doesn’t break the encryption, and Apple claims it isn’t creating a backdoor. But as security researcher Bruce Schneier states,
“The idea is that they wouldn’t touch the cryptography, but instead eavesdrop on communications and systems before encryption or after decryption. It’s not a cryptographic backdoor, but it’s still a backdoor — and brings with it all the insecurities of a backdoor.”
If the system scans your messages before they are encrypted and can send a copy of the content (without your knowledge) – hashed or not – to a third party, then more parties than just you and your intended recipient are involved, and the promise of end-to-end encryption is broken.
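Here’s a toy sketch of what such a pre-encryption hook could look like, reusing the PyNaCl example from earlier. The `BLOCKLIST` and `report_to_third_party` names are hypothetical; the point is simply that information derived from the plaintext can leave the two endpoints before the “end-to-end” encryption ever takes place.

```python
# pip install pynacl
import hashlib

from nacl.public import PrivateKey, Box

BLOCKLIST: set[str] = set()  # hypothetical hash blocklist shipped to the device


def report_to_third_party(digest: str) -> None:
    """Hypothetical reporting channel. The crucial point: information
    derived from the plaintext leaves the two endpoints."""
    print("reported:", digest)


def send_with_client_side_scan(message: bytes, sender_key: PrivateKey,
                               recipient_public_key) -> bytes:
    # The scan runs on the plaintext, *before* encryption...
    fingerprint = hashlib.sha256(message).hexdigest()
    if fingerprint in BLOCKLIST:
        report_to_third_party(fingerprint)
    # ...and only afterwards is the message end-to-end encrypted as usual.
    return Box(sender_key, recipient_public_key).encrypt(message)
```

The cryptography itself is untouched, exactly as Schneier says; the eavesdropping happens before it is ever applied.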
Schneier further states that Apple changed its definition of “end-to-end encryption” in the FAQ it released to explain how the scheme works. From the FAQ:
“Does this break end-to-end encryption in Messages?
No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications, which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.”
Schneier rightfully points out that, as per Apple, end-to-end encryption no longer means that you and your intended recipient are the only ones who can view the messages. It now sometimes includes a third party under certain circumstances. Whether or not the third party is actually notified, the promise of end-to-end encryption is broken (or, in this case, redefined).
False positives?
Then there’s the question of false positives. “Sexually explicit” can mean many things: from the glaringly obvious to the more subtle and not so clear-cut. And many so-called “porn filters” already over-censor content. Why would Apple’s framework be any different?
What will be the plight of swimsuit photos, photos of breastfeeding, nude art, educational content, health information, or advocacy messages?
Tech companies don’t have the best track record in distinguishing between pornography and art or other non-pornographic content. And the LGBTQ+ community, in particular, is at risk of over-censorship. It stands to reason that a young person exploring their sexual orientation or gender may seek to view nude pictures of male and female bodies. That shouldn’t be controversial. But that person would risk getting their photos flagged and sent off to a third party.
Not only that, but if one of those images were sent over iMessage by a user under 13 with the feature enabled by their parents, Apple’s scanning mechanism could out them to potentially unsympathetic parents, which could be devastating and have enormous consequences for the youth. And what about a young person simply curious about a nude photo (we’ve all been there, right?) who happens to have abusive parents? There are many youths in that situation. Apple’s reporting scheme could end up causing more harm than good to those it’s trying to protect.
Some slopes are genuinely slippery
Many security experts have been telling us for years that it simply isn’t possible to build a client-side scanning mechanism that can only be used to target sexually explicit images that are sent or received by children. As well-intentioned as it may be, such a system breaks the fundamental privacy expectation of the message’s encryption and provides fertile ground for broader abuses, they say.
That point hinges on the fact that it really wouldn’t take much effort to widen the scanning to additional types of content. Apple would simply need to expand the machine learning parameters to achieve this. Another simple “tweak” would be to modify the configuration flags to scan everyone’s accounts – not just those belonging to children.
It’s exactly the kind of “expansion” we’ve seen with the Global Internet Forum to Counter Terrorism (GIFCT). The GIFCT builds on technology originally developed to scan and hash child sexual abuse images, repurposed to create a database of known “terrorist” content. The GIFCT operates without oversight, and it has censored legitimate speech, such as documentation of violence, counterspeech, art, and satire, as “terrorist content.”
Expanding the system couldn’t be easier, really. All one needs to do is add additional items to the hash database, and those items will be blocked or reported as well – protest photos, pirated media, whistleblower evidence, and so on. And because the database contains only hashes, and a CSAM hash is indistinguishable from any other hash, there are no technical means in place to limit the scheme to CSAM.
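A toy illustration of why that is: to the matching code on the device, one hash looks exactly like another, and nothing about a hash reveals what kind of content produced it. (The blocklist entries below are, of course, placeholders.)

```python
import hashlib


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


# Two entries added to a blocklist by whoever controls it. The device (and
# the user) see only opaque hex strings; a hash says nothing about whether
# it came from CSAM or from something else entirely.
blocklist = {
    fingerprint(b"known abuse image (placeholder bytes)"),
    fingerprint(b"photo taken at a protest (placeholder bytes)"),
}


def device_scan(photo: bytes, blocklist: set[str]) -> bool:
    # The matching logic is identical regardless of *why* a hash is listed.
    return fingerprint(photo) in blocklist


print(device_scan(b"photo taken at a protest (placeholder bytes)", blocklist))  # True
```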
The EFF even joined a coalition of more than 90 U.S. and international organizations dedicated to civil rights, digital rights, and human rights in an open letter to Apple CEO Tim Cook. In it, they ask Apple to abandon its plans to implement the scheme. From the letter:
“Once this capability is built into Apple products, the company and its competitors will face enormous pressure — and potentially legal requirements — from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable. Those images may be of human rights abuses, political protests, images companies have tagged as “terrorist” or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them. And that pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance, and persecution on a global basis.”
A backdoor is a backdoor is a backdoor
This was an odd move by Apple, a company that has championed the cause of user privacy in recent years. And the almost unanimous backlash that came after the initial announcement has compelled Apple to delay its implementation. But this seems to be just the latest chapter in what is referred to as the Crypto Wars – the U.S. government and its allies’ attempts to limit the access of foreign nations and the general public to strong encryption.
Over the years, there have been various attempts to weaken, break, or ban encryption for the general public. This latest attempt sidesteps the encryption issue by proposing on-device, pre-encryption scanning of content. It may, at first glance, appear more innocuous. But, as they say, you can put lipstick on a pig, but it’s still a pig.
You can’t break (or sidestep) encryption for the bad guys without simultaneously breaking it for the good guys. And breaking encryption for everyone is not a viable solution, regardless of how commendable the stated goal is meant to be. The societal harms that can arise without the ability to protect our communications are immeasurable. And while it is admittedly a complex problem to solve, we should tread with care.