Apple to scan iPhones for images of child abuse

In a move that has been lauded by child protection groups but has alarmed online privacy advocates, Apple has announced it will begin scanning iPhones to detect whether they contain images classed as child sexual abuse material (CSAM).

Called neuralMatch, the tool will automatically scan images stored on an iPhone, before those images are uploaded to iCloud, to check whether they are CSAM. If a match is found, it is sent to a human reviewer working for (or on behalf of) Apple. If the image is confirmed to be CSAM, the Apple account will be suspended and the authorities notified via the National Center for Missing and Exploited Children (NCMEC). The aim of the tool is to dramatically reduce the proliferation of CSAM within the Apple ecosystem.

It’s a move that, while celebrated by child protection groups, has been criticised by privacy organisations including the Electronic Frontier Foundation, which called it a backdoor into user privacy. Apple has often been the company that waves the privacy flag the highest, most notably during the ongoing “crypto wars” saga, in which it quarrelled with law enforcement agencies that wanted Apple to abandon end-to-end encryption and implement a “backdoor” into its devices, something Apple refused to do.

How does neuralMatch work?

Firstly, if you’re worried that nude photos, naked baby photos and the like stored on your phone may accidentally be tagged as CSAM, you needn’t be. The neuralMatch software works by turning each photo on an iPhone into what is called a “hash”, which is like a digital fingerprint unique to that specific photo. It then compares that hash against a database of hashes of known CSAM images to see if there is a match.

That is to say, it’s only checking your phone for known, existing CSAM material. It isn’t performing any analysis of the photo itself to determine whether it could potentially be an offending image.
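To make the “fingerprint” idea concrete, here is a minimal sketch of that matching step in Swift. It uses an ordinary SHA-256 hash as a stand-in for Apple’s perceptual hashing (which is designed to produce the same fingerprint even after resizing or re-encoding), and the database contents and function names are hypothetical.

```swift
import CryptoKit
import Foundation

// Hypothetical set of fingerprints of known CSAM images. In the system the
// article describes, only hashes are stored on the device, never images.
let knownHashes: Set<String> = ["<known-fingerprint-1>", "<known-fingerprint-2>"]

// Turn a photo into a "digital fingerprint". SHA-256 is used here purely for
// illustration; Apple's real hashing is perceptual, so trivially altered
// copies of a known image still produce a matching fingerprint.
func fingerprint(of photoData: Data) -> String {
    SHA256.hash(data: photoData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// A photo is flagged only if its fingerprint appears in the known database;
// no analysis of what the photo actually depicts takes place.
func matchesKnownImage(_ photoData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: photoData))
}
```

The key point is that a brand-new photo produces a fingerprint that simply isn’t in the database, so it can never match.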


What makes Apple’s neuralMatch a little different from a number of existing photo-scanning technologies is that the processing takes place on the device itself. Yes, the tool is designed to detect offending images uploaded to iCloud, but it is the iPhone itself, not an Apple server, that processes the image and compares it against a database of known offending hash “fingerprints”, which will also be stored on the device.

A secondary concern is whether this means Apple employees have access to photos stored on a user’s iPhone. Again, it is important to note that this analysis is performed solely by software, not people. Apple has emphasised that the tool works with user privacy in mind, and that it uses a clever series of cryptographic tools to ensure no human can access content on a user’s phone or iCloud account unless 1. a CSAM match is detected and 2. the number of matches exceeds a certain threshold. Only then will the tool flag that specific content and send it to Apple for human review.
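Conceptually, that threshold step behaves like a simple gate: matches accumulate silently, and nothing is surfaced for human review until the count passes a limit. The sketch below is only an illustration of that logic; the names and the threshold value are hypothetical (Apple has not published the real figure), and the company’s actual mechanism relies on the cryptographic tools described above rather than a plain counter.

```swift
// Illustrative threshold gate only; the type name and threshold are hypothetical.
struct AccountMatchState {
    private(set) var matchCount = 0
    let reviewThreshold = 30  // made-up figure; Apple has not published the real one

    // Record a new fingerprint match and report whether the account's matched
    // content should now be flagged and sent to Apple for human review.
    mutating func recordMatch() -> Bool {
        matchCount += 1
        return matchCount > reviewThreshold
    }
}
```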

Apple has said in its blog post that these technologies ensure there is “less than a one in one trillion chance per year of incorrectly flagging a given account”.

Still, privacy groups see the move as an about-face by a company that has long hailed user privacy as a primary selling point. The Electronic Frontier Foundation has said that while the technologies employed may be “carefully thought-out”, the tools nonetheless remain a backdoor into user privacy that has the potential to be abused or expanded upon.


Photo Analysis for iMessage

The neuralMatch tool isn’t the only one announced by Apple. The company also revealed a new tool to help prevent accounts identified as belonging to children (through family plans) from sending or receiving explicit photos.

This technology is also an on-device tool but works differently to neuralMatch. It analyses photos, whether sent or received, in iMessage (but only on a child’s account) and, if a photo is deemed to be explicit, iMessage will censor the image and notify the child’s parents. Unlike neuralMatch, however, this tool isn’t used to detect CSAM and doesn’t share any content with Apple under any circumstances.
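In rough terms, the flow described above amounts to the sketch below. The names are hypothetical, and the check that decides whether a photo is explicit is left abstract; the point is simply that every step happens on the device and nothing is shared with Apple.

```swift
// Hypothetical sketch of the on-device iMessage flow described above.
enum PhotoHandling {
    case deliverNormally
    case censorAndNotifyParents
}

// `isChildAccount` reflects the account type under the family plan, and
// `deemedExplicit` stands in for the on-device explicit-content check.
// Nothing evaluated here is ever sent to Apple.
func handleMessagePhoto(isChildAccount: Bool, deemedExplicit: Bool) -> PhotoHandling {
    guard isChildAccount && deemedExplicit else {
        return .deliverNormally
    }
    return .censorAndNotifyParents
}
```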

Again, however, privacy advocates have raised concerns that, just like neuralMatch, this tool is effectively a backdoor into end-to-end encryption and could potentially be expanded to detect other types of content sent between users who believe their messages are completely private.

The entire debate is another facet of the privacy versus crime and national security saga. Should we risk, compromise or erode our natural right to privacy if doing so helps curb serious crime and protect some of its most vulnerable victims? If the reaction to Apple’s announcement is any indication, there are plenty of people who come down on either side of the fence.

The updates announced by Apple are expected to be rolled out to iPhones in the United States towards the end of the year.

Published by
Craig Haley