Apple unveiled plans Thursday to scan U.S. users’ iCloud Photos and text messages for child sexual abuse images in a sweeping effort to identify and implicate predators.
The detection tool, called “NeuralHash,” will scan images as they are uploaded to iCloud Photos. If an image matches known child sexual abuse imagery, it will undergo manual review; if it is determined to be CSAM, the user’s account will be disabled and law enforcement will be contacted. The technology will also scan encrypted messages for sexually explicit material, and Siri will intervene when a user searches for CSAM-related queries.
Apple’s technical summary states, “Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the database of known CSAM hashes.” In practice, Apple will convert each image into a numerical fingerprint and compare it against the fingerprints of images already identified as CSAM in that database. When an account crosses a threshold of matches against known CSAM content, the flagged images will be reviewed by a human. For security reasons, Apple did not reveal what the threshold is.
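To make the mechanism concrete, the following is a minimal, hypothetical sketch of that kind of on-device matching, written in Swift. It is not Apple’s NeuralHash: the fingerprint type, the hash values, and the threshold value are placeholders, since Apple has published none of them.

```swift
// Illustrative sketch only: hypothetical types, hash values, and threshold; not Apple's NeuralHash.

// A perceptual hash is modeled here as a plain 64-bit fingerprint.
typealias PerceptualHash = UInt64

// Stand-in for the on-device database of known CSAM fingerprints (values are placeholders).
let knownCSAMHashes: Set<PerceptualHash> = [0x1A2B3C4D5E6F7081, 0x99AABBCCDDEEFF00]

// Placeholder review threshold; Apple has not disclosed the real value.
let reviewThreshold = 30

/// Counts how many of an account's image fingerprints appear in the known-hash set
/// and reports whether the hypothetical review threshold has been crossed.
func accountCrossesThreshold(imageHashes: [PerceptualHash]) -> Bool {
    let matchCount = imageHashes.filter { knownCSAMHashes.contains($0) }.count
    return matchCount >= reviewThreshold
}
```

According to Apple’s technical summary, the real system additionally wraps each match result in an encrypted “safety voucher,” so that neither the device nor Apple learns individual results until the threshold is exceeded; the sketch above omits that cryptographic layer.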
In the released document, Apple attempted to reassure users that the threshold is “selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account . . . If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.”
Apple’s announcement garnered both support from child advocates and backlash from technology security experts.
John Clark, the president and CEO of the National Center for Missing and Exploited Children, praised the initiative: “Apple’s expanded protection for children is a game changer . . . [w]ith so many people using Apple products, these new safety measures have life saving potential for children.”
Berkeley computer scientist Hany Farid, who more than a decade ago helped develop PhotoDNA, the technology used by law enforcement to identify child pornography online, acknowledged that Apple’s system could be exploited in the wrong hands. He argues, however, that the mission to combat child sexual abuse far outweighs that potential for abuse of the technology.
Yet one expert is not so sure. Matthew Green, a top cryptography researcher at Johns Hopkins University, explained that “these scanning technologies are effectively (somewhat limited) mass surveillance tools. Not dissimilar to the tools that repressive regimes have deployed — just turned to different purposes.”
Green added, “In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.”
The Electronic Frontier Foundation, an online advocacy group for digital civil liberties, echoed Green’s worries. In response to Apple’s announcement, the group said, “It’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children . . . even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.”
Though Apple’s initiative may shock some, the technology has been in the works for some time. Jane Horvath, Apple’s chief privacy officer, appeared on a 2020 panel at CES, the annual consumer electronics trade show, to discuss how Apple was working on scanning users’ iCloud Photos for this type of material.
The CSAM detection technology will be included in the iOS 15 update, which is scheduled for later this year.

