Apple has published a CSAM FAQ in response to misconceptions and concerns about its photo-scanning announcements.
While child safety organizations applauded Apple’s efforts to help detect child sexual abuse material (CSAM) and protect children from predators, there has been a mix of informed and misinformed criticism…
There has also been a lack of understanding of Apple’s methods. In the case of iMessage, on-device AI is used to recognize images that appear to contain nudity, while for CSAM detection, digital fingerprints of known material are compared against fingerprints generated from a user’s stored photos.
No one at Apple sees any of the photos unless an account is flagged for holding multiple CSAM matches, at which point someone at Apple manually reviews low-resolution copies to confirm they are true matches before law enforcement is notified.
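The matching-plus-threshold process described above can be sketched in simplified form. This is purely illustrative: Apple’s actual system uses NeuralHash with cryptographic private set intersection and threshold secret sharing, and the database, function names, and threshold value below are all hypothetical.

```python
# Hypothetical sketch of threshold-based fingerprint matching.
# The fingerprint values and REVIEW_THRESHOLD are made up for illustration;
# Apple's real pipeline is cryptographic and does not work on plain sets.

KNOWN_CSAM_FINGERPRINTS = {"a3f1", "9bc2", "77de"}  # stand-in hash database
REVIEW_THRESHOLD = 30  # assumed value; the real threshold is Apple's policy

def count_matches(photo_fingerprints):
    """Count how many of a user's photo fingerprints match the known database."""
    return sum(1 for fp in photo_fingerprints if fp in KNOWN_CSAM_FINGERPRINTS)

def should_flag_for_review(photo_fingerprints):
    """An account is surfaced for human review only once the number of
    matches crosses the threshold; below it, nothing is visible to Apple."""
    return count_matches(photo_fingerprints) >= REVIEW_THRESHOLD
```

The key property the sketch captures is that a single match reveals nothing; only an account exceeding the threshold is escalated to manual review.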
There has also been confusion between the current privacy and misuse risks (which are nil to extremely low) and the potential for future abuse by authoritarian governments. Cybersecurity experts have been warning about the latter, not the former.
Apple has already taken steps to address concerns about repressive governments by launching only in the United States for now, and by stressing that expansion will proceed country by country, taking into account the legal environment in each. The FAQ now addresses this and other concerns.
Apple CSAM FAQ
Apple has released a six-page FAQ in response to some of the issues that have been voiced. It starts by acknowledging the conflicted reaction.
We want to protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM). Since we announced these features, many stakeholders including privacy organizations and child safety organizations have expressed their support of this new solution, and some have reached out with questions. This document serves to address these questions and provide more clarity and transparency in the process.
The company goes on to explain that the two features are entirely distinct.
What are the differences between communication safety in Messages and CSAM detection in iCloud Photos?
These two features are not the same and do not use the same technology.
Communication safety in Messages is designed to give parents and children additional tools to help protect their children from sending and receiving sexually explicit images in the Messages app. It works only on images sent or received in the Messages app for child accounts set up in Family Sharing. It analyzes the images on-device, and so does not change the privacy assurances of Messages. When a child account sends or receives sexually explicit images, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view or send the photo. As an additional precaution, young children can also be told that, to make sure they are safe, their parents will get a message if they do view it.
The second feature, CSAM detection in iCloud Photos, is designed to keep CSAM off iCloud Photos without providing information to Apple about any photos other than those that match known CSAM images. CSAM images are illegal to possess in most countries, including the United States. This feature only impacts users who have chosen to use iCloud Photos to store their photos. It does not impact users who have not chosen to use iCloud Photos. There is no impact to any other on-device data. This feature does not apply to Messages.