
Apple Clarifies Child Safety Features After Privacy Concerns

Mon Aug 23 2021

Apple has clarified the security model of its child safety features following privacy criticism.

We mentioned Apple’s new, controversial photo-scanning feature previously, but the discussion is ongoing.

Apple announced it would be adding three new features to iOS, all intended to combat child sexual exploitation and the distribution of child abuse imagery: one adds new information to Siri and Search, another checks messages sent to children for potentially inappropriate images, and the third compares photos on an iPhone against a database of known child sexual abuse material (CSAM) and alerts Apple if a match is found.
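To make that third feature a little more concrete, here is a heavily simplified, purely illustrative sketch in Python. It is not Apple's code or API: the real system derives a perceptual fingerprint ("NeuralHash") of each photo on the device and uses cryptography so that non-matching photos reveal nothing, but the basic idea is comparing fingerprints against a database of known material rather than inspecting photo content.

# Purely illustrative sketch, not Apple's implementation or API.
# The hypothetical database below stands in for the real list of
# fingerprints of known CSAM; only fingerprints are ever compared.

KNOWN_CSAM_FINGERPRINTS = {"fp_0001", "fp_0002"}  # hypothetical values

def matches_known_database(image_fingerprint: str) -> bool:
    # A photo is only of interest if its fingerprint is already in the
    # database; the matching step never looks at the photo itself.
    return image_fingerprint in KNOWN_CSAM_FINGERPRINTS

print(matches_known_database("fp_9999"))  # False, so nothing is flagged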

Critics say the third feature contravenes Apple's commitment to privacy. However well-intentioned, they argue, it could in future be repurposed to scan for other material, such as political images on the phones of people living under authoritarian regimes.

Apple has repeatedly defended itself against this claim, saying it will not allow the feature to be used for any other kind of material, that no scanning takes place if a phone does not store its photos in the cloud, and that a number of safeguards exist to ensure it upholds its commitment to users' privacy. Despite having had to defend the feature in multiple interviews since the announcement, Apple says it will go ahead as planned.

It has released a new paper, titled “Security Threat Model Review of Apple’s Child Safety Features”, with the aim of reassuring users that the new feature will only be used as intended, as well as addressing the other privacy concerns that have cropped up since it was introduced.


Apple's compromise on end-to-end encryption may appease government agencies in the US and abroad, but it is a shocking about-face for users who have relied on the firm's leadership in privacy and security. It is impossible to create a client-side scanning system that can only be used for sexually explicit images sent or received by children. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change.

Electronic Frontier Foundation (EFF) - Blog Post


Specific Announcements

Among the paper’s clarifications is that the database of images to match against will not come from just one nation’s official organisation: a picture will only be matched if its hash appears in the databases of at least two separate child-safety groups. This means no single government is able to inject other content into the database.
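As a rough illustration of that rule, the sketch below (Python, with hypothetical names and data, not Apple's actual code) keeps only those image hashes that appear in the lists of at least two independent organisations before they can ever be matched against.

# Purely illustrative; organisation names and hashes are made up.
from collections import Counter

def build_matchable_set(databases):
    """Keep only hashes supplied by at least two separate organisations."""
    counts = Counter(h for db in databases for h in set(db))
    return {h for h, n in counts.items() if n >= 2}

# A hash submitted by only one organisation never becomes matchable,
# so a single government-linked body cannot slip extra content in.
org_one = {"hash_a", "hash_b"}
org_two = {"hash_b", "hash_c"}
print(build_matchable_set([org_one, org_two]))  # {'hash_b'}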

Apple will also have auditors review that database, and identifiers of what the feature is looking for will be provided so that others can check it is only scanning for child sexual abuse imagery.

One of the main security points is that an account will only be flagged if its photo library contains at least 30 apparent CSAM images. This keeps false positives to a minimum and should, according to Apple, bring the chance of an account being incorrectly flagged down to around one in a trillion.
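To show why that threshold matters, here is a short illustration with entirely made-up numbers (Apple has not published a per-image false-match rate, and its one-in-a-trillion figure comes from its own modelling): even a generously high assumed per-image error rate makes 30 simultaneous false matches vanishingly unlikely.

# Purely illustrative; the per-image false-match rate below is invented.
from math import comb

FLAG_THRESHOLD = 30  # apparent matches required before an account is flagged

def should_flag_account(match_count: int) -> bool:
    return match_count >= FLAG_THRESHOLD

def prob_false_flag(n_photos: int, p: float, threshold: int = FLAG_THRESHOLD) -> float:
    """Binomial tail: chance of at least `threshold` false matches when each
    of n_photos photos false-matches independently with probability p."""
    return sum(comb(n_photos, k) * p**k * (1 - p)**(n_photos - k)
               for k in range(threshold, n_photos + 1))

print(should_flag_account(29))                  # False, still below the threshold
print(prob_false_flag(n_photos=1_000, p=1e-4))  # vanishingly small, on the order of 1e-63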

