
Many more details about Apple's scanning for possible child pornography
To say that Apple dropped the ball with the announcement that it will soon start scanning photos from American iCloud accounts for child pornography is an understatement. Apparently they thought their good intentions and reputation would be enough.
But the effect is that precisely that reputation as a champion of privacy is now under fire, and it is now 'all hands on deck' at Apple to make clear what this is really about. For instance, Erik Neuenschwander, Apple's head of Privacy, was interviewed at length by Matthew Panzarino for TechCrunch, who fortunately asked plenty of sharp questions and got clear answers to them.

Below is an excerpt from that interview; first Matthew Panzarino's question, then Erik Neuenschwander's answer:
One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?
Well first, that is launching only for U.S., iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the U.S. when they speak in that way, and therefore it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government.
But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded so trying to seek out even a single image from a person’s device or set of people’s devices won’t work because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM and that we don’t believe that there’s a basis on which people will be able to make that request in the U.S. And the last point that I would just add is that it does still preserve user choice, if a user does not like this kind of functionality, they can choose not to use iCloud Photos and if iCloud Photos is not enabled no part of the system is functional.
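To make the on-device part of that answer concrete, here is a minimal Swift sketch of the flow as described, with all names hypothetical. The real system uses NeuralHash together with private set intersection and threshold secret sharing; the lookup below is reduced to a plain hash comparison purely for illustration.

    import Foundation
    import CryptoKit

    // Hypothetical, simplified sketch of the on-device flow Neuenschwander describes.
    struct SafetyVoucher {
        let photoID: UUID
        let matchedKnownHash: Bool   // in the real design this result stays hidden from Apple until the threshold is met
    }

    final class OnDeviceMatcher {
        // The hash list ships as part of the OS image and is identical for every user.
        private let knownCSAMHashes: Set<Data>

        init(hashListBundledWithOS: Set<Data>) {
            self.knownCSAMHashes = hashListBundledWithOS
        }

        /// Produces one voucher per photo being uploaded to iCloud Photos.
        /// No match result for a single photo is ever revealed on its own.
        func voucher(for photoData: Data, photoID: UUID) -> SafetyVoucher {
            let perceptualHash = Data(SHA256.hash(data: photoData)) // stand-in for NeuralHash
            return SafetyVoucher(photoID: photoID,
                                 matchedKnownHash: knownCSAMHashes.contains(perceptualHash))
        }
    }

The point of the sketch is the structure, not the cryptography: one shared hash list for everyone, one voucher per uploaded photo, and nothing that identifies a single image to Apple.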

Erik Neuenschwander also confirmed once more that all of this only works if you use iCloud Photos:
If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos, is functioning if you’re not using iCloud Photos.
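That gating can be summed up in a few lines of Swift; the names are hypothetical, but the behaviour is exactly what Neuenschwander states: without iCloud Photos there is nothing to hash and nothing to upload.

    import Foundation

    // Minimal sketch (hypothetical names): the whole pipeline is gated on iCloud Photos.
    struct PhotoUpload {
        let photoData: Data
        let iCloudPhotosEnabled: Bool
    }

    func safetyVoucherIfAny(for upload: PhotoUpload,
                            makeVoucher: (Data) -> Data) -> Data? {
        guard upload.iCloudPhotosEnabled else {
            return nil   // NeuralHash never runs, no voucher is generated or uploaded
        }
        return makeVoucher(upload.photoData)
    }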
Apple's senior vice president of Software Engineering, Craig Federighi, was also interviewed about this, in this case by Joanna Stern of The Wall Street Journal.
Below are a few of the most important quotes from that interview:
And if, and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images.
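In Swift-flavoured pseudocode the rule comes down to something like the sketch below (hypothetical names; in the real design the threshold is enforced cryptographically via threshold secret sharing, not by a simple counter):

    // Sketch of the threshold rule Federighi describes.
    let matchThreshold = 30   // "on the order of 30" known images

    struct AccountReviewDecision {
        let flaggedForHumanReview: Bool
    }

    func evaluate(matchingVoucherCount: Int) -> AccountReviewDecision {
        // Below the threshold Apple learns nothing about the account or its photos.
        AccountReviewDecision(flaggedForHumanReview: matchingVoucherCount >= matchThreshold)
    }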
And from the article that accompanies this video:
Beyond creating a system that isn’t scanning through all of a user’s photos in the cloud, Mr. Federighi pointed to another benefit of placing the matching process on the phone directly. “Because it’s on the [phone], security researchers are constantly able to introspect what’s happening in Apple’s [phone] software,” he said. “So if any changes were made that were to expand the scope of this in some way — in a way that we had committed to not doing — there’s verifiability, they can spot that that’s happening.”
Critics have said the database of images could be corrupted, such as political material being inserted. Apple has pushed back against that idea. During the interview, Mr. Federighi said the database of images is constructed through the intersection of images from multiple child-safety organizations — not just the National Center for Missing and Exploited Children. He added that at least two “are in distinct jurisdictions.” Such groups and an independent auditor will be able to verify that the database consists only of images provided by those entities, he said.
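The "intersection" Federighi describes can be illustrated with a short, hedged Swift sketch; the types and function are hypothetical, but the idea is that a hash only ends up in the shipped database if every participating organization supplied it:

    import Foundation

    // A hash is included only if all participating child-safety organizations
    // (at least two of them in distinct jurisdictions) independently provided it.
    func buildShippedHashDatabase(from providers: [Set<Data>]) -> Set<Data> {
        guard var intersection = providers.first else { return [] }
        for providerHashes in providers.dropFirst() {
            intersection.formIntersection(providerHashes)
        }
        return intersection
    }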
In addition, Apple has also updated the FAQ (PDF, 265 KB) on this subject, but at the same time it is dealing with its own employees who are concerned about it.
And perhaps the biggest objection does not even come up in the interviews above: how much sense does it make to tackle this problem at the end of the chain, when the problem really lies at the beginning?
with thanks to forum member 'puk1980', who was the first to report this news on the forum
#Privacy #iOS #iPadOS #watchOS #macOS