We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses. All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.
With thanks to forum member 'puk1980', who was the first to report this news on the forum
In addition, Apple has had a comparable system for detecting child abuse for some time ...
I reported this bit of news last night, and puk1980 replied to it. Not that it really matters.
At first glance I don't see any difference, so I don't understand why they are presenting this as 'news' now.
Other cloud storage providers, from Microsoft to Dropbox, already perform detection on images uploaded to their servers. But by adding any sort of image analysis to user devices, some privacy critics argue, Apple has also taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement. “I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”
Apple’s new system isn’t a straightforward scan of user images, either on the company’s devices or on its iCloud servers. Instead it’s a clever—and complex—new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a "hash" of all images a user sends to iCloud, converting the files into strings of characters that are uniquely derived from those images. Then, like older systems of CSAM detection such as PhotoDNA, it compares them with a vast collection of known CSAM image hashes provided by NCMEC to find any matches.
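Stripped of the cryptographic machinery, the matching step described above boils down to a set lookup. A minimal sketch, assuming a hypothetical image_fingerprint function (Apple's real pipeline uses NeuralHash plus a private set intersection protocol, neither of which this reproduces):

```python
# Illustrative hash-and-compare matching. NOT Apple's actual design: a real
# system would use a *perceptual* hash that survives resizing/recompression,
# not a cryptographic hash like SHA-256, and would never expose the database.
import hashlib
from pathlib import Path

def image_fingerprint(path: Path) -> str:
    # Hypothetical stand-in for a perceptual hash function.
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Collection of known-bad fingerprints (empty placeholder here); in the real
# system this list is provided by NCMEC, not visible to the device as plaintext.
known_hashes: set[str] = set()

def is_known_image(path: Path) -> bool:
    return image_fingerprint(path) in known_hashes
```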
But critics like Johns Hopkins University cryptographer Matt Green suspect more complex motives in Apple's approach. He points out that the great technical lengths to which Apple has gone to check images on a user's device, despite that process's privacy protections, only really make sense in cases where the images are encrypted before they leave a user's phone or computer and server-side detection becomes impossible. And he fears that this means Apple will extend the detection system to photos on users' devices that aren't ever uploaded to iCloud—a kind of on-device image scanning that would represent a new form of invasion into users' offline storage.
... that Green worries could open the door to governments around the world making other demands that it alter the system to scan for content other than CSAM, such as political images or other sensitive data. While the new CSAM detection features are limited to the US for now, Green fears a future where other countries, particularly China, insist on more concessions. After all, Apple has already previously acceded to China's demands that it host user data in Chinese data centers. "The pressure is going to come from the UK, from the US, from India, from China. I'm terrified about what that's going to look like," Green adds. “Why Apple would want to tell the world, ‘Hey, we've got this tool’?”
For now, Apple's new system represents a win, at least, for the fight against child abuse online—if one that's potentially fraught with pitfalls. "The reality is that privacy and child protection can coexist," NCMEC's president and CEO John Clark wrote in a statement to WIRED. "Apple’s expanded protection for children is a game-changer." Just how much it changes the game for its users' privacy—and in what direction—will depend entirely on Apple's next moves.
Ultimately, a photo has to be recognized a number of times to exceed a certain threshold before an alarm is generated. All of that to prevent false positives. Which immediately implies that false positives are a real possibility, and that is rather scary.
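As a rough illustration of that threshold logic (the numbers here are made up for the example, not Apple's actual values):

```python
# Sketch of threshold-based reporting: only raise an alert once the number of
# matched images for an account exceeds a preset threshold, so that a single
# false positive never triggers a report on its own.
MATCH_THRESHOLD = 30  # hypothetical value, chosen purely for illustration

def should_alert(match_count: int) -> bool:
    return match_count >= MATCH_THRESHOLD
```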
It has nothing to do with storage, and everything to do with protecting children. Which is of course very noble. The only question is whether that is a task that belongs with Apple.
That's true. But bear in mind that the implications of being falsely accused are enormous and can destroy someone's life (more background here on the 'fuzzy hashing' technique used in the CSAM system).
Then again, the chance of that happening is minuscule.
Which is exactly what okkehel describes: the tech companies become an extension of the judicial investigation apparatus. That does not seem like a good development to me.
But what the probability of collisions is with the fuzzy hash algorithm remains a mystery. After all, the whole algorithm is designed PRECISELY to produce collisions when photos look somewhat alike.
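In practice, 'fuzzy' matching usually means comparing hashes by Hamming distance rather than exact equality, which is exactly why near-duplicates are meant to collide. A small sketch (the 10-bit cutoff is an arbitrary illustration, not a documented value):

```python
# Two perceptual hashes are treated as a match when they differ in only a
# few bits, so visually similar photos deliberately "collide".
def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def is_match(hash_a: int, hash_b: int, max_bits: int = 10) -> bool:
    return hamming_distance(hash_a, hash_b) <= max_bits
```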
Has nobody ever wondered what the side effects are of Touch ID (an interesting database of fingerprints), facial recognition, and voice capture with Siri? [and not only at Apple]
https://rentafounder.com/the-problem-with-perceptual-hashes/
THE PROBLEM WITH PERCEPTUAL HASHES
Apple just announced that they will use “perceptual hashing” to detect illegal photos on iPhones. I have some experience to share on this technology.
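For context on what a perceptual hash actually is: a classic 'average hash' shrinks the image to an 8x8 grayscale thumbnail and sets one bit per pixel based on whether it is brighter than the mean. A minimal sketch of that generic aHash (not Apple's NeuralHash, which is a neural-network-based hash):

```python
# Generic average hash (aHash), shown only to illustrate the idea of a
# perceptual fingerprint that stays stable under resizing and recompression.
from PIL import Image

def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # 64-bit integer fingerprint
```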