
Why Apple’s new “child protection” systems are a really bad idea

Apple is taking action against child pornography. This is basically a legitimate concern, but the opposite of good is often well-meant: Apple scans data on users’ devices without any suspicion, including, and above all, the data of the innocent majority. Worse still, the systems open the door to abuse, for example by authoritarian regimes. That is fatal.

What this is about

On Thursday, Apple announced its plans to protect children in the Child Safety section of its website. Later this year, content on iPhones, iPads, Macs and in iCloud can accordingly be scanned for the sake of child protection, its display blocked and the authorities notified of its possession. The changes initially affect users in the USA.


In a nutshell, here is what Apple announced:

  • iMessage: On devices running iOS 15, iPadOS 15, watchOS 8 or macOS Monterey, images that are sent or received and that show, for example, nudity involving minors are initially not displayed. Whether the content falls into this category is assessed on the device by locally executed machine learning models. The user can confirm in a warning message that the content should still be displayed or sent. If the user is under 13 years of age and part of a family group, the parents are informed. The feature is opt-in, so it has to be activated separately by parents (a rough sketch of this decision flow follows below the list).
  • Photos / iCloud: To curb the spread of child pornography, images uploaded to iCloud are cross-checked against a database of known child pornography images maintained by the US NGO National Center for Missing and Exploited Children (NCMEC). This comparison does not take place in the cloud but via so-called “NeuralHashes” on the user’s device, which can identify matches based on characteristic image features without the images themselves being compared. According to Apple, false positives are possible but extremely rare. If the number of matching images in an iCloud account exceeds a critical threshold, the case is reviewed by Apple; if the suspicion is confirmed, the user account is blocked if necessary and the authorities are informed. Those affected are to be given the opportunity to lodge an objection.
  • Siri / Search: Users are warned when they search for child pornographic content.
iMessage parental control in action (screenshot: apple.com/child-safety)
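
To make the described iMessage flow a little more tangible, here is a minimal Swift sketch of the decision logic as I understand it from Apple’s description: opt-in by parents, on-device classification, and a parental notification only for users under 13 in a family group. All type and function names (CommunicationSafetySettings, handleImage and so on) are my own illustration, not Apple’s actual API.

```swift
import Foundation

// Purely illustrative sketch of the decision flow described above.
// None of these types exist in Apple's SDKs; the names are assumptions.

struct CommunicationSafetySettings {
    let enabledByParents: Bool   // the feature is opt-in
    let userAge: Int
    let isInFamilyGroup: Bool
}

enum ImageAction {
    case showNormally
    case blurWithWarning(notifyParentsOnOverride: Bool)
}

/// Decides, entirely on the device, how an incoming or outgoing image is handled.
func handleImage(_ image: Data,
                 settings: CommunicationSafetySettings,
                 flagsSensitiveContent classify: (Data) -> Bool) -> ImageAction {
    // If parents have not explicitly activated the feature, nothing changes.
    guard settings.enabledByParents else { return .showNormally }

    // A local ML model decides whether the image shows e.g. nudity involving minors.
    guard classify(image) else { return .showNormally }

    // Only users under 13 who are part of a family group additionally trigger
    // a parental notification if they choose to view or send the image anyway.
    let notifyParents = settings.userAge < 13 && settings.isInFamilyGroup
    return .blurWithWarning(notifyParentsOnOverride: notifyParents)
}
```

Even in this simplified form, the crucial point is visible: the classification happens on the device, before the user has made any decision about the image.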

Why the new features go too far

Of course, the creation and distribution of child pornography is despicable and should be prosecuted and punished. But how far should you go? I believe that reviewing all the images users store on their devices and upload to the cloud goes too far, even if it meant that the material would spread a little less and some owners of such material would actually be caught by the procedure.

The first important reason: I want to control my device, in particular the data that is stored on it and that is sent or received with it. I don’t want to fear that an algorithm is checking the communication running through my devices to see whether the content I send is permissible and whether the pictures I have taken or shared are suspect by some definition. I don’t want to have to think twice before sending a selfie from the nude beach, a photo of the kids splashing in the bathtub or, God forbid, a consensual nude photo to my partner. The moment that hesitation sets in, self-censorship has taken hold, and you have lost sovereignty over your own technology.

It is important to understand that the matching against iCloud data only takes place on the basis of hashes of known images, and only on the end device. A user’s own pictures are only evaluated by iMessage when they are sent, and then only under certain circumstances and with manageable consequences, see above. I believe Apple that the features have been implemented with the best of intentions and a lot of technical expertise so as to be as minimally invasive as possible. But this destroys the trust that Apple has built up in recent years with its focus on privacy and data protection, both in the architecture of its own hardware and software and in its advertising. That is a problem, because the tech industry needs Apple in this role, as a counterweight to the data giants Google and Facebook.

But there is another serious reason to reject Apple’s measures. Recent history has shown that the topic of child pornography, as serious and horrific as it is, is repeatedly used to justify technology or technology regulation that deprives users of freedoms and leads them into a system of surveillance; keywords: data retention, upload filters and the like. The child pornography argument has always been only the beginning of such a development, serving as a convenient door opener in the form of the one crime that everyone can agree to fight. You and I and Aunt Lise have nothing to hide, and somehow it is suspicious if you don’t like measures against child pornography, right?

The problem: Apple is creating an infrastructure that can be misused for censorship reaching right down to users’ devices. Once the tools are there, state actors will inevitably want to expand the range of content to be combated. Next would be political extremism, then copyrighted content. And finally content that is considered disreputable in some parts of the world and an expression of individual freedom in others: depictions of homosexuality, pictures from the massacre on Tian’anmen Square, memes about the leaders of authoritarian states.

The end point of this and similar developments can be totalitarian surveillance of all communication, censorship and suppression of unwelcome topics, and the social ostracism and persecution of those who want to discuss them. No, that is not a dystopia; in China it has long been reality.

Who guarantees that it will stop at pictures and not eventually extend to texts, videos or conversations as well? Who checks that the hash list really contains only child pornography? Who protects us from these features being abused by totalitarian regimes? Apple alone? That is not enough.

More information

What Apple Says

We also spoke to Apple on the subject. In a personal conversation it was made clear that the child protection functions were developed together with experts, are documented in technical detail at apple.com/child-safety and will initially only be rolled out in the USA. It is not yet clear whether something similar will happen in other countries, and in cooperation with which authorities; the legal framework must first be clarified. Apple also emphasizes that third parties will not be able to directly access the data that will be stored encrypted in iCloud in the future. The comparison here, too, takes place on the device, against a downloaded version of the NCMEC database. Regarding child protection in iMessage, Apple emphasizes that it is purely an opt-in feature that runs on the device and that parents must explicitly activate for their family group.
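
To make the mechanism Apple describes a little more concrete, here is a minimal Swift sketch of what an on-device comparison against a downloaded hash database with a reporting threshold could look like in principle. The hash is reduced to an opaque byte string here, and neither the names nor the threshold logic reflect Apple’s actual NeuralHash implementation; this is an assumption-laden illustration only.

```swift
import Foundation

// Minimal sketch of threshold-based matching against a downloaded database of
// known hashes, performed locally on the device. The names, the hash type and
// the threshold are assumptions, not Apple's actual NeuralHash implementation.

typealias PerceptualHash = Data

struct LocalMatcher {
    /// Local copy of the known-image hash database (here simply an in-memory set).
    let knownHashes: Set<PerceptualHash>
    /// Number of matches that must be exceeded before a case is handed to human review.
    let reportingThreshold: Int

    /// Counts how many of the uploaded images match a known hash.
    func matchCount(for uploads: [PerceptualHash]) -> Int {
        uploads.filter { knownHashes.contains($0) }.count
    }

    /// Only above the threshold would an account be flagged for manual review at all.
    func shouldFlagForReview(_ uploads: [PerceptualHash]) -> Bool {
        matchCount(for: uploads) > reportingThreshold
    }
}
```

The sketch also makes the point of contention visible: the device owner has no way of verifying what the downloaded hash list actually contains, which is exactly the question raised above.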
