Apple’s major privacy fiasco

Apple’s announcement of client-side scanning for photos involving children amounts to a major privacy fiasco. Protecting “children from exploitation” is a noble goal, but it expands the power of surveillance to private devices. The proposed technology scans pictures saved on Apple devices and alerts authorities if it finds objectionable images. The service is currently scheduled to roll out in the US, with expansion to other geographies to follow.
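To make the mechanism concrete, here is a minimal sketch of what on-device matching against a vendor-supplied blocklist looks like. This is an illustration only: Apple’s actual system uses a perceptual hash (“NeuralHash”) with threshold secret sharing, while a plain cryptographic hash stands in here, and the function names are hypothetical.

```python
import hashlib

# Hypothetical blocklist shipped to the device by the vendor. In the
# real system this would be a database of perceptual hashes of known
# flagged images; a SHA-256 digest stands in for illustration.
FLAGGED_HASHES = {
    hashlib.sha256(b"known-flagged-image-bytes").hexdigest(),
}

def scan_photo(image_bytes: bytes) -> bool:
    """Return True if the photo matches the on-device blocklist."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in FLAGGED_HASHES

# The crucial property: matching happens on the user's own device, and
# the vendor alone decides what goes into the blocklist.
print(scan_photo(b"known-flagged-image-bytes"))  # True
print(scan_photo(b"my-holiday-photo"))           # False
```

The point of the sketch is structural, not cryptographic: whoever controls the blocklist controls what the device reports on, which is exactly the pressure point the EFF quote below describes.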

EFF writes:

“It’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses […] That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.”

EFF further stated on the issue:

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that’s not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images that would provide the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users’ devices or send notifications to a wider audience, easily censoring and chilling speech.

(emphasis mine)
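EFF’s argument can be sketched in a few lines of code. Assume a toy cipher (not real cryptography) and a stand-in classifier; both are hypothetical names for illustration. With end-to-end encryption the server only ever handles ciphertext, but a client-side classifier inspects the plaintext before encryption, creating a reporting channel that the encryption cannot protect against.

```python
def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR "cipher" used purely for demonstration; not secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

reports = []
def report_to_vendor(message: bytes) -> None:
    # Side channel that routes around the encryption entirely.
    reports.append(message)

def client_send(message: bytes, key: bytes, classifier) -> bytes:
    # The classifier runs on-device, on the plaintext, *before*
    # end-to-end encryption ever happens.
    if classifier(message):
        report_to_vendor(message)
    return toy_encrypt(message, key)

# Stand-in for the vendor-supplied classifier.
flagged = lambda m: b"flagged" in m

ciphertext = client_send(b"a flagged message", b"secret", flagged)

# The server sees only ciphertext...
assert ciphertext != b"a flagged message"
# ...yet the vendor still learned about the plaintext.
print(reports)  # [b'a flagged message']
```

This is why EFF calls the result “not end-to-end encryption”: the messages remain encrypted in transit, but the confidentiality guarantee is decided by whatever classifier the vendor ships to the device.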

This blog doesn’t cover the impact on “democratic ideals” and definitely does not endorse EFF’s style of activism, but broadly agrees on the contours of how the system is designed for failure. Apple gains unprecedented insight into how users’ devices work – devices that you have forked out thousands of dollars for, and that tick off the compliance requirements for healthcare information. This is essentially a backdoor under the ambit of “end-to-end” encryption, and it warrants careful scrutiny. It is a scalable architecture: Apple can alter the terms and conditions whenever it wants, and it has enough resources to manage the public-relations exercise. This is “privacy creep” – designed to encroach upon private freedoms in a deliberate, systematic fashion without provoking a “user revolt”, even among users who understand the implications of continued usage.

Signing petitions won’t help (unless individuals believe someone is reading them). User privacy is essential to the framework of AI in healthcare, and these methodologies are designed only to glean insights into how people discuss events. The last bastion of privacy in consumer technology has crumbled.