Very broadly speaking, the privacy invasions come from situations where "false positives" are generated: that is, an image, a device, or a user is flagged even though no sexual-abuse images are present. These false positives could happen if the matching database has been tampered with or expanded to include images that do not depict child abuse, or if an adversary could trick Apple's algorithm into erroneously matching an existing image.
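To make the failure mode concrete, here is a minimal sketch of hash-based image matching. This is not Apple's NeuralHash (which is a learned, far more robust hash combined with private set intersection); the `average_hash` function, the database, and the `threshold=4` value below are all illustrative assumptions. The point it demonstrates is structural: any system that flags "close enough" hashes will also flag an unrelated image whose hash happens to land near a database entry, which is exactly the false positive described above.

```python
# Toy perceptual-hash matcher. NOT Apple's NeuralHash; all names,
# values, and thresholds here are illustrative assumptions.

def average_hash(pixels):
    """64-bit hash of a 64-pixel grayscale image: bit i is 1 if
    pixel i is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_flagged(image_pixels, database_hashes, threshold=4):
    """Flag the image if its hash is within `threshold` bits of
    any hash in the matching database."""
    h = average_hash(image_pixels)
    return any(hamming(h, db) <= threshold for db in database_hashes)

# A known image, hashed into the matching database.
known = [i % 7 for i in range(64)]
db = {average_hash(known)}

# A lightly perturbed copy still matches (the intended behaviour) ...
perturbed = list(known)
perturbed[0] += 1
assert is_flagged(perturbed, db)

# ... while a structurally different image does not ...
unrelated = [i % 2 for i in range(64)]
assert not is_flagged(unrelated, db)

# ... but nothing prevents an adversary from crafting an innocuous
# image whose hash collides with a database entry: a false positive.
```

Because the match is fuzzy by design, the security of the whole pipeline rests on two things the user cannot audit: what is actually in `database_hashes`, and how hard it is to manufacture collisions against the hash function.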
The write-up centres on press freedom, but images scanned at this scale also have significant upside for training algorithms of all sorts. As an aside, scanning your holiday pictures would give Apple remarkable insight for serving you advertisements about holiday destinations. The data is incredibly granular: mapped to location data and estimated income levels (and cross-matched with the billing details already on file), it gives Apple complete details of your life. The Apple Watch and other wearables add further inputs on your sleep patterns and emotional triggers. The continuous onslaught of social media engineered to "depress" an individual will make this worse, and augmented reality will offer mechanisms of escape rather than ways of coping with these pressures.
All of this will happen on the device itself, through "efficient" chipsets. Privacy is nearly dead, and now at the mercy of technology corporations. This will have far-reaching and profound effects on healthcare: consumer technology will gain Trojan-horse access to one of the most regulated industries.