Coronavirus means AI will moderate social networks. Can it?

Whenever “machine learning” or “artificial intelligence” gets mentioned, my skepticism curdles. Take it with a handful of salt. Social media offers nothing concrete during a pandemic outbreak, yet we are accustomed to believing it would “bind friends and families” and keep them “informed”.

Some of those advancements were already underway. Last fall, for instance, Facebook explained a new capability it’s deploying known as Whole Post Integrity Embeddings, which allows its AI systems to simultaneously analyze the entirety of a post — the images and the text — for signs of a violation. This can be especially helpful for posts where context is key, like, say, illegal drug sales. That particular innovation seems to be making a difference: Facebook reported deleting about 4.4 million pieces of drug sale content in the third quarter of 2019, 97.6% of which was proactively detected. That’s compared with just 841,000 pieces of the same type of content in the first quarter of that year, of which 84% was flagged by automation.
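The article doesn’t describe the actual architecture, but the basic idea of fusing a post’s text and image signals into a single representation before scoring it for violations can be sketched roughly as below. This is a toy illustration in PyTorch with invented dimensions, label counts, and class names; it is not Facebook’s WPIE system.

```python
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    """Toy multimodal classifier: projects a text embedding and an image
    embedding into a shared space, fuses them, and scores the combined
    post-level representation against hypothetical violation labels."""

    def __init__(self, text_dim=768, image_dim=512, fused_dim=256, num_labels=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(fused_dim, num_labels),
        )

    def forward(self, text_emb, image_emb):
        # Fuse both modalities so the classifier sees the whole post,
        # not the caption and the image in isolation.
        fused = self.text_proj(text_emb) + self.image_proj(image_emb)
        return self.classifier(fused)  # raw logits, one per violation type

if __name__ == "__main__":
    model = WholePostClassifier()
    # Stand-ins for embeddings produced upstream by separate text/image encoders.
    text_emb = torch.randn(1, 768)
    image_emb = torch.randn(1, 512)
    logits = model(text_emb, image_emb)
    print(torch.sigmoid(logits))  # per-label violation probabilities
```

The point of the joint representation is that a caption and a photo that each look innocuous on their own can still signal a violation when read together, which is exactly the context problem the quoted paragraph describes.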

These statistics are provided by Facebook itself, without independent verification. Facebook and WhatsApp are the real perpetrators of the mass panic stoked by the spread of viral pandemics, and these networks are, sadly, intertwined with governmental decision-making that leans on how virally content spreads across countless groups.

Western reporting stems from deeply embedded libertarian values, and that makes it inherently stupid. Linked posts like this read like a gentle rap on the knuckles: they point out the “obvious flaws” without advocating breaking the platforms’ kneecaps or stripping them of their assumed legitimacy. The free press subsists on their dole-outs.

Be careful what you read, and switch your cynicism filters on before you consume mainstream content.

via Coronavirus means AI will moderate social networks. Can it? – Protocol