Regulation of social media: Why it is not the right medium for “academic exchange”.

Jacques Mattheij writes:

Filtering out the bad from the good is going to be very important if we don’t want to accidentally lose such minor marbles as our democracies, our health and our safety.

The conspiracy theory adherents are only going to sink further into their own little quagmires; they are not interested in seeing their views challenged.

And when you do, be prepared to be called all kinds of names; in the space of a day I got called a paid pharma shill, a paid government shill, and a whole bunch of descriptions besides that don’t bear repeating here. You are not going to be able to reason with stupid and/or crazy.

The Social Media Problem · Jacques Mattheij

He makes an interesting, pertinent point here:

The fig leaf of Free Speech will no doubt be brandished as the greatest good and too sacred to be messed with but the facts are that every form of free speech has limitations. And that actively trying to harm others by abusing that right should come with some kind of limit or at a minimum a way to reduce its reach.

Most of the reasoning around free speech rights and such is from a time when getting a letter across the country took three weeks.

This risks institutionalizing things that are no longer as useful and innocent as they once were. The world has changed, and there is a good chance that this will require adaptation rather than dogmatic adherence to our older customs.

I remember reading the earlier write-ups on TechCrunch and the other Silicon Valley acolytes about “frictionless” sharing. They promoted Facebook as a place for “family and friends”, but it acts well beyond that mandate as an advertising medium (and a more sinister form of surveillance). Social media conditions you to believe that every societal ill merits a “discussion”.

Here’s another:

Third, people truly underestimate the impact that “scale” has on this equation. Getting 99.9% of content moderation decisions at an “acceptable” level probably works fine for situations when you’re dealing with 1,000 moderation decisions per day, but large platforms are dealing with way more than that.

If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million a day is nothing for the largest platforms.

On Facebook alone a recent report noted that there are 350 million photos uploaded every single day. And that’s just photos. If there’s a 99.9% accuracy rate, it’s still going to make “mistakes” on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.

The scale really is daunting.
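The arithmetic behind those quoted figures is worth making explicit. A back-of-the-envelope sketch (using only the numbers quoted above; the figures are illustrative, not real platform data):

```python
# Expected moderation mistakes per day = decisions per day x error rate.
# Illustrates why a 99.9% "accuracy" still produces huge absolute error
# counts once the volume of decisions grows.

def expected_errors(decisions_per_day: int, accuracy: float) -> float:
    """Expected number of wrong moderation calls per day."""
    return decisions_per_day * (1 - accuracy)

small = expected_errors(1_000, 0.999)            # ~1 bad call per day
large = expected_errors(1_000_000, 0.999)        # ~1,000 bad calls per day
photos = expected_errors(350_000_000, 0.999)     # ~350,000 bad calls per day
```

The error rate stays fixed at 0.1%, but the absolute number of mistakes scales linearly with volume, which is the whole point of the quoted passage.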

I can only recount my own experience of changing my staff’s behaviour around digital tools during the pandemic. The change happened when I got the various stakeholders onto a common platform and discussed its advantages and the importance of real-time communication for feedback loops and improving patient care. It did not change because I was tweeting about it. Likewise, I am genuinely amused when I end up seeing “breakthrough” sessions on “inclusion of diversity”. Those are pleasant words, but they have no meaningful impact unless there is a change in health-seeking behaviour. Those reforms require deep dialogue with policy makers and entrenched political calculations. Tweeting about them won’t make any difference, as the rates of engagement are determined entirely by algorithms. They can “trend” whatever they want to.

In conclusion, Twitter is a sad medium being “adapted” for “academic exchange”. It suffers from the same issues as the others: poor engagement and a limited scope of discussion. Despite the notional benefits of a “common platform”, it only serves to push end users into “doom scrolling”.