I have been writing extensively about the scourge of social media, but very few write-ups examine how the algorithms actually work and how they undermine the experience of these websites. Therefore this brilliant write-up from Scientific American deserves a recommendation.
Modern technologies are amplifying these biases in harmful ways, however. Search engines direct Andy to sites that inflame his suspicions, and social media connects him with like-minded people, feeding his fears. Making matters worse, bots—automated social media accounts that impersonate humans—enable misguided or malevolent actors to take advantage of his vulnerabilities.
Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. Unable to process all this material, we let our cognitive biases decide what we should pay attention to. These mental shortcuts influence which information we search for, comprehend, remember and repeat to a harmful extent.
While aware of this, I try to break out of these silos personally. I understand that I need to improve my “collection skills”, but I cannot always trust every source. Information overload is real:
This is worsened by cognitive biases. I also wish to highlight another unusual problem: bots. It is relatively easy to “launch” a bot on Twitter; the network itself is talking about labelling “good bots”. I have noticed increasing engagement with bots because much of my Twitter interaction is automated.
Bots skew information patterns towards destructive behaviour. For someone searching for “political information”, bots can easily push the user into rabbit holes, which leads to the generation of echo chambers.
There’s another Wall Street Journal investigation into the same phenomenon (a fairly recent publication).
An earlier video investigation by the Journal found that TikTok only needs one important piece of information to figure out what a user wants: the amount of time you linger over a piece of content. Every second you hesitate or re-watch, the app tracks you.
Through that one powerful signal, TikTok can learn your most hidden interests and emotions, and drive users of any age deep into rabbit holes of content—in which feeds are heavily dominated by videos about a specific topic or theme. It’s an experience that other social-media companies like YouTube have struggled to stop.
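To make the mechanism concrete, here is a toy sketch of how a single dwell-time signal could steer a feed. This is purely illustrative and an assumption on my part; the class, topic labels, and scoring rule are invented for the example and are not TikTok’s actual system.

```python
from collections import defaultdict

class DwellTimeFeed:
    """Toy recommender whose only input is how long a user lingers on each video."""

    def __init__(self):
        # topic -> accumulated seconds of lingering / re-watching
        self.affinity = defaultdict(float)

    def record_view(self, topic, dwell_seconds):
        # Every second of hesitation or re-watching strengthens that topic's weight.
        self.affinity[topic] += dwell_seconds

    def rank(self, candidate_videos):
        # Candidates whose topic has the most accumulated dwell time rise to the top;
        # sorted() is stable, so unseen topics keep their original order.
        return sorted(candidate_videos,
                      key=lambda v: self.affinity[v["topic"]],
                      reverse=True)

feed = DwellTimeFeed()
feed.record_view("cooking", 3)
feed.record_view("sadness", 45)   # a long linger on one theme
feed.record_view("sadness", 60)   # a re-watch

videos = [{"id": 1, "topic": "cooking"},
          {"id": 2, "topic": "sadness"},
          {"id": 3, "topic": "travel"}]
ranked = feed.rank(videos)
# The heavily-lingered topic now dominates the top of the feed.
```

Even in this crude form, a couple of sessions of lingering on one theme is enough to crowd everything else out of the ranking, which is the rabbit-hole dynamic the Journal describes.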
I find it amusing that some commentators recommend TikTok for medical professionals (as influencers). It couldn’t be farther from the truth: they are unaware of the algorithm’s functioning and merely quote NYT articles on the “phenomenon”.
This is disturbing:
The full scale of the investigation sounds methodical and can be accessed here
Social media isn’t for medicine. It’s out-and-out for “marketing” and creating a “buzz”. Specific individuals are felicitated and “rewarded” for their exceptional performance by their senior peers, which means nothing to the vast majority of us. While recognition remains an essential human desire, and may open doors to more “lucrative posts” and a legacy of “trailblazing work”, it comes with its own trade-offs. Traditional metrics for assessing an individual’s capacity are overrated and meaningless.