Artificial Intelligence or Natural Stupidity?

These are the cultural contexts for a society that has collectively run into a wall for want of new “ideas”. The idea that the West is in continual decline has taken root. There is also a strong movement around “de-colonising” science (and possibly medicine) by moving away from reductionist approaches towards holistic healing.

There are well-financed individuals contemplating the “singularity” and pushing towards “artificial general intelligence”.

Planning for AGI and beyond

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.


1. OpenAI. Planning for AGI and Beyond [Internet]. 2023 [cited 2023 Mar 6]. Available from: https://openai.com/blog/planning-for-agi-and-beyond

While the “manifesto” exists to rally support (and deploy fancy buzzwords), here is something more disturbing:

We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.

I am not opposed to the idea of a “technological march” or a “furnace of ideas cooking up something” (to borrow some terribly cringeworthy clichés); however, there has to be a path forward through the disruption that will inevitably follow. For example, the tech layoffs sideline people in their economically productive years and burden the “social-care systems”. There is no follow-up, because they cease to “make news”. Societal disruptions should come with a “safety net” in terms of alternative employment, and planning should take a long-term view. It is impossible to predict the trajectories fifty years hence; yet it is crucial to ensure the immediate impacts are understood in context.

It is easy to wade into conspiracy theories about the eventual replacement of “democracies”; yet they do carry a ring of truth.

Here’s another historical misrepresentation:

ChatGPT Heralds an Intellectual Revolution – WSJ

A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

It is immediately apparent that this is shilling (and drumming up support) for the “dreamers”. Generative AI will replace critical reasoning and comprehension. It will kill motivation, and it will do so in subtle ways that individuals do not even realise. For example, mobile phones have enabled “helicopter-parenting”, so much so that parents have embedded themselves in their children’s lives without allowing them any chance of failure. Real life does not work like that. Wikipedia replaced encyclopedias, for example, but in doing so it also took away research skills. Wikipedia itself has now become a battleground for culture wars and is no longer an “authoritative source” of information.

The entry barrier for a new player is significant. This opens up several complex policy challenges (legal, administrative) and perhaps ethical ones. I place “ethics” last because the Western templates around ethics are outmoded and worn out. The biases that the authors mention so breathlessly are not exclusionist; they are there by design.

My bigger worry is the “adaptation” and “adoption” in textbooks, for example. At least our current textbooks carry some “editorial insight”, which itself requires significant critical appraisal. Generative AI (or whatever version comes next) will replace that effort.

Use your critical reasoning. I am not a technophobe, but I understand where the dark alleys lead. Collectively, we must shine a light to “disinfect” them.
