When AI fails: British Grading Debacle Shows Pitfalls of Automating Government.

I can’t comment in depth on a system I have never worked with, but the key takeaways are worth absorbing. I don’t subscribe to the “amplifying biases” framing (classic New York Times rhetoric), yet I am linking to the article because it underscores that governments need to develop these capabilities “in-house”. “Outsourcing” to vested private parties usually results in a shoddy implementation.

Mr. Sharpe-Roe, along with thousands of other students and parents, had received a crude lesson in what can go wrong when a government relies on an algorithm to make important decisions affecting the public.

Experts said the grading scandal was a sign of debates to come as Britain and other countries increasingly use technology to automate public services, arguing that it can make government more efficient and remove human prejudices. But critics say the opaque systems often amplify biases that already exist in society and are typically adopted without sufficient debate, faults that were put on clear display in the grading disaster.

British Grading Debacle Shows Pitfalls of Automating Government – The New York Times

The key takeaway is that policy planners need to understand the ramifications of their decisions to “automate government”. This idea requires further elaboration in a follow-up post.
