FTC developing AI rules/awareness

It is heartening to see that the FTC is publishing blog posts on AI (and fairness/equity). One example:

Watch out for discriminatory outcomes. Every year, the FTC holds PrivacyCon, a showcase for cutting-edge developments in privacy, data security, and artificial intelligence. During PrivacyCon 2020, researchers presented work showing that algorithms developed for benign purposes like healthcare resource allocation and advertising actually resulted in racial bias. How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.

I stumbled on this from a random link elsewhere and started following it. I have strong trust in government-backed institutions, because they often provide a reality check to the hype prevalent in mainstream media and VC blog posts, most of which sound like the next iteration will take over the world. Barring a few exceptions, real progress requires significant effort in lobbying, rules, and structure, especially around new breaking ground like AI. I still feel that it is in its infancy. Use cases like “self-driving cars” remain far away and have been pushed by indiscriminate funding from the likes of SoftBank.

Therefore, these blog posts are significant for educating lay users and small, emerging businesses about the ethical constraints of machine learning. We still don’t have specific guidelines for each sector, and they will require constant iteration of policy tweaks before we arrive at a broad consensus. I think the guiding principle should be “primum non nocere”: First, Do No Harm.

Do track their blog posts! Highly recommended for understanding where US policy in this regard is heading.