This is an interesting blog post from the Federal Trade Commission (FTC) in the US. They write:
The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms.
I won’t comment on the geo-nationalism (or geopolitics) of how the “western data constructs” stand in direct confrontation with the Chinese models. That is for the ethicists to hash out. However, even seen from within the western construct, the espoused philosophies are inherently contradictory, and so there is an inherent resistance to accepting them.
The FTC also brought out an excellent report on Big Data, embedded here: [embeddoc url="https://radoncnotes.com/wp-content/uploads/2020/08/160106big-data-rpt.pdf" viewer="browser"]
Here’s the clincher:
To avoid that outcome, any operator of an algorithm should ask four key questions:
- How representative is your data set?
- Does your data model account for biases?
- How accurate are your predictions based on big data?
- Does your reliance on big data raise ethical or fairness concerns?
How do you ensure the accuracy of predictions? How do you know that the model is free from bias? Ethics and fairness are again abstract terms that are difficult to apply universally.
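That said, parts of these questions can be made concrete. As a minimal sketch (not the FTC's method; the loan-approval data, group labels, and function names here are entirely hypothetical), one could put numbers behind the accuracy and fairness questions by computing overall prediction accuracy alongside a simple group-fairness measure such as the demographic parity gap:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the observed outcomes."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups; 0.0 means every group receives positive
    predictions at the same rate."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outcomes and predictions (1 = approve)
# for applicants from two groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                # overall accuracy
print(demographic_parity_gap(y_pred, groups))  # approval-rate gap
```

A nonzero gap does not by itself prove unfairness, and a zero gap does not prove the absence of bias; demographic parity is only one of several competing fairness definitions, which is part of why these terms resist universal application.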
I will return to these questions from time to time.