This article in HBR is hazy and devoid of conclusions.
Their recommendation for having an “ethical framework” in an organisation:
The answer does not lie in abandoning ethical AI efforts altogether. Instead, organizations should ensure these frameworks are also developed in tandem with a broader strategy for ethical AI that is focused directly on implementation, with concrete metrics at the center. Every AI principle an organization adopts, in other words, should also have clear metrics that can be measured and monitored by engineers, data scientists, and legal personnel.
How can you measure something abstract in the absence of any research? What metrics? Whither metrics? They haven’t mentioned any, and I assume there are none.
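To make the criticism concrete: the HBR piece names no metrics, but a principle like “fairness” *can* be made measurable. Here is a minimal, hypothetical sketch (the metric choice and data are mine, not the article’s) of one such metric, demographic parity difference, which an engineering team could monitor per release:

```python
def demographic_parity_difference(preds, groups):
    """Gap in positive-prediction rate between groups "A" and "B" (0 = parity)."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

# Toy data: model decisions (1 = approve) and the group each applicant belongs to
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is approved at 3/4, group B at 1/4, so the gap is 0.5
```

A threshold on this number (say, gap below 0.1) is the kind of “clear metric that can be measured and monitored” the article gestures at without ever specifying.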
In the privacy domain, this is interesting feedback:
In the world of privacy, there are a host of metrics that organizations can adopt to quantify potential privacy violations as well. While there are numerous examples of research on the subject (this study is one of my favorites), a set of techniques called “privacy enhancing technologies” may be one of the best places to start for operationalizing principles related to privacy. Methods like differential privacy, which have open source packages that data scientists can adopt out of the box, are based upon the explicit notion that privacy can be quantified in large data sets and have been deployed by many tech giants for years. Similar research exists in the world of AI interpretability and security as well, which can be paired with a host of commonly espoused AI principles like transparency, robustness, and more.
Privacy-enhancing techniques were first popularised by Apple through its “differential privacy” implementation; however, they have been junked several times, as the data can easily be recombined to re-identify individuals. As Apple got more intrusive by pushing out HealthKit, it needed a “privacy cloak”, hence the discussion around ethics and “frameworks”. I cannot turn the clock back, and I steadfastly refuse to adopt an inferior operating system.
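For readers unfamiliar with how differential privacy “quantifies” privacy, here is a minimal sketch of the Laplace mechanism, the textbook building block the quoted open-source packages implement (the function name and parameters are my own illustration, not any specific library’s API):

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Noise scale = sensitivity / epsilon: a smaller epsilon means more noise
    and a stronger (quantified) privacy guarantee.
    """
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Example: releasing "number of users with condition X" from a dataset
noisy = private_count(1000, epsilon=0.5)
```

The point is that epsilon is an explicit, tunable number, which is exactly the sense in which privacy “can be quantified in large data sets” per the quote above.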
Here’s an interesting facet about implementation, with which I partially agree:
Are there downsides to an increased emphasis on the role of metrics in ethical AI? Surely. There are some facets of algorithmic decision-making that are hard, if not impossible, to quantify. This means that many AI deployments are likely to have intangible risks that require careful and critical review. Overemphasizing metrics can, in some circumstances, lead to overlooked risk in these areas. Additionally, it takes far more than clear metrics to ensure that AI is not generating serious harms. Mechanisms for accountability, documentation, model inventories, and more must also form major components of any efforts to deploy AI responsibly.
Quantification matters. These frameworks are too complex to be understood easily; without quantification, we are shooting in the dark, with no addressable target.
The legal landscape will also need to evolve with increasing adoption, and that will be a policy nightmare. Setting agendas aside, the simplest solution would be to define the need for automation first. The benefits of automation to employees, and a strong ROI on existing AI solutions, should be the priority. Substituting automation for complex decision-making processes will only complicate the debate around bias and, for example, the effective delivery of healthcare.
We can ill afford that.