Ethics in AI (especially bias) is becoming a “bigger problem”, and there has been a lot of chatter in this direction. Bias in image processing, for example, needs to be “fine-tuned” to eliminate the possibility of false positives (or at least reduce them to statistically acceptable proportions). What I have emphasised here in the quote that follows represents the line of thinking from self-appointed “gate-keepers”. I wonder if the “global convergence” actually weighs in the opinion of those in the dark: communities that are not even aware of what the Internet is.
To quote an article in Nature Machine Intelligence from September 2019, while there is “a global convergence emerging around five ethical principles (transparency, justice and fairness, nonmaleficence, responsibility, and privacy),” what precisely these principles mean is quite another matter.
There remains “substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented.” Ethical codes, in other words, are much less like computer code than their creators might wish. They are not so much sets of instructions as aspirations, couched in terms that beg more questions than they answer.
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve.
AI in healthcare is growing; yes. Not in the way the commentators would want it to. Not in the way tenured academia wants it to. It has caught its own wind and, despite dystopian pronouncements, is likely to improve efficiency in healthcare delivery.