Comparing the kinds of ethical risks present in medicine to the kinds present in AI is useful for a number of reasons. First, in both instances there is the potential for harming individuals and groups of people (e.g. members of a particular race or gender). Second, there exists a vast array of ethical risks that can be realized in both fields, ranging from physical harm and mental distress to discriminating against protected classes, invading people’s privacy, and undermining people’s autonomy. Third, many of the ethical risks in both instances arise from the particular applications of the technology at hand.
Applied to AI, the IRB model has the capacity to systematically and exhaustively identify ethical risks across the board. Just as in medical research, an AI IRB can not only approve or reject proposals but also make ethical risk-mitigation recommendations to researchers and product developers. Moreover, a well-constituted IRB (more on this in a moment) can perform the functions that the current approach cannot.
This is an intriguing discussion of the importance of ethics boards in guiding medical research, though my experience has been to the contrary. The purpose of an ethics board is to grasp the research perspectives and ensure that no harm is done. In practice, though, people sit on these boards as rubber stamps because social dynamics are at play.
Consider a scenario where the dean or department head believes in a research proposal but is blind to its implications. How much does your voice count as an independent reviewer? I am sure there are more legal nuances to these aspects, but the issues are magnified when millions of dollars are at stake, for example with a new drug. I am not suggesting that existing research is hocus pocus, but failed or negative research is never published, and therefore there is a huge duplication of costs.
These are fascinating conundrums, and I have been thinking about the Eastern construct of philosophy to explain this in a more rational manner. The Western constructs of "individuality", however well argued, ring either hypocritical or impractical. We need to be aware of the practical realities.
Why spend astronomical amounts on a Stage IV terminally ill patient instead of using a fraction of the same amount to push for prevention of illness in the first place? The pyramidal, "top-heavy" nature of research means that supportive measures like palliation get the least attention.
AI is a fashionable concept, and healthcare is not ready for its deployment; edge-case scenarios will not replace established clinical workflows. Shiny papers help win more grants, but the visible impact of actual deployments must be quantified.
Likewise, IRBs need to understand researchers' perspectives and stick to the principle of "agreeing to disagree", with complete transparency on how a decision was arrived at.