As decision-makers in both government and industry create standards for algorithmic audits, disagreements about what counts as an algorithm are likely. Rather than trying to agree on a common definition of “algorithm” or a particular universal auditing technique, we suggest evaluating automated systems primarily based on their impact. By focusing on outcome rather than input, we avoid needless debates over technical complexity. What matters is the potential for harm, regardless of whether we’re discussing an algebraic formula or a deep neural network.
This is an interesting idea: assessing the impact. However, like all policy decisions, it is likely to get mired in interest-group pressure unless there is genuine "dialogue".
How do patients assess that they have been impacted? They have to trust the judgement of medical professionals, which leaves the door wide open to debate over how complete their understanding can be. As healthcare has been recast around "consumer-centric" and "participatory models of care" with "patient voices", these broad ideas have taken centre stage. While they may be appropriate in specific cultural contexts, I firmly believe that the "march of AI will be inexorably riddled with the mess of debates". The impact on the affected party can never be fully quantified. And what about insurance models, or compensation frameworks as shaped by the labour courts?
It is a fanciful refrain of the "ethicists", who love to paint vast swathes of nothingness where the debates remain hollow. I strongly believe we need an alternative to the liability model of ethical constructs, which is stifling ideas in research while amplifying the voices of the "marginalized". It is derailing a full assessment of what AI can reasonably achieve.