Government plans to scrap an EU rule guaranteeing human checks on decisions made by computer algorithms would not only remove a vital safeguard against inbuilt machine bias; they would also risk adding, rather than cutting, burdens on business.
The government wants to create a “data dividend” to boost innovation, with artificial intelligence among the sectors to be nurtured. It is understandable that the government feels pressure to show Brexit’s benefits — and there are areas where the UK could jettison Brussels rules to its advantage — but cutting key protections of the EU’s general data protection regulation, as mooted in a consultation unveiled on Friday, is not the right way forward.
The editorial board has “sounded” an alarm about bias in AI used for recruitment and lending, yet offers no proof that human oversight would be “free of biases”. Algorithmic decision-making, especially for loans, is risky; financial lending is itself a risky business, particularly amid political or regulatory uncertainty.
I have been arguing that the EU’s proposals on digital data are burdensome and demand extensive cross-border sharing. Startups will gravitate towards jurisdictions where data can flow free of bureaucratic red tape. Data localisation norms should instead be encouraged, which would guarantee an uptick in local employment and in the deployment of data centres. Most lay users do not understand the arcane legal provisions over which these shadow fights are fought. Algorithmic bias is a real problem; I agree. So address the issues around “bias” instead of writing editorials about data-sharing.
Another own goal by the Financial Times.