I have been constantly advocating for improvement in the institutional capacity to ideate and create policy measures. There is an intellectual vacuum in the local bureaucracy, which finds itself pushing files more than creating frameworks. As a result, consultants have come to play an outsize role in "policy making". I can't go into specific details yet. The following post, though, is a refreshing take on the legal challenges of governing AI.
[Vidhispeaks] Charting a separate course – Why India must establish its own thought leadership on AI governance
AI has increasingly piqued the interest of lawyers, regulators, policymakers, and civil society, given its perceived high-risk, high-reward nature. Over the past decade, AI, through a spate of data processing techniques, has escalated exponentially from a few recommender algorithms underlying YouTube or social media to deployment across sectors like health, education, agriculture, urban planning and mobility, and even the justice system.
Its ubiquity has simultaneously bolstered ambition and caused concern. Increasingly, reports from researchers and AI ethicists flag serious risks of bias, discrimination, opacity, and the exacerbation of deep-rooted societal prejudices, which have triggered some legitimate concerns around the unbridled adoption of AI and intelligent algorithms.
The "deep-rooted prejudices" stem only from ineffective data collection and the ineptness of the surveyors. You can't blame the AI for inadequate representation, even as the Western construct of "diversity" has reached a fever pitch.
A global trend of developing ethical charters, articulating principles for reconciling AI deployment with human rights, and implementing soft-touch measures has become quite visible. Over 160 such frameworks are presently in play, yet the gap obvious to many dabbling in this discourse is the lack of real, meaningful regulation and enforcement of these ethical credos.
I agree. It is fashionable (and politically correct) to put out "ethical frameworks", but without legislative will and backing, they are vacuous statements. India isn't investing enough to delineate "Eastern philosophical principles" grounded in Vedic frameworks instead. Duality and non-duality are perfectly explained in Sanatana Dharma, which may be at odds with other philosophical schools. The author doesn't specifically mention these Indian schools of thought that could underpin a contemporary understanding of AI. We still need to formalise the ethical frameworks and explore the commonalities in legislative efforts. Merely copying the EU system (and its fractured understanding of AI) will do more harm in the long term.
Another useful insight here:
For Indian regulators, it will be crucial to truly understand what issues like AI bias, fairness, the digital divide, and other risks mean for our own populace, its aspirations, and its social realities. Furthermore, while AI is being used by private corporations and entities, the dominant user of AI and algorithmic tools is likely to be the government and its agencies, as is evident from the National AI Strategy published by NITI Aayog in 2018. Any regulatory framework that is developed must consider the state's use of AI and the risks it poses, provide the requisite checks and balances, state liability standards, and define exceptions to the rules. Future legislation cannot create sweeping exemptions for state usage in avenues like predictive policing and surveillance while creating a stringent compliance regime for the private sector, which was one of the biggest criticisms of the proposed data protection bill.
The author is right about governmental overreach, but the checks and balances are defined by the judiciary. How that creative interpretation happens will only be known once the provisions are challenged. This is an exciting domain of legal research (and healthcare) which, surprisingly, has seen little traction locally. I'd be on the lookout.