This generated its fair share of heated debate.
Pause Giant AI Experiments: An Open Letter – Future of Life Institute
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
First, how do you certify “accuracy”? Who determines the “source code” for these black boxes? Who issues the certification?
Some more snake oil:
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
The authors have clearly taken a “middle-of-the-road” approach. Not practical. No one would risk adding watermarks, even in the interests of “democracy”, especially if you are in the rat race for “funding”.
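For context on what such “provenance and watermarking systems” usually mean in practice: published proposals (e.g. Kirchenbauer et al.’s green-list scheme, 2023) have the generator bias its token sampling toward a pseudorandom “green” subset of the vocabulary, which a verifier can later test for statistically. The sketch below is a toy illustration of that idea; every name and parameter in it is an assumption made up for demonstration, not anything specified in the letter.

```python
import hashlib
import math
import random

# Toy "green-list" watermark in the spirit of Kirchenbauer et al. (2023).
# Everything here (names, GREEN_FRACTION, the hash-based seeding) is an
# illustrative assumption, not a real production scheme.

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def detect(tokens: list[str], vocab: list[str]) -> float:
    """z-score of how far the green-token count deviates from chance (~50%)."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev, vocab))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

if __name__ == "__main__":
    vocab = [f"tok{i}" for i in range(1000)]
    rng = random.Random(0)

    # Unwatermarked text: uniform sampling, z-score hovers near 0.
    plain = [rng.choice(vocab) for _ in range(200)]
    print(f"plain  z-score: {detect(plain, vocab):+.2f}")

    # Watermarked text: the *generator* deliberately samples green tokens.
    marked = [rng.choice(vocab)]
    for _ in range(199):
        marked.append(rng.choice(sorted(green_list(marked[-1], vocab))))
    print(f"marked z-score: {detect(marked, vocab):+.2f}")
```

Note where the watermark actually lives: in the generator’s sampling loop. A lab that simply skips that bias produces text statistically indistinguishable from the unwatermarked baseline, which is why this kind of “provenance” depends entirely on voluntary compliance.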
Who decides that the pause should last six months? Why six months? Why not a year? Or the year after that?
If this isn’t laughable, I don’t know what is. Cloaking illogical statements in unworkable solutions isn’t the way forward; spell out clear lines for regulation instead. Nope, we don’t want fancy councils or “representatives”, but an enlightened legislature and public representatives who understand the harms-versus-benefits ratio and can steer AI towards a common public good, instead of profit maximisation for a handful.