Blog post:
Regulating AI (plus links & notes) – Baldur Bjarnason
The problem with regulating AI isn’t in coming up with effective regulations. That’s the easy part.
The problem is that tech and the authorities don’t want effective regulation because “effective” in this context always means “less profitable for the industry”.
That’s why they tend to either come up with ineffective half-measures or measures that strengthen the incumbents’ positions.
This nails the underlying presumption: regulation is treated as a stop-gap arrangement, deferred until the industry itself "cries out aloud" for it. The entertainment industry works the same way, with an asinine "self-regulatory" code (mainstream movies are handled differently, through "censorship").
The blog author proposes something logical, though I doubt I will ever see it in action.
First, you clarify that for the purposes of Section 230 protection (or the equivalent in your jurisdiction), whoever hosts the AI as a service is responsible for its output as a publisher. If Bing Chat says something offensive, then Microsoft would be as liable as if it were an employee.
The interesting conceptual idea around metadata:
You’d set a law requiring tools that integrate generative AI to attach disclosures to the content.
- Gmail/Outlook should pop up a notice when you get an email that their AI generated.
- Word/Docs should have metadata fields and notices when you open files that have used built-in AI capabilities.
- AI chatbots have to disclose that they are bots.
- GitHub Copilot should add a machine-parsable code comment.
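To make the last item concrete, here is a minimal sketch in Python of what a machine-parsable disclosure comment could look like, and how a tool might scan source files for it. The `AI-Disclosure:` tag and its fields are my own invention for illustration; neither the original post nor any current tool defines such a format.

```python
# Sketch of a hypothetical machine-parsable AI-disclosure comment and a
# small scanner for it. The "AI-Disclosure:" tag and its fields are made
# up for illustration -- no such standard exists today.

import re
from pathlib import Path

# Hypothetical comment a code-generation tool could prepend to its output:
#   # AI-Disclosure: model=example-codegen-1; date=2023-07-01
DISCLOSURE_RE = re.compile(r"#\s*AI-Disclosure:\s*(?P<fields>.+)", re.IGNORECASE)

def parse_disclosure(line: str) -> dict | None:
    """Return the key=value fields of a disclosure comment, or None."""
    match = DISCLOSURE_RE.search(line)
    if not match:
        return None
    fields = {}
    for pair in match.group("fields").split(";"):
        key, _, value = pair.strip().partition("=")
        if key:
            fields[key] = value
    return fields

def scan_file(path: Path) -> list[dict]:
    """Collect every disclosure comment found in a source file."""
    disclosures = []
    for line in path.read_text(encoding="utf-8").splitlines():
        fields = parse_disclosure(line)
        if fields:
            disclosures.append(fields)
    return disclosures

if __name__ == "__main__":
    example = "# AI-Disclosure: model=example-codegen-1; date=2023-07-01"
    print(parse_disclosure(example))
    # -> {'model': 'example-codegen-1', 'date': '2023-07-01'}
```

The point of something this simple is that downstream tools (linters, CI checks, even the regulator's auditors) could verify the disclosure automatically, which is what makes the metadata requirement enforceable rather than decorative.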
Something even more radical, which I would never expect to hear in a policymakers' room:
- Heavy fines for non-disclosure.
- Heavy fines for violating opt-in.
- Even heavier fines for lying about your training data set.
- Make every AI model a "vegan" model, which would remove every ethical and social concern about the provenance of, and rights to, the training data.
Can you hear big tech letting out a collective whine and din about this? Of course you can. It would immediately negate their efforts to "dominate" these segments and create policy mayhem.
It won't happen, but this is the only way to "control" the genie: metadata. Any intent to strip that metadata could be counted as deception, making big tech responsible for the output. Let's see how this goes forward.