Currently, it doesn’t propose anything like that. I asked it to propose a regulatory framework for itself; its response is copied below:
As an artificial intelligence language model, I do not have the capability to propose or create regulatory frameworks to regulate myself. However, I can provide some general points that may be considered when creating regulatory frameworks for AI systems:
- Define the scope of the regulatory framework: Identify the types of AI systems that will be regulated and the sectors or industries where they are used.
- Identify the risks associated with AI systems: Determine the potential risks that could arise from the use of AI systems and the impact they could have on individuals, organizations, and society.
- Establish clear guidelines: Define clear guidelines for the development, deployment, and use of AI systems, including data privacy, security, accountability, and transparency.
- Foster transparency: AI systems should be designed and developed in a transparent manner, with clear explanations of how they work and the data they use.
- Develop ethical principles: Establish ethical principles that AI systems should adhere to, including fairness, safety, privacy, and accountability.
- Encourage stakeholder participation: Involve stakeholders from industry, government, academia, and civil society in the development of regulatory frameworks to ensure that their perspectives are taken into account.
- Build in flexibility: Allow for flexibility in the regulatory framework to accommodate evolving technologies and changing risks.
- Ensure accountability: Establish accountability mechanisms to ensure that AI systems are developed, deployed, and used in a responsible manner.
- Develop enforcement mechanisms: Create effective enforcement mechanisms, including penalties for non-compliance, to ensure that the regulatory framework is effective.
- Establish oversight bodies: Establish independent oversight bodies to monitor the development, deployment, and use of AI systems and ensure that they adhere to the regulatory framework.
This sounds plausible.
No one knows how to regulate it. What are the endpoints? Do you regulate it to the point of stifling any new progress or “innovation”? What does “inclusive development” mean? Is the ChatGPT model accessible to everyone? Does any of this affect the user on the street in a meaningful way?
The UK has an interesting proposal:
With ChatGPT hype swirling, UK government urges regulators to come up with rules for A.I.
In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Rather than establishing new regulations, the government is calling on regulators to apply existing regulations and inform companies about their obligations under the white paper.
In practice, this possibly means a letter from a bureaucrat asking companies to “reflect” on their actions and their “societal obligations”. Your guess is as good as mine as to whether it gets filed away or goes in the trash.
Industry quotes:
Lila Ibrahim, chief operating officer of DeepMind and a member of the U.K.’s AI Council, said AI is a “transformational technology,” but that it “can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
I don’t know what “pioneering responsibly” means. Then I realised they are paid to spout exactly that kind of nonsense.
Countries don’t need “councils”, but I also realise it is difficult to propose legislation (or even frameworks) without feedback from end users on how they would be impacted. I personally believe this model of legislation is broken; instead, we need academia to step in with points and counter-points while the legislative frameworks are being defined, and to see those frameworks challenged in the judiciary. That approach requires a clear commitment to revisit the initial laws and amend them accordingly.