Where the World is on AI Regulation — October 2023
A few months have passed since I wrote my last article in June on the state of Artificial Intelligence regulation around the world. With the EU AI Act moving forward towards its final form, guidelines to regulate AI have appeared in many jurisdictions and China has put its first regulatory rules for generative AI into place. Covering four continents and lots of links for further reading, there is a lot to update, so let’s dive into what has happened since early summer.
European Union — AI Act on the finish line
Since the European Parliament agreed on their negotiating position in June, the AI Act negotiations moved to the trilogue, a series of informal discussions between the EU Commission, the EU Council and the EU parliament to agree upon a final draft that then can be voted on.
There have been five rounds of trilogue since June (the latest happened this week), and while progress reports are normally not made public for such negotiations, MEP Dragoș Tudorache gave some rare insight into the state of discussions this week¹:
There appear to be four remaining stumbling blocks: First are the various exemptions sought by the EU Council for law enforcement, where the Council, which represents the governments of the member states, wants more leeway from the act, while the EU parliament favours a stricter approach to rein in authorities’ use of AI.
Secondly, the handling of “foundation models”, machine learning models built for more general-purpose applications, has not yet been agreed upon. How such models should be treated and regulated was not part of the EU Commission’s original draft of the act and has been pushed to the forefront by parliament, not least because of the rise of large language models (LLMs) such as GPT.
The third point of contention concerns “governance and enforcement”, including potential fines. The fourth and last issue concerns details about the regulation of high-risk applications.
Tudorache concedes that the outstanding discussions could push the final draft into 2024 but he appears to be optimistic that compromises will be found and everything will be finalized by January 2024.
As expected, industry has used the trilogue phase to put some pressure on the EU Council to soften some elements in the AI Act. In an open letter signed by more than 150 executives, they state that the draft legislation “would jeopardise Europe’s competitiveness and technological sovereignty” and open “a gap” to the US². While the EU Council might be receptive to these concerns, I personally find it unlikely that the EU parliament will be open to any sweeping changes so late in the game.
Meanwhile Spain, which has held the Presidency of the Council of the EU since July, has put a stake in the ground by establishing the first AI regulatory agency in a member state.³ It would make sense for other member states to follow this example, even if details of the Act are still being negotiated.
United Kingdom — politics and summits
The UK Government — while rejecting the EU approach — has missed no occasion to publicly present itself as the “leader in AI regulation” while widely touting an innovation-friendly approach to AI.
While no new details have emerged following the earlier AI white paper, the government has organised an AI Safety summit⁴ at Bletchley Park which will take place next week, on November 1st and 2nd.
According to the government, the summit will be attended by about 100 guests from both politics and industry. Though the guest list appears not to have been made public (the government confusingly states that they do not “want to speculate about the guest list”), it has been confirmed that US Vice President Kamala Harris and Google DeepMind CEO Demis Hassabis will be attending the event⁵, which hints at a high-profile rather than expert-based gathering.
In the run-up to the event, in a speech held on October 26th, Prime Minister Rishi Sunak also announced plans for an “AI Safety Institute” to “address potential threats” of AI, while also stressing that AI regulation “should not be rushed”.⁶
A bill or codified regulation is not expected any time soon, and is unlikely to be presented before the election expected next year. I could not find out whether anything from the actual summit will be made public beyond press releases, but I will be looking out for it.
China — tackling generative AI
In July the Cyberspace Administration of China (CAC) released their final version of the “Interim Measures for the Management of Generative Artificial Intelligence Services.”⁷
The regulation took effect on 15 August 2023 and as the title implies mainly deals with managing generative AI. Despite its “interim” title the detailed regulation covers a wide range of issues⁷ such as:
- Obligations for development and model training
- Curation of training data and requirements for quality assessments
- Intellectual property considerations
- Product liability
- Administrative licensing for providers of generative AI services
There appear to be quite some similarities with the planned obligations and requirements for generative AI outlined in the upcoming EU AI Act, and it will be interesting to see how much China and Europe are staying in sync.
United States — self regulation and state regulation
On 20 June a bipartisan group of legislators introduced a bill to create a commission focused on the regulation of artificial intelligence.⁸ The proposed commission would “review the United States’ current approach to AI regulation”, make recommendations on any new office or governmental structure that may be necessary, and develop a risk-based framework for AI. However, with the current deadlock in the US House of Representatives, the bill is not expected to move forward anytime soon.
Meanwhile, the White House published a statement on AI self-regulation in July⁹, signed by the “seven leading AI companies”, in which the companies commit, among other things, to:
- internal and external security testing before release of a product
- sharing of information related to AI risk across the industry and with governments, civil society, and academia
- publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
It remains to be seen how well, or if at all, this self-regulated code of conduct will be monitored and how much sway the government will have over its “enforcement”.
While little has moved forward on the federal level, some states are implementing their own AI-related regulation: New York City’s new regulations on the use of AI in hiring and promotion decisions went into effect in July¹⁰. The law has a narrow focus on racial and gender discrimination due to algorithmic bias and has received some criticism for leaving out other forms of discrimination, such as those based on age or disability.
India — contradictory signals
India seems to be an interesting case when it comes to AI regulation, sending out widely contradictory signals as to where it will go. In April this year the Ministry of Electronics called AI an “enabler of the digital economy and innovation ecosystem” and spoke openly against regulation of the sector¹².
However, only a month later IT Minister Rajeev Chandrasekhar declared that India will “regulate [AI services] through the prism of user harm” and that the Indian government has its “own views on how AI should have guardrails” when discussing the Digital India Bill¹¹.
Then, in August, PM Modi called for “a global framework on the expansion of ‘ethical’ artificial intelligence (AI)”, potentially signalling a shift in the government’s stance on regulation.¹³
Part of this shift was reflected in a new consultation paper by the Telecom Regulatory Authority of India (TRAI) in July, which stated that the Centre should set up a “domestic statutory authority to regulate AI” on a “risk-based framework”, which sounds similar to the pending EU AI Act.¹⁴
Canada — waiting for AIDA
While waiting for the Artificial Intelligence and Data Act (AIDA), which was tabled as part of Bill C-27 in June last year, the Canadian government in September released the “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems”.¹⁶
The new voluntary code is based on feedback received during the consultation process on the development of the “Canadian code of practice for generative AI systems”¹⁵, a consultation that closed in September.
Australia — discussion papers and push back
The Australian federal Government released a discussion paper in June providing an overview of existing AI governance in Australia and international developments in relation to AI, and is seeking feedback on whether further governance and regulatory responses are needed.¹⁷
The wide-ranging discussion paper deals with, and asks for feedback on:
- Algorithmic bias
- Risk assessment and accountability
- Danger of monopolies and ownership of datasets
- Transparency
However, as part of that consultation the “Tech Council of Australia”, an industry lobby group, expressed opposition to both a “European Union style AI Act” and an AI regulator. While they concede that “guardrails and regulations” for AI are necessary, they speak out against establishing a specialised Australian AI regulator and do “not support following the path of the EU in developing a standalone AI Act”.¹⁸ It is interesting to note that both Google and Microsoft are members of the Tech Council.
Conclusion
While the EU AI Act is coming closer and remains the most ambitious piece of AI legislation, several countries — such as China — have implemented temporary regulations dealing specifically with generative AI services, while others — like Canada — focus on voluntary codes of conduct while regulation is being drafted.
For further reading and much more detail see the references below.
European Union: ¹ EU AI Act nearing agreement — Euronews | ² European companies sound alarm over draft AI law — Financial Times | ³ Aprobado el estatuto de la Agencia Española | United Kingdom: ⁴ AI Safety Summit — UK Government | ⁵ AI Safety Summit — The Independent | ⁶ Sunak says UK shouldn’t ‘rush to regulate’ AI — Business Matters | China: ⁷ China Promulgated Framework Regulations on Generative Artificial Intelligence — Lexology | United States: ⁸ A Bill To Establish an Artificial Intelligence Commission (PDF) | ⁹ Voluntary Commitments from Leading Artificial Intelligence Companies — White House | ¹⁰ NYC law promises to regulate AI in hiring — Axios | India: ¹¹ India set to regulate AI, Big Tech, with sweeping Digital Act — The Register | ¹² India opts against AI regulation — TechCrunch | ¹³ PM Modi calls for expanding ‘ethical’ AI — The Indian Express | ¹⁴ TRAI Consultation Paper — Press Release, TRAI (PDF) | Canada: ¹⁵ Canadian Guardrails for Generative AI — Code of Practice | ¹⁶ Voluntary Code of Conduct | Australia: ¹⁷ Discussion Paper: Supporting Responsible AI | ¹⁸ Supporting Safe and Responsible AI — Tech Council of Australia submission