
Where the World is on AI Regulation — June 2023

Wolfgang Hauptfleisch

--

While the AI hype has been raging through the media over the last six months, governments have been slowly ramping up efforts to regulate the application of Artificial Intelligence. And while little has been put into law yet, progress is undeniable. A disclaimer: I will list those jurisdictions I have followed more closely, so forgive me if I leave out some I am less familiar with. So what has happened recently? A lot, across many countries, with links and further reading below.

The European Union — the DSA has arrived, and AI regulation is coming

The European Union has been working on regulating AI for a considerable time, long before the current hype reached the media. First introduced as a draft in 2021, the EU AI Act now appears to be moving closer to enactment. Just three weeks ago the two concerned committees of the EU Parliament voted overwhelmingly to approve a consolidated draft.

The next step will be a vote in the European Parliament, after which — assuming it passes — it will be handed over to the Council of the European Union¹. This could mean that the Act passes all stages this year, but changes and hold-ups are certainly possible.

Some of the latest amendments agreed upon by the two committees, especially with regard to the classification of foundation models as high-risk, have stirred considerable concern about the Act’s impact on open source developers and small companies. I will cover that in more detail at a later point.

There will certainly be a transition period, potentially up to two years, to allow the industry to prepare for the EU AI Act, as well as give member states time to set up regulatory agencies to enforce the act.

Much earlier, however, there will be new requirements for those companies designated as “very large online platforms” (such as Google, Meta and Twitter) under the EU’s Digital Services Act (DSA), which will ramp up transparency requirements for recommendation and suggestion algorithms².

United States — hearings and big players

Over the last six years, both the Trump and Biden administrations have issued executive orders³ concerning the application of AI in federal government. An overall legal framework, however, is still missing.

The Algorithmic Accountability Act⁴, introduced in 2022, attempts to deal with the impacts of the automated systems companies use and sell, and creates new transparency requirements about when and how automated systems are used. It is, however, still in the committee stage, and whether the act will become law in one form or another remains to be seen. The Biden administration’s Blueprint for an AI Bill of Rights⁵, published in October 2022, is, despite its grand name, more a guidance document than anything else at this point.

Spooked by the appearance of ChatGPT & Co, the US Senate organised a hearing on AI regulation in mid-May, which included an appearance by OpenAI CEO Sam Altman⁶. The idea of an “ML model development licence” was floated, with calls for regulation coming from both the industry and politicians.

Meanwhile, the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework in January 2023 and launched its AI Resource Center in March. The NIST’s work appears so far to be the most detailed framework of its kind and will certainly have an effect on the transparency requirements for AI development both in the US and Europe⁷.

The UK - innovation and light touch

The UK government has — not surprisingly, considering it wants to be seen as the destroyer of “EU red tape” — largely ignored the discussion around the EU AI Act, a stance likely facilitated by the rapid succession of prime ministers over the last two years.

The government has published several white papers over the last year, the most recent in March 2023, each of them featuring “innovation” prominently in the title⁸ ⁹. The white papers stress the importance of supporting innovation in the AI space but so far fall short on spelling out any potential regulatory involvement.

There seems to be increasing pressure on Prime Minister Rishi Sunak’s government to deal with generative AI, however, and — in the run-up to the G7 summit — Sunak has gone as far as claiming that the UK will lead on global AI “guidance”¹⁰.

New Zealand — guidelines

As recently as May 2023, the Office of the Privacy Commissioner published guidelines setting out the “expectations of the Office of Privacy Commissioner with respect to the use of generative Artificial Intelligence (AI)”¹¹.

Canada — an impact based approach

Last summer, in June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of the Digital Charter Implementation Act (Bill C-27). AIDA shows some similarities with the EU’s approach, though many details still need to be worked out. Bill C-27 will provide an implementation period of at least two years after receiving Royal Assent before coming into force, so the earliest the AI-related provisions could come into force would be 2025¹² ¹³.

Japan — G7

Little has moved forward since the Japanese government published the “Social Principles of Human-Centric AI” as principles for implementing AI in society¹⁴ in 2019.

However, in early May 2023 the Japanese government’s “Strategy Council on Artificial Intelligence” convened for the first time, with plans to establish a framework guiding the development of generative AI as soon as “next month”¹⁵. Like the recent statements by the UK government, this process might have been accelerated by the upcoming G7 summit in Japan, where Artificial Intelligence featured prominently on the agenda.

China — New “Proposed Rules”

In April 2023, China’s leading technology regulatory authority, the Cyberspace Administration of China (“CAC”), announced that it is looking to pre-emptively regulate the use of generative AI and published a proposed regulation, the “Measures for the Administration of Generative Artificial Intelligence Services”¹⁶ ¹⁷. While the draft is extensive and covers both the development process and liability aspects of AI, it is also vague and ambiguous; for example, it contains a provision that generated content “must not contain false information”. How this will be implemented or enforced in practice — in the context of Chinese media restrictions in general — remains to be seen.

Latin America

This is a bit outside my area of expertise, so I recommend a recent article by Katie Konyn in Latin America Reports from May 2023¹⁸, which covers several Latin American countries.

Conclusion: Where things are

There seems to be a minimum worldwide consensus that AI requires regulation of some form, though governments differ widely in their objectives and approaches. Most proposed regulations include at least some form of tier system which applies different rules to different areas of AI. All of them include at least some form of transparency requirements; few have tackled the liability questions yet.


There has been a new wave of government statements about regulating generative AI in the wake of the G7 summit. As with any industry regulation, few governments want to move first and potentially give up a competitive advantage in an over-hyped growth market. At least not as long as the gold rush continues.

As with privacy regulation, it feels like many governments will await the introduction of the EU AI Act, monitor industry reactions to it and then follow with their own implementations in some shape or form. Due to its wide scope, the impact of the EU AI Act will be impossible to ignore worldwide anyway.

Follow-ups:

Where the World is on AI Regulation — October 2023

Where the World is on AI Regulation — December 2023

References and further reading:

European Union: [1] https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2413 [2] https://digital-strategy.ec.europa.eu/en/news/digital-services-act-commission-setting-new-european-centre-algorithmic-transparency

United States: [3] “Promoting the Use of Trustworthy AI in the Federal Government” (EO 13960, 2020), “Guidance for the Regulation of AI Applications” (Nov 2020), “Further Advancing Racial Equity and Support for Underserved Communities…” (EO 14091, Feb 2023), [4] Algorithmic Accountability Act of 2022, [5] Blueprint for an AI Bill of Rights, [6] The Senate’s hearing on AI regulation was dangerously friendly, [7] NIST AI Risk Management Framework

United Kingdom: [8] AI regulation: a pro-innovation approach, [9] A “light touch” approach to AI regulation in the UK — Reed Smith, [10] Rishi Sunak races to tighten rules for AI amid fears of existential risk — The Guardian

New Zealand: [11] Office of the Privacy Commissioner | Generative Artificial Intelligence

Canada: [12] Responsible use of artificial intelligence (AI) — Canada Gov, [13] One Step Closer to AI Regulations in Canada: The AIDA Companion Document — McCarthyTetrault, March 2023

Japan: [14] Social Principles of Human-Centric AI, [15] Japan government panel begins discussions on AI policy

China: [16] China’s Cyberspace Administration Proposes Draft Rules to Regulate Generative AI — Lexology, [17] What a Chinese Regulation Proposal Reveals About AI and Democratic Values — Carnegie

Latin America: [18] The progression of advanced AI: Where do Latin American governments stand?


Written by Wolfgang Hauptfleisch

Software architect, product manager. Obsessed with machines, complex systems, data, urban architecture and other things.