Where The World Is On AI Regulation — December 2023
Only two months have passed since my last AI regulation roundup, but enough has happened since then to warrant a December update. The highlights this time are the agreement reached on the EU Artificial Intelligence Act and President Biden's executive order on artificial intelligence.
Like last time, I will likely miss some developments; feel free to point out any omissions in the comments.
European Union
As most people will know by now, the trilogue negotiations on the EU Artificial Intelligence Act have finally concluded after exhausting last-minute compromises between the EU Parliament, Commission and Council.¹
With the last outstanding issues now agreed upon, it will take some time until a final draft is written. The consensus, however, means that the AI Act will pass before the European Parliament elections in June 2024.
As with all EU regulatory frameworks, there will be an implementation period of 18 to 24 months, both for industry to prepare for compliance and for the new EU AI Office to be set up and the sandbox programs to be put in place.
The significance of that agreement can hardly be overstated: the AI Act strictly prohibits a range of AI use cases, introduces risk-based requirements for AI applications and a two-tier system of compliance requirements for general-purpose AI systems, clarifies liability, and much more.
For details, please refer to my article from last week about the outcome of the negotiations.
United States
At the end of October, US President Biden issued his “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”². The order sets out a wide range of federal regulatory principles and directs a long list of federal agencies (including the Departments of Commerce and Health and Human Services) to establish standards and technical guidelines for the development and use of AI systems.
“to set the rigorous standards for extensive red-team testing to ensure safety before public release” — President Biden's Executive Order³
The order invokes a statutory authority, the Defense Production Act, which historically gives the president the power to regulate private industry in support of national defense in the wider sense.
It requires leading AI developers to share safety test results and other information with the government, with the National Institute of Standards and Technology (NIST) being mandated “to create standards to ensure AI tools are safe and secure before public release”.
Other elements of the order are⁴:
- mandating NIST to “set the rigorous standards for extensive red-team testing to ensure safety before public release”.
- preventing “AI algorithms from being used to exacerbate discrimination” and working to “ensure fairness throughout the criminal justice system”
- various other consumer protection elements
The White House, however, stresses that the order is not a replacement for an act of Congress and calls for “bipartisan data privacy legislation to protect all Americans”.
There is much more to write about the executive order and I will certainly do a deep dive into it early next year.
United Kingdom
Meanwhile, the AI Safety Summit organized by the British government took place at Bletchley Park in early November. The summit⁵, which was touted as demonstrating “British leadership in regulating AI”, started with a joint declaration⁶ agreed upon by the participating countries.
As I mentioned in my article about the event, little detail about the summit emerged beforehand, and it ended rather unceremoniously with an awkward fireside chat between Prime Minister Rishi Sunak and Elon Musk about the “dangers of AI”.
It is unclear, however, what impact the summit will have on UK government policy: right after the event, Rishi Sunak stressed that no immediate regulatory initiative is expected in the UK.
Europe / UNECE
WP.6, the “Working Party on Regulatory Cooperation and Standardization Policies” of the United Nations Economic Commission for Europe (UNECE), one of the five regional commissions of the United Nations, has published its paper on “The regulatory compliance of products with embedded Artificial Intelligence or other digital technologies”⁷. AI in embedded systems is an area that has not drawn much attention in a field currently dominated by web services and large language models.
The paper calls for governments to put human rights — in particular safeguards to protect children — at the heart of any regulatory framework. It also tackles questions regarding the environment, sustainability, and reducing the global digital divide.
WP.6 makes the case for the widespread, globally aligned use of common international standards, and suggests that only AI-embedded products that comply with these standards should be placed on national markets.
Member states have encouraged the project lead, Markus Krebsz, to continue this work⁸ and tasked the project team with drafting a UN Declaration and developing a Common Regulatory Agreement. Both are expected in spring 2024, and I am looking forward to the outcome.
India
As mentioned in my last roundup, the Indian government has oscillated widely between a non-regulatory (“pro-innovation”) approach and a more cautious one. In an astonishingly open admission, India's minister of state for technology recently acknowledged the conflict of interest when it comes to regulating AI.
“The optics of safety and trust are not aligned to the industry’s own desire to rapidly push the frontiers of AI technology and that is the conflict I think we are seeing,” India’s minister of state for technology, Rajeev Chandrasekhar⁹
Meanwhile, the “Global Partnership on Artificial Intelligence” (GPAI) summit took place in New Delhi on December 12, with Prime Minister Narendra Modi inaugurating the event.
The summit unanimously adopted the “New Delhi declaration”¹⁰, underscoring the need to mitigate risks from AI systems and to promote equitable access to critical resources for AI innovation, such as computing resources and datasets.¹¹
The GPAI members (countries including India, the United States, the UK, France, Japan, and Canada) stated their dedication to implementing those values through the development of regulations within their jurisdictions. Whether this will lead to more concrete proposals for AI regulation in India (or the UK, for that matter) is, however, unclear.
Mexico
In a recent interview¹², Alejandra Lagunes, who leads Mexico's National Artificial Intelligence Alliance, explains that there are currently 18 AI-related initiatives, but that they are all “centered around categorizing crime and modifying the law”.
She explains further that the initiative is “aiming to come up with a policy paper that presidential candidates can consider for their platforms during next year's election”. She stresses that legal and cultural differences between Mexico and the US make it difficult to copy and paste regulation.
Australia
“The Need for Human Rights-centred AI”¹³, a paper published by the Australian Human Rights Commission, did not make it into my October roundup. This submission to the Department of Industry, Science and Resources outlines the Commission's position on AI and how it can be developed ethically to protect human rights. It covers a wide range of issues and contains a list of 47 recommendations.
One of the recommendations is for Australia (at the federal, state and territory level) to “introduce a moratorium on the use of facial recognition”, as well as on biometric technology in decision making that has a legal or otherwise significant effect for individuals, or where “there is a high risk to human rights, such as in policing and law enforcement”.
Looking to 2024
2023 is pretty much a wrap, so we can start to look forward to 2024: the EU AI Act will likely be published early in the year; it remains to be seen how long the implementation period will last. There will be many other aspects to cover before it comes into force in 2025 or early 2026. The final draft of the act will make an interesting read.
In the United States, it will be interesting to see what guidelines and rules the departments directed by the president's executive order will come up with, and how those requirements compare to the EU AI Act.
Whether other countries will move beyond declarations and codes of conduct is a question for 2024 to answer. We have already seen quick adjustments to the new realities of large language models offered as a service. New applications and services will certainly pose new challenges for regulators.
References and further reading:
European Union: ¹ Press releases by the EU Commission, EU Council and EU Parliament
United States: ² Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — White House | ³ Executive Order Fact Sheet — White House | ⁴ President Biden Signs Sweeping Artificial Intelligence Executive Order — Sidley
United Kingdom: ⁵ UK AI Safety Summit — 1st and 2nd November 2023, official website | ⁶ The Bletchley Declaration — UK Government
UNECE: ⁷ The regulatory compliance of products with embedded artificial intelligence — UNECE | ⁸ Human decisions and human rights should be at the core of AI regulation — University of Stirling
India: ⁹ How India Is Viewing The OpenAI Drama — Bloomberg | Global Partnership on Artificial Intelligence — Website | ¹⁰ Ministerial Declaration — PDF | ¹¹ Global Partnership on AI members adopt New Delhi declaration, bat for equitable access to AI resources — Indian Express
Mexico: ¹² The Mexican senator building a plan for artificial intelligence — Rest of World
Australia: ¹³ The Need for Human Rights-centred AI — Australian Human Rights Commission