The EU AI Act is coming, this time for real, probably.
Yesterday, two EU committees, the Internal Market Committee and the Civil Liberties Committee, adopted their negotiating mandate for the EU AI Act, including some quite far-reaching amendments¹. This could mark the beginning of the final phase of the EU Artificial Intelligence Act.
The EU’s first attempt to regulate Artificial Intelligence has come a long way: When the EU Commission introduced its first draft of the act in April 2021, many — including me — expected the act to be finalized by summer 2022. Since then, however, many of its details have been fiercely debated.
Last minute amendments
The committees also agreed on a list of amendments which clarify and extend the AI Act, in some aspects actually quite dramatically². Some highlights:
- The list of intrusive and discriminatory uses of AI systems banned by the act now includes remote biometric identification systems in publicly accessible spaces, biometric categorisation systems (e.g. categorising by gender, race, ethnicity, citizenship status, religion, or political orientation) and the use of AI for predictive policing.
- AI systems used to influence voters in political campaigns, as well as suggestion systems used by very large platforms (as listed in the Digital Services Act³), have now been added to the “high-risk” category.
- New transparency and risk assessment requirements for providers of (generative) foundation models like GPT.
- Clarified exemptions for research.
Especially interesting are the new requirements for the use of AI in suggestion systems and the explicit mention of foundation models.
The landscape is changing
It seems obvious that some of the amendments are a reaction to the developments of the last year. And while a lot of media hype has been created around ChatGPT & Co, it has also become increasingly clear what the real-life impact of large language models might look like.
Some have criticized the AI Act for branching out into areas already covered by the GDPR and overlapping in parts with the Digital Services Act (e.g. on algorithmic transparency), but such overlaps seem unavoidable for a regulation covering such a diverse field of applications.
It is encouraging that the transparency requirements for AI-generated content no longer seem limited to just images, and the increased transparency requirements in general will serve the purpose much better than regulating very specific use cases.
Moving the usage of AI by very large platforms into the high-risk category makes sense and acknowledges the impact these platforms can have. One can only speculate whether the takeover of Twitter has helped sharpen the EU’s mind on the matter. Notably, there has been a lot of focus in the EU on suggestion algorithms⁴ recently, something which was largely missing from the early drafts of the AI Act.
While the AI Act’s references to copyright issues in generative AI are still very vague and only underline how much of a grey area this is, requiring providers of large models to be more transparent about their sources does not seem like a bad thing as such. As with many aspects of the act, it remains to be seen how this works out in practice.
So is this it, is this the home stretch now? Possibly. Before negotiations with the Council on the final form of the law can begin, the draft negotiating mandate needs to be endorsed by the European Parliament, with a vote possibly taking place during the 12–15 June session. And while it is possible that some groups in parliament will try to water down or amend the act again, the clear-cut voting results in the committees (84 votes in favour, 7 against, 12 abstentions) suggest that a broad consensus has finally been reached. So we could see the act finally arrive this year. Probably.
¹ Compromise Text (consolidated version 11.05.2023)
² A more detailed list of amendments