The EU’s risk-based AI rulebook – also known as the EU AI Act – has been years in the making. But expect to hear a lot more about the regulation in the coming months (and years) as compliance deadlines roll around. Meanwhile, read on for an overview of the law and its aims.
So what is the EU trying to achieve? Rewind to April 2021, when the Commission published the original proposal. Lawmakers framed it as a law to boost the bloc’s ability to innovate in AI by fostering trust among citizens. The framework would ensure AI technologies remained “human-centred”, the EU suggested, while also giving businesses clear rules for working their machine learning magic.
Increased adoption of automation across industry and society certainly has the potential to boost productivity in various fields. But it also poses the risk of harm at rapid scale if outputs are poor and/or where AI intersects with, and fails to respect, individual rights.
The bloc’s goal with the AI Act is therefore to drive uptake of AI and grow the local AI ecosystem by setting conditions intended to shrink the risk of things going wrong. Lawmakers believe that having guardrails in place will boost citizens’ trust in, and uptake of, AI.
The idea of strengthening the ecosystem through trust was not without controversy in the early part of this decade, when the law was being discussed and drafted; objections were raised in some quarters that it was too early to regulate AI and that European innovation and competitiveness could suffer as a result.
Few would likely say it’s too early now, of course, given how the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. But objections remain that the law will hobble the prospects of homegrown AI entrepreneurs, despite the inclusion of support measures such as regulatory sandboxes.
However that wider debate over how to regulate AI shakes out, the EU has set its course with the AI Act. The coming years will revolve around the bloc implementing the plan.
What does the AI Act require?
Most uses of artificial intelligence are not regulated under the AI Act at all, as they fall outside the scope of the risk-based rules. (It’s also worth noting that military uses of AI are entirely out of scope, since national security is a member state legal competence, rather than an EU-level one.)
For in-scope uses of AI, the law’s risk-based approach establishes a hierarchy in which a handful of potential use cases (for example, “harmful, manipulative or deceptive subliminal techniques” or “unacceptable social scoring”) are framed as carrying “unacceptable risk” and are therefore banned. However, the list of banned uses is riddled with exceptions, meaning even the law’s small number of prohibitions comes with plenty of caveats.
For example, the ban on law enforcement using real-time remote biometric identification in publicly accessible spaces is not the blanket ban some parliamentarians and many civil society groups had pushed for, with exceptions allowing its use in connection with certain crimes.
The next tier down from unacceptable risk/banned use covers “high risk” use cases – such as AI applications used for critical infrastructure; law enforcement; education and vocational training; healthcare; and more – where the makers of such apps must conduct conformity assessments before market deployment, and on an ongoing basis (such as when they make substantial updates to models).
This means the developer must be able to demonstrate that it meets the law’s requirements in areas such as data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness. It must put quality and risk-management systems in place so it can demonstrate compliance if an enforcement authority comes knocking to do an audit.
High-risk systems deployed by public bodies must also be registered in a public EU database.
There is also a third, “medium risk” category, which applies transparency obligations to AI systems such as chatbots or other tools that can be used to produce synthetic media. The concern here is that they could be used to manipulate people, so this type of technology requires that users are informed when they are interacting with, or viewing, AI-generated content.
All other uses of AI are automatically considered low/minimal risk and are not regulated. This means, for example, that things like using AI to sort and recommend social media content or targeted advertising have no obligations under these rules. But the bloc encourages all AI developers to voluntarily follow best practices to enhance user trust.
This set of tiered, risk-based rules makes up the bulk of the AI Act. But there are also some requirements tailored to the multifaceted models that power generative AI technologies – which the AI Act refers to as “general purpose AI” models (or GPAIs).
This subset of AI technologies, which the industry sometimes calls “foundation models”, typically sits upstream of the many applications that implement AI. Developers tap into APIs from GPAIs to deploy these models’ capabilities in their own software, often fine-tuned for a specific use case to add value. All of which means GPAIs have quickly gained a powerful position in the market, with the potential to influence AI outcomes at scale.
GenAI entered the chat…
The rise of GenAI reshaped more than just the conversation around the EU’s AI Act; it led to changes in the rulebook itself, as the bloc’s lengthy legislative process coincided with the hype around GenAI tools like ChatGPT. Lawmakers in the European Parliament seized their opportunity to respond.
MEPs proposed adding rules for GPAIs – that is, the models underlying GenAI tools. That, in turn, sharpened the tech industry’s attention on what the EU was doing with the law, leading to some fierce lobbying for a carve-out for GPAIs.
French AI company Mistral was one of the loudest voices, arguing that rules on model makers would hold back Europe’s ability to compete against AI giants from the US and China. OpenAI’s Sam Altman also weighed in, suggesting in an aside to reporters that the company might pull its technology out of Europe if the rules proved too onerous, before quickly falling back on traditional lobbying of regional power brokers once the EU called him out on that ill-advised threat.
One of the most obvious side effects of the AI Act has been for Altman to get a crash course in European diplomacy.
The result of all this noise was a white-knuckle ride to conclude the legislative process. It took months, and a marathon final negotiating session between the European Parliament, the Council and the Commission, to push the file over the line last year. The political agreement was reached in December 2023, paving the way for adoption of the final text in May 2024.
The European Union has hailed the AI Act as “the first of its kind in the world.” But being first in this fast-evolving technological context means there are still many details to be worked out, such as setting the specific standards the law will apply and producing detailed compliance guidance (codes of practice), so that the oversight and ecosystem-building regime the law envisages can actually operate.
So, in terms of assessing its success, the law remains a work in progress, and will remain so for a long time.
For GPAIs, the AI Act continues the risk-based approach, with (only) lighter requirements for most of these models.
For commercial GPAIs, this means transparency rules (including technical documentation requirements and disclosures about the use of copyrighted material to train models). These provisions are intended to help downstream developers with their own AI Act compliance.
There is also a second tier – for the most powerful (and potentially riskiest) GPAIs – where the law dials up the obligations on model makers, requiring proactive risk assessment and risk mitigation for GPAIs with “systemic risk”.
Here the European Union is concerned about very powerful AI models that may pose risks to human life, for example, or even risks that may lead to technology makers losing control over the ongoing development of self-improving AI systems.
Lawmakers chose to rely on a compute threshold for model training as a classifier for this systemic-risk tier: GPAIs fall into the category when the cumulative amount of compute used to train them, measured in floating point operations (FLOPs), exceeds 10^25.
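For a concrete sense of how that threshold works, here is a minimal, purely illustrative Python sketch. The 6 × parameters × tokens estimate of training compute is a common rule of thumb rather than anything the regulation specifies, and the function names and example figures are assumptions for illustration only.

```python
# Purely illustrative sketch of the AI Act's systemic-risk compute threshold
# for GPAIs (training compute above 10^25 FLOPs). The 6 * parameters * tokens
# estimate is a common rule of thumb for dense transformers, not anything the
# regulation specifies, and the example figures are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense transformer."""
    return 6 * num_parameters * num_training_tokens


def is_systemic_risk_gpai(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"Systemic-risk tier? {is_systemic_risk_gpai(flops)}")
```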
As of now, no models are believed to be in that range, but of course that could change as GenAI continues to develop.
There is also some leeway for AI safety experts involved in oversight of the AI Act to flag concerns about systemic risks that may arise elsewhere. (For more information on the governance structure the bloc has devised for the AI Act — including the various roles of the AI Office — see our previous report.)
Lobbying by Mistral and others resulted in a watering down of the rules for GPAIs, with lighter requirements on open source providers, for example (lucky Mistral!). Research and development also got a carve-out, meaning GPAIs that have not yet been commercialized fall outside the scope of the law entirely, without even transparency requirements applying.
A long walk towards compliance
The AI Act officially came into force across the EU on 1 August 2024. That date essentially fired the starting gun, as compliance deadlines for different components are set to hit at staggered intervals from early next year until around mid-2027.
Some of the key compliance deadlines are six months in from entry into force, when rules on prohibited use cases kick in; nine months in, when codes of practice are due to apply; 12 months in for transparency and governance requirements; 24 months for other AI requirements, including obligations for certain high-risk systems; and 36 months for other high-risk systems.
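To make that timeline more tangible, here is a small, illustrative Python sketch that maps the staggered milestones onto calendar dates counted from the 1 August 2024 entry into force. The milestone labels paraphrase the article and the dates are simple month arithmetic, not official legal deadlines.

```python
# Illustrative sketch: mapping the staggered AI Act milestones onto calendar
# dates, counting from the 1 August 2024 entry into force. The labels
# paraphrase the article; the dates are simple month arithmetic, not
# official legal deadlines.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)


def add_months(d: date, months: int) -> date:
    """Add whole months to a date (the day stays the 1st here, so no clamping is needed)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


MILESTONES = [
    (6, "rules on prohibited use cases apply"),
    (9, "codes of practice due"),
    (12, "transparency and governance requirements"),
    (24, "further obligations, incl. some high-risk systems"),
    (36, "remaining high-risk systems"),
]

for months, label in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()} (+{months} months): {label}")
```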
Part of the reason for this staggered approach to legal provisions is to give companies enough time to get their operations in order. But more than that, it is clear that regulators need time to determine what compliance looks like in this evolving context.
At the time of writing, the bloc is busy drafting guidance for various aspects of the law ahead of these deadlines, such as the code of practice for makers of GPAIs. The EU is also consulting on the law’s definition of “AI systems” (i.e. which software will be in scope or out of scope) and clarifications regarding prohibited uses of AI.
The full picture of what the AI Act will mean for in-scope companies is still being shaded in and fleshed out. But key details are expected to be locked down in the coming months and into the first half of next year.
Something else to keep in mind: given the pace of development in AI, what it takes to stay on the right side of the law is likely to keep shifting as these technologies (and their associated risks) continue to evolve. So this is one rulebook that may well need to remain a living document.
Enforcing the AI Act’s rules
GPAIs are overseen centrally at EU level, with the AI Office playing a key role. Penalties the Commission can impose to enforce these rules on model makers can reach up to 3% of their global annual turnover.
Elsewhere, enforcement of the Act’s rules for AI systems is decentralized, meaning it will be up to member state-level authorities (plural, as more than one oversight body may be designated) to assess and investigate compliance issues for the bulk of AI apps. How workable this structure will prove remains to be seen.
On paper, penalties can reach up to 7% of global annual turnover (or €35 million, whichever is greater) for breaches of the prohibited uses. Violations of other AI obligations can be met with fines of up to 3% of global annual turnover, or up to 1.5% for supplying incorrect information to regulators. So there is a sliding scale of sanctions for enforcers to reach for.
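As a rough illustration of the “whichever is greater” logic in that top tier, here is a minimal Python sketch. Only the 7% and €35 million figures come from the text above; the example turnover figure is hypothetical.

```python
# Illustrative sketch of the "whichever is greater" logic behind the Act's top
# penalty tier, as summarized above (7% of global annual turnover or 35 million euros).
# The turnover figure below is hypothetical.

def max_fine(annual_turnover_eur: float, pct: float = 0.07, floor_eur: float = 35_000_000) -> float:
    """Ceiling for a prohibited-use breach: the greater of a turnover share or a fixed sum."""
    return max(annual_turnover_eur * pct, floor_eur)


if __name__ == "__main__":
    # A hypothetical provider with 2 billion euros in global annual turnover:
    # 7% of turnover (140 million euros) exceeds the 35 million euro floor, so it applies.
    print(f"Maximum fine: {max_fine(2_000_000_000):,.0f} euros")
```

Treat this as arithmetic illustration only; how fines are actually applied will be down to the relevant enforcement bodies.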