After nearly three years of discussions and negotiations, political agreement has finally been reached on the EU’s AI Act, the first major comprehensive regulation of artificial intelligence. While the text is not yet available, the key elements of the agreement are now clear. In particular, the functional “risk-based” approach remains at the heart of the AI Act, with the greatest degree of regulation applying to so-called “high-risk” AI systems. For general purpose AI, including foundation models that can be used for a variety of purposes and therefore cannot be so categorised, a two-tier regulatory approach will apply, depending on computing power. The AI Act’s requirements will begin to apply incrementally over the coming years: developers and users of AI systems should use this time to consider how their AI systems will be regulated and to prepare for the new rules.
The AI Act, broadly speaking, adopts a functional “risk-based” approach, with the degree of regulatory intervention depending on the use to which the AI system is put. At the most extreme end, certain categories of AI systems are considered to present an unacceptable degree of risk of fundamental rights violations and are prohibited altogether. These include AI systems used for cognitive behavioural manipulation, emotion recognition in the workplace and educational institutions, social scoring, and real-time remote biometric identification in public for law enforcement purposes (subject to various exceptions).
At the next level are so-called “high-risk” AI systems, which are considered to have significant implications for life, health and fundamental rights. These include AI systems used in critical infrastructure, access to essential public and private services and law enforcement, as well as AI systems that are products, or safety components of products, already subject to conformity assessment requirements under EU regulation. High-risk AI systems are subject to the most onerous regulatory obligations, including requirements relating to datasets and data governance, documentation and record keeping, human oversight, robustness, accuracy and security, as well as conformity assessment to demonstrate compliance.
Beyond the high-risk category, certain AI systems are considered “limited risk”, including those that interact with people, such as chatbots, and AI systems for generating or manipulating content (deep fakes). These AI systems will essentially be subject only to transparency obligations, so that individuals are aware that an AI system is being used.
All other AI systems will not be subject to any specific regulatory requirements (beyond general EU product safety law). There is, however, provision for “voluntary codes of conduct” to be developed under the Act, which would be relevant for these AI systems and could potentially have a similar impact to some of the AI Act’s regulatory requirements.
The AI Act’s functional “risk-based” approach does not cater for general purpose AI, including foundation models, which, by their nature, cannot be so categorised, as they can be used for a multitude of different purposes and integrated into other AI systems.
For these kinds of AI systems, a two-tier regulatory approach will apply. So-called “high-impact” general purpose AI, namely models trained using a total computing power of more than 10^25 floating point operations (FLOPs), are considered to carry systemic risks and will therefore be subject to more onerous obligations. These include requirements in relation to risk assessment and mitigation, testing and evaluation, as well as robustness. The compute threshold may be updated in light of technological advances, and there is provision for designations to be made in specific cases based on other criteria (e.g. the number of users).
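The compute threshold lends itself to a simple illustration. The sketch below (in Python, a minimal example of our own devising; only the 10^25 FLOPs figure and the existence of the two tiers come from the agreement, while the function and its labels are hypothetical) shows how a model might be bucketed on compute alone:

```python
# Minimal sketch of the two-tier classification for general purpose AI.
# Only the 10^25 FLOPs threshold is taken from the political agreement;
# the helper and its labels are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold stated in the agreement

def gpai_tier(training_compute_flops: float) -> str:
    """Bucket a general purpose AI model by total training compute.

    Compute alone is not determinative: the Act also provides for
    designation in specific cases on other criteria (e.g. user numbers).
    """
    if training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "high-impact GPAI: systemic-risk obligations apply"
    return "other GPAI: transparency and reporting obligations apply"

# Example: a model trained with ~2e25 FLOPs would sit in the high-impact tier.
print(gpai_tier(2e25))
```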
Other general purpose AI models that are not considered to carry systemic risks will only be subject to transparency and reporting obligations. All such systems will nonetheless be required to have a policy concerning respect for EU copyright rules, which will be relevant, in particular, to the use of training data and copyright holders’ reservation of rights.
The jurisdictional scope of the AI Act is potentially very broad: it will apply to operators located both inside and outside the EU, provided the AI system is placed on the EU market or put into service in the EU, or its use may affect people in the EU because the output from the AI system is used in the EU.
It is also important to bear in mind that the AI Act’s obligations will apply both to developers of AI systems and to deployers of AI systems, i.e. companies that make use of AI systems in their own offerings and operations. These obligations include, for example, the requirement in certain circumstances to undertake a fundamental rights impact assessment before the AI system is put into service.
The AI Act’s obligations are to be enforced at two different levels. There will be centralised enforcement by the new “AI Office” (within the Commission) in relation to general purpose AI, while national supervisory authorities will enforce the other requirements, including those relating to high-risk AI systems and prohibited AI systems. The AI Office, together with another body, the AI Board, which is to be composed of national authorities’ representatives, the Commission and the European Data Protection Supervisor, will also provide a degree of coordination in relation to the implementation of the Regulation.
Enforcement will be backed up by the potential to impose significant fines: up to 7% of worldwide turnover for infringements in relation to prohibited AI systems, and up to 3% of worldwide turnover for non-compliance with any of the other obligations, including infringements of the rules in relation to high-risk AI systems and general purpose AI.
As noted above, the text of the political agreement is not yet available and further work will be required, in any event, to refine this into the final legal text of the Regulation.
Based on previous negotiating texts, however, the AI Act will be a long piece of legislation with elaborate requirements. But in many cases, and in common with much EU legislation, the obligations will be set out in terms of results rather than operational or technical detail. These will still need to be further spelled out and operationalised in “harmonised standards” or “common specifications” provided for under the AI Act, as well as “codes of practice” specifically for general purpose AI, within the initial implementation period before the relevant obligations themselves enter into force.
It is these standards, specifications and codes which will serve as the practical reference point for significant elements of the AI Act and will determine its impact in practice.
The AI Act itself will enter into force shortly after the legal text has been finalised and published in the EU’s Official Journal, which may be several months away. Most of the substantive obligations, however, will only apply after an implementation period. To begin with, the provisions in relation to prohibited AI systems will apply 6 months after entry into force. Obligations for general purpose AI will then apply 12 months after entry into force, while the great majority of the remaining obligations will apply 24 months after entry into force.
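For planning purposes, these offsets can be mapped onto concrete dates once the entry-into-force date is known. A short sketch follows (the entry-into-force date below is a placeholder assumption, not the actual date; only the 6/12/24-month offsets come from the agreement):

```python
# Sketch of the staggered application timeline. The entry-into-force date
# is a placeholder assumption; only the month offsets come from the text.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day of month unchanged)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

entry_into_force = date(2024, 6, 1)  # hypothetical date, for illustration only

milestones = {
    "Prohibited AI systems": 6,
    "General purpose AI obligations": 12,
    "Most remaining obligations": 24,
}
for label, offset_months in milestones.items():
    print(f"{label}: apply from {add_months(entry_into_force, offset_months)}")
```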
To bridge the gap until the new rules enter into force, the Commission has proposed an “AI Pact” under which developers of AI systems will be able to commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines. While the details of how this will work remain to be seen, the AI Pact process may allow for early engagement with the Commission in relation to the implementation of the key aspects of the AI Act.
In any event, companies should now make the most of the implementation period to prepare for the application of the new rules. While the practical details are still to come, companies can already begin by assessing which risk/regulatory category their AI systems fall into and scoping the kinds of obligations and requirements that will apply. Doing so will also be useful preparation for companies that may be engaging with relevant bodies on the development of the practical and technical details to come.
It remains to be seen how influential the EU’s AI Act will be in shaping other jurisdictions’ regulatory responses to AI and whether it may have an outsized impact (the so-called “Brussels effect”), as has been the case in other areas of EU regulation. A number of jurisdictions are currently mapping out their regulatory approaches, including the US and the UK, and companies will need to assess their position in the context of a number of emerging initiatives, as well as in light of existing relevant frameworks, including intellectual property, data protection, competition and consumer protection laws.
Suffice it to say, however, that as the first major comprehensive regulation of artificial intelligence, and given its potentially broad jurisdictional scope, the EU’s AI Act appears set to become a key regulatory consideration for international companies that develop and make use of AI.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024