Since 2 February 2025, the first provisions of the European Union ("EU") AI Act (the Regulation laying down harmonized rules on artificial intelligence) have been directly applicable to all companies that offer or use AI systems. Companies should address this at an early stage to assess which measures they need to take and which obligations they will face. We provide an overview of the key contents of the AI Act here.
The AI Act came into force on 1 August 2024 and has been directly legally binding in the EU member states since then, but its provisions were not yet applicable. This changed on 2 February 2025, which marked the applicability of the first stage of the AI Act. From this date, the general provisions, the requirement for providers and deployers to ensure AI literacy among their employees, and the prohibition of certain practices in the AI sector apply.
The AI Act has a wide territorial reach and covers providers and deployers of AI systems and other actors, regardless of their location, insofar as they place AI systems on the market in the EU or the output of such systems is used in the EU. The AI Act thus claims application beyond the territory of the EU.
For providers and deployers of AI systems in particular, it includes a comprehensive program of obligations for the introduction and use of AI systems.
The AI Act essentially affects actors involved in the development, placing on the market, provision, use and exploitation of AI systems in the EU. Your individual obligations may therefore vary depending on your role and the risk category of the AI system.
AI providers
A provider within the meaning of Art. 3 para. 3 of the AI Act is "a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Providers of high-risk AI systems, for example, have the following obligations: providing information (so-called instructions for use) to deployers, establishing a quality management system, training the high-risk AI system with qualitatively suitable data sets, carrying out a conformity assessment procedure, registering in an EU database, affixing the CE marking, and providing evidence to an authority yet to be designated. Providers of general-purpose AI models are subject in particular to documentation and information obligations and the obligation to cooperate with the competent national authorities.
Deployers of AI systems
A deployer within the meaning of Art. 3 para. 4 of the AI Act is "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
Deployers of high-risk AI systems also have special obligations under the AI Act, such as the appointment of an AI officer, information obligations towards employees and employee representatives, testing and documentation obligations, monitoring of the AI system on the basis of the instructions for use, and reporting obligations to the competent market surveillance authority, which is yet to be designated.
Importers and distributors
In addition, the AI Act imposes special obligations on importers and distributors of high-risk AI systems. Importers of AI systems, for example, are subject to inspection and documentation obligations as well as information obligations to national authorities before placing the AI system on the market. Distributors of high-risk AI systems are obliged to review whether a CE conformity marking exists and whether the AI system is accompanied by the instructions for use.
Attention: The obligations of a provider could also apply to you!
As a rule, employers are considered deployers of AI systems if they use them in the HR context. However, under the AI Act you may also be deemed a provider of a high-risk AI system if you place your name or trademark on a high-risk AI system that has already been placed on the market or put into service (so-called branding), make a substantial modification to it, or change its intended purpose. This may already be the case, for example, if you brand an AI system with your company logo and then use it.
AI systems are classified into four risk categories:
Prohibited practices
Certain AI systems such as "social scoring systems" are generally prohibited under the AI Act as they are incompatible with the fundamental rights of the EU. Taking the use of AI in employment relationships as an example, AI incentive systems that aim to influence the behavior of employees and AI systems that are used solely to evaluate employees in the employment relationship are likely to be prohibited. If an AI system is able to recognize certain emotions of an employee, such as boredom or being overwhelmed, this could also constitute a prohibited AI practice under the AI Act.
High-risk AI systems
These AI systems are subject to strict requirements under the AI Act. This includes, for example, AI systems that are used by the employer in the application process for the selection and recruitment of applicants (e.g. screening of applications or filtering and evaluation of applicants by AI systems). AI systems that can influence the employer's decisions on working conditions, promotions or the termination of employment relationships can also be classified as high-risk AI systems. This also includes AI systems that can influence the assignment of tasks based on the individual behavior or personal characteristics of employees. AI systems that are used to monitor and evaluate the performance and behavior of employees could also be classified as high-risk AI systems if they do not already constitute prohibited AI practices.
General-purpose AI models
General-purpose AI models are models that have been trained on broad data sets and can be adapted to a wide range of different tasks. This includes, for example, "chatbots" that can generate new text, image, audio or video content in response to specific prompts. Such AI models are subject to technical documentation obligations, including labeling the AI model as such.
Other AI systems
Providers and deployers of other AI systems that do not fall into the above categories must fulfill certain transparency obligations. For example, content generated artificially with such systems, such as images, videos and texts, must be labeled accordingly in a machine-readable manner. The development of and compliance with codes of conduct is voluntary.

Irrespective of the risk category, the requirement under Art. 4 AI Act for providers and deployers of AI systems to ensure a sufficient level of AI literacy among the persons who operate or use these systems on their behalf has applied since 2 February 2025. Corresponding training, which should accompany the introduction of internal policies on the use of AI in companies, should therefore take place promptly. These policies should also address, among other things, the prohibited practices regulated in Art. 5 AI Act, which has likewise applied since 2 February 2025.
From 2 August 2025, the AI Act provides for fines of up to EUR 35 million or up to 7% of global annual turnover, whichever is higher, depending on the type and severity of the breach of the AI Act.
The AI Act will apply in stages as follows:
1 August 2024 | The AI Act came into force
2 February 2025 | Applicability of the general provisions of the AI Act; prohibition of certain AI practices
2 August 2025 | Imposition of fines possible; applicability of the provisions for general-purpose AI models
2 August 2026 | General applicability of the remaining provisions of the AI Act
2 August 2027 | Obligations for providers of general-purpose AI models placed on the market before 2 August 2025; application of the classification rules for certain high-risk AI systems
31 December 2030 | AI systems already placed on the market before the AI Act came into force (in particular high-risk AI systems) must also comply with the AI Act
With regard to the parts of the AI Act that are already applicable or will become applicable in the future, we recommend implementing appropriate compliance measures in a timely manner.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025