

Since 2 February 2025, the first provisions of the European Union ("EU") AI Act (Regulation laying down harmonized rules on artificial intelligence) have been directly applicable to all companies that offer or use AI systems. Companies should address this at an early stage to assess what measures they need to take and what obligations they will face. We provide an overview of the key contents of the AI Act here.

The AI Act came into force on 1 August 2024 and has been directly legally binding in the EU member states since then, but its provisions were not yet applicable. This changed on 2 February 2025, when the first stage of the AI Act became applicable. From this date, the general provisions, the requirement for providers and deployers to ensure sufficient AI literacy among their employees, and the prohibition of certain practices in the AI sector apply.

The AI Act has a wide territorial reach and covers providers and deployers of AI systems and other actors, regardless of their location, insofar as they place AI systems on the market in the EU or the output of such systems is used in the EU. The AI Act thus claims extraterritorial application beyond the territory of the EU.

For providers and deployers of AI systems in particular, it includes a comprehensive program of obligations for the introduction and use of AI systems.
 

Obligations and addressees

The AI Act essentially affects actors involved in the development, placing on the market, provision, use and exploitation of AI systems in the EU. Your individual obligations may therefore vary depending on your role and the risk category of the AI system.

AI providers

A provider within the meaning of Art. 3 para. 3 AI Act is "a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".

Providers of high-risk AI systems, for example, have the following obligations:

  • provision of information (so-called instructions for use) for deployers
  • establishing a quality management system
  • training the high-risk AI system with qualitatively suitable data sets
  • implementing a conformity assessment procedure
  • registration in an EU database
  • CE marking
  • obligations to provide evidence to an authority yet to be determined

Providers of general-purpose AI models are subject in particular to documentation and information obligations and the obligation to cooperate with the competent national authorities.

Deployers of AI systems

A deployer within the meaning of Art. 3 para. 4 of the AI Act is "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".

Deployers of high-risk AI systems also have special obligations under the AI Act, such as:

  • appointment of an AI officer
  • information obligations towards employees and employee representatives
  • testing and documentation obligations
  • monitoring of the AI system on the basis of the instructions for use
  • reporting obligations to the competent market surveillance authority yet to be determined

Importers and distributors

In addition, the AI Act imposes special obligations on importers and distributors of high-risk AI systems. Importers of AI systems, for example, are subject to inspection and documentation obligations as well as information obligations towards national authorities before placing the AI system on the market. Distributors of high-risk AI systems are obliged to verify that the AI system bears a CE conformity marking and is accompanied by the instructions for use.

Attention: The obligations of a provider could also apply to you!

As a rule, employers will be considered deployers of AI systems if they use them in the HR context. However, under the AI Act, you can also be considered a provider of a high-risk AI system if you affix your name or trademark to a high-risk AI system that has already been placed on the market or put into service (so-called branding), make substantial modifications to it, or change its intended purpose. This may already be the case, for example, if you brand an AI system with your company logo and use it.

Classification of AI systems

AI systems are classified into four risk categories:

Prohibited practices

Certain AI systems such as "social scoring systems" are generally prohibited under the AI Act because they are incompatible with the fundamental rights of the EU. Taking the use of AI in employment relationships as an example, AI incentive systems that aim to manipulate the behavior of employees and AI systems used solely to evaluate employees in the employment relationship are likely to be prohibited. If an AI system is able to recognize certain emotions of employees, such as boredom or being overwhelmed, this could also constitute a prohibited AI practice under the AI Act.

High-risk AI systems

These AI systems are subject to strict requirements under the AI Act. This includes, for example, AI systems that are used by the employer in the application process for the selection and recruitment of applicants (e.g. screening of applications or filtering and evaluation of applicants by AI systems). AI systems that can influence the employer's decisions on working conditions, promotions or the termination of employment relationships can also be classified as high-risk AI systems. This also includes AI systems that can influence the assignment of tasks based on the individual behavior or personal characteristics of employees. AI systems that are used to monitor and evaluate the performance and behavior of employees could also be classified as high-risk AI systems if they do not already constitute prohibited AI practices.

General-purpose AI models

General-purpose AI models are AI models that have been trained on broad data and can be adapted to a wide range of distinct tasks. This includes, for example, "chatbots" that can generate new text, image, audio or video content in response to prompts. Such AI models are subject to technical documentation obligations, including the labeling of the AI model as such.

Other AI systems

Providers and deployers of other AI systems that do not fall into the above categories must fulfill certain transparency obligations. For example, content generated artificially with such systems, such as images, videos and texts, must be labeled accordingly in a machine-readable manner. The development of and compliance with codes of conduct is voluntary.

Since 2 February 2025, Art. 4 AI Act has required providers and deployers of AI systems to ensure a sufficient level of AI literacy among persons who operate or use these systems on their behalf. Corresponding training, which should go hand in hand with the introduction of internal policies on the use of AI in companies, should therefore take place promptly. These policies should also address, among other things, the prohibited practices under Art. 5 AI Act, which has likewise been applicable since 2 February 2025.

Fines

From 2 August 2025, the AI Act provides for fines of up to EUR 35 million or up to 7% of global annual turnover, whichever is higher, depending on the type and severity of the breach of the AI Act.
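The interplay between the fixed cap and the turnover-based cap can be sketched in a few lines (the function name is ours; the figures reflect the top fine tier described above, where the higher of the two amounts applies):

```python
def max_fine_eur(global_annual_turnover_eur: int) -> float:
    """Illustrative upper bound for the top fine tier of the AI Act:
    up to EUR 35 million or up to 7% of total worldwide annual
    turnover, whichever is higher."""
    return max(35_000_000, global_annual_turnover_eur * 7 / 100)

# A company with EUR 1 billion turnover: 7% is EUR 70 million,
# which exceeds the fixed EUR 35 million cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies, the fixed EUR 35 million amount is the relevant ceiling, since 7% of their turnover falls below it.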

Timetable for applicability of the AI Act

The AI Act will apply in stages as follows:

  • 1 August 2024: The AI Act came into force
  • 2 February 2025: Applicability of the general provisions of the AI Act; prohibition of prohibited AI practices
  • 2 August 2025: Imposition of fines possible; applicability of the provisions for general-purpose AI models
  • 2 August 2026: Applicability of all remaining provisions of the AI Act
  • 2 August 2027: Obligations for providers of general-purpose AI models placed on the market before 2 August 2025; application of the classification rules for high-risk AI systems
  • 31 December 2030: AI systems already placed on the market before the AI Act came into force (especially high-risk AI systems) must also comply with the AI Act

Recommendations for action

With regard to the parts of the AI Act that are already applicable or will become applicable in the future, we recommend implementing the following measures in a timely manner:

  • Starting the AI Act implementation project (in particular, recording the AI systems in use, classifying them into the respective risk categories, and identifying, reviewing and implementing the relevant obligations, including creating a catalogue of obligations)
  • Discontinuation of prohibited AI practices according to Art. 5 AI Act - already mandatory
  • Creation and adaptation of internal guidelines and policies for employees (e.g. a code of conduct regulating what employees must observe when using AI in order to avoid legal violations)
  • Training employees in the use of AI systems to implement the obligation to impart AI skills according to Art. 4 of the AI Act - already mandatory
  • Determine whether an AI Officer should be appointed
  • Documentation of compliance with the provisions of the AI Act
  • Drafting, reviewing and negotiating contracts and terms of use in connection with AI services
  • Consideration of further legal requirements, in particular:
    • Innovations in works constitution law relating to AI (e.g. the works council may consult an expert on new technologies, including AI, according to sec. 80 para. 3 German Works Constitution Act (Betriebsverfassungsgesetz — "BetrVG"); the employer must inform the works council of planned changes to working procedures and workflows according to sec. 90 para. 1 no. 3 BetrVG, which includes the use of AI)
    • Information and co-determination rights of the works council (e.g. when concluding and negotiating works agreements on the use of AI systems or a framework works agreement as well as selection guidelines using AI)
    • Data protection law (e.g. data protection analysis when integrating AI systems, adaptation of data protection information according to Art. 13, 14 General Data Protection Regulation, clarification of data protection responsibilities and conclusion of corresponding agreements (e.g. data processing agreement), review and documentation of the legal basis of the data processing, adaptation of existing deletion concepts)
    • German Act on the Protection of Trade Secrets (Geschäftsgeheimnisgesetz) (e.g. creation of a protection concept and training measures for employees on the use and transfer of trade secrets in AI systems)
    • Copyright (e.g. ensuring that training data, inputs into AI systems or content generated by AI systems do not infringe the rights of third parties; checking whether and how content generated by AI is protected by copyright and can be used commercially; clarifying liability issues)
    • Warranty and liability according to the provisions of the German Civil Code (Bürgerliches Gesetzbuch) on contracts for digital products in force since 2022 and on the basis of the new EU Directive on liability for defective products (Produkthaftungsrichtlinie), which came into force on 9 December 2024 and now expressly applies to software and thus also to AI systems. 

Key contacts

Thies Deike, Counsel, Germany
Dr Simone Ziegler, Senior Associate, Germany
Dr Marius Boewe, Partner, Düsseldorf
Moritz Kunz, Partner, Germany
Dejan Einfeldt, Associate, Germany
Maximilian Kücking, Senior Associate, Germany
Julia Ickstadt, Associate, Germany
