As the EU AI Act came into force on 1 August 2024, what is the interplay with the UK and EU GDPR where organisations process personal data in the context of AI technologies?
Whilst the UK will need to wait a little longer for clarity on its approach to regulating artificial intelligence, the long-awaited EU AI Act was finally published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. The provisions of its tiered, risk-based approach to regulating AI will apply incrementally over the next 6 to 36 months, depending on the risk categorisation of the AI systems in scope. The Act is a landmark piece of legislation aimed at harmonising AI rules across the EU; for further information on the legislative framework, please refer to our previous posts on the topic.
At this early stage, it is unclear whether the AI Act will be as pivotal an international benchmark for shaping AI regulation as the EU GDPR was for the global regulation of data protection. Given their technology-neutral nature, the EU and UK GDPR will continue to apply to the processing of personal data in the context of AI technologies. However, the AI Act also seems to build on some of the principles under the GDPR and, in practice, the two regimes and their respective requirements will co-exist. It is therefore important for "providers" and "deployers" of AI systems to understand the interplay between these two pieces of legislation. In the context of the AI Act, a "provider" is the entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark, and a "deployer" is the entity that uses an AI system under its authority (except for non-professional personal use).
Navigating this interplay between the AI Act and the GDPR requires a thorough understanding of both regimes, of the business, and of the organisation's aim in deploying the AI system that processes personal data. We cover some key interactions between the two regimes at a high level below:
Transparency and Accountability
Both the AI Act and GDPR emphasise the importance of core principles such as transparency, accountability and explainability. In particular, the AI Act includes clear requirements around the provision of documentation (including technical documentation and record-keeping for certain AI systems), while the GDPR requires data controllers (and, in certain circumstances, data processors) to develop and maintain a range of documentation to support accountability requirements when personal data is processed. The transparency requirements under the GDPR range from very high-level, "cover-it-all" principles to extremely detailed guidance on what information to provide to data subjects (usually through a privacy policy or notice), and the AI Act seems to bridge those extremes by paving a middle ground.
The AI Act sets out transparency obligations for both providers and deployers. Among other requirements, specific transparency obligations under the AI Act include:
- Interaction disclosure requirements, meaning that providers are required to ensure that AI systems intended to interact directly with individuals are designed and developed so that those individuals are informed that they are interacting with an AI system (unless this is obvious from the perspective of an individual who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use);
- Labelling requirements for AI-generated content, meaning that providers of AI systems (including general-purpose AI systems) generating synthetic audio, image, video or text content must ensure that the output of the AI system is marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal illustration of such marking follows this list). The same requirement extends to deployers where the content generated or manipulated constitutes a deep fake, although where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations on deployers are more limited;
- Emotion and biometric categorisation system notification requirements, including that deployers of an emotion recognition system or a biometric categorisation system inform individuals of the operation of the system; and
- Public interest requirements, which mean that deployers of an AI system that generates or manipulates text published to inform the public on matters of public interest are required to disclose that the text has been artificially generated or manipulated.
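To make the "machine-readable marking" idea more concrete, here is a minimal sketch in Python, assuming the output is a PNG image and using Pillow's text chunks as the marking mechanism. This is illustrative only: the AI Act does not prescribe any particular technique, and in practice providers may prefer emerging provenance standards such as C2PA content credentials. The field names (`ai_generated`, `generator`) and function names are our own assumptions.

```python
# Minimal sketch: flagging an image as AI-generated via PNG text chunks.
# Illustrative only -- not a prescribed or standardised marking mechanism.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, model_name: str) -> None:
    """Re-save a PNG with metadata flagging it as AI-generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # machine-detectable flag
    metadata.add_text("generator", model_name)  # which system produced it
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)    # dst_path should end in .png

def is_marked_ai_generated(path: str) -> bool:
    """Check for the marker when ingesting content."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"
```

A real deployment would need a marking scheme robust to re-encoding and format conversion, which simple metadata tags are not; that is one reason standardised provenance mechanisms are attracting attention.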
The information required under the AI Act must be provided to the individuals concerned in a clear and distinguishable manner. The information to be provided to data subjects under the GDPR must also be in a concise, transparent, intelligible, and easily accessible form, using clear and plain language.
An important distinction between the regimes appears to be the timing for complying with transparency requirements. Under the AI Act, the information must be provided to individuals at the latest at the time of their first interaction or exposure. The AI Act does not seem to provide for alternative time frames in the way the GDPR does (such as when personal data are obtained, when the data subject is first contacted or their information is disclosed, or within a reasonable period after obtaining the personal data but at the latest within one month, depending on the source of the data). This means that when deploying an AI system, companies must ensure that the system, and the infrastructure in which it is used, are capable of delivering such just-in-time notices; a simple sketch of this pattern follows. The placement and content of the information must also be assessed carefully.
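As a rough illustration of what a just-in-time notice might look like in practice, the sketch below wraps a hypothetical chat service so that the disclosure is surfaced no later than the user's first interaction in a session. The notice wording, the in-memory session store and the `chatbot_reply` stub are all assumptions made for the purpose of the example.

```python
# Minimal sketch of a just-in-time AI disclosure for a chat service.
# The session store and chatbot_reply() are illustrative placeholders.
AI_DISCLOSURE = "Please note: you are interacting with an AI system, not a human agent."

_disclosed_sessions: set[str] = set()

def respond(session_id: str, user_message: str) -> str:
    reply = chatbot_reply(user_message)  # hypothetical call to the AI system
    if session_id not in _disclosed_sessions:
        # Surface the notice at the latest on the first interaction, kept
        # clearly distinguishable from the substantive answer.
        _disclosed_sessions.add(session_id)
        return f"[{AI_DISCLOSURE}]\n\n{reply}"
    return reply

def chatbot_reply(message: str) -> str:
    return "..."  # stand-in for the actual AI system's output
```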
Data quality
The AI Act builds on the GDPR and develops it further in respect of data quality requirements. Whereas the GDPR largely leaves the concept of data quality to its principles (especially data minimisation, accuracy, and the requirement for technical and organisational measures), the AI Act introduces more detailed requirements around the quality of data (although only for so-called "high-risk" AI systems).
Broadly speaking, these measures can be categorised as covering data governance and management practices, requirements for data sets, restrictions on when special category data under the GDPR may be processed, and cybersecurity measures:
- Data governance: Relevant data governance and management practices must be in place, and they must be appropriate for the intended purpose of the high-risk AI system. These practices should cover, for example, identifying and addressing data gaps or shortcomings; data collection processes and the origin of the data; assessment of the availability, quantity and suitability of the data sets for the intended purpose; and the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;
- Bias detection and correction: Compliance measures should specifically include an examination of possible biases that are likely to affect the health and safety of individuals, have a negative impact on fundamental rights, or lead to discrimination prohibited under EU law, especially where data outputs influence inputs for future operations, as well as appropriate measures to detect, prevent and mitigate such possible biases (a simple sketch of one such examination follows this list);
- Relevance, representation and statistical properties: Data sets must be relevant, sufficiently representative, and as free from errors as possible for the intended purpose. Data sets should also have the appropriate statistical properties, considering the specific geographical, contextual, behavioural, or functional settings; and
- Cybersecurity: Measures must be in place to prevent, detect, and respond to attacks like data poisoning and adversarial examples.
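By way of illustration of what a bias examination could involve in practice, the sketch below compares favourable outcome rates across groups in a data set and flags groups falling below a disparity threshold. The record structure, the group attribute and the 0.8 threshold (borrowed from the US "four-fifths" rule of thumb) are assumptions of ours; the AI Act does not prescribe any particular metric or threshold.

```python
# Minimal sketch of one possible bias examination: comparing favourable
# outcome rates across groups. Metric and threshold are illustrative only.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Favourable-outcome rate per group (outcome: 1 = favourable)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["outcome"]
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(records: list[dict], threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose rate falls below `threshold` x the highest rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate < threshold * best for g, rate in rates.items()}

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
print(disparity_flags(data))  # {'A': False, 'B': True} -> group B warrants review
```

Which metric is appropriate, and whether a statistical disparity amounts to discrimination prohibited under EU law, remains a substantive legal and technical question; the point is only that the examination the Act requires can be operationalised and documented.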
Data quality measures also include specific requirements relating to the use of special category data for the purpose of bias detection and correction in relation to high-risk AI systems. To use special category personal data for bias detection, the following conditions must be met: the aim cannot be achieved by using other data; the data used are protected to the highest standard; the data must not be transmitted, transferred or otherwise accessed by other parties; the data are deleted once the bias has been corrected or the retention period has ended (whichever comes first); and the record of processing activities under the GDPR must include the reasons why the processing was strictly necessary and why that objective could not be achieved by processing other data.
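As a sketch of how the "use, then delete and document" conditions might be reflected in code: the special category data are processed only for the bias measurement, the justification is logged to feed the GDPR record of processing activities, and the data are erased as soon as the examination completes. The names and structure are illustrative assumptions, and the real safeguards (encryption, access controls, retention enforcement) would sit in the surrounding infrastructure rather than in application code.

```python
# Minimal sketch of a use-then-delete lifecycle for special category data
# processed for bias detection. Names and structure are illustrative only.
import logging

logger = logging.getLogger("processing_records")

def run_bias_audit(protected_records: list[dict], justification: str) -> dict[str, float]:
    """Measure outcome rates per protected group, then erase the data."""
    try:
        groups: dict[str, list[int]] = {}
        for r in protected_records:
            groups.setdefault(r["group"], []).append(r["outcome"])
        rates = {g: sum(v) / len(v) for g, v in groups.items()}
        # Document why this processing was strictly necessary, for the
        # GDPR record of processing activities.
        logger.info("Bias audit completed; justification: %s", justification)
        return rates
    finally:
        # Erase once the examination is complete (or when the retention
        # period ends, whichever comes first).
        protected_records.clear()
```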
These data quality requirements, though detailed, provide welcome guidance from the legislator on how some of the more abstract data protection principles could be interpreted and what issues to consider when assessing the processing of personal data for AI purposes.
Regulation of biometric data
The original Commission draft of the AI Act in 2021 stated that the concept of biometric data used in the AI Act is in line with, and should be interpreted consistently with, biometric data as defined in the GDPR and related legislation. The GDPR defines biometric data as "personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data". Interestingly, the AI Act's definition is word-for-word identical, except that it omits the wording "which allow or confirm the unique identification of that natural person". The AI Act also introduces six other biometric-related defined terms: biometric identification, biometric verification, biometric categorisation system, and three biometric identification system definitions, depending on whether the system is remote, real-time remote or post-remote. There is therefore a risk that the difference in definitions could lead to divergent interpretations, as what is understood to be biometric data for the purposes of the GDPR appears to be narrower than it is for the purposes of the AI Act. To the extent the supervisory authority is the same for both regimes this is likely to be less problematic, but in Member States where the legal frameworks are enforced by different authorities, there is a risk that this difference could lead to divergence between the regimes.
DPAs' role under the EU AI Act
July also saw the European Data Protection Board (EDPB) announce during its plenary meeting that it had adopted a statement regarding the data protection authorities' (DPAs') role under the EU AI Act framework. In that statement, the EDPB recommended that DPAs be designated as Market Surveillance Authorities (MSAs) for high-risk AI systems used for law enforcement, border management, the administration of justice and democratic processes, given their experience and expertise in considering the impact of AI on fundamental rights; it is expected that this would enable better coordination among regulatory authorities, enhance legal certainty and strengthen supervision and enforcement. An MSA is defined under the AI Act as a national authority carrying out activities and taking measures under Regulation (EU) 2019/1020 (concerning market surveillance and the compliance of products with requirements in applicable EU legislation). The EDPB also recommended that DPAs be designated as MSAs for other high-risk AI systems, particularly in sectors likely to impact individuals' rights and freedoms in relation to the processing of personal data, and that clear procedures be established for cooperation between MSAs and other regulatory authorities, including DPAs, as well as between the EU AI Office and the DPAs / EDPB.
Conclusion
It remains to be seen whether the EU AI Act will deliver the significant step forward in ensuring that AI technologies are developed and used responsibly that the legislation envisages, without impeding innovation and investment in the area. Either way, it will be important to continue to monitor market practice around this new piece of legislation as its impact on commerce evolves, alongside the developing international landscape for regulating artificial intelligence.
In addition, whilst the EU AI Act applies alongside the GDPR with respect to personal data processing, it also builds on some of the principles under the GDPR, meaning that organisations will need to consider carefully the interaction between the two regimes.