
The start of 2025 saw the Labour Government release its long-awaited AI Opportunities Action Plan, which is expected to "turbocharge" its Plan for Change. Whilst the AI strategy set out innovative new plans alongside those already outlined around data use under the Data (Use and Access) Bill, it also raised several questions from a legal perspective.

See our commentary on the action plan here; our analysis of the Data (Use and Access) Bill is available here and here.

On 18 December 2024, the European Data Protection Board ("EDPB") issued its hotly anticipated opinion on the use of personal data in the development and deployment phases of AI models (the "Opinion") (see press release here). The Opinion follows a request from the Irish Data Protection Commission ("DPC") in September 2024, made with a view to achieving greater regulatory harmonisation across the EU on the use of AI.

The Opinion sits against the backdrop of an ever-increasing uptake of AI in our day-to-day lives, as well as close scrutiny from national supervisory authorities ("SAs"), particularly around the use of personal data to train AI models (especially large language models). 2024 saw multiple complaints by privacy rights groups on the topic, as well as investigations by SAs into the likes of Google, Meta and X (refer to our Data Wrap entries here, here and here).

The Opinion provides some helpful guidance on the use of personal data in the development and deployment of AI models, although much of it is unsurprising and, in a number of key areas, the guidance is fact-specific, requiring SAs to consider the issue in question on a case-by-case basis.

The Opinion also seems to set out some relatively high thresholds for controllers to satisfy in the context of the development and deployment of AI models, including around fulfilling the "necessity" limb of the three-step test for relying on legitimate interests as a lawful basis for processing personal data.

In addition, the EDPB uses the Opinion to flag the need for controllers that deploy AI models to conduct an appropriate assessment of whether an AI model was developed lawfully (including whether the processing in the development phase was the subject of a finding of non-compliance with the EU GDPR, particularly one determined by an SA or a court). This point will be particularly relevant to the due diligence process where a controller that wants to deploy an AI model (such as a large language model) procures it from a third party.

Following a referral from the Garante (the Italian SA), it will be interesting to see how the DPC interprets the Opinion in the context of data processing by a large language model as part of its investigation into OpenAI.

For further information refer to our more detailed analysis of the Opinion here.

In December last year, Austria's Federal Cartel Court granted 'qualified entity' status to noyb under the Representative Actions Directive (EU) 2020/1828 (the "Directive"), enabling it to bring collective actions in the European Union.

The Directive requires EU member states to establish national legislation so that "qualified entities" can represent collective consumer interests and lead both domestic and cross-border representative actions relating to breaches of EU consumer law, including the EU GDPR. The Directive provides for two types of representative actions: (i) injunctive measures, intended to cease or prohibit practices considered an infringement under EU law; and (ii) redress measures, such as compensation, repair, replacement, price reduction, contract termination or reimbursement of the price paid.

Noyb (the consumer privacy advocacy organisation) is headed by Max Schrems, whose legal challenges famously invalidated both the EU-US Safe Harbour and the EU-US Privacy Shield frameworks.

Schrems has announced that noyb plans to bring the first actions in 2025 and that "so far, collective redress is not really on the radar of many – but it has the potential to be a game changer". Given the organisation has already brought hundreds of GDPR complaints, it looks like 2025 may well be the year of the data class action.

In Western Australia ("WA"), the Privacy and Responsible Information Sharing Act 2024 ("PRIS Act") and the related Information Commissioner Act 2024 ("IC Act") were passed by the WA Parliament on 28 November 2024, receiving Royal Assent on 6 December 2024. It is anticipated that the privacy provisions of the PRIS Act will commence in 2026.

The PRIS Act establishes privacy obligations that apply to the handling of personal information by WA public entities and, in some cases, their service providers. The Information Privacy Principles ("IPPs") include requirements concerning the collection, use and disclosure of information, information security, restrictions on disclosures outside Australia, and the protection of de-identified information. They also provide for access to and correction of information, information quality, openness and transparency, a right to anonymity when dealing with an 'IPP entity', restrictions on assigning unique identifiers and a framework for the use of automated decision-making when making a 'significant decision' about an individual.

The PRIS Act also provides a framework for the sharing of information between WA public entities and with other authorised external entities – putting in place processes by which information sharing can be requested, assessed and executed. The Act introduces responsible sharing principles ("RSPs"), which require WA public entities to consider and assess the appropriateness of the activities, recipient, information, settings, and output. New offences and penalties are also introduced, including in relation to certain non-compliance and unauthorised disclosures. The PRIS Act also establishes a new Chief Data Officer for WA, as well as setting out the functions and powers of the Information Commissioner and Privacy Deputy Commissioner established under the IC Act.

For further information on the implications of the PRIS Act and the IC Act on the public sector in Western Australia, or to register for our upcoming seminar on the Acts on 20 February 2025, please refer to our blog post here.

On 13 December 2024, the ICO published the response to its five-part consultation series on generative AI, which it launched in January 2024, summarising key themes identified from the more than 200 responses it received. The consultation series sought to address regulatory uncertainty around how specific aspects of the UK General Data Protection Regulation ("UK GDPR") and the Data Protection Act 2018 ("DPA 2018") apply to the development and use of generative AI.

Following its review, the ICO confirmed that it will retain the positions in its initial proposal relating to (i) purpose limitation in the generative AI lifecycle (see Part 2 of the consultation); (ii) accuracy of training data and model outputs (see Part 3 of the consultation); and (iii) allocating controllership across the generative AI supply chain (see Part 5 of the consultation). However, the ICO updated its positions on Parts 1 and 4 of the consultation, as summarised below:

  • Lawful basis for web scraping to train generative AI models (Part 1): The consultation responses shed light on alternative methods of data collection for generative AI other than web scraping (e.g. transparent methods involving direct and licensed collection from data subjects). As a result, the ICO stated that it will continue to engage with developers and generative AI researchers on the development of generative AI models that do not rely on web scraping for training. The ICO also stressed that it is for developers to demonstrate that web scraping is necessary for developing generative AI, and that developers will need to significantly improve their approach to transparency (including testing and reviewing any measures that are implemented); responses to the consultation highlighted a lack of safeguards against "invisible" large-scale processing such as web scraping. Finally, the ICO noted that where developers use licences and terms of use to ensure deployers use their models in a compliant manner, developers will need to be able to show that these agreements contain effective data protection requirements and that those requirements are met.
  • Engineering individual rights into generative AI models (Part 4): The ICO strengthened its position in relation to Part 4, noting that developers and deployers of AI models should not only demonstrate that they have a clear and effective process for enabling data subjects to exercise their rights as part of the AI model itself, but should also "design and build systems that implement the data protection principles effectively and integrate necessary safeguards into the processing". The ICO also clarified that organisations wishing to rely on Article 11 UK GDPR in the context of generative AI (which applies to processing of personal data for purposes which do not, or no longer, require the identification of a data subject) will need to meet a high threshold, demonstrate that such reliance is appropriate and justified, and provide data subjects with the opportunity to supply additional information to enable their identification.

In due course, the ICO will update its existing guidance to reflect the positions detailed in its response to the consultation, including updates to its guidance on artificial intelligence. The regulator also recognised that the upcoming data reform legislation under the Data (Use and Access) Bill may have an impact on the positions set out in the response. The final positions are also expected to align with the ICO's forthcoming joint statement on foundation models with the Competition and Markets Authority (which will cover the interplay between data protection and competition and consumer law).

December saw the Dutch Data Protection Authority ("Dutch DPA") fine Netflix EUR 4.75 million for failing to provide sufficient fair processing information to its customers about the use of their personal data, both in its privacy notice and in its responses to customers' requests for access to their personal data. The investigation started in 2019 and the findings relate to Netflix's privacy practices between 2018 and 2020.

More specifically, the Dutch DPA found that Netflix had failed to inform its customers sufficiently clearly in relation to the following areas, resulting in breaches of Articles 5, 12, 13 and 15 of the EU GDPR:

  • The purposes of and legal bases for processing personal data: Data subjects must be informed that a processing operation is taking place and of the purposes of that processing. Netflix's privacy statement should therefore have shown "the connection between the processing of personal data and the purpose for which the personal data are processed". The Dutch DPA indicated that Netflix did not make clear which data it used for "recommending services offered, analysing target groups, and preventing fraud". It also found that the information Netflix provided was not in clear and plain language and was not set out in a concise, transparent, intelligible and easily accessible form.
  • Recipients of personal data: Netflix did not provide details to its customers of the specific individual recipients of their personal data in its privacy notice.
  • Retention periods of personal data: Although Netflix included a general statement in its privacy notice setting out that the personal data of data subjects are retained as required or permitted by legislation and regulation, no specific retention periods were provided.
  • International transfers of personal data: Netflix did not provide data subjects with details of (i) their rights which arise when their personal data is processed outside the EEA, (ii) the jurisdictions outside of the EEA to which their personal data is transferred and (iii) any adequacy decisions/appropriate safeguards which apply when their personal data is transferred internationally.

Aleid Wolfsen, the Chairman of the Dutch DPA, commented that explanations to data subjects about the handling of their personal data "must be crystal clear". In light of this decision, organisations may wish to reassess their compliance with the transparency obligations under the EU and UK GDPR, and in particular to (i) revisit their privacy notices to ensure that the four areas set out above are addressed sufficiently clearly and (ii) ensure that the same information is also provided sufficiently clearly in responses to data subject access requests.

It should be noted that the Dutch DPA has acknowledged that Netflix has "since updated its privacy statement and improved its information provision". It is also understood that Netflix has objected to the fine.

Among the initiatives flagged in the AI Opportunities Action Plan (see above) was potential reform of the "UK text and data mining regime so that it is at least as competitive as the EU." This is in light of current uncertainty around the IP position for the AI sector and the creative industries which, according to the government, has the potential to "hinder innovation".

This follows a related UK government consultation launched on 17 December 2024 (Copyright and Artificial Intelligence - GOV.UK), which runs until 25 February 2025. The consultation covers possible solutions to achieve the government's key objectives for the AI sector and creative industries in the UK, which are:

  • supporting right holders’ control of their content and ability to be remunerated for its use;
  • supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data; and
  • promoting greater trust and transparency between the AI and creative industry sectors.

In the process, the consultation addresses other emerging issues, including copyright protection for computer-generated works, digital replicas, transparency in relation to AI training materials and the ability to opt out of such use, technological protection of works, the relative bargaining power of, and contractual and licensing relationships between, content producers and online distributors, and the clear labelling of AI outputs as such.

The consultation also cross-refers to the position under the EU's AI Act, in some places as an indication of the direction in which UK regulation may be heading - for example, in respect of transparency on the use of content for AI training, where the government refers to Article 53(1)(d) of the AI Act (which requires providers of general-purpose AI models to publish a sufficiently detailed summary of the content used for training).

For a high-level overview on each of the areas addressed in the consultation please refer to our IP colleagues' blog post here.

Key contacts

Miriam Everett, Partner, Global Head of Data Protection and Privacy, London

Claire Wiseman, Knowledge Lawyer, London

Duc Tran, Of Counsel, London

Angela Tay, Senior Associate, London

Sara Lee, Associate, London