
As foreshadowed in our briefing on the latest incarnation of the UK's data protection reform, November maintained the steady momentum of the DUA Bill.

The Bill received its Second Reading in the House of Lords on 19 November. The Second Reading is an opportunity for a general debate on all aspects of the Bill. Introducing the Bill, Baroness Jones noted that the changes to it since the Data Protection and Digital Information Bill "have been broadly welcomed".

However, chief among the topics subsequently raised by peers was the perceived "lack of AI-related protections" regarding: copyright issues around data scraping; the relaxation of the prohibitions on automated decision-making (ADM); and, more generally, the regulation of AI. These topics are currently under close scrutiny from data protection authorities and governments globally, with a particular spotlight in the UK after the much-hyped "AI Bill" did not materialise as part of the King's Speech in July 2024 (refer to our piece here for further information).

Baroness Kidron, among other peers, saw the DUA Bill as a missed opportunity to make the UK "AI ready", stating that it "fails to tackle present day or anticipated uses of data by AI", particularly given that any so-called "AI Bill" is expected to be delayed "until the Government understand[s] the requirements of the new American Administration", a point which she felt was itself a cause for concern. A number of peers also reiterated the need, when considering the Bill, to keep one eye on the significant repercussions if the UK were to lose its EU adequacy status.

The Bill is now set to return for the Committee Stage on 3 December, where the House will examine the text on a line-by-line basis. Whilst the Bill is in its early stages, its predecessor (on which much of the Bill was based) proceeded almost to enactment with few outstanding points, before falling at the final hurdle when the General Election was called. Coupled with the Government's majority in the House of Commons and the fact that the Bill was introduced in the House of Lords first, this suggests a smooth trajectory at a relatively rapid pace.

For further information on the topics discussed at the Second Reading, refer to our blog here, and click here for further detail on the DUA Bill as a whole.

The European Data Protection Board's (the "EDPB's") response to Meta's reformulated 'Pay or Consent' model hints at a possible compromise between regulators and the tech giant around the use of personal data for targeted advertising and the practical implications of 'informed' and 'freely given' consent in this context.

In November, in response to the EDPB's Opinion on so-called 'Pay or Consent' models published in April 2024, Meta (the parent company of Facebook and Instagram) proposed to offer consumers three options when using its services. Each of these options will harness personal data to differing degrees: (i) a reduced price (40% less than currently offered) for an entirely advert-free service; (ii) a free service which uses data to deliver highly personalised adverts; and, crucially, (iii) a new free option which uses "a minimal set of data points" to show "less personalised adverts" (including unskippable advertisements).

Meta's previous 'Pay or Consent' model gave users a binary choice between paying for an advert-free service or allowing the company to use their personal data for targeted advertising. This was criticised by the EDPB in its Opinion, which noted that "the offering of (only) a paid alternative to the service which includes processing for behavioural advertising purposes should not be the default way forward." Particularly relevant to recent developments was the suggestion that providing a "further alternative free of charge (Free Alternative Without Behavioural Advertising)" would make it "easier for controllers to demonstrate that consent is freely given."

Initial comments from the EDPB suggest that Meta's new proposals may, to some extent at least, ease the regulator's concerns over how the company obtains consent to use its users' personal data. In a "preliminary" statement published on 12 November 2024, the EDPB's Chair, Anu Talus, "welcome[d]" Meta's introduction of a "new choice for free with less detailed profiling for advertising", saying that "less invasive ads are great news." The EDPB did, however, sound a note of caution, clarifying that Meta's solution "still needs to be assessed" and that a broader set of guidelines on 'Pay or Consent' models will be published in due course. Privacy advocacy groups have been more critical. Max Schrems of None of Your Business (NOYB) was sceptical as to whether the EDPB would ultimately accept Meta's updated plans and characterised them as "annoying people into consent with huge unskippable ads".

Whilst the EDPB's initial reaction to Meta's latest plans appears to be a relatively positive one, questions remain over whether those plans will provide an ultimate solution to the ongoing dispute around the use of personal data by big tech companies in behavioural advertising.

For further information regarding the EDPB's Opinion on 'Pay or Consent' models click here; Meta's reaction to the Opinion can be found here. For background on the lawful bases on which Meta has relied for targeted advertising, please refer to our data wrap piece "Three legal bases in one year: CJEU ruling on targeted advertising".

To date, case law has been inconsistent as to when a data subject is entitled to claim compensation for non-material damage arising from a breach of the EU GDPR. On 18 November 2024, the German Federal Court of Justice (Bundesgerichtshof) ("BGH") confirmed that:

  • even a "mere and short-term loss of control" of personal data may give rise to compensable non-material damage under Article 82(1) EU GDPR (see a machine translation of the BGH's press release here); and
  • a data subject is not required to provide evidence of a "specific misuse" of the data to their detriment resulting from such loss of control, nor of any further noticeable negative consequences.

The decision was made in relation to Facebook's significant personal data breach, in which the personal data of 533 million Facebook users from 106 countries was made publicly available on the internet in April 2021. Unknown third parties had scraped Facebook's database at scale by using the platform's contact import tool to search for randomised sequences of numbers. The data subject (the "Plaintiff") was one of thousands of individuals who made claims for compensation against Meta. So far, some of these claims have been dismissed, but approximately 4,300 cases relating to the incident remain ongoing.

In this case, the Cologne Higher Regional Court had ruled in favour of Meta on the basis that the Plaintiff did not clearly evidence any actual fear, anger or discomfort experienced because of the loss of control of his personal data. However, the BGH's subsequent decision confirmed the CJEU's decision in Agentsia po vpisvaniyata (Case C‑200/23), published on 4 October 2024, which stated that "a loss of control, for a limited period, by the data subject over his or her personal data, on account of those data being made available to the public… may suffice to cause 'non-material damage', provided that the data subject demonstrates that he or she has actually suffered such damage, however minimal, without that concept of 'non-material damage' requiring that the existence of additional tangible adverse consequences be demonstrated".

In light of the underlying circumstances, the BGH considered that €100 was a reasonable level of compensation for a data subject (lower than the €250 originally awarded by the regional court at first instance). The Plaintiff was also successful in his applications for: (i) a declaration that Facebook is liable for future material and non-material damages; (ii) pre-litigation legal costs; and (iii) an injunction against use of the Plaintiff's telephone number and against further disclosures by Facebook, insofar as not covered by his consent. The BGH referred the case back to the Cologne Higher Regional Court to be determined. The outcome remains to be seen, given that the German lower and higher courts have reached varying decisions in response to the claims arising out of the 2021 Facebook data breach.

The BGH's decision swings the balance in favour of data subjects and is a reminder for controllers and processors of the importance of regularly reviewing data protection compliance, including implementing appropriate technical and organisational measures to ensure a level of security appropriate to the risk posed (Article 32(1), EU GDPR). It also alerts organisations to the potential for data class actions, especially in light of the Competition Appeal Tribunal's decision earlier this year which permitted 45 million claimants to proceed with their £2bn class action against Meta. Although the BGH assessed the Plaintiff's compensation at €100 in this case, such an amount could aggregate into a far more significant sum in a class action scenario, as the illustrative arithmetic below shows. It will also be interesting to see how the UK courts react to this decision, noting that the BGH and CJEU decisions do not align with Lloyd v Google LLC [2021] UKSC 50, in which the UK Supreme Court held that damages would not be available to data subjects without proof that they had suffered some material damage or distress due to a data breach. For further information on Lloyd v Google please refer to our blog post here.
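By way of purely illustrative arithmetic (assuming, hypothetically, that each member of a 45-million-claimant class recovered the €100 assessed by the BGH, and noting that the UK claim against Meta is pleaded on its own basis): 45,000,000 × €100 = €4.5 billion in aggregate exposure.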

In early November, the European Data Protection Board ("EDPB") published a report on the first year of the EU-US Data Privacy Framework ("DPF"). This follows the European Commission adopting an adequacy decision in respect of the DPF in July 2023, which mandated regular reviews of the decision and the EDPB's participation in those review meetings. For further information on the DPF refer to our piece here.

In line with the European Commission's own review of the DPF, the EDPB on the whole welcomed "the efforts made by the US authorities and the Commission to implement the DPF", particularly the redress mechanism for EU individuals and the comprehensive complaints-handling guidance published on both sides of the Atlantic. However, the EDPB also set out some significant recommendations for the next review in three years' time. These include the following:

  • The low number of eligible complaints made in the first year makes it difficult to assess the effectiveness of the redress system. To address this, the EDPB called for the Department of Commerce ("DoC") and Federal Trade Commission ("FTC") to conduct proactive checks on certified companies' compliance with the DPF principles.
  • Whilst the EDPB supported the development and use of automated compliance tools by the DoC, it highlighted that these cannot replace individual investigations and assessments.
  • The DoC should publish practical guidance on the "accountability for onward transfer" principle, clarifying the requirements for DPF-certified companies when transferring personal data to other third countries. Guidance and practical examples on the interpretation of "HR Data" under the DPF were also recommended, to address differing interpretations between EU and US authorities.
  • The adequacy decision was based on the Commission's favourable assessment of Executive Order 14086 ("EO 14086"), which introduced additional privacy safeguards in the US (including by adding principles of "necessity and proportionality" into the legal framework around US intelligence practices). However, the EDPB called for further detail of how these principles are interpreted and applied by US intelligence agencies in practice and expects the Privacy and Civil Liberties Oversight Board ("PCLOB") to provide useful insights on this in its upcoming review.
  • Section 702 of the Foreign Intelligence Surveillance Act ("FISA") deals with electronic surveillance of non-US persons outside the US for foreign intelligence purposes. The EDPB raised concerns about the re-authorisation of Section 702 in April 2024 because this extension did not codify some of the safeguards set out in EO 14086 (contrary to recommendations from the PCLOB). Given the resulting uncertainty around the scope of permitted surveillance, the EDPB urged the Commission and PCLOB to continue to monitor these developments.
  • The EDPB also raised concerns around the use by US intelligence agencies of personal data obtained from data brokers and other commercial entities. These activities are not regulated under EO 14086 and could therefore bypass and undermine the protections of that executive order.

Whilst not expressly covered in the EDPB's report, the re-election of President Trump adds a further layer of uncertainty to the framework, particularly against the backdrop of a pending 'Schrems III' challenge that has been waiting in the wings since the DPF's inception. The new president could revoke executive orders (including EO 14086) and will also influence who sits as the FTC's Chair (a position currently held by Lina Khan, whose robust approach to enforcing privacy standards contributed to the favourable adequacy decision). It also remains to be seen whether the new administration will alter other legislation and/or increase government powers of surveillance, either of which could make a challenge to the DPF more likely to succeed.

On 15 November 2024, the Information Commissioner's Office ("ICO"), The Pensions Regulator ("TPR") and the Financial Conduct Authority ("FCA") issued a joint statement for pension scheme providers and retail investment firms on how to deliver regulatory communications to consumers in line with data protection requirements.

The joint statement addresses the need for pension scheme providers and retail investment firms to communicate effectively and in a timely manner with consumers to help them make informed financial decisions (as mandated by TPR's Code of Practice and Guidance and the FCA's Consumer Duty). The statement intends to clarify the interplay between these TPR and FCA requirements and the data protection requirements under the UK GDPR, the Data Protection Act 2018 and the Privacy and Electronic Communications Regulations 2003 (PECR) in respect of direct marketing.

Whilst UK data protection legislation does not prevent firms from sending regulatory communications, the joint statement makes it clear that firms can either: (i) ensure messages do not constitute direct marketing; or (ii) if they are categorised as such, follow the additional steps required to comply with direct marketing requirements.

For a communication not to be classified as direct marketing, firms should use a neutral tone and avoid active promotional language or encouragement when conveying factual information. They should also consider the context and content of the message. For example, "service messages" that provide essential information for customer service or administrative purposes, and do not contain promotions or advertising, would not be regarded as direct marketing. The ICO has provided a range of examples of messages that are not categorised as direct marketing, such as warnings about inadequate pension contributions or reminders about unused ISA allowances. Firms should refer to the ICO's guidance on direct marketing and regulatory communications for further details.

If a communication is considered direct marketing, firms must comply with the relevant data protection requirements in the usual way. Key requirements highlighted by the ICO include: (i) complying with the data protection principles (e.g. fairness, lawfulness and transparency) when processing personal data about consumers; (ii) offering consumers the right to object to and "opt out" of direct marketing; and (iii) avoiding sending unsolicited electronic mail marketing (e.g. emails, text messages, direct messages on social media) to individuals unless specific consent has been given or the "soft opt-in" criteria are met (i.e. the individual is an existing customer and the firm gives them a simple way to "opt out" both when first collecting their details and in each subsequent message).

No Data Wrap would be complete without an AI roundup and November is no exception as we shine the spotlight on two EU-related developments this month.

Firstly, on 14 November the EU AI Office published the first draft of the General-Purpose AI (GPAI) Code of Practice envisaged under the EU AI Act (the "Draft Code"). This is the first of four planned iterations, with the Code of Practice to be finalised by 1 May 2025. The Draft Code sets out best practices for complying with the EU AI Act's requirements for "providers" of general-purpose AI models (which focus on transparency and copyright). It also addresses the subset of providers of GPAI models with systemic risk, to whom more stringent requirements apply (commensurate with those for so-called "high-risk" AI systems). The Draft Code assumes there will only be a small number of GPAI models falling into this latter category and acknowledges that significant changes to the Draft Code will be required if this proves not to be the case.

The Draft Code was prepared by independent experts appointed as Chairs and Vice-Chairs of four thematic working groups. The Chairs, together with over a thousand stakeholders, EU member state representatives, and European and international observers, are now expected to discuss the Draft Code further, and participants have until 28 November 2024 to submit written feedback on the first draft.

In parallel, on 18 November, Directive (EU) 2024/2853 on liability for defective products (the "Product Liability Directive") was published in the Official Journal of the EU. The Product Liability Directive updates the rules on strict liability for defective products and brings software (including artificial intelligence) within its scope, providing clarity in an area previously applied inconsistently across EU member states. Member states are required to transpose the directive into national legislation by 9 December 2026.

The Product Liability Directive forms part of the EU's package to regulate AI and AI liability, which also comprises: (i) the EU AI Act; and (ii) the proposed AI Liability Directive (which deals with claims for harm caused by AI systems, or the use of AI, adapting the existing fault-based non-contractual civil liability rules to cover artificial intelligence so that claimants enjoy the same protections as those harmed by other technologies in the EU). Given that both the Product Liability Directive and the AI Liability Directive will fall within the scope of the Representative Actions Directive, this could also pave the way for consumers to protect their collective interests in the EU via representative actions brought on behalf of a class of claimants for harm caused by an AI system. This aligns with the broader EU strategy of enhancing consumer rights and access to justice.

Key contacts

Miriam Everett, Partner, Global Head of Data Protection and Privacy, London
Claire Wiseman, Professional Support Lawyer, London
Sara Lee, Associate, London
Alice Bourne, Associate, London
Mackenzie Zhang, Trainee Solicitor, London