
As a sneaky July entry, the European Commission has warned Meta that its "Pay or Consent" model breaches the Digital Markets Act ("DMA"). On 1 July, the Commission announced its preliminary finding that Meta's offer to allow users of services such as Instagram to choose between paying for a non-personalised option and consenting to targeted advertisements does not comply with the DMA.

This builds on the discussion of the legality of "Pay or Consent" mechanisms featured in our April/May Data Wrap. In addition, a recent decision of the Regional Court of Regensburg (Germany) found that consent obtained through a "Pay or Consent" model did constitute valid consent under the EU GDPR (the "Regensburg decision").

A "Pay or Consent" model is where an individual is offered a choice between consenting to the processing of their personal data; paying a fee in order that such processing does not occur; or otherwise not using the services offered and so not having their personal data processed.

In the Regensburg decision, the court dismissed the claimant's argument that consent obtained under the "Pay or Consent" model was invalid and that data subjects were entitled to have their data deleted under Article 17 GDPR, stating that 'the plaintiff expressly agreed that the defendant may continue to use information from accounts for advertising purposes'.

The Regensburg decision is a significant development, providing support for the position that a "Pay or Consent" model can still enable the collection of valid consent, and therefore provide an appropriate legal basis for the purposes of the GDPR. The decision reinforces the CJEU's Bundeskartellamt judgment, which held that a user's freedom of choice is preserved when a reasonably priced equivalent alternative is available. However, it is important to note that the decision of the Regensburg court is not binding on the courts of other EU Member States.

Only two days after the Regensburg decision, the European Data Protection Board ("EDPB") reached a different conclusion in its own opinion on "Pay or Consent" models, finding that "in most cases, it will not be possible for large online platforms to comply with the requirements for valid consent under the GDPR if they confront users only with a binary choice between consenting to processing of personal data for behavioural advertising purposes and paying a fee". For further detail on the EDPB opinion, please refer to our longer article "Consent or Pay: Boom or Bust?".

It is therefore difficult to read the Regensburg decision as aligning with the EDPB opinion. The uncertainty and conflicting conclusions in this area (alongside commentary from other authorities under alternative regulatory regimes, such as the Commission's recent findings in respect of the DMA) underscore the need for entities processing or storing users' data to ensure they are obtaining valid consent from their users, particularly for advertising purposes.

On 3 June 2024, the European Data Protection Supervisor ("EDPS") published practical guidelines for EU institutions, bodies, offices and agencies ("EUIs") regarding responsible use of generative AI systems and the intersection between AI and data protection legislation ("the Guidelines"). Whilst the Guidelines are primarily aimed at EUIs, the recommendations provide useful guidance for those operating in the private sector as well.

The EDPS also stated that the Guidelines have been issued as part of its role as data protection supervisory authority for the EUIs, and not in its future role as AI supervisory authority under the upcoming EU AI Act. It will be interesting to see how the EDPS juggles these two roles once the AI Act comes into force over the summer.

The Guidelines consist of various short chapters covering classic data protection themes such as: the role of the data protection officer; the requirement to carry out data protection impact assessments; principles such as accuracy, data minimisation and transparency; and specific provisions addressing automated decision-making - all from the perspective of an EUI using a generative AI system. The document also includes more AI-specific sections covering areas such as: what generative AI is; whether EUIs can use generative AI systems; and how to ensure fair processing and avoid bias when using generative AI systems.

From a practical perspective, interesting themes covered in the Guidelines include the EDPS' recommendations around the use of data sets, data quality and data management under the chapters covering GDPR principles. In particular, the EDPS emphasised the roles of EUIs and controllers in ensuring that all data sets, including those sourced or obtained from third parties, are carefully verified.

The EDPS guidance also seems to expand the remit of the automated decision-making regime in respect of generative AI systems, particularly the requirement to provide meaningful information about the logic of the decisions, as well as their meaning and the possible consequences for the individual. The Guidelines state that "it is important for the EUI to maintain updated information, not only about the functioning of the algorithms used, but also about the processing datasets. This obligation should generally be extended to cases where, although the decision procedure is not entirely automated, it includes preparatory acts based on automated processing." The final sentence of this statement has the potential to broaden the circumstances in which the requirements apply when interpreting Article 22 of the GDPR (automated decision-making).

Guidance from the enforcement authorities, including in emerging areas such as generative AI, is always welcome. Given the evolving nature of the technology, we can expect the EDPS to update these Guidelines in due course, as well as publish further guidelines in its subsequent role under the EU AI Act.

June saw Meta confirm that it would pause plans to start training its AI systems using data from its users in the EU and the UK. This follows a request from the Irish Data Protection Commission ("DPC"), acting on behalf of several other EU data protection authorities, to delay training its large language models "using public content shared by adults on Facebook and Instagram across the EU/EEA". The DPC confirmed that the decision "followed intensive engagement between the DPC and Meta…[and] the DPC…will continue to engage with Meta on this issue." The ICO also requested that Meta pause its plans until it was able to address the concerns raised. Stephen Almond, the ICO's Executive Director for Regulatory Risk, confirmed in a statement that the ICO will "continue to monitor major developers of generative AI, including Meta, to review the safeguards they have in place and ensure the information rights of UK users are protected".

Meta has been using user-generated content to train its AI systems in other markets such as the US. Closer to home, however, this has been met with potential compliance issues in respect of the EU and UK GDPR. In May, Meta began notifying EU and UK users of an upcoming change to its privacy notice to permit training of its AI systems using user-generated material to "reflect the diverse languages, geography and cultural references of the people in Europe". This prompted the privacy activist group NOYB ("none of your business") to file 11 complaints with data protection regulators across Europe, arguing that Meta was in breach of various aspects of the GDPR. In particular, NOYB argued that an opt-in mechanism was needed for consent to be valid, instead of the potentially misleading opt-out mechanism that Meta had offered (where personal data processing takes place, users should be asked for their permission first, rather than being required to take action to object). As an alternative to relying on consent as a legal basis for processing, Meta had also claimed that its "legitimate interests" would override the fundamental rights of users – however, NOYB queried this logic as well, given that the same legal basis had previously been rejected by the Court of Justice as a justification for using personal data for targeted advertising (C-252/21 – Bundeskartellamt).

Given that Meta is not alone in wanting to use European users' public (and non-public) content to train its AI systems, the issue is simply on pause for now, and we are likely to see Meta suggest an alternative user-permission mechanism to permit processing in this way in due course – no doubt following further close engagement with both the DPC and the ICO.

The Information Commissioner's Office ("ICO") has confirmed that it will not take enforcement action against Snap for the launch of its 'My AI' feature, after publishing its final decision on the matter on 21 May 2024. This concludes a year-long investigation that began with the ICO issuing an Information Notice to Snap on 26 May 2023.

The 'My AI' feature serves as a chatbot that allows users to raise queries via a conversational interface. While it is powered by a form of generative pre-trained transformer technology developed by OpenAI, the specific application programming interface ("API") used for the My AI user interface was developed by Snap.

The ICO's final decision represents a U-turn from its initial finding in the Preliminary Enforcement Notice ("PEN") issued on 6 October 2023. In the PEN, the ICO identified two separate issues concerning breach of requirements around data protection impact assessments ("DPIA") and prior consultation (under Articles 35 and 36 of the UK GDPR respectively). As a result, the ICO directed Snap to cease processing the personal data of Snapchat users in the UK for any purpose related to My AI.

The ICO's subsequent U-turn arose after it concluded that the breaches set out in the PEN no longer applied:

  • The first four DPIAs submitted by Snap had failed to adequately address My AI's privacy risks. Snap consequently rectified the prior infringement by investing "considerable time and effort" into producing a compliant fifth DPIA and taking steps to "directly address the concerns" mentioned in the PEN.
  • Snap had also failed to consult the ICO in respect of the first four DPIAs, which concluded that My AI gave rise to a high risk to the rights and freedoms of 13–17 year-old users. The Commissioner later found that this was due to a recording error (where "medium" risk was mislabelled as "high" risk and therefore did not accurately reflect Snap's true assessment of the risk posed). After considering Snap's mitigatory measures, the ICO no longer found grounds to issue an Enforcement Notice.

This light-touch approach by the ICO looks to reward Snap's proactive approach to compliance, and was also aided by the nascent stage of generative AI in 2023. However, Stephen Almond, the ICO's Executive Director for Regulatory Risk, commented that the decision should act as a warning shot for the industry and that organisations using or developing generative AI must consider data protection from the very beginning. He also stated that the ICO will continue to monitor organisations' risk assessments and "use the full range of our enforcement powers – including fines – to protect the public from harm".

The ICO published a suite of guidance on the interplay between AI and data protection legislation in 2023, as well as four consultation rounds on generative AI, which are available on its website.

On 7 June 2024, the UK High Court confirmed that a data subject's right under Article 15(1)(c) UK GDPR to obtain information about recipients of their personal data extends to the individual identities of those recipients, as opposed to just the categories of recipient. The judgment follows the CJEU's ruling of January 2023 in RW v Österreichische Post (C-154/21), where the court held that the Austrian Postal Service was obliged to provide the data subject "on request, with the actual identity of those recipients", and that "it is only where it is not (yet) possible to identify those recipients that the controller may indicate only the categories of recipient in question" or where the "request is manifestly unfounded or excessive".

The case, Harrison v Cameron & Another [2024] EWHC 1377 (KB), involved a dispute between the claimant and the defendant, whose company the claimant had engaged for landscaping services. The dispute led to a number of threatening phone calls from the claimant, which the defendant recorded. The defendant shared these recordings (which included personal data) with certain individuals, including his family, friends and employees of his company. On becoming aware that the recordings had been shared, the claimant requested that the identities of the recipients be disclosed, as he wanted to know whether the recordings had been shared with certain professional peers and competitors, which he said had resulted in him losing out on a property investment.

The court followed the CJEU's previous decision in holding that this right includes a right to know the identities of data recipients, noting in particular that the choice as to how Article 15(1)(c) is complied with – whether by disclosing the identities of data recipients or just the categories of recipient – lies with the data subject (not the data controller).

From a practical perspective, this case serves as a reminder to companies about the importance of their internal data governance processes and reinforces the fundamental right of data subjects to be informed about how their data is processed. Nevertheless, it remains to be seen whether the disclosure of data recipients' identities will become standard practice. At this stage, this level of disclosure does not appear to be automatically required in respect of all DSARs, but only where it is specifically requested (e.g. the claimant in Harrison requested a "comprehensive list" of the data recipients). Also, when faced with such a request, the "rights of others" exemption under the DPA 2018 gives data controllers a "wider margin of discretion" to decide whether it is reasonable to disclose this information.

On 10 June, the ICO and the Office of the Privacy Commissioner of Canada ("OPC") announced their joint investigation into genetic testing company 23andMe in relation to its data breach. This marks the first cross-border investigation into the high-profile breach.

The breach was first disclosed by 23andMe in October 2023, after an advertisement for the sale of individual 23andMe profiles, containing alleged samples of 23andMe user data, was posted on the hacking forum BreachForums. 23andMe subsequently released a statement that approximately 14,000 user accounts (0.1% of its 14 million total accounts) were directly affected by the breach, with threat actors gaining access using usernames and passwords for the 23andMe website that were identical to those used on other websites which had previously been compromised (a technique known as "credential stuffing"). Threat actors were also able to access further information shared with the compromised accounts through 23andMe's 'DNA Relatives' and 'Family Tree' opt-in features, which allow users to share information - including display names, gender, predicted relationships and percentage of DNA shared - with certain genetic 'matches' on the 23andMe platform. It has been further reported that the effect of these features brings the number of individuals affected by the breach to 6.9 million (approximately half of the company's total user base).
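Because credential stuffing exploits password reuse rather than any vulnerability in the target site itself, one common mitigation is to screen users' passwords against known breach corpora at registration or login. As a minimal illustrative sketch (not drawn from the investigation itself), the following Python snippet checks a password against the publicly available Pwned Passwords range API, which uses k-anonymity so that only the first five characters of the password's SHA-1 hash ever leave the client; the function name and User-Agent string are our own illustrative choices:

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    via the Pwned Passwords k-anonymity range API (only the first five
    hex characters of the SHA-1 hash are sent over the network)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "credential-stuffing-demo"},  # illustrative UA
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "<35-char SHA-1 suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# A heavily reused password should return a large, non-zero count.
print(breach_count("password123"))

Screening of this kind only catches passwords that are already known to be compromised; rate limiting and multi-factor authentication remain the more robust defences against credential stuffing.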

In particular, the cross-border investigation will focus on:

  • the scope of data that was exposed by the breach and potential harms to the affected individuals;
  • whether 23andMe had adequate safeguards to protect the highly sensitive information within its control; and
  • whether 23andMe provided adequate notification about the breach to the ICO, OPC and affected data subjects as required under applicable data protection laws.

The US bank Wells Fargo reportedly terminated the employment of more than a dozen of its employees recently, after an internal investigation revealed that they were "faking work" by simulating keyboard activity "to create an impression of active work". The bank disclosed the dismissals in broker filings with the US Financial Industry Regulatory Authority. The regulatory filings did not disclose whether the employees were working from home or in the office.

After peaking during the Covid-19 pandemic, working from home and hybrid working have remained popular, but this has also led to employer concerns about employee productivity, particularly in light of the reported use of deceptive tools such as "mouse jigglers", which make a worker appear active on their work computer even when they are not: the cursor moves automatically at set intervals, keeping the employee's online status set to "available" and creating the impression of active work.

This has also led employers to consider monitoring employee productivity more closely when staff are working from home, for example through more invasive "bossware", including activity-tracking software that oversees mouse usage, productivity scoring, and webcam supervision that can analyse employees' attention by tracking their eye movements or body language.

When considering monitoring employees, whether in the office or working from home, privacy and data protection must be considered alongside employment legislation. The ICO published guidance on lawfully monitoring employees in October 2023, providing detailed instructions on the extent to which such monitoring can take place, and how, in order to comply with the UK GDPR.

In response to the increasing threat to cybersecurity, the Singaporean government recently passed the Cybersecurity (Amendment) Bill No.15/2024, granting greater powers to the Cyber Security Agency of Singapore ("CSA") and broadening the scope of the Cybersecurity Act.

Where previously the Cybersecurity Act only regulated self-owned critical information infrastructure ("CII") computer systems in Singapore, the amendment expands its scope to include virtual structures such as CII hosted on cloud platforms (i.e. regulation of third party-owned CII computer systems) and overseas CII owners, as well as certain entities other than CII owners. It also obliges CII owners to report cybersecurity incidents even if the computers or computer systems concerned are not interconnected with, and do not communicate with, the CII. The CSA will also have greater investigatory powers to ensure CII owners meet their obligations under the Act, and those who fail to meet these obligations can face a new civil penalty of up to 10% of annual turnover or SGD 500,000, whichever is greater. Businesses should therefore ensure they understand the implications of the amendments for their operations.

For further information please refer to our full article "Singapore expands the scope of the Cybersecurity Act".

 

Key contacts

Miriam Everett, Partner, Global Head of Data Protection and Privacy, London
Claire Wiseman, Professional Support Lawyer, London
Duc Tran, Of Counsel, London
Angela Chow, Senior Associate, London
Alasdair McMaster, Senior Associate, London
Sara Lee, Associate, London
Saara Leino, Associate (Finland) (External Secondee), London
Ankit Kapoor, Graduate Solicitor (India), London
Tommaso Bacchelli, Trainee Solicitor, London