
On 7 August 2024, the ICO announced its provisional decision to fine Advanced Computer Software Group Ltd ("Advanced") £6.09m for its failure to implement adequate data security measures, a failure which came to light following a ransomware attack on Advanced in August 2022. As a critical IT service provider to the NHS and other healthcare organisations in the UK, Advanced processes and stores personal data, including special category personal data, of individuals on behalf of its customers as their processor. The fine, if finalised, would be the first levied against a processor under the UK GDPR.

In the August 2022 ransomware attack, hackers accessed Advanced's health and care systems through a customer account that was not protected by multi-factor authentication ("MFA"). The attack affected 82,946 data subjects and disrupted critical functions, including NHS 111 and access to patient records. The exfiltrated personal data included phone numbers and medical records, as well as details of how to gain access to the homes of 890 individuals receiving care at home.

The decision on this processor fine comes amid the increasing frequency of cyber-attacks, on which the ICO published guidance in May 2024, including in respect of supply chain attacks. Recent reprimands and statements in relation to cyber-attacks also suggest the ICO's growing concern over technical and security measures (as opposed to process-based issues such as obtaining valid consents). For example, the reprimands following the London Borough of Hackney and Electoral Commission cyber-attacks focused on their respective failures to have effective security patching and password management measures in place. It is, however, noteworthy that both of those incidents resulted in reprimands rather than formal enforcement action.

Whilst this is only a provisional decision, the ICO may have chosen to publish it at this stage as a reminder to other processors, in particular those processing sensitive health data, to ensure that they have implemented robust technical and organisational measures to keep data subjects' personal information safe in the event of a cyber-attack (e.g. by implementing MFA and keeping up to date with the latest security patches).
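By way of illustration only (this sketch is not drawn from the ICO's decision, and the library and names used are our own assumptions), a minimal Python example of verifying a time-based one-time password (TOTP), the mechanism behind many MFA implementations, might look like this using the open-source pyotp library:

    # Illustrative sketch only: a login that requires both a password check and
    # a valid one-time code, using the "pyotp" library (assumed to be installed).
    import pyotp

    # In practice the per-user shared secret is generated once, stored securely
    # server-side and enrolled into the user's authenticator app.
    shared_secret = pyotp.random_base32()
    totp = pyotp.TOTP(shared_secret)

    def login(password_ok: bool, otp_code: str) -> bool:
        """Grant access only if both the password and the one-time code are valid."""
        return password_ok and totp.verify(otp_code)

    print(login(password_ok=True, otp_code="000000"))    # almost certainly False
    print(login(password_ok=True, otp_code=totp.now()))  # True

The design point is simply that a correct password alone should not be sufficient to access systems holding sensitive personal data.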

In an action brought against the European Data Protection Board ("EDPB"), Meta has contested the legality of the EDPB's opinion earlier this year on "pay-or-consent" models, arguing that the advisory board went beyond its remit, and seeking compensation for damages suffered as a result of the opinion's requirements.

In April, the EDPB published its opinion that the consent obtained by large online platforms through "pay-or-consent" models, which require users either to consent to their personal data being processed for personalised advertising or to pay a fee in order to use the service, is generally not valid consent. As users may be faced with a choice between "consenting" and suffering a detriment (i.e. loss of the service) if they do not consent, such consent may not be considered freely given. The EDPB therefore suggested that companies should offer a genuine "equivalent alternative" to pay-or-consent models. For more information on the EDPB's opinion, please see our blog post.

Meta has now hit back at the EDPB by initiating legal proceedings against the regulatory board, arguing that:

  • Article 64(2) of the GDPR, which permits the EDPB to issue an opinion on a decision made by Member State regulators, is incompatible with the Treaty on the Functioning of the European Union, as it removes companies' right to contest decisions before the European Court of Justice ("ECJ").
  • The EDPB's opinion failed to respect (or at least misinterpreted) the ECJ's Bundeskartellamt decision (C-252/21), which noted that users who do not consent must be offered an equivalent alternative service, which may include the payment of a fee.
  • The EDPB's opinion introduced a "novel and incoherent obligation nowhere to be found in the GDPR".
  • The EDPB failed to act as an impartial body.

Given that 98% of Meta's $138 billion revenue in 2023 came from advertising alone, it is unsurprising that the platform has responded with vigour to the EDPB's opinion. While the EDPB's response is awaited, its UK equivalent, the ICO, has taken an arguably more relaxed stance on the pay-or-consent model, stating explicitly that data protection law does not prohibit these models in principle.

Another month, another statement by a big tech company that it will pause plans to train its AI systems using user data!

On 8 August 2024, the Irish Data Protection Commission ("DPC") released a statement confirming that X (formerly known as Twitter) had agreed to suspend its processing of personal data for the purpose of training its AI tool, "Grok". The suspension was X's response to the DPC's application to the Irish High Court, which came after extensive engagement between the DPC and X regarding the training of the "Grok" AI model, as well as a number of privacy complaints from advocacy groups such as NOYB (chaired by Max Schrems). These complaints concerned X's processing of personal data contained in the public posts of its EU/EEA users between 7 May 2024 and 1 August 2024 to train "Grok", without informing or seeking permission from the users themselves. In particular, it has been reported that the platform automatically opted users into allowing their personal data to be shared with "Grok" as a default setting, with the "opt-out" option buried within the platform's settings.

The DPC acknowledged X's argument that the platform had implemented mitigation measures, such as an "opt-out" option for users to object to the use of their data for AI training, but noted that these measures were not sufficient on the basis that they were only put in place after July 2024. The Irish High Court echoed this sentiment, reaffirming that these mitigation measures should have been in place from the launch of the AI tool. Whilst X has agreed to temporarily suspend the processing, it is not yet clear how long the suspension will last, or whether any further enforcement action will be taken (e.g. requiring X to delete any unlawfully processed personal data or the models trained on such data). However, it appears that the matter will be before the Irish courts again in September 2024.

The enforcement action against X follows heightened attention on the use of user data to train AI models, in particular those developed by big tech companies. For example, in June 2024, Meta also agreed to delay similar plans to start training its AI systems using users' data in the EU and UK in response to pressure from the DPC (for further details see our June Data Wrap). The DPC's reaction to X's AI training plans appears to have been stricter than its response to Meta, perhaps because Meta, unlike X, had informed users in advance about its plans. The DPC's application to the High Court in respect of X also marks the first time that the regulator has used its powers under Section 134 of the Irish Data Protection Act 2018, which are only available where there is an "urgent need to act in order to protect the rights and freedoms of data subjects". The DPC also noted that this was the first time any Lead Supervisory Authority had taken action to suspend the processing of data (rather than simply requesting a suspension).

As X and Meta have both noted, they are not the first or only platforms to train their AI models on user data. Such training methods are likely to become increasingly widespread, and we therefore expect regulatory scrutiny to remain focused on this topic. The DPC released guidance on AI, Large Language Models and Data Protection in July, urging platforms to protect staff and user data where they have "not already agreed that purpose with [their] staff or users, or if they do not have a reasonable expectation it will be used for AI training." Similarly, the ICO is consulting on issues such as web-scraping to train generative AI models and the accuracy of such user-generated data when used as training data.

The extent to which organisations can rely on "legitimate interests" as a lawful basis for processing personal data in the context of data scraping remains subject to debate worldwide. For a roundup of recent guidance from the ICO, the Dutch DPA (the AP) and the EDPB, please refer to our entry "Data scraping: The "legitimate scraping" saga continues in the UK" from our July Data Wrap.

The Information Commissioner's Office ("ICO") has called on 11 currently unnamed social media platforms ("SMPs") and video sharing platforms ("VSPs") to improve their privacy practices. The regulator went on to state that "where platforms do not comply with the law, they will face enforcement action". This announcement follows the ICO Tech Lab's ongoing review of the sign-up processes for 34 SMPs and VSPs as part of the ICO's Children's Code Strategy.

On 2 August, the ICO confirmed that it found varying levels of adherence to its Children's Code, and that 11 of the 34 SMPs and VSPs reviewed were "not doing enough to protect children's privacy". Specifically, the 11 platforms are now being asked about issues relating to default privacy settings, geolocation and age verification, and how their approaches conform with the Children's Code. As part of this review, some SMPs and VSPs have also been asked about their approaches to targeted advertising to ensure that their practices align with both the UK GDPR and the Children's Code. In response to the findings of the initial review of 34 platforms, Emily Keaney, Deputy Commissioner of the ICO, said that "there is no excuse for online services likely to be accessed by children to have poor privacy practices".

Further details of the areas where the ICO considers improvement is needed are set out in its Children's Code Strategy progress update. For example, the ICO indicated concerns around platforms which do not set children's profiles to private by default, particularly where they also allow contact from strangers by default. The regulator has written to five platforms outlining its concerns and calling on them to change their practices within eight weeks, or otherwise face further investigation and potential enforcement action. In an effort to improve understanding of how SMPs and VSPs are impacting children's privacy, the ICO has also recently launched a call for interested stakeholders to share evidence on how children's personal information is used in recommender systems (the algorithms used by apps such as TikTok and Instagram), and on developments in age assurance technology to identify children under 13 years old.

The announcement and consultation form part of the ICO's larger objective of ensuring that platforms protect children's privacy, alongside its collaboration with Ofcom on the "regulation of online services where online safety and data protection intersect."

Implementing and maintaining a privacy notice is one of the key requirements of the UK GDPR, and the privacy notice is one of the most visible documents that an organisation will maintain in its suite of data protection documentation.

While in theory this allows customers, suppliers, staff and other individuals to understand how their data is being used and what rights they have over it, in practice it can be an onerous obligation, particularly for under-resourced or smaller businesses. This results in some businesses using template privacy notices that are not necessarily applicable or relevant, or not complying with the requirement at all.

The ICO has now launched a new user-friendly tool to allow small organisations and sole traders to create a bespoke privacy notice by inputting details about their organisation into a privacy notice generator. There are sections of the tool specific to a wide range of sectors, including finance, insurance and legal; education and childcare; health and social care; and charity and voluntary sectors.

Although this tool is likely to be useful for small businesses and goes some way towards addressing concerns about UK GDPR compliance, organisations will still need to ensure that they keep their privacy notices up to date when any processing activities change.

August saw the European Commission's first investigation under the Digital Services Act ("DSA") come to a close. The investigation, which began on 22 April of this year, looked into the reporting required in respect of a rewards programme run by TikTok on the 'TikTok Lite' app. It ultimately resulted in a commitment from TikTok to permanently withdraw the TikTok Lite Rewards Programme from the EU and not to launch any other programme which would circumvent that withdrawal.

The 'TikTok Lite Rewards Programme' allowed users to earn rewards, such as gift cards, for completing tasks within the app, such as inviting friends, following creators, and watching videos. However, when launching the TikTok Lite app and rewards programme in France and Spain, TikTok failed to submit risk assessment reports to the Commission. By virtue of its status as a Very Large Online Platform under the DSA, TikTok is required to perform a risk assessment and submit a report to the Commission prior to launching any new functionality that is likely to have a critical impact on systemic risks. It must also adopt effective mitigating measures to address any identified risks.

The Commission sent a request for information to TikTok on 17 April this year, to which TikTok responded that it had conducted risk assessments but had failed to share these with the Commission as required. TikTok's failure to disclose the risk assessments led the Commission to issue a formal decision demanding information, which gave TikTok 24 hours to provide the relevant material. Following this, the Commission was particularly concerned about the rewards programme's "addictive effect" and "negative effects on the physical and mental health of users", and about TikTok's failure to take effective risk mitigating measures.

On 5 August, the European Commission confirmed that TikTok had agreed to a commitment decision to permanently withdraw the TikTok Lite Rewards Programme from the EU. This commitment, the first of its kind under the DSA, does not require TikTok to admit fault, nor is it accompanied by a financial penalty. However, the commitment to withdraw is legally binding, and any breach of its terms would amount to a breach of the DSA and give rise to enforcement action (including a risk of potential fines). The Commission will carefully monitor TikTok's compliance with the binding commitments the platform has offered, as well as its other obligations under the DSA.

Key contacts

Miriam Everett, Partner, Global Head of Data Protection and Privacy, London
Claire Wiseman, Professional Support Lawyer, London
Lauren Hudson, Senior Associate, London
Sara Lee, Associate, London
Alice Bourne, Associate, London
Davina Chu, Trainee Solicitor, London