Could we be set for 'boomerang Britain' as plans for a new Digital Information and Smart Data Bill (DISD Bill) seem to signal a return to some elements of the failed UK data protection reform? The 22 May announcement of the UK general election stalled progress on the overhaul of the UK's data protection framework through the Data Protection and Digital Information Bill (DPDI Bill). This cast doubt over the much-touted shake-up that had been billed as the UK "forging its own path" in a post-Brexit age. However, on 17 July, the King's speech and accompanying briefing notes provided some clarity around which elements of the DPDI Bill the Labour party will take forward.
Similar to its predecessor, the DISD Bill purports to "harness the power of data for economic growth", by placing certain "innovative uses of data" on a statutory footing. These initiatives include establishing a legislative framework for digital identity verification services and setting up smart data schemes (which enable the secure sharing of customer data with authorised third parties who can enhance the customer data with broader, contextual "business" data). In doing so, it seems the Government is looking to encourage the economic growth seen from Open Banking across other sectors of the economy.
Other policy initiatives include: retaining plans to transform the UK's data protection regulator, the ICO, into a more modern regulatory structure (with a CEO, board and chair) with new and stronger powers; as well as "targeted reforms" to some data protection laws that maintain "high standards of protection" but make improvements where there is currently a lack of clarity. Whilst it seems possible that Labour will drop some of the more controversial aspects of the previous DPDI Bill, it will be interesting to see how much of it is retained - for example the proposed relaxation of the automated decision-making provisions under Article 22 UK GDPR. Either way, the Government's hands are likely to remain relatively tied to ensure the UK maintains its adequacy status under the EU GDPR. The DISD Bill also retains the ability for scientists to request broad consent for scientific research and will allow scientific researchers in a commercial setting to make equal use of the UK's data regime.
The DISD Bill is not expected to be an immediate priority for the Labour government or the Department for Science, Innovation and Technology and the devil will be in the detail concerning the true impact of these proposals for organisations. The DISD Bill is also expected to complement and sit alongside the UK GDPR and DPA 2018 (rather than replace them), as was the case for its predecessor.
For further information please refer to our blog The King's speech: 40 policy bills, a gaping AI hole and a boomerang data bill. In addition, in what appears to be a move to align with the proposed EU Cyber Resilience Act, the briefing notes to the King's Speech also refer to a new Cyber Security and Resilience Bill. The Bill is likely to be a priority for the Labour party considering the recent CrowdStrike outage (see below). For further details please refer to our separate blog here.
In a dramatic turn of events, a much-hyped dedicated "AI Bill" was missing from the 40 policy bills set out in the briefing notes accompanying the King's speech. This was particularly surprising given that the King's Speech itself referred to the Government seeking to "harness the power of artificial intelligence as we look to strengthen safety frameworks", as well as "establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models".
On 30 July, in Parliament, Under Secretary of State Baroness Jones also re-stated previously announced plans to introduce legislation around AI to place the existing AI Safety Institute on a statutory footing. She alluded to the possibility of introducing legislative safeguards to protect employment / worker rights against the use of artificial intelligence technologies as well. In relation to intellectual property rights, Baroness Jones also highlighted the need for "thoughtful guidance and legislation to find balance" around the lack of IP protection for creators. However, given the IPO's failure to produce voluntary "copyright in AI" guidance to date, the issue may still end up being resolved in the UK courts and via sector-led guidance, rather than legislation. Either way, it sounds like we will be kept guessing a little longer around the exact detail of the UK's approach to regulating AI.
With AI evolving so rapidly, developing appropriate legislation is a difficult balancing act. Although a UK version of the EU AI Act is not expected anytime soon, the Government will need to balance the challenge of organisations complying with fragmented international laws, against the risk of deterring innovation and investment through over-alignment. Any new AI legislation will mark a more interventionist approach and a significant departure from the previous government's "wait-and-see" approach, demanding careful management to unlock growth and accelerate Britain's progress.
For further information please refer to our blog The King's speech: 40 policy bills, a gaping AI hole and a boomerang data bill.
Since the global IT outage on Friday 19 July 2024, CrowdStrike, a security technology provider, and Microsoft have each released tools and advice to facilitate recovery. The incident has been attributed to a software update from CrowdStrike that caused problems with some PCs, servers and IT equipment running Microsoft Windows, triggering substantial disruption across many businesses. However, organisations continue to face significant challenges, not least because individual devices afflicted by the 'blue screen of death' will require manual attention to deploy the fix and roll back the problematic update. Given the outage has been reported to have affected around 8.5 million devices, it has caused significant business impacts on services in some sectors and will continue to cause disruption for the foreseeable future.
For further information, please refer to our blogs here and here regarding the steps for impacted businesses (including those in respect of data protection and privacy regulatory compliance).
One of the key questions for affected organisations from a data protection perspective is whether the CrowdStrike / Microsoft outage amounts to a personal data breach and, if so, whether that breach is notifiable to the appropriate data protection authority (or authorities) and affected individuals. This will turn on the specific circumstances. However, it is worth noting that a "personal data breach" under Article 4(12) of the UK and EU GDPR (the GDPR) relates to a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This can include breaches that are the result of both accidental and deliberate causes. The EDPB Guidelines 9/2022 on personal data breach notification under the GDPR (adopted 28 March 2023) state that "loss" of personal data should be interpreted as meaning "the data may still exist, but the controller has lost control or access to it, or no longer has it in its possession". In its Opinion 03/2014, the then Article 29 Working Party categorised breaches according to three well-known information security principles (one of which is an "availability breach", where there is an accidental or unauthorised loss of access to, or destruction of, personal data).
The EDPB Guidelines go on to consider whether a temporary loss of availability of personal data should be considered a personal data breach and, if so, whether it needs to be notified to the appropriate data protection authority. The EDPB Guidelines state that, as with a permanent loss or destruction of personal data, a breach involving the temporary loss of availability should be documented in accordance with Article 33(5) GDPR. This helps the controller to demonstrate accountability to the supervisory authority, who may ask to see those records. However, it will very much depend on the circumstances around the breach as to whether the controller needs to also notify the data protection authority and communicate to affected individuals. In particular, in accordance with Article 33(1) GDPR the controller will need to assess the likelihood and severity of the impact on the rights and freedoms of individuals as a result of the lack of availability of personal data – the EDPB acknowledges that this will need to be assessed on a case-by-case basis.
Longer term, we expect to see broader learnings for entities and regulators around operational resilience and how these types of incidents should be dealt with.
A recent audit coordinated by the Global Privacy Enforcement Network (GPEN), known as the Sweep (the Sweep), found a high prevalence of deceptive design patterns (DDPs) on popular websites and applications and concluded that the DDPs negatively impacted users' online privacy decisions.
The GPEN comprises 26 privacy enforcement authorities (including the UK ICO, the French CNIL and the Italian GPDP) and the Sweep was coordinated in conjunction with the International Consumer Protection and Enforcement Network, due to the relevance of DDPs to both privacy and consumer protection, in particular. DDPs are design choices that mislead or deceive users of online services, and regulators tend to refer to them as "harmful online choice architecture".
Whilst there is currently a patchwork of legislation that governs DDPs, they have increasingly been subject to regulatory scrutiny due to their manipulative nature and the potential harm they can cause users, particularly in an increasingly digital society where privacy and consent are of paramount importance. The findings from the Sweep were published on 9 July 2024 and found DDPs in 97% of the websites and applications examined, indicating that users frequently encounter at least one DDP when trying to make privacy-protective decisions or access privacy-related information. The Sweep therefore recommends that organisations improve platform design to enable users to better understand and control their personal data use – to provide users with the ability to make informed privacy decisions, implement privacy-friendly design practices and build consumer trust. Whilst the Sweep is not a formal investigation, the concerns it has raised may be used to inform future enforcement actions, and help support targeted education and outreach to organisations.
This follows a joint paper published by the ICO and CMA in August 2023, where the regulators called on businesses to "stop using harmful website designs that can trick consumers into giving up more of their personal data than they would like." Stephen Almond, Executive Director of Regulatory Risk at the ICO stated "Businesses should take note that if they deliberately and persistently choose to design their websites in an unfair and dishonest way, the ICO will not hesitate to take necessary enforcement action".
For further information regarding the Sweep, examples of DDPs, detail around why they have been subject to recent regulatory scrutiny, DDP action examples and the findings and next steps from the Sweep, please see our blog here.
Whilst the UK will need to wait a little longer for clarity on its approach to regulating artificial intelligence (see above), the long-awaited EU AI Act was finally published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. The provisions of its tiered risk-based approach to regulating AI will apply incrementally over the forthcoming 6 to 36 months, depending on the relative risk categorisation of the AI systems in scope. The Act is a landmark piece of legislation aimed at harmonising AI rules across the EU - for further information around the legislative framework please refer to our previous posts here, here and here.
At this early stage, it is unclear whether the AI Act will be as pivotal an international benchmark for shaping AI regulation as the EU General Data Protection Regulation was for the global regulation of data protection. Given their technology neutral nature, both the EU and UK GDPR (GDPR) will continue to apply to the processing of personal data in the context of AI technologies. However, the AI Act also seems to build on some of the principles under the GDPR and, in practice, the two regimes and their respective requirements will co-exist. It is therefore important for providers and developers of AI systems to understand the interplay between these two pieces of legislation. In the context of the AI Act, a "provider" is the entity that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark, and a "deployer" is the entity that uses an AI system under its authority (except for non-professional personal use).
Navigating the interplay between the AI Act and the GDPR requires a good understanding of both regulations, the business and the aim of the organisation in deploying the AI system that processes personal data. For further information, we cover examples of some of the key interactions between the regimes in our blog here, with a focus on transparency and accountability, data quality, the regulation of biometric data and the role of DPAs under the AI Act.
July saw non-governmental organisation (NGO), the Open Rights Group (ORG), file a complaint with the ICO regarding Meta's use of personal data from its UK users to train its AI systems. The complaint was submitted on behalf of five ORG staff members who are Meta users. The complaint was published shortly after confirmation from Meta that it would pause plans to start training its AI systems using data from its users in the EU and the UK. The pause itself followed a request from the Irish Data Protection Commission, acting on behalf of several other EU data protection authorities, to delay training its large language models "using public content shared by adults on Facebook and Instagram across the EU/EEA". For further information refer to our June Data Wrap entry here.
Of particular note, the ORG stated in its complaint that Meta has "no legitimate interest under Article 6(1)(f) UK GDPR that would override the interests of the complainants (or any data subjects) and no other legal basis to process such vast amounts of personal data for totally undefined purposes". The NGO referenced the ICO's legitimate interest guidance (ICO's Guidance) which states that the lawful basis is likely to be most appropriate where controllers process individuals' data "in ways they would reasonably expect". The ORG went on to refer to the fact that "processing of all personal data ever posted on Meta platforms for any purpose carried out through AI clearly does not align with the data subject's expectations of the social platforms' functions". It also mentioned the ECJ Bundeskartellamt case (C-252/21) stating that Meta "does not even have a 'legitimate interest' to use personal data for advertisements", on the basis that that case would not allow the "irreversible ingestion of data for undefined "artificial intelligence" technology without any purpose limitation and with an undisclosed number of recipients".
The ORG also highlighted the Dutch Data Protection Authority's (AP's) legitimate interest guidelines. The AP's strict position on "legitimate interests" considers that this lawful basis can only cover interests that are protected by law (and that "purely commercial interests" will not suffice). This position was recently reiterated by the regulator in relation to data scraping, for which the AP considers "legitimate interest" is the only likely lawful basis. For further information refer to our May Data Wrap entry here.
Reliance on "legitimate interests" in this context continues to be subject to debate worldwide. The ICO's Guidance in the UK also states that controllers cannot rely on "vague or generic business interests" but, unlike the AP, does not refer to purely commercial interests being unacceptable. In its 23 May Report of work undertaken by the ChatGPT Taskforce, the EDPB also suggests that "legitimate interests" might be a possible lawful basis and provides some useful considerations. In practice, it is also worth checking that a suitable data protection impact assessment has been conducted to assess UK GDPR compliance.
The most recent attempt to establish a US federal privacy law has been a hot topic in the privacy world (refer to our previous articles here and here for further information). However, despite ambitious efforts and the initial pace at which the draft legislation was introduced, it looks like this progress has now stalled.
The initial draft of the new American Privacy Rights Act (APRA) was published on 7 April 2024. The aim of the Act was to create a comprehensive federal level piece of privacy legislation for the US, to enable consistency and harmonisation across the existing patchwork of state laws, and to give individuals greater control over their own personal data. Only weeks after the initial draft, the legislation was updated to include several amendments, the most substantial change being the merging of the Children and Teens' Online Privacy Protection Act 2.0 (so-called COPPA 2.0) into the Act – albeit with some omissions when compared to the standalone COPPA 2.0 proposal that had previously been before the Senate.
In June, the updated draft was due to advance to full committee for consideration. However, the first important discussion session, a House Committee on Energy and Commerce markup scheduled for 27 June, was cancelled only minutes before it was due to start. Despite significant changes made to the first draft (such as removing certain opinion dividing items regarding civil rights and use of algorithms), it is thought that the Committee was "not ready" to discuss the Act further and more time was needed to work through the proposal.
Since then, there has been very little information about the future of the APRA. Based on subsequent commentary and statements from committee members, it sounds like the Act is once again faced with familiar division - with one side supporting the draft APRA's stance on data minimisation, consumer rights, anti-discrimination and other consumer-focused points, and the other side raising concerns around the impact of the Act on innovation (listing disadvantages such as over-regulation, concerns around private rights of action by individuals against organisations, and a weak mechanism by which the federal law pre-empts provisions of state-level privacy laws).
Congress is now in recess for August and, with the long-awaited presidential election in November 2024, the future of the APRA remains unclear. It is possible that the APRA will be the latest addition to the list of failed attempts to create a comprehensive privacy law in the US – particularly given that the COPPA 2.0 "bolt on" that had previously been added to the APRA, was instead passed as a separate piece of legislation with a strong bipartisan consensus at the end of July. Regardless of the outcome, initiatives at both state and federal level do indicate an intention to create more holistic privacy and data protection legislation, although it is unlikely that any significant new developments will take place until after the presidential election.
Key contacts
Sarra Leino
Associate (Finland) (External Secondee), London
Tommaso Bacchelli
Trainee Solicitor, London
Disclaimer
The articles published on this website, current at the dates of publication set out above, are for reference purposes only. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action.