In a last-minute addition to this Data Wrap, after nearly three years of discussions and negotiation, political agreement has finally been reached on the EU's AI Act – the first major comprehensive regulation specifically targeting artificial intelligence. While the text is not yet available, the key elements of the agreement are now clear. In particular, the greatest degree of regulation will apply to so-called "high-risk" AI systems. For general purpose AI, including foundation models that can be used for a variety of different purposes, a two-tier regulatory approach will apply, depending on the computing power used to train the model.
The AI Act's requirements will begin to apply incrementally over the coming years: developers and users of AI systems should use this time to consider how their AI systems will be regulated and prepare for the new rules. It will also be interesting to see whether the AI Act gives rise to a "Brussels Effect" similar to that of the EU GDPR – setting an international benchmark and a high-water mark for the regulation of AI.
The EU AI Act is not happening in a vacuum. This AI-specific piece of regulation is intended to complement the existing patchwork of regulation that governs the use of emerging technologies such as AI (e.g. data protection / privacy, IP, competition, employment / discrimination, consumer protection, human rights legislative frameworks).
Only time will tell whether the legislation strikes an appropriate balance between ensuring safe, responsible and trustworthy use of the technology and protecting fundamental rights, on the one hand, and not stifling innovation or investment in AI technologies while remaining sufficiently flexible to evolve as the technology evolves, on the other. The "horizontal" rules-based approach adopted by the EU contrasts with the lighter-touch, adaptable approach adopted in the UK, for example, through context-specific and sector-led models in line with five common principles.
For further details please refer to our full blog on the EU AI Act here and our March Data Wrap on the UK's approach to regulating AI here.
Keeping the spotlight on AI, amidst the hubbub of the marathon negotiations on the EU AI Act, on 7 December the CJEU delivered a significant decision interpreting the scope of "automated decision making" under Article 22 of the EU GDPR. In particular, in the so-called "SCHUFA" case, the CJEU ruled that a credit reference agency engages in automated individual decision making when it creates, by automated processing, credit repayment probability scores on which lenders then heavily rely to establish, implement or terminate contracts.
The decision means that the obligation to comply with Article 22 falls on both the credit reference agency and the lender, rather than just the lender as the ultimate decision maker. The CJEU set out the three conditions for automated decision making under the GDPR: (i) a decision must be made; (ii) it must be based solely on automated processing (including profiling); and (iii) it must produce legal effects concerning the individual or otherwise produce an equivalent or similarly significant effect on the individual. All elements were deemed satisfied in this scenario and, in particular, the CJEU noted the broad nature of the concept of a "decision" within Article 22(1) – it is capable of "including a number of acts which may affect the data subject in many ways … broad enough to encompass the result of calculating a person's creditworthiness in the form of a probability value concerning that person's ability to meet payment commitments in the future."
Article 22 provides that automated individual decision making can only be used in certain very limited circumstances, subject to having certain safeguards in place. The CJEU found that if the credit reference agency were not subject to Article 22, it would not be required to provide the individual with meaningful information about the logic involved in the decision (in response to an Article 15 request under the EU GDPR), and the lender was unlikely to have this information to provide to the individual instead. It is unclear how far the desire to plug this gap drove the broad approach adopted by the CJEU and the decision it reached.
Whilst the precedential value of this decision remains to be seen, it is also worth noting that an AI system which makes decisions about offering credit falls within the "high-risk" use cases under the EU AI Act (refer to the entry "Spotlight on the EU AI Act" above).
Facial recognition technology has been the subject of much scrutiny by supervisory authorities recently, particularly in light of enforcement action taken in France, Italy and Greece in recent years. During October and November, attention turned to the interpretation of the material and territorial scope provisions in Articles 2 and 3 of the EU GDPR and UK GDPR – in particular, whether data processing activities (relating to facial recognition technology) conducted by Clearview AI (and its clients) fell within the remit of the EU or UK GDPR and were therefore subject to the jurisdiction of the UK ICO.
Clearview AI hosts a global database that stores over 30 billion images, and offers a service that allows clients to search for facial images within the database. In May 2022 the ICO fined Clearview AI £7.5m for unlawfully storing facial images and ordered Clearview to refrain from obtaining, storing and using the personal data of UK residents. Following an appeal from Clearview AI, on 18 October 2023 the First-tier Tribunal ("FTT") overturned the fine on the basis that the ICO did not have jurisdiction to impose it on Clearview AI.
Clearview AI is based in the US and does not currently provide services to clients in the EU or the UK. The FTT inferred, however, from the size of Clearview AI's database that it included images of UK residents and images taken within the UK, and that the service offered by Clearview therefore "could have an impact on UK residents even though it is not used by UK customers". Taking a relatively broad approach, the FTT also found that: use of the service could comprise "monitoring"; Clearview AI's clients (but not Clearview AI) were monitoring behaviour (given the clients' conduct went beyond simple identification); Clearview AI was a joint controller with each client for the purpose of the facial recognition functionality of the service; and processing by Clearview was "related to" the monitoring of UK data subjects in relation to their behaviour in the UK.
All that being said, the FTT's decision that the ICO did not have jurisdiction to issue the penalty notice against Clearview AI turned on the fact that Clearview's clients conduct criminal law enforcement or national security functions (and its services were used by those clients for those functions), and those functions fall outside the scope of both the EU and UK GDPR.
In a further twist, on 20 November 2023, the ICO sought permission to appeal the First-tier Tribunal's decision, on the basis that "Clearview AI itself was not processing for foreign law enforcement purposes and should not be shielded from the scope of the UK law on that basis". John Edwards, the Information Commissioner, also stated that the appeal was to seek clarity as to whether "commercial enterprises profiting from processing digital images of UK people, are entitled to claim they are engaged in 'law enforcement'".
The European Data Protection Board (the "EDPB") is consulting on its Guidelines on the technical scope of Article 5(3) of the ePrivacy Directive, known as the 'Cookie Law'.
Recent years have seen significant regulatory focus on cookies and similar technologies, which are often used in the context of targeted advertising. The new Guidelines aim to remove some of the ambiguity around the application of Article 5(3) and to make clear when the requirement to obtain user consent applies.
The Guidelines set out four criteria which must be satisfied in order for the consent requirements to be triggered. These are that the operations carried out: (i) involve information; (ii) involve the terminal equipment of a user; (iii) are made in the context of the provision of publicly available electronic communications services in public communications networks; and (iv) constitute a gaining of access or storage.
In addition to these criteria, the Guidelines set out a non-exhaustive list of use cases where Article 5(3) could apply, including URL and pixel tracking, local processing, tracking based only on IP addresses, intermittent and mediated IoT reporting, and unique identifiers.
The consultation is open until 18 January 2024.
Moving on to the regulation of "non-personal" data: on 27 November 2023, the EU Council adopted the regulation on harmonised EU rules regarding fair access to and use of data, known as the "EU Data Act" (see press release here). This follows political agreement reached between the EU Council, European Parliament and European Commission on 28 June 2023 (see press release here).
Given the value attributed to data in the digital age, particularly in respect of AI systems, the EU Data Act is intended to increase data availability and enable data sharing across all sectors, providing structure around who can use and access non-personal data (predominantly that generated by connected devices), for what purpose and under what conditions.
From a data subject perspective, the new regulation strengthens the existing data portability right by allowing consumers and companies more control over data generated by connected devices. Of note, the EU Data Act introduces fairly onerous DSAR-style obligations, similar to those under the EU GDPR, on certain manufacturers and service providers to make data (and metadata) generated by connected products and services "readily available" upon the request of a user "without undue delay, of the same quality as is available to the data holder, easily, securely, free of charge, in a comprehensive, structured, commonly used and machine-readable format, and where relevant and technically feasible, continuously and in real-time".
Following its adoption, the new regulation is due to be published in the EU's Official Journal in the coming weeks and will enter into force on the 20th day following publication. It will then apply to organisations after a 20-month grace period, i.e. around the latter half of 2025. However, Article 3(1) of the regulation, which sets out requirements for simplified access to data for new products, will only apply to connected products (and associated services) placed on the market from 32 months after the regulation enters into force.
November saw the practice of scraping personal data for the purposes of training AI algorithms become the subject of an investigation by Italy's supervisory authority. We therefore thought it would be a good opportunity to revisit the Joint Statement issued by twelve data protection regulators (including the UK ICO) earlier this year and what it means for organisations that either engage in data scraping or are susceptible to data scraping.
The Joint Statement served to remind organisations that publicly available personal data is still subject to data protection and privacy laws and made clear that data protection authorities expect social media companies and other website operators to take responsibility for the content that they host online, including in relation to third-party scraping from their websites and platforms.
The Joint Statement comes in the wake of a surge in the mass scraping of publicly available data from websites and online platforms, driven by the increased availability and capability of data scraping technologies, including those assisted by AI, to collect and process individuals' personal data from the internet. It also comes at a time when a number of organisations are facing enforcement action and significant regulatory fines from data protection authorities around the world, both for scraping data from online sources and for failing to put adequate measures in place to prevent unlawful scraping of data under their control.
For further information please refer to our full article here (originally published in the Privacy and Data Protection Journal), which sets out best practice recommendations and other considerations, in light of the Joint Statement, for organisations that either engage in data scraping or are susceptible to being scraped.
Equifax Limited, the UK subsidiary of Equifax Inc, suffered a major data breach in 2017 which affected more than 13.7 million UK consumers. In response, on 13 October 2023 – almost six years later – the UK Financial Conduct Authority ("FCA") published a press release and final notice fining Equifax Limited £11.2 million for failing to manage and monitor the security of UK consumer data. In the intervening period, Equifax has faced multiple penalties from numerous regulators in connection with the incident – both in the UK and the US, including a fine from the UK ICO.
The FCA ruled that Equifax Limited had breached Principles 3, 6 and 7 of its Principles for Businesses.
There are a number of key learning points to take away from both the data breach and the FCA's Final Notice, particularly around intra-group outsourcings. These are set out in detail in our full blog here (along with practical mitigation steps to take) and include:
- intra-group outsourcing arrangements (or similar) must meet the same FCA requirements as outsourcing to an unrelated third party, and firms must apply the same standard of rigour in overseeing and managing risks in those intra-group arrangements;
- intra-group outsourcing can involve special risks, for example, firms must be careful about intra-group reporting structures which might compromise effective monitoring by the group service provider;
- contractual risk management mechanisms (e.g. audit rights) alone are not sufficient: they must be exercised in practice, even where the firm is overseeing services provided by its own parent; and
- firms remain responsible for FCA compliance and cannot delegate that responsibility when outsourcing or entering into a third party arrangement.
The Data Protection and Digital Information Bill, which continues to pass through the legislative process, featured in the King's Speech on 7 November as one of the new legal frameworks intended to 'encourage innovation in technologies such as machine learning'.
Following the King's Speech, the Government also proposed amendments to the Bill on 23 November. The amendments included limitations on Government powers to approve statutory codes – something that had been heavily criticised as fettering the independence of the UK Information Commissioner's Office. The amendments also included changes to the data subject access request regime to limit the searches that organisations are required to carry out to reasonable and proportionate searches only – something that will no doubt be welcomed by businesses. However, the UK Information Commissioner, John Edwards, has, in an updated response to the Bill, criticised the amendments as not addressing the majority of concerns raised by his office. The Bill has since had its first and second readings in the House of Lords and has reached the committee stage (with a date yet to be announced).
In other data-related news in the King's Speech, the King proposed that Ministers will 'give the security and intelligence services the powers they need' and will 'strengthen independent judicial oversight', in the form of amendments to the Investigatory Powers Act 2016. The reforms include changes to the bulk personal dataset regime, allowing more effective use of less sensitive, publicly available data. The notices regime will also be reformed to better anticipate threats to public safety in the UK caused by multinational companies' use of technology that precludes lawful access to data.
However, a notable absence from the King's Speech was any reference to an Artificial Intelligence Bill. This is consistent with the Government's plan to avoid premature regulation and the risk of stifling innovation. For now, effort and resources will instead be directed towards AI safety and generating evidence to better understand the technology, so as to prevent over-regulation.
Following the recent Austrian Post case (see our blog article here), the European Court of Justice has handed down a judgment in the case of VB v Natsionalna agentsia za prihodite, finding that fear that personal data may be misused following a data breach can qualify as non-material damage under the GDPR, meaning that a claim for compensation for such damage may be available to affected data subjects.
The court also found that: (i) the mere fact that a data breach has occurred is not enough to conclude that an organisation has breached its security obligations under the GDPR; (ii) it is up to national courts to assess the appropriateness of the security measures in place; (iii) in the context of a claim for compensation, the burden of proof lies with the controller organisation to prove that it had appropriate security measures in place; and (iv) a controller is not exempt from having to pay compensation simply because the damage resulted from the actions of a third party (e.g. a threat actor) – the controller must prove that it is in no way responsible for the event that gave rise to the damage concerned.
Although post-Brexit the UK courts are no longer required to follow rulings of the European Court, such rulings are nonetheless still likely to be influential, and the case appears to open the door for potential claimants following a series of UK cases which had appeared to make it difficult for data breach compensation claims to get off the ground.
Disclaimer
The articles published on this website, current at the dates of publication set out above, are for reference purposes only. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action.