At the end of March the Information Commissioner's Office (ICO) published an outline of the proposed structure for its auditing framework for the use of personal data in an Artificial Intelligence (AI) context. Once finalised, the framework has the potential to help catalyse the use of this emerging technology within the restrictions of data protection regulation. In particular, it is intended to support the ICO in assessing data controller compliance, as well as providing data protection and risk management guidance, in relation to AI.

The two key components of the framework are set out below:

1) Governance and Accountability

This component of the framework will "discuss the measures an organisation must have in place to be compliant with data protection requirements".

Whilst the ICO has previously published detailed guidance about accountability, which is a key principle under the GDPR, this part of the framework will help organisations to understand and manage their governance and accountability processes in an AI context.

Boards and senior leaders are likely to need to reconsider (or in many cases define) their data protection risk appetite, as well as how AI applications, individually and collectively, fit within the chosen parameters.

In particular, the outline framework includes eight 'focus areas' for organisations to consider in relation to the use of AI: risk appetite; leadership engagement and oversight; management and reporting structures; compliance and assurance capabilities; data protection by design and by default; policies and procedures; documentation and audit trails; and training and awareness.

2) AI-specific risk areas

This component of the framework will "focus on the potential data protection risks that may arise in a number of AI specific areas and what the adequate risk management practices would be to manage them".

The increasing use, speed and scale of AI applications brings not only new data protection risks, but can also exacerbate existing risks or make them more difficult to spot or manage. AI also increases the importance of embedding concepts such as "privacy by design and default" into an organisation's culture and processes, yet the technical complexities of AI applications can make this more difficult. Organisations need to be able to understand and manage the key risk areas specific to AI in order to design and implement effective data protection measures.

The ICO has identified eight AI-specific areas which the framework will address:

  1. Fairness and transparency in profiling
    • includes issues of bias and discrimination, interpretability of AI applications and explainability of AI decisions to data subjects
  2. Accuracy
    • includes both accuracy of data used in AI applications and data derived from them
  3. Fully automated decision-making models
    • includes classification of AI solutions (fully automated vs non-fully automated decision-making models) based on the degree of human intervention, and issues around human review of fully automated decision-making models
  4. Security and cyber
    • includes testing and verification challenges, outsourcing risks and re-identification risks
  5. Trade-offs
    • covering challenges of balancing different constraints when optimising AI models (e.g. accuracy vs privacy)
  6. Data minimisation and purpose limitation
  7. Exercising of rights
    • includes individuals' right to be forgotten, right to data portability and right to access personal data
  8. Impact on broader public interests and rights
    • such as freedom of association and freedom of speech

The framework will analyse the data protection risks associated with each of these areas and will also set out the organisational and technical measures which the ICO considers good practice (although the ICO notes that the list will be neither exhaustive nor definitive, as risk controls must be adopted on a case-specific basis).

The ICO has invited feedback on the proposed framework via the AI auditing framework blog, and will be publishing updates on the blog every 2-3 weeks as the framework progresses, ahead of the publication of:

  1. a formal consultation paper expected by January 2020; and
  2. the final framework expected by Spring 2020.

The exponential growth of emerging technologies such as AI, and their ability to harness data in new and innovative ways, gives rise to new and interesting regulatory compliance challenges and considerations. The ICO's blog updates and framework will provide much-needed guidance for organisations and help encourage the continued use of AI in a lawful manner.


Claire Wiseman

Professional Support Lawyer, London
