
The European Commission has proposed a legal framework on AI to ensure that EU citizens can trust the technology. It is essential reading for all businesses using or contemplating the use of AI systems.

The Commission recognises that increases in computing power offer huge potential for AI in areas such as cybersecurity, health, transport, energy, agriculture and tourism, but also create risks.

The Commissioner for the Internal Market, Thierry Breton, said that the “proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”.

Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way … our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The next steps will be for the European Parliament and the member states to adopt the Commission’s proposal under the ordinary legislative procedure. Once adopted, the Regulation will be directly applicable across the EU.

The legal framework (see https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence) categorises AI systems according to the risk of potential harm they pose to individuals: unacceptable risk; high risk; limited risk; and minimal risk.

AI systems which fall into the unacceptable risk category are those which are “considered to be a clear threat to the safety, livelihoods and rights of people” – and will be banned. They include AI systems or applications that allow “social scoring” by governments or “manipulate human behaviour to circumvent users’ free will”. An example cited of the latter is toys using voice assistance to encourage minors to do dangerous things.

High risk AI systems are those that pose significant risks to the health and safety or fundamental rights of individuals, and will be subject to strict obligations before they can be put on the market. Examples of high-risk systems include those that might determine access to education and the professional course of someone’s life, AI credit scoring systems that might deny citizens the opportunity to obtain a loan, and AI systems which apply the law to a concrete set of facts, as well as more obvious examples such as AI systems in critical infrastructure (for example, transport) which could put the life and health of citizens at risk.

The high risk category will encompass all remote biometric identification systems – and their use will need to be authorised by a judicial or other independent body, which will impose appropriate limits.

The strict obligations that will apply to high risk AI systems before they can be put on the market include:

  • adequate risk assessment and mitigation systems;
  • high quality datasets feeding into systems to minimise risk and discriminatory outcomes;
  • logging of activity to ensure traceability of results;
  • detailed documentation providing all information necessary on the system and its purpose so that authorities can assess its compliance;
  • clear and adequate information to the user;
  • appropriate human oversight to minimise risk;
  • high level of robustness, security and accuracy.

The limited risk category is likely to apply to AI such as chatbots, where users should be aware that they are interacting with a machine so that they can take an informed decision to continue or step back. The vast majority of AI systems, such as AI enabled video games or spam filters, will fall into the minimal risk category.

The proposals build on work which has been underway for many years. The GDPR, which was similarly designed to prevent harm to individuals and protect European values in the digital age, already imposes obligations akin to those which will apply to high risk AI systems (see, for example, Article 22 on automated decision making, including profiling, and Article 32 on security of processing). The judiciary has also been giving careful consideration to how oversight of AI may work in practice (see Lord Sales’ Sir Henry Brooke lecture on Algorithms, Artificial Intelligence and the Law: http://www.bailii.org/bailii/lecture/06.pdf).

As with the GDPR, it is proposed that the new provisions have extraterritorial scope. The proposal states (at paragraph 11) that the Regulation should apply to “providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union”. There is no one-stop-shop mechanism proposed, possibly reflecting the problems there have been with the attempt to introduce such a mechanism under the GDPR.

The proposals recognise that cybersecurity is vital to ensure that AI systems are resilient against attempts by malicious third parties to exploit vulnerabilities, compromise security and alter the systems’ use, behaviour or performance. Cyberattacks against AI systems can involve data poisoning of training data sets or adversarial attacks on trained models.
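By way of illustration only, the short Python sketch below shows the intuition behind a data poisoning attack: an attacker who can inject mislabelled records into a training set can silently corrupt the model learned from it. The data, the toy model and all numbers are invented for the example and do not come from the proposal.

```python
# Illustrative toy example of data poisoning (not from the Commission's proposal).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, well-separated training data: class 0 near (-2, -2), class 1 near (+2, +2).
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(+2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def fit(X, y):
    # A deliberately simple "model": one centroid per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = list(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.asarray(classes)[dists.argmin(axis=0)]

clean = fit(X, y)
print("clean accuracy:   ", (predict(clean, X) == y).mean())   # ~1.0

# Data poisoning: the attacker slips 100 records labelled class 0 into the
# training set, far out in feature space, dragging the class-0 centroid
# into class-1 territory and wrecking the learned decision boundary.
X_poison = np.vstack([X, rng.normal(10.0, 1.0, (100, 2))])
y_poison = np.concatenate([y, np.zeros(100, dtype=int)])

poisoned = fit(X_poison, y_poison)
print("poisoned accuracy:", (predict(poisoned, X) == y).mean())  # ~0.5
```

The point of the sketch is that the attack happens at training time and leaves no trace in the deployed system, which is why the proposal's emphasis on high quality datasets, traceability and logging is directly relevant to security as well as to fairness.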

The new legal framework is complemented by a co-ordinated plan for member states and new rules on machinery. A European Artificial Intelligence Board, comprising the national supervisory authorities, will be established, and there will be an EU database for stand-alone high-risk AI systems.

Andrew Moir, Partner, Global Head of Cyber and Data Security, London (+44 20 7466 2570)

Kate Macmillan, Consultant, London (+44 20 7466 3737)
