France currently has no dedicated artificial intelligence regulation. Instead, it relies on existing legislation (in domains such as data protection, healthcare, and criminal law), case law, and guidance from regulators such as the Commission nationale de l'informatique et des libertés (the CNIL). AI-specific regulation is emerging through sector-specific laws and the phased implementation of the EU AI Act. Notable developments include the establishment of the Generative AI Committee in 2023 and the Health Data Hub in 2019 to promote research and innovation in AI.
The French government launched its national AI strategy in 2018, which is part of the broader "France 2030" plan.
This strategy, titled "AI for Humanity," is based on the recommendations of the Cédric Villani report and aims to create an accessible and collaborative data ecosystem. It recommends prioritising efforts in four strategic sectors (health, environment, transport and mobility, and defence and security) while enhancing AI research and education and ensuring that ethical and ecological considerations are integrated into these efforts. Key elements include:
The strategy is divided into two phases, from 2018 to 2022 and 2023 to 2025, focusing on different aspects of AI development and deployment.
The Health Data Hub, established in December 2019 as recommended in the Cédric Villani report "AI to Serve Healthcare Policies", is a data access solution for healthcare services that aims to simplify research, support healthcare projects, foster innovation, and standardise the use of health data. This initiative seeks to promote AI applications in the medical field while ensuring privacy and compliance with regulations.
The Generative AI Committee, established in September 2023 under the leadership of former Prime Minister Élisabeth Borne, aims to bring together stakeholders from various sectors (cultural, economic, technological, and research) to guide the government's approach to generative AI. The Committee submitted a report in March 2024 with 25 recommendations, including a national awareness plan, creating a €10 billion "France & AI" fund, and promoting global AI governance.
The National Institute for the Evaluation and Security of Artificial Intelligence (INESIA), established in February 2025, is tasked with coordinating the national actors involved in the evaluation and security of AI systems, in particular the French National Agency for the Security of Information Systems (ANSSI), the French National Institute for Research in Digital Science and Technology (Inria), the National Laboratory of Metrology and Testing (LNE), and the Centre of Expertise for Digital Regulation (PEReN). Its work will focus on the analysis of systemic risks in the field of national security, support for the implementation of AI regulation, and the evaluation of the performance and reliability of AI models.
France is committed to enhancing its global AI presence through international collaboration. France hosted the AI Action Summit in February 2025 and has actively participated in global AI initiatives such as the G7 Hiroshima AI Process and the AI Safety Summit Bletchley Declaration. France and Germany have formed a strong alliance within the European Union, joining forces on numerous AI initiatives. This collaboration is grounded in their shared commitment to technological progress, solidified by the Aachen Treaty signed on 22 January 2019 by former German Chancellor Angela Merkel and French President Emmanuel Macron, which aims to deepen cooperation in research and digital transformation.
There is currently no AI-specific French regulation; instead, France implements the EU AI Act (read our guide on AI and the EU here), which entered into force on 1 August 2024 and establishes a comprehensive legal framework for AI across all EU member states.
However, in a report dated 29 November 2024, the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) proposed a series of measures aimed at adapting existing laws to better address AI-related challenges. These include revising intellectual property laws to account for AI-generated content and protect creators' rights, and assigning OPECST the responsibility of overseeing and evaluating AI-related public policies.
From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.
Since March 2022, the French Council of State ("Conseil d'État") has made its decisions publicly accessible, enhancing transparency and allowing the application of AI methods to process and analyse decisions.
In a decision of 30 December 2021 (n° 440376), the French Council of State confirmed the lawfulness of decree n° 2020-356 of 27 March 2020 on the creation of an automated processing of personal data called "DataJust", allowing the Minister of Justice to implement the processing of personal data for the purpose of developing an algorithm.
Order No. 2021-443 of 14 April 2021 addresses the criminal liability of autonomous car manufacturers in accidents involving AI systems, defining responsibilities for AI systems and human operators in autonomous mobility services.
Specific sectors such as healthcare have detailed regulations that impose requirements for the use of AI and may create additional liability considerations for those deploying AI systems.
Law n° 2021-1017 of 2 August 2021 (the "Bioethics Law") introduced an obligation for healthcare professionals using AI to inform patients about its use.
Article L. 4001-3 of the Public Health Code, introduced by the Bioethics Law, requires healthcare professionals using medical devices with machine learning to inform patients about their use and how the results are interpreted. They must explain the results clearly and remain in control of the decision to use these devices. Healthcare professionals must also be informed about the data processing, have access to patient data, and understand the algorithms. Designers must ensure algorithms are explainable to impacted subjects.
Sanction of Clearview AI: On 20 October 2022, the CNIL fined Clearview AI €20 million and ordered it to cease collecting and using data on individuals in France. Clearview AI, a US-based facial recognition service, was found in breach of the GDPR for unlawful data processing and for failing to respect individuals' rights. The fine followed a formal notice issued in November 2021 with which Clearview AI had failed to comply, and was accompanied by a daily penalty of €100,000 for continued non-compliance. On 13 April 2023, the CNIL imposed an additional €5.2 million fine for continued non-compliance.
Warning to the city of Valenciennes: In 2017, the city of Valenciennes deployed an unlawful video surveillance system using AI-powered image analysis software provided by Huawei. The system included 240 cameras and three AI-based software tools. In May 2021, the CNIL issued a warning to Valenciennes, finding the system to be disproportionate and lacking a legal framework, having been installed without prior consultation or impact studies. Additionally, the use of automated licence plate readers by the municipal police was unauthorised.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2025