France currently has no dedicated artificial intelligence regulation. Instead, it relies on existing legislation (in domains such as data protection, healthcare, and criminal law), case law, and guidance from regulators such as the Commission nationale de l'informatique et des libertés (the CNIL). AI-specific regulation is emerging through sector-specific laws and the phased implementation of the EU AI Act. Notable developments include the establishment of the Generative AI Committee in 2023 and the Health Data Hub in 2019 to promote research and innovation in AI.

The French government launched its national AI strategy in 2018, which is part of the broader "France 2030" plan.

This strategy, titled "AI for Humanity," builds on the recommendations of the Cédric Villani report and aims to create an accessible and collaborative data ecosystem. It recommends prioritising efforts in four strategic sectors - health, environment, transport and mobility, and defence and security - while enhancing AI research and education and ensuring that ethical and ecological considerations are integrated into these efforts. Key elements include:

  • Research and development through establishing interdisciplinary AI institutes, funding research, and investing in infrastructure.
  • Innovation and application by supporting AI technology development and fostering public-private partnerships.
  • Regulation and ethics by ensuring ethical AI deployment and creating guidelines for transparency and fairness.
  • Funding by allocating significant financial resources, including €2.5 billion from the France 2030 plan, to support AI initiatives.

The strategy is divided into two phases, running from 2018 to 2022 and from 2022 to 2025, each focusing on different aspects of AI development and deployment.

  • Phase 1 (2018-2022) focused on establishing competitive research capabilities, including the creation of AI institutes and initial funding allocations.
  • Phase 2 (2022-2025) aims to accelerate AI deployment in key sectors. This phase is structured around three strategic pillars: support for the deep tech supply side, training and attracting talent, and bridging the gap between the supply of and demand for AI solutions.

The Health Data Hub, established in December 2019 as recommended in the Cédric Villani report "AI to Serve Healthcare Policies", is a data access solution for healthcare services that aims to simplify research, support healthcare projects, foster innovation, and standardise the use of health data. This initiative seeks to promote AI applications in the medical field while ensuring privacy and compliance with regulations.

The Generative AI Committee, established in September 2023 under the leadership of former Prime Minister Élisabeth Borne, aims to bring together stakeholders from various sectors (cultural, economic, technological, and research) to guide the government's approach to generative AI. The Committee submitted a report in March 2024 with 25 recommendations, including a national awareness plan, creating a €10 billion "France & AI" fund, and promoting global AI governance.

The National Institute for the Evaluation and Security of Artificial Intelligence (INESIA), established in February 2025, will coordinate the national actors involved in the evaluation and security of AI systems, in particular the French National Agency for the Security of Information Systems (ANSSI), the French National Institute for Research in Digital Science and Technology (Inria), the National Laboratory of Metrology and Testing (LNE), and the Centre of Expertise for Digital Regulation (PEReN). Its work will focus on analysing systemic risks in the field of national security, supporting the implementation of AI regulation, and evaluating the performance and reliability of AI models.

France is committed to enhancing its global AI presence through international collaboration. It hosted the AI Action Summit in February 2025 and has actively participated in global AI initiatives such as the G7 Hiroshima AI Process and the Bletchley Declaration adopted at the AI Safety Summit.

France and Germany have formed a strong alliance within the European Union, joining forces on numerous AI initiatives. This collaboration is grounded in their shared commitment to technological progress, solidified by the Aachen Treaty signed on January 22, 2019 by former German Chancellor Angela Merkel and French President Emmanuel Macron, which aims to deepen cooperation in research and digital transformation.

France has not adopted AI-specific national legislation; instead, it implements the EU AI Act (read our guide on AI and the EU here), which entered into force on 1 August 2024 and establishes a comprehensive legal framework for AI across all EU member states.

However, in a report dated 29 November 2024, the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) proposed a series of measures aimed at adapting existing laws to better address AI-related challenges. These include revising intellectual property laws to account for AI-generated content and protect creators' rights, and assigning OPECST the responsibility of overseeing and evaluating AI-related public policies.

From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.

The French Data Protection Authority (Commission nationale de l'informatique et des libertés, or CNIL) is expected to play a central role in AI regulation and oversight in France, ensuring that AI technologies comply with data protection regulations:

  • On March 29, 2022, the CNIL published a set of resources to address the challenges of AI in relation to privacy and GDPR compliance. These resources include a guide to help companies using AI systems comply with the GDPR and the French Data Protection Act of January 6, 1978 (Loi Informatique et Libertés). In particular, the guide provides an analysis tool enabling organisations to self-assess the maturity of their AI systems with respect to the GDPR and best practices in relation to the AI Act.
  • In April 2023, the CNIL opened an investigation into ChatGPT after several complaints were filed concerning potential violations of the GDPR, including one from a French Parliament member. The investigation is still ongoing and the CNIL is collaborating with other European data protection authorities to evaluate OpenAI’s practices.
  • In May 2023, the CNIL released its AI Action Plan for the deployment of AI systems that prioritise individuals' privacy, along with "how-to sheets" providing additional recommendations for the development of AI systems.
  • In early January 2025, the CNIL published its strategic plan for 2025-2028, focusing on four key areas: artificial intelligence, protection of minors, cybersecurity, and everyday digital use.
  • On 7 February 2025, the CNIL published two new recommendations concerning AI and data protection. They state that, where personal data is used to train an AI model and may be memorised by that model, the individuals concerned must be informed, and affirm that European regulations grant individuals the rights to access, rectify, object to and delete their data. The CNIL also urges developers to anonymise models and to develop solutions that prevent the disclosure of personal data by models.

On 11 February 2025, the CNIL, along with four other data protection authorities, signed a joint statement at the Paris AI Action Summit on building trustworthy data governance frameworks to encourage the development of privacy-protective AI. The signatories, including the CNIL, highlight the role of data protection authorities in shaping data governance and commit to fostering a shared understanding of lawful data processing and safety, reducing legal uncertainties, and strengthening collaboration with competition, consumer protection and intellectual property authorities.

The CNIL will continue to publish practical guides addressing privacy, cybersecurity, and ethical risks, particularly those associated with generative AI.

The Law for a Digital Republic of October 7, 2016 (Loi pour une République Numérique) further introduced the principle of transparency for public sector systems, specifically requiring administrations to disclose the algorithmic processing used in making decisions concerning individuals (Article 4).

A legislative proposal to amend the French Intellectual Property Code (IPC) to account for AI was presented in September 2023. Measures include creating an "AI-generated work" label with the obligation to include the names of contributors, assigning ownership rights of AI-created works without direct human intervention to the authors or rights holders of the contributing works, and ensuring fair remuneration for authors and artists whose works are used by AI systems.

  • Law n° 2023-380 of 19 May 2023 (known as the "JOP2024" law) authorises the experimental use of AI-augmented cameras for security purposes at major sports, recreational, and cultural events until 31 March 2025. This law allows for the deployment of advanced surveillance technologies, including AI-driven video analysis, to enhance public safety by detecting potential threats in real time.
  • Law n° 2021-998 of 30 July 2021 on the prevention of acts of terrorism and on intelligence allows intelligence services to use new technologies for monitoring and intercepting communications to identify potential terrorists. This includes a general obligation for electronic communications operators to retain connection data.

AI and the Public Sector

Since March 2022, the French Council of State ("Conseil d'État") has made its decisions publicly accessible, enhancing transparency and allowing the application of AI methods to process and analyse decisions.

In a decision of 30 December 2021 (n° 440376), the French Council of State confirmed the lawfulness of decree n° 2020-356 of 27 March 2020 creating the automated processing of personal data known as "DataJust", which allows the Minister of Justice to process personal data for the purpose of developing an algorithm.

Autonomous vehicles

Order n° 2021-443 of 14 April 2021 addresses the criminal liability of autonomous car manufacturers in accidents involving AI systems, defining responsibilities for AI systems and human operators in autonomous mobility services.

Healthcare

Specific sectors such as healthcare are subject to detailed regulations that set out requirements for the use of AI and may impose additional liability considerations on those deploying AI systems.

Law n° 2021-1017 of 2 August 2021 (the "Bioethics Law") introduced an obligation for healthcare professionals using AI to inform patients about its use.

Article L. 4001-3 of the Public Health Code, introduced by the Bioethics Law, requires healthcare professionals using medical devices with machine learning to inform patients about their use and how the results are interpreted. They must explain the results clearly and remain in control of the decision to use these devices. Healthcare professionals must also be informed about the data processing, have access to patient data, and understand the algorithms. Designers must ensure algorithms are explainable to impacted subjects.

Sanction of Clearview AI: On October 20, 2022, the CNIL fined Clearview AI €20 million and ordered it to cease collecting and using data on individuals in France, with a daily penalty of €100,000 for non-compliance. Clearview AI, a US-based facial recognition service, was found in breach of the GDPR for unlawful data processing and failure to respect individuals' rights, and had already ignored a formal notice issued in November 2021. On April 13, 2023, the CNIL imposed an additional €5.2 million fine for continued non-compliance.

Warning to the city of Valenciennes: In 2017, the city of Valenciennes deployed a video surveillance system using AI-powered image analysis software provided by Huawei, comprising 240 cameras and three AI-based software tools. In May 2021, the CNIL issued a warning to Valenciennes, finding the system disproportionate and lacking a legal framework, having been installed without prior consultation or impact studies. Additionally, the use of automated licence plate readers by the municipal police was unauthorised.


Key contacts

  • Alexandra Neri, Partner, Paris
  • Sébastien Proust, Of Counsel, Paris
  • Suzanne Carayol, Associate, Paris
  • Clémence Dubois Ahlqvist, Associate, Paris
  • Vincent Denoyelle, Partner, Paris
  • Camille Larreur, Associate, Paris
  • Emmanuel Ronco, Partner, Paris
