
The Brazilian government announced this month that the Federal Attorney General's Office (Advocacia-Geral da União, the "AGU") will use OpenAI models, accessed via Microsoft Azure, to reduce the debt burden arising from the large volume of litigation claims against it. The AGU hopes that AI will speed up its analysis of, and response to, thousands of cases, improving the government's efficiency and helping the AGU's team cope with the heavy workload.

Background and goals

The government aims to better manage the significant volume of cases against it and the resulting debt orders (precatórios), in particular in relation to small-value claims (Requisições de Pequeno Valor). These orders consume a substantial portion of the federal budget each year: the Brazilian Planning and Budget Ministry (Ministério do Planejamento e Orçamento) estimates that in 2025 the government will spend at least 100 billion reais on judgment debts of this type, equivalent to around 1% of Brazil's gross domestic product (GDP).

By using AI, the Brazilian government hopes to be better equipped to respond to the suits that result in these debt orders. It expects to place particular emphasis on the use of AI in its defence of small claims, which have a significant budgetary impact in aggregate but are challenging to manage individually.

The AGU identifies three main areas for applying AI:

  • triaging lawsuits, identifying the characteristics of each case and suggesting arguments that could be made before the courts;
  • producing statistics and caseload analysis to enable strategic decision-making by the AGU and the identification of potential settlement options; and
  • summarising documents and producing submissions.

According to the AGU, it will use AI to supplement and improve the efficiency and accuracy of its human workforce, not to downsize or replace its employees. The AGU also states that its human workers will remain responsible for, and will supervise, all AI outputs.

Existing use of AI in the Brazilian legal sector

AI is already widely used in the Brazilian legal sector, and is a common tool for lawyers monitoring cases and researching case law.

A survey carried out in 2021 also found that 47 federal and state courts in Brazil had been using some type of AI since 2019. According to the survey, the Brazilian courts' initiatives mainly focus on structuring data and automating workflows to increase the efficiency of judicial services.

For instance, the National Council of Justice (Conselho Nacional de Justiça) has created a national platform for the storage and distribution of AI tools developed or recommended by the Brazilian judiciary. The Brazilian Supreme Court also operates two AI tools: Victor, which analyses and classifies cases, and Rafa, developed to support the United Nations 2030 Agenda by classifying cases according to the UN Sustainable Development Goals.

Current regulation related to AI in Brazil

In 2019, Brazil signed up to the Artificial Intelligence Principles of the Organisation for Economic Co-operation and Development (OECD). Based on these principles, Brazil published its Artificial Intelligence Strategy (Estratégia Brasileira de Inteligência Artificial) in 2021.

In parallel, Brazil enacted its General Data Protection Law (Lei Geral de Proteção de Dados, Law No. 13.709/2018) in 2018, which establishes a general legal framework for data protection and privacy in the context of technological developments. The National Council of Justice also issued Resolution No. 332/2020, laying down a set of principles to be applied by the Brazilian courts when using AI.

For instance, Resolution No. 332/2020 provides that the use of AI by the Brazilian courts must respect the principle of equal treatment between the parties, which requires an analysis of whether AI outputs may produce discriminatory bias. The resolution also highlights that the use of AI by the Brazilian courts must be compatible with constitutional rights, such as the right of access to fair and equitable justice, and that Brazilian society must be informed, in clear and precise language, about the courts' use of AI.

Although Resolution No. 332/2020 was considered a pioneering effort to address the use of AI in this context, it remains general and vague in its application and enforcement mechanisms. For example, one point of contention is the extent to which human supervision of AI and its outputs is required in the context of judicial decision-making.

Against this background, further regulation of AI is currently being discussed in the Brazilian Congress (Projeto de Lei No. 2.338/2023). The bill would provide guidelines guaranteeing the rights of individuals who may be affected by the use of AI (such as the right to be informed about its use and the right to challenge decisions and request human review), introduce a risk-based approach to the categorisation of AI systems, and set out rules on the civil liability of technology providers, developers and operators for AI outputs.

Conclusion

The Brazilian government's decision to use AI in the AGU, specifically OpenAI models accessed via Microsoft Azure, is a notable step towards improving efficiency and reducing the burden of litigation claims. With the increasing volume of cases and resulting debt orders, the government aims to manage these challenges better by leveraging AI. By triaging lawsuits, producing statistics and analysis, and summarising documents, the AGU expects AI to supplement and enhance the work of its human workforce, without downsizing or replacing employees. This move aligns with Brazil's broader adoption of AI in the legal sector, where the technology has already been in use for several years.

However, while the use of AI in the Brazilian legal sector has been growing, further regulation is needed to address potential challenges and to ensure the protection of individuals' rights and due process. It is critical to consider carefully the practical, regulatory and ethical implications of this technology. While AI has the potential to enhance efficiency and accuracy, over-reliance on AI, and on generative AI in its current form in particular, may pose legal risks.

The AGU will need to ensure that the AI systems it uses comply with existing regulations and that the AGU (and its human workforce) retains oversight of, and responsibility for, the steps taken, arguments made and facts advanced by the government in the legal proceedings it faces. While AI can already be a hugely helpful tool, the AGU need look no further than the well-publicised examples of made-up legal citations and 'hallucinated' facts to know that the technology is not a silver bullet.

Key contacts

Charlie Morgan
Partner, London

Julia Thedy
Associate, Paris