With the coming into force of the EU AI Act in August 2024, AI governance is an increasing concern for governments and private organisations alike. The UN's report "Governing AI for Humanity" examines some of the key concerns and challenges surrounding the creation of AI governance frameworks.

Many of the challenges highlighted in the report mirror those that organisations and governments are facing as they look to develop and deploy AI systems and govern their use: from the lack of consistent terminology around AI, to reconciling different jurisdictional approaches to regulating AI, to the challenge of accessing high-quality training data that is representative of the communities these AI systems serve.

We've set out some of the report's key findings and recommendations below.

  1. A fragmented AI governance landscape

Noting the transborder effects of AI, the report begins by highlighting the need for consistent global frameworks, remarking that the current AI governance landscape is a patchwork of efforts by various governments, companies and organisations. This multiplicity of initiatives, often adopting different approaches, not only creates gaps in AI governance but also makes compliance with these diverging systems impracticable.

The report suggests the creation of an interdisciplinary, international scientific panel of AI experts which would partner with international organisations and issue an annual report on AI opportunities and risks, produce quarterly reports on how AI can be used to advance the Sustainable Development Goals, and issue ad hoc reports on emerging risks and gaps in the governance landscape.

The report also proposes the creation of an AI office, reporting to the Secretary-General, to oversee the implementation of the recommendations in the report.

  2. Bridging the AI opportunity divide

The report further notes that most of these efforts are concentrated in a handful of countries, notably the G7, and that an equitable approach would require greater participation from the Global South and from historically underrepresented groups. The report proposes remedying this through the launch of a biannual intergovernmental and multi-stakeholder policy dialogue on AI governance to share best practices, promote a common understanding of how AI governance is being implemented by public and private sector actors, and share insights on AI incidents.

The report also points out that access to computational power, data and expertise is concentrated in the hands of a few states and actors. Notably, the top 100 computing clusters capable of training large AI models are all located in developed countries. In response, the report proposes establishing UN-affiliated capacity development centres to make expertise, computing power and data more widely available to underserved regions, together with a global fund for AI to finance this initiative.

  3. Addressing the data divide

The report also highlights that many parts of the world are "data poor", meaning insufficient data is collected from these countries to reflect their diversity accurately in AI systems. As AI systems are only as accurate as the datasets they are trained on, the lack of data in many parts of the world is a serious obstacle to developing AI that reflects those communities.

The collection and use of data to create training datasets is subject to various data protection and intellectual property frameworks. The report suggests creating a global AI training data framework to resolve issues of interoperability, availability and use of AI training data.

The report also suggests the use of data stewardship and exchange mechanisms, such as data trusts and marketplaces for the exchange of anonymised data to train AI models, and the creation of model agreements to facilitate international data access.

  4. Inconsistent AI standards

The report points to the proliferation of AI standards, many of which define key terms such as fairness, safety and transparency differently. Even the term "AI" itself is not fully standardised and remains steeped in debate over which technologies qualify as AI. The report suggests developing a bank of standard definitions and identifying gaps where new standards are required.

See our page on Artificial Intelligence for the latest news and developments on AI across the globe.

Key contacts

Alexander Amato-Cravero
Director, Emerging Technology (Advisory), London

Linda Agaby
Associate (Canada), London