
What the global governance of artificial intelligence should achieve

For artificial intelligence to fulfill its global potential, new structures and guardrails are needed to help all of humanity thrive. Five basic principles should guide policy making

Although artificial intelligence (AI) has been quietly helping us for decades, and progress has accelerated in recent years, 2023 will be remembered as the technology's "big bang" moment. With the advent of generative AI, the technology has entered public consciousness and is shaping public discourse, influencing investment and economic activity, fueling geopolitical competition, and changing every human activity, from education to health care to the arts. Every week brings new developments. Artificial intelligence is not going away, and the pace of change is only accelerating.

Policymaking is moving almost as quickly, with new regulatory initiatives and forums emerging to keep up. But while the ongoing efforts of the G7, the European Union, and the United States are encouraging, none of them is universal in scope, and none serves the global common good. More fundamentally, because the development of AI is being driven by a small group of CEOs and market players in only a few countries, the voices of the majority, especially in the Global South, are missing from the discussions that will define its governance.

The unique challenges posed by AI call for a coordinated global approach to governance, and only one institution has the inclusive legitimacy needed to orchestrate such a response: the United Nations. If we want to harness AI's potential and reduce its risks, we must govern it properly. To that end, the UN High-Level Advisory Body on Artificial Intelligence was established to offer analysis and recommendations for closing the gap in global governance. The group comprises 38 people from around the world, representing a wide range of geographies, genders, disciplines, and ages, and drawing on expertise from government, civil society, the private sector, and academia.

It is an honor for us to be part of the Executive Committee of that Advisory Body. We've just published the group's interim report, which proposes five principles to strengthen AI governance and address a range of interrelated challenges.

First, because risks differ across global contexts, each will require tailored solutions. This means recognizing that rights and freedoms can be threatened by particular choices in how AI is designed, used (and abused), and governed. Failure to apply AI where it could help, what we call a "lost opportunity," can needlessly exacerbate existing problems and inequalities.

Second, because artificial intelligence is a tool for economic, scientific, and social development that already helps people in their daily lives, it must be governed in the interest of society. This means taking into account equity, sustainability, and community and individual well-being, as well as broader structural issues such as competitive markets and healthy innovation ecosystems.

Third, if the challenges of global AI governance are to be met effectively, the regulatory frameworks emerging in different regions will need to be harmonized.

Fourth, the governance of artificial intelligence must be accompanied by measures to safeguard the privacy and security of personal data.

Finally, governance must be grounded in the United Nations Charter, in international human rights law, and in other international commitments that command broad global consensus, including the Sustainable Development Goals.

Establishing these principles in the context of AI requires overcoming some difficult challenges. Artificial intelligence is built on massive amounts of computing power, data, and, of course, specialized human talent. Global governance must consider how to develop these three resources and ensure broad access to them. Work is also needed to build up the basic infrastructure that underpins the AI ecosystem, such as reliable broadband access and electricity, especially in countries of the Global South.

Greater efforts are also needed to address both known and as-yet-unknown risks that may arise from the development, deployment, or use of AI. The risks associated with AI are a hotly debated topic. While some focus on scenarios in which humanity itself could be wiped out, others are more concerned with the harm being done to people here and now; few, however, would dispute that the risks of ungoverned artificial intelligence are unacceptable.

Good governance is based on strong evidence. We foresee the need for objective assessments of the state of AI and its trajectory, to give citizens and governments a sound basis for policy and regulation. An analytical observatory tracking the societal impact of AI, from job displacement to national security threats, would likewise help policymakers keep up with the sweeping changes the technology is causing in the real world. The international community will also need to develop capacities for self-policing, including monitoring and responding to potentially destabilizing incidents (as major central banks do in financial crises), promoting accountability, and even taking enforcement action.

These are just some of the recommendations we put forward. They should be seen as a floor, not a ceiling. Above all, they are an invitation for as many people as possible to tell us what kind of AI governance they would like to see.

For AI to realize its global potential, new structures and guardrails are needed to help us all thrive as the technology evolves. Everyone has a stake in the safe, fair, and responsible development of artificial intelligence. The risks of doing nothing are obvious. We believe that the global governance of artificial intelligence is essential to seizing the significant opportunities and managing the risks that this technology presents to every country, community, and individual, today and for generations to come.

The authors are co-chairs and rapporteurs of the UN High-Level Advisory Body on Artificial Intelligence.

Copyright: Project Syndicate, 2023. (translation: NR)
