Mark Coeckelbergh
AI Ethics Global Lead

Towards Global AI Governance?

The United Nations calls for rules for AI and for ensuring that AI serves the common good.

International organizations are increasingly active in the area of AI policy. For example, in November 2021 UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence, which puts forward principles such as transparency, fairness, and oversight, and also offers guidance on specific areas such as education, the environment, and health. The Council of Europe, for its part, has stressed that AI should be used to promote human rights, democracy, and the rule of law.

After its Ad Hoc Committee on Artificial Intelligence proposed elements of a legal framework on AI, the Council of Europe's Committee on Artificial Intelligence has continued this work over the past year and a half, producing many guidelines and recommendations.

This summer the UN has been particularly busy with regard to the risks of AI. In July the UN Security Council held its first-ever meeting on the risks of AI for global peace and security. On this occasion Secretary-General António Guterres warned of problems such as disinformation and hate speech, the undermining of elections, and cyberattacks.

Pointing to the UN’s history of responding to new technologies with new treaties and global agencies (consider, for example, treaties on nuclear weapons), he argued for a global approach and proposed creating a new United Nations entity to govern AI, inspired by existing bodies such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change. Later this year he plans to convene a meeting that will explore options for global governance.

This proposal for global governance of AI is – like all global governance – somewhat controversial, since the usual approach to AI regulation still takes the context of the nation-state for granted. Yet although questions remain about which concrete path(s) to take, there is a growing consensus in the AI ethics and AI policy communities that a purely national approach falls short. Global problems require global solutions, and given the current problems with applications built on large language models, there is already broad recognition of the urgency to regulate – even among Big Tech CEOs.

The current global discussion on the risks of AI, and the initiatives by international organizations mentioned above, provide hope that a supranational approach may gain more traction. Moreover, the EU shows that supranational regulation is possible: there, the use of artificial intelligence will be regulated by the AI Act.

In June, the European Parliament adopted its negotiating position on the AI Act, one of the steps in the bloc’s negotiation process that will likely lead to its adoption. In contrast to the alarmist voices in the Big Tech world, the EU thus proposes a concrete, supranational, and hopefully workable answer to the ethical problems raised by AI.

For governments, it is important to start aligning their policies and regulatory initiatives with these wider developments now. For companies and organizations that develop or use AI (whether in a particular sector or across sectors), it will be vital to be well prepared for these regulatory developments at the European and global levels, to develop their own vision of the ethical and responsible development and use of AI, to establish best practices in this area, and to find competent and experienced partners to support them with these challenges.

