United Nations Urges Immediate Worldwide Measures on AI, Similar to Climate Change Action

### United Nations Suggests Global Oversight of Artificial Intelligence: A Move Towards International AI Governance

The United Nations (UN) has taken a notable step toward addressing the global challenges posed by artificial intelligence (AI) with a new report calling for the establishment of a global governance framework for AI. Released by the UN Secretary-General’s High-Level Advisory Body on AI, the report lays out a comprehensive approach to overseeing and regulating AI technologies, stressing the need for international collaboration to reduce risks while maximizing the benefits of AI.

#### A Worldwide AI Governance Framework

The report advocates for the creation of a global entity akin to the **Intergovernmental Panel on Climate Change (IPCC)**, tasked with collecting current information on AI advancements and evaluating their risks. This organization would foster international discussion among the UN’s 193 member countries, permitting them to deliberate and agree upon measures to regulate AI. The objective is to guarantee that AI technologies are designed and utilized in manners that are safe, secure, and fair.

A prominent suggestion is the establishment of an **AI fund** aimed at assisting initiatives in lower-income nations, particularly within the Global South. This initiative would help to close the digital gap and ensure that every country, independent of its economic standing, can reap the benefits of AI innovations. Additionally, the report proposes the creation of **AI standards** and **data-sharing frameworks**, alongside offering training and resources for nations to build their own AI governance systems.

#### Tackling the Risks of AI

The swift advancement of AI technologies, especially **large language models** like ChatGPT, has sparked both enthusiasm and apprehension. Although AI has the potential to transform industries and enhance productivity, it also carries significant risks. These include the potential for AI to:

- **Automate disinformation**: AI might be used to disseminate false information on an unprecedented scale, eroding trust in institutions and media.
- **Generate deepfake videos and audio**: AI-generated content could be exploited for harmful ends, such as impersonating individuals or fabricating news.
- **Displace workers widely**: AI-driven automation could result in substantial job losses, particularly in sectors reliant on repetitive tasks.
- **Intensify algorithmic bias**: AI systems could perpetuate and even magnify existing societal prejudices, resulting in unjust treatment of specific groups.

The report underscores the immediate need to confront these dangers, stating that the pace of AI development may soon render it challenging to manage. This concern has prompted experts and policymakers to advocate for more stringent regulation and oversight.

#### Diverging Views from Major Powers

The UN’s proposals emerge at a moment when major global powers, especially the **United States** and **China**, compete for dominance in AI. Both nations have put forward resolutions at the UN that encapsulate their distinct visions for AI governance. The U.S. resolution, presented in March, advocates for the creation of “safe, secure, and trustworthy AI,” while China’s resolution, unveiled in July, prioritizes international cooperation and the widespread accessibility of AI technologies.

Despite these resolutions, notable differences remain between the two nations’ approaches to AI governance. The U.S. generally favors a less interventionist stance, allowing firms to innovate with minimal governmental oversight, whereas China supports a more centralized regulatory approach. These differences are likely to hinder efforts to craft a cohesive global AI governance framework.

#### The UN’s Position in AI Governance

While the UN is well positioned to facilitate international collaboration on AI, some experts argue that it cannot oversee global AI governance on its own. **Joshua Meltzer**, a specialist at the Brookings Institution, contends that AI governance will require a “distributed architecture,” with individual countries also playing crucial roles in regulating AI technologies. Given the rapid pace of AI evolution, relying solely on the UN may not be enough to keep up with the changing landscape.

Nevertheless, the UN report aims to establish a common foundation by rooting its recommendations in **human rights**. This method, according to **Chris Russell**, a professor at Oxford University, offers a robust grounding in international law and focuses on addressing tangible harms brought about by AI. By framing AI governance through the lens of human rights, the UN aspires to formulate a framework that is both inclusive and enforceable.

#### Global Collaboration on AI Safety

In spite of the geopolitical rivalry between major nations, there is an increasing agreement among scientists and scholars regarding the necessity for international collaboration in AI safety. Earlier this year, a cohort of eminent academics from both the West and China called for enhanced cooperation on AI safety following a conference in Vienna, Austria.

This collaborative spirit is echoed by **Alondra Nelson**, a member of the UN advisory body and a professor at the Institute for Advanced Study. Nelson believes that governmental leaders can unify efforts to tackle the challenges presented by AI, but she warns that much will hinge on how the UN and its member nations choose to enact the report’s recommendations. “The devil will be in the details of implementation,” she states.