PPL Building U.N. Ave., cor. San Marcelino St., Paco Manila, 1007 Philippines


Technologies That Create Solutions


June 10, 2024

AI Seoul Summit 2024 commits to global AI Safety Standards

The digital world today is abuzz with artificial intelligence, or AI. This “phenomenon” is sweeping across the digital landscape and transforming how people live and work. In fact, market leaders are leveraging this new wave of AI innovation to gain a competitive advantage.

On May 21-22, 2024, international bodies came together in Seoul to discuss the global advancement of artificial intelligence. The international event, dubbed the AI Seoul Summit, was co-hosted by the Republic of Korea and the U.K. and followed on from the AI Safety Summit held at Bletchley Park, Buckinghamshire, U.K., in November 2023.

Participants included representatives from the governments of 20 countries, the European Commission, and the United Nations, as well as notable academic institutions and civil society groups. A number of AI giants also attended, including OpenAI, Amazon, Microsoft, Meta, and Google DeepMind.

One of the summit's key aims was to make progress toward a global set of AI safety standards and regulations. To that end, several key steps were taken:

1. Tech giants committed to publishing safety frameworks for their frontier AI models.

2. Nations agreed to form an international network of AI Safety Institutes.

3. Nations agreed to collaborate on risk thresholds for frontier AI models that could assist in building biological and chemical weapons.

4. The U.K. government offered up to £8.5 million in grants for research into protecting society from AI risks.

U.K. Technology Secretary Michelle Donelan said in a closing statement, “The agreements we have reached in Seoul mark the beginning of Phase Two of our AI Safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future.”

New voluntary commitments to implement best practices related to frontier AI safety have been agreed to by 16 global AI companies. The so-called “Frontier AI Safety Commitments” promise that:

a) Organizations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems.

b) Organizations are accountable for safely developing and deploying their frontier AI models and systems.

c) Organizations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments.

World leaders of 10 nations and the E.U. have agreed to collaborate on research into AI safety by forming a network of AI Safety Institutes. They each signed the “Seoul Statement of Intent toward International Cooperation on AI Safety Science,” which states they will foster “international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies.” The nations that signed the statement are: Australia, Canada, European Union, France, Germany, Italy, Japan, Republic of Korea, Republic of Singapore, United Kingdom, and United States of America.

A number of nations agreed to collaborate on developing risk thresholds for frontier AI systems that could pose severe threats if misused. They will also agree on when model capabilities would pose “severe risks” without appropriate mitigations. Such high-risk systems include those that could help bad actors obtain biological or chemical weapons, and those able to evade human oversight through safeguard circumvention, manipulation, or autonomous replication.

Secretary Donelan also announced that the U.K. government will award up to £8.5 million in research grants toward the study of mitigating AI risks such as deepfakes and cyber attacks. Grantees will work in the realm of so-called “systemic AI safety,” which looks into understanding and intervening at the societal level in which AI systems operate, rather than at the systems themselves.

Reference: AI Seoul Summit: 4 Key Takeaways on AI Safety Standards and Regulations (techrepublic.com)

For further information on ASI’s products and solutions, you may call us or visit our social media accounts:


+632 8 524 7708

[email protected]

Tech O'clock

The Official Newsletter of Advance Solutions, Inc.

