27 Nations Agree to Develop Global AI Risk Standards 

Oliver Dholl

The second annual AI Safety Summit, held in Seoul on 21 and 22 May, concluded with 27 nations signing agreements on managing AI risks. These risks include the capacity of AI to evade human oversight and the use of AI to create chemical and biological weapons.

Following two days of discussions between civil society representatives, government officials, and industry leaders, attendees agreed on a set of advances in AI governance. Three major documents were signed: the Seoul Declaration, the Seoul Ministerial Statement, and the Frontier AI Safety Commitments.

Additionally, attendees agreed that supporting AI's role in driving innovation and inclusivity was of paramount importance.

Innovation and inclusivity 

Those in attendance at the Seoul summit discussed the positive impact that AI can have on innovation and inclusivity, especially for small businesses, start-up companies, and academic institutions. While attendees agreed that new safety guidelines must be implemented, they also noted that enough room must be left for businesses to leverage AI and foster innovation.

AI has already been adopted by a range of industries, including medical and healthcare, finance, data management, marketing, and agriculture. 

It has also transformed the world of entertainment, especially online casinos. Online gaming has benefited from AI-powered security measures, game development, and customer support. Daniel Smyth has selected the best AI-enhanced online casinos, rating them on parameters such as game variety, promotions and rewards, and user experience.

Likewise, in the education sector, AI-driven tools are revolutionizing personalized learning experiences, helping to tailor educational content to each student’s unique needs and pace of learning. 


The education and online casino industries – as well as every other sector that has so far adopted AI technology – stand to benefit from the new worldwide commitment to mitigating AI risks agreed upon in Seoul.

The Seoul Declaration 

This central agreement focuses on the promotion of international AI governance. To boost collaboration between nations, the declaration engages with various global initiatives, including the OECD AI Principles, the Hiroshima AI Process Friends Group, and the UN General Assembly's resolution on safe AI systems.

Most of these initiatives were adopted recently. For example, the UN's resolution on safe AI was only adopted in March of this year, after it was backed by 120 member states.

The Seoul Declaration also anticipates the UN Secretary-General's report on AI and the ongoing discussions related to the Global Digital Compact. As an amalgamation of various global initiatives, the declaration establishes common ground between the signatory nations.

Specifically, the declaration pledges support for AI safety institutes and research programs and calls for the establishment of policy and governance frameworks.

The signatories of this declaration were representatives from the EU, Australia, France, Canada, Japan, Italy, Germany, the Republic of Singapore, the Republic of Korea, the United States of America, and the United Kingdom.


The Seoul Ministerial Statement

Through this statement, nations agreed to define AI risk thresholds. This would involve identifying AI model capabilities that could pose risks without proper control, including the potential for AI to evade human oversight through deception, replication, adaptation, or manipulation.

This statement also highlighted the importance of transparency, risk management, and accountability. Lastly, it emphasized the importance of equity and sustainability in AI.

27 nations signed this statement. While China participated in the discussions, no representatives from the country signed the Seoul Ministerial Statement.

This suggests that China prefers to observe the formation of unified AI governance rather than endorse it. China has one of the world's biggest AI markets, which is expected to be worth roughly $150.5 billion by 2032.

Frontier AI Safety Commitments 

This agreement was signed by 16 global tech companies that develop AI technology, including Amazon, Google, IBM, Meta, Microsoft, OpenAI, and Samsung Electronics.


By signing the Frontier AI Safety Commitments, these tech companies agreed to pursue three outcomes. The first outcome is to assess the risks of each AI model before launch, during training, and throughout the rest of the model's lifecycle.


The second outcome is to be accountable for the safe development and deployment of AI models, and the third is to ensure that all approaches to AI safety are transparent. Specifically, safety commitments must be made transparent to external actors, such as governments.


Next Steps 

At the end of the event, the French head of state announced the dates for the AI Action Summit, which will take place on 10 and 11 February 2025. Ahead of this next meeting, those who signed agreements at the Seoul Summit are responsible for identifying risk thresholds and acting on other outcomes agreed upon in the documents. 


For example, the tech companies that signed the Frontier AI Safety Commitments must publish a safety framework addressing severe risks ahead of the meeting in France next year.
