The United States and China held their first intergovernmental dialogue on artificial intelligence in Geneva, Switzerland, on May 14. The two sides agreed to discuss areas of concern and to share their domestic approaches to AI risks and governance, with a particular emphasis on the safety issues associated with advanced systems. The dialogue was a crucial step toward better cooperation, but given the continued growth of both commercial and military AI applications, much more collaboration is needed, including from non-governmental stakeholders.
As the world's two largest economies and the leaders in the AI field, the United States and China hold the keys to shaping the technology's future. When U.S. President Joe Biden and Chinese President Xi Jinping agreed in November to launch these intergovernmental talks, they opened the door to building a mutual understanding of AI's risks and of potential governance frameworks. The hard part will be turning that initial agreement into meaningful, sustainable action.
Cooperation faces significant headwinds. Geopolitical rivalry and a lack of trust between the two countries may slow coordination and encourage flawed countermeasures that undermine international agreements for mitigating AI risks. In addition, diplomatic processes, which typically move slowly, may be unable to keep pace with the rapid development of AI technologies.
Promoting direct collaboration among experts from both nations is critical to bridging gaps in understanding and accelerating scientific, evidence-based policymaking. Outside formal Track 1 diplomatic channels, Track 1.5 and Track 2 dialogues involving non-governmental experts can help build trust, foster innovative solutions and create consensus between the two countries. Track 1.5 dialogues include both government officials and non-governmental experts, while Track 2 dialogues include only unofficial representatives. Such conversations already take place frequently between Chinese and American stakeholders as a way to promote stable relations. The advantage of Track 2 diplomacy is that it allows for more flexible discussion of cutting-edge and even sensitive issues. For example, participants from the AI industry can explore ways to address risks through global norms and technical measures.
However, most participants in Track 2 dialogues on AI in recent years have been foreign policy and military professionals, with limited involvement from technical experts and industry representatives. This imbalance can lead to a narrow focus on geopolitical and national security concerns that neglects the technical and broader aspects of AI safety. Scientists and industry experts are crucial to providing a nuanced understanding of how AI technology is developed and applied in real-world scenarios. By involving these stakeholders, the dialogue can shift from high-level political negotiation to technical problem-solving, increasing the likelihood of developing practical solutions to shared challenges.
The inclusion of experts from academia and industry is vital for several reasons:
First, scientists and industry leaders are at the forefront of AI research, development and deployment. They possess the technical expertise needed to identify, predict and address complex AI safety problems, and they can examine specific AI deployments and work through the technical issues they raise.
Second, collaboration between scientists can help depoliticize the dialogue, keeping the focus on the shared goal of advancing AI safety knowledge and technology, including safety benchmarks and testing protocols. Academic experts tend to concentrate on technical questions and on how they can contribute, rather than discussing existential risks in alarmist terms.
Third, given the significant roles that AI companies play in both China and the United States, industry experts can provide vital input on research and development, technical measures and governance frameworks.

Overall, expanding the dialogue to include a broader range of stakeholders can bring more diverse perspectives on the risks and opportunities associated with AI, creating new openings for innovation and experimentation while helping to ensure that AI applications are safe and beneficial. Participants could, for example, explore joint research projects on specific AI safety problems, such as algorithmic bias and deepfakes, which could lead to breakthroughs that benefit both nations and the global community.
While the intergovernmental dialogue between China and the United States on AI safety is a promising start, the launch of the Track 1 dialogue does not make Track 1.5 and Track 2 dialogues any less important. On the contrary, these dialogues can play an even more effective role, pushing the Track 1 dialogue in a more comprehensive, in-depth direction, including by fostering cooperation between scientists and the private sector. By engaging a broader range of stakeholders and deepening international cooperation, the two countries can lead the way in developing a robust, safe and ethical framework for AI that benefits the global community.