Code Breakers: International AI Governance

(Left to right) Miranda Bogen, Center for Democracy & Technology; Dr. Ben Buchanan, The White House; Koji Ouchi, Embassy of Japan; Guillaume Cléaud, Embassy of France participate in a panel at Meridian House on December 6, 2023. Photo by Stephen Bobb.

2023 has been a monumental year for AI policy globally. In the United States, President Biden signed a groundbreaking Executive Order aimed at the safe, secure, and trustworthy development and use of artificial intelligence. In the United Kingdom, international stakeholders came together to discuss risks at the “frontier” of AI and how best to mitigate them, with twenty-eight countries and the European Union signing the Bletchley Declaration. At the same time, the Hiroshima AI Process launched by Japan under the Group of Seven (G7) released its International Guiding Principles for Organizations Developing Advanced AI Systems as well as a voluntary International Code of Conduct for Organizations Developing Advanced AI Systems. And the European Union reached a landmark deal on the world’s first comprehensive regulation for artificial intelligence, the AI Act.

Against this backdrop, Meridian and Microsoft hosted a robust discussion on the complexities of global AI governance structures as the world seeks consensus to manage AI’s risks in order to seize its benefits.

Following introductory remarks by Meridian Executive Vice President Natalie Jones, speakers included:

  • Miranda Bogen, Director, AI Governance Lab, Center for Democracy & Technology
  • Ben Buchanan, Special Advisor on Artificial Intelligence, White House
  • Guillaume Cléaud, Deputy Head, Department of Treasury and Economic Affairs, Embassy of France
  • Natasha Crampton, Vice President, Chief Responsible AI Officer, Microsoft
  • Koji Ouchi, Counsellor, Embassy of Japan

Here are some top takeaways from the program:

1. Public Interest and Policymaking

The trajectory of AI governance is not dictated by momentary enthusiasm but by sustained efforts over years of consultation and development. Public attention to AI has surged over the past year as new machine-learning-based technologies have entered everyday use. Interactive tools such as ChatGPT, a generative AI language model, have extraordinarily broad applications and have sparked the public imagination about the opportunities these emerging technologies bring. Although the emergence of generative AI has not altered the fundamental trajectory of policymaking, it has unmistakably influenced and expedited its pace. That acceleration underscores the urgency of shoring up responsible AI governance practices. As policymakers balance excitement around these innovations with ethical considerations, the emphasis remains on preserving public trust so that society at large can realize the true benefits of AI.

2. Future-Proof AI Governance

In what is arguably one of the busiest and most consequential periods in AI governance, the imperative is clear: we must collaboratively shape a future-proof policy framework that aligns with democratic values and the rule of law. Rooted in industry standards, foundational principles, and national regulation, this nuanced approach fosters adaptability, ensuring that AI governance remains nimble amid the dynamic interplay between technology and societal values. As every nation endeavors to harness AI to realize its own vision of the future, the call to action is a collective commitment to safety, individual privacy, freedom from discrimination, and transparency, striking a balance between fostering innovation and upholding ethical considerations.

3. Convergence or Divergence of Interoperable Standards

Competition is a powerful remedy to the potential pitfalls of concentrated power in the tech industry. For smaller AI players to succeed in the marketplace, the core building blocks of the technology need to be broadly accessible. Open-source advanced AI models allow other developers to enhance and expand those models swiftly and cost-effectively. Still, opinions diverge: some view open-source AI as an existential risk that demands broader mitigation, speculating that substantial progress in AI could result in human extinction or an irreversible global catastrophe. Harmonizing governance structures that prioritize innovation, competition, and responsible oversight will address the challenges of fragmented regulatory environments that would otherwise hinder product access and impede technological advancement.

4. Industry as a Catalyst for Ethical AI Practices

Industry is not just a stakeholder but a catalyst for responsible AI practices. Panelists acknowledged that private sector partnerships have become a transformative force, actively shaping public policy through discussions on voluntary commitments, regulatory sandboxing, and industry-led initiatives. These initiatives are increasingly regarded as pivotal milestones, steering the course of binding regulations before they come into effect. This collaborative approach provides a more nimble and swift mechanism for industry players to adapt to and address the ethical nuances of rapidly advancing AI technologies. The depth of these collaborative efforts is evident in robust exchanges occurring within senior-level roundtables, engaging AI CEOs, critics, policy experts, and civil society leaders. This multi-stakeholder dialogue fosters a comprehensive understanding of the ethical dimensions of AI, allowing for agile responses and a collective commitment to responsible innovation.

5. Shaping AI Governance through Inclusive Collaboration

Robust AI governance is enriched by a whole-of-society, whole-of-government approach that reaches beyond national borders and includes the active involvement of civil society organizations (CSOs), researchers, and stakeholders at the subnational level. This expanded perspective acknowledges the profound impact of AI across diverse societal domains and emphasizes the collective responsibility shared by governments, communities, and the growing array of AI deployers in steering the trajectory of AI development. The evolution of AI requires a framework where technical expertise converges with societal considerations, ensuring a more comprehensive and globally resonant approach to AI governance.

Project summary

Code Breakers: International AI Governance | December 2023
Countries: Japan, France, United States
Impact Areas: Artificial Intelligence and Cybersecurity, Foreign Policy
Program Areas: Diplomatic Engagement