09/20/2024 | News release
The United Nations Secretary-General's High-level Advisory Body on Artificial Intelligence has unveiled its final report, "Governing AI for Humanity," presenting a blueprint for global AI governance. This landmark document, the culmination of months of intensive work and consultation, outlines a strategic approach to addressing AI-related risks while ensuring that the technology's transformative potential is shared equitably worldwide.
The report is the product of extensive consultations, involving more than 2,000 participants across all regions of the world. This collaborative effort included deep-dive discussions on key issues with top experts, numerous consultation sessions spanning all regions, and hundreds of written submissions from organisations and individuals. To further enrich its findings, the Advisory Body commissioned an AI Risk Global Pulse Check - the most comprehensive global horizon scanning exercise on AI risks to date - and an AI Opportunity Scan to crowdsource expert assessments of emerging AI trends.
At the heart of the report are seven key recommendations designed to address critical gaps in current AI governance arrangements:
Creating an international scientific panel on AI: This panel would serve as a global authority on AI capabilities, opportunities, risks, and uncertainties. By providing impartial and reliable scientific knowledge, it aims to foster a shared foundational understanding of AI worldwide.
Launching a policy dialogue on AI governance: This initiative proposes regular intergovernmental and multi-stakeholder discussions to share best practices, promote common understandings of AI governance measures, and voluntarily share information on significant AI incidents that stretch states' capacity to respond.
Establishing an AI standards exchange: This exchange would bring together representatives from various standard-setting organisations, technology companies, and civil society to develop a register of definitions and standards for measuring and evaluating AI systems.
Forming an AI capacity development network: This network would link collaborating, UN-affiliated capacity development centres to provide expertise, compute resources, and AI training data to key actors, particularly in developing countries.
Creating a global fund for AI: This fund would aim to address the AI divide by facilitating access to AI enablers, particularly for countries lacking adequate resources or infrastructure.
Developing a global AI data framework: This framework would establish principles and practical arrangements for AI training data governance and use, aligned with international commitments on human rights, intellectual property, and sustainable development.
Setting up an AI office within the UN Secretariat: This office would act as a coordinating body, supporting the implementation of the other recommendations and ensuring a coherent UN-wide approach to AI governance.
The report emphasises the need for a globally inclusive and distributed architecture for AI governance based on international cooperation. It calls on all governments and stakeholders to work together in governing AI to foster development while furthering respect, protection, and fulfilment of all human rights. The proposed mechanisms are designed to complement existing efforts and foster inclusive global AI governance arrangements that are agile, adaptive, and effective in keeping pace with AI's rapid evolution.
A key focus of the report is addressing three main gaps in current AI governance efforts:
Representation gaps: Many parts of the world, particularly in the Global South, have been left out of international AI governance conversations. The report stresses the importance of including diverse voices in decisions about AI governance.
Coordination gaps: The proliferation of AI governance initiatives risks creating incompatible regimes across different regions. The report proposes mechanisms to enhance coordination and interoperability.
Implementation gaps: The report highlights the need for action and follow-up processes to ensure that commitments to good governance translate into tangible outcomes in practice.
The Advisory Body's recommendations are grounded in five guiding principles outlined in their interim report. These principles emphasise inclusive governance for the benefit of all, governance in the public interest, alignment with data governance and promotion of data commons, universal and networked governance rooted in adaptive multi-stakeholder collaboration, and anchoring in the UN Charter, international human rights law, and other agreed international commitments.
While the report does not currently recommend establishing a new international AI agency, it does reflect on potential future needs for more robust global governance as AI capabilities advance. The Advisory Body suggests that if risks become more acute and the stakes for opportunities escalate, a more formal international mechanism might become necessary to enforce red lines and pool resources for collaborative AI research.
The report also addresses the economic implications of AI, acknowledging its potential to significantly increase global GDP and transform a wide range of sectors. However, it also highlights the need for effective governance to manage risks and ensure fair outcomes, pointing to skills development, investment in digital infrastructure, and the integration of AI into workplaces and value chains.
In conclusion, "Governing AI for Humanity" presents a vision for an inclusive, effective global governance framework for AI that can harness its benefits while mitigating risks as the technology continues to evolve rapidly. By proposing mechanisms for common understanding, common ground, and common benefits, the report lays the groundwork for a new social contract for AI that ensures global buy-in for a governance regime that protects and empowers all of humanity. As AI continues to shape our world, this report serves as a crucial roadmap for navigating the challenges and opportunities that lie ahead, emphasising the need for collaborative, adaptive, and inclusive approaches to AI governance on a global scale.
Programme Manager - Digital Ethics and AI Safety, techUK
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.
Prior to techUK, Tess worked as an AI Ethics Analyst, a role centred first on the first dataset on Corporate Digital Responsibility (CDR) and later on the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics such as CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.
Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.
Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.
Email: [email protected] | Website: tessbuckley.me | LinkedIn: https://www.linkedin.com/in/tesssbuckley/