• Fragmented global landscape: No single binding international treaty. Governance is a mix of national laws, regional frameworks, voluntary industry standards, and multistakeholder initiatives. (See OECD AI Principles; EU AI Act draft.)

  • Leading regional efforts:

    • European Union: EU AI Act (risk-based regulatory regime) moving toward implementation — the most comprehensive statutory framework. (European Commission)
    • United States: Sectoral/agency approach (FTC, NIST guidance, targeted bills in Congress) — emphasis on innovation + enforcement of existing consumer/procurement laws.
    • China: Rapid regulatory development with standards, security reviews, and state-centered governance for AI deployment and data use.
  • Soft law and standards: Organizations (ISO, IEEE, OECD, Partnership on AI) and technical bodies (NIST, OpenAI policies) produce guidelines, risk assessments, and standards adopted by industry and governments.

  • Corporate governance & procurement: Large tech firms implement internal safety teams, red-teaming, model cards, and deployment controls; governments increasingly require risk assessments in procurement/use.

  • Focus areas and tensions:

    • Safety and alignment: Research on robustness, interpretability, and long-term risks is growing, but regulation lags technical progress.
    • Accountability and liability: Debates over who is responsible for harms (developers, deployers, users).
    • Civil rights and discrimination: Regulations and litigation address bias, surveillance, and due process.
    • Security and dual-use: Export controls, classification of capable models, and monitoring of misuse (e.g., cybercrime, biological risks).
    • Economic and labor impacts: Policy discussions on redistribution, retraining, and competition.
  • Emerging moves:

    • International coordination: G7, OECD, UN, and multilateral forums discussing norms; proposals for model testing, transparency, and sharing of safety work.
    • Regulatory sandboxes and certification: Pilot programs to test rules before broad enforcement.
    • Capacity gaps: Many countries lack expertise/resources to regulate effectively; calls for technical assistance and globally interoperable standards.

Bottom line: Progress is substantive but uneven — substantial policy building blocks exist (EU Act, standards, agency guidance), but global coordination, enforcement mechanisms, and technical integration of safety into governance remain works in progress. Key upcoming milestones will be EU implementation, U.S. legislative moves, and international agreements on model testing, export controls, and responsible disclosure.

Selected sources: OECD AI Principles; EU AI Act (European Commission); NIST AI Risk Management Framework; Partnership on AI; recent G7/OECD statements.

International coordination is currently active but fragmented. Major forums — the G7, OECD, the United Nations (including UNESCO and the UN Secretary‑General’s initiatives), and other multilateral venues — are convening governments, industry and civil society to negotiate shared high‑level norms, principles and governance approaches. Key features of this coordination include:

  • Norm‑setting and principles: Bodies like the OECD and UNESCO have issued nonbinding frameworks (e.g., OECD AI Principles, UNESCO Recommendation on the Ethics of AI) that many countries reference when shaping national policy. The G7 and the EU have similarly articulated principles stressing safety, human rights, and accountability.

  • Proposals for testing and evaluation: There is growing consensus on establishing standardized safety testing and red‑teaming protocols for advanced models. Governments and expert groups are drafting approaches for independent model evaluation, risk classification, and pre‑deployment assessment, though no single global testing regime has been adopted.

  • Transparency and information‑sharing: International proposals emphasize transparency about model capabilities, training data provenance, and deployed use cases. Efforts range from voluntary disclosure frameworks and model cards to calls for legally mandated reporting for high‑risk systems.

  • Coordination on safety research: States and multilateral bodies promote sharing of safety research and best practices, including cooperative funding, shared benchmarks, and mechanisms to exchange incident/near‑miss information — but practical mechanisms for secure, trustful sharing are still under development.

  • Gaps and challenges: Coordination is uneven (developed countries lead; many low‑ and middle‑income countries are underrepresented), enforcement is limited because most outputs are nonbinding, and technical disagreements persist about thresholds for regulation, export controls, and how to reconcile openness with security.

In short, international actors are building normative and technical scaffolding — testing regimes, transparency expectations, and safety‑sharing proposals — but have not yet converged on a comprehensive, enforceable global governance architecture. For more detail, see OECD AI Policy Observatory, UNESCO Recommendation on the Ethics of AI (2021), and recent G7 and UN statements on AI.

Explanation for the selection

  • Representative coverage: The summary captures the major, distinct elements shaping AI governance today — regional laws (EU, U.S., China), soft law and standards bodies, corporate practices, and key policy tensions (safety, accountability, rights, security, economic impacts). That mix reflects how governance is actually emerging: not from a single source but from overlapping legal, technical, and voluntary regimes.
  • Policy relevance: It highlights the frameworks most likely to affect deployment and design choices in the near term (EU AI Act, U.S. agency guidance, China’s state-led measures), which is crucial for actors trying to comply or influence outcomes.
  • Actionable levers: By noting concrete mechanisms (regulatory sandboxes, certification, export controls, procurement rules), the summary points to where policymakers and firms can intervene or pilot solutions.
  • Realistic assessment: The snapshot emphasizes fragmentation, capacity gaps, and uneven enforcement — important qualifiers for anyone claiming governance is “solved.”

Suggested ideas and authors to explore

  • Regulatory design and comparative approaches
    • Helen Toner (Center for Security and Emerging Technology) — analyses on policy levers and governance pathways.
    • Karen Yeung — work on algorithmic regulation and risk-based frameworks.
  • Standards, testing, and technical governance
    • NIST (AI Risk Management Framework) — practical, technical touchstone for risk assessment.
    • David Kaye / Nicholas Eberstadt (various authors in standards and testing debates) — for discussion of model testing and capabilities evaluation.
  • Corporate governance, safety teams, and industry norms
    • Joanna Bryson — AI ethics and governance, including accountability debates.
    • Timnit Gebru, Margaret Mitchell — critiques of corporate practice and calls for research governance.
  • International coordination and geopolitics
    • Els Torreele / Allan Dafoe — on global coordination and institution-building for powerful technologies.
    • Henry Farrell / Abraham Newman — for geopolitical perspectives on technology standards and influence.
  • Rights, bias, and public-interest approaches
    • Ruha Benjamin — social justice lens on tech and governance.
    • Cathy O’Neil — critical perspectives on algorithmic harms and accountability.
  • Security, dual-use, and export controls
    • Miles Brundage (Future of Humanity Institute) — on misuse risks, export controls, and governance options.
    • The WHO/CSET/BIOSAFETY authors on bio-related dual-use concerns tied to generative models.

Key reports and documents to consult

  • OECD AI Principles and related OECD guidance
  • European Commission: EU AI Act (proposal and legislative texts)
  • NIST: AI Risk Management Framework
  • Partnership on AI publications and model governance guidance
  • Recent G7/OECD/UN statements on AI safety and coordination

If you’d like, I can:

  • Prepare a one-page annotated reading list tailored to a policymaker, technologist, or civil-society advocate.
  • Suggest concrete policy options (e.g., model certification, mandatory impact assessments) mapped to actors who could implement them.

The NIST AI Risk Management Framework (AI RMF) serves as a practical, technical touchstone for AI risk assessment because it translates high‑level principles into actionable, interoperable practices that diverse actors can adopt.

Key reasons:

  • Actionable structure: It breaks down “risk management” into concrete functions (Govern, Map, Measure, Manage, and Evaluate) and common outcomes, making abstract goals operational for engineers, product managers, and policymakers.

  • Technical orientation: Unlike purely ethical guidelines, the AI RMF focuses on measurement, metrics, testing, monitoring, and documentation — methods that align with engineering workflows and permit reproducible assessment of model behavior and system performance.

  • Flexibility and interoperability: The framework is non‑prescriptive but interoperable: organizations can map their existing processes to RMF functions, enabling coordination across sectors and compatibility with other standards (ISO, OECD, EU guidance) without forcing a single technical stack.

  • Emphasis on lifecycle and context: It treats AI systems as socio‑technical artifacts, requiring risk assessment across design, training, deployment, and post‑deployment monitoring — which is essential for capturing emergent harms and contextual risks.

  • Supports accountability and governance: By encouraging documentation (model cards, data provenance, risk registers) and roles/responsibilities, the RMF helps bridge technical assessment with organizational governance and regulatory compliance.

  • Community and tooling momentum: NIST’s work has spurred tool development, pilot projects, and adoption by U.S. agencies and industry, creating practical examples and templates that lower the barrier for adoption globally.

In short, the AI RMF matters because it makes principled AI risk management practicable: it provides a shared, technically grounded vocabulary and method set that engineers, auditors, and regulators can use to assess, compare, and mitigate risks in real systems. (See NIST AI RMF v1.0 and related implementation guides.)

Karen Yeung is a leading scholar in law, ethics, and technology whose work examines how regulatory systems can respond to algorithmic and automated decision‑making. Key points of her contribution:

  • Conceptualizing algorithmic regulation: Yeung analyzes how algorithms reshape governance — not just as objects to be regulated but as tools that enact policy decisions, automate enforcement, and transform administrative processes. She highlights the need for regulators to understand both the capabilities and limits of algorithmic systems.

  • Risk‑based regulatory frameworks: Building on administrative law and regulatory theory, Yeung advocates for proportionate, risk‑sensitive approaches to governing algorithmic systems. That involves tailoring oversight intensity to the potential harms (e.g., privacy intrusion, discrimination, loss of due process), rather than one‑size‑fits‑all rules.

  • Accountability and legitimacy: She emphasizes procedural safeguards—transparency, contestability, human oversight, and remedies—to preserve legality, fairness, and democratic legitimacy when decisions are automated. Her work interrogates who should be accountable (designers, deployers, public bodies) and how to operationalize redress.

  • Interdisciplinary method: Yeung combines legal analysis, political theory, and empirical study to show how technical design choices interact with institutional incentives, power dynamics, and social impacts — informing pragmatic regulatory design (e.g., audits, impact assessments, adaptive regulation).

Representative writings:

  • Yeung, K. (2018). “Algorithmic Regulation: A Critical Interrogation.” Regulation & Governance.
  • Yeung, K. (2019). Work on automated decision‑making, accountability, and governance in various edited volumes and policy papers.

Why she’s relevant to AI governance: Her scholarship provides a principled foundation for the risk‑based, procedural regulatory measures seen in contemporary policy debates (e.g., impact assessments, proportionate obligations in the EU AI Act), linking normative aims (fairness, accountability) to concrete regulatory tools.

Standards

  • What they are: Agreed norms and specifications (technical, procedural, or ethical) that guide how AI systems are designed, documented, and managed.
  • Purpose: Promote interoperability, safety, and accountability across developers and users; provide benchmarks for compliance and procurement.
  • Examples: ISO/IEC standards for AI, OECD Principles, model cards, and documentation practices.
  • Importance: Standards help translate high‑level ethical principles into actionable, auditable practices and enable regulators and purchasers to set expectations.

Testing

  • What it is: Systematic evaluation of AI models and systems against defined criteria—performance, robustness, safety, fairness, and security—often through benchmarks, red‑teaming, and stress tests.
  • Purpose: Reveal failure modes, measure capabilities and risks, and provide evidence for deployment decisions or regulatory compliance.
  • Examples: Red‑teaming exercises, adversarial robustness tests, bias audits, and pre‑deployment risk assessments.
  • Importance: Testing turns abstract risks into measurable outcomes, informing mitigation, certification, and responsible release decisions.

Technical Governance

  • What it is: The set of engineering, operational, and oversight processes that embed safety, ethics, and accountability into AI lifecycles—covering model development, deployment, monitoring, and incident response.
  • Components: Versioning and provenance tracking, access controls, continuous monitoring, update/roll‑back mechanisms, incident reporting, and third‑party evaluation requirements.
  • Purpose: Ensure that technical systems operate within acceptable risk bounds and that organizations can respond to harms or emergent behaviors.
  • Importance: Technical governance operationalizes standards and testing: it enforces practices that reduce harms, supports regulatory compliance, and enables trusted, auditable AI use.

Why these three together matter

  • Complementarity: Standards set the “what,” testing provides the “how well,” and technical governance supplies the “how” in practice. All three are needed to move from principles to enforceable, reliable AI behavior.
  • Policy relevance: Regulators increasingly reference standards and testing in laws (e.g., EU AI Act’s risk classification) and expect technical governance measures in procurement and certification.
  • Remaining gaps: Global alignment on specific standards and test regimes is incomplete; mechanisms for secure sharing of test results and incident data remain underdeveloped—creating challenges for effective, interoperable governance.

Suggested further reading: OECD AI Policy Observatory; NIST AI Risk Management Framework; EU AI Act (draft).

Henry Farrell and Abraham Newman are leading scholars who examine how political power, economic ties, and institutional choices shape technology standards and global influence. A short explanation of their relevance:

  • Henry Farrell — networks, norms, and information politics

    • Farrell’s work emphasizes the role of institutional networks, epistemic communities, and information flows in shaping norms and standards. He shows how coalitions of states, firms, and experts produce shared expectations that can become de facto rules governing technologies.
    • His analyses illuminate how information control, reputational pressure, and multistakeholder governance affect adoption of technical standards and regulatory norms (useful for understanding voluntary standards, standard-setting bodies, and norm diffusion).
    • Key value: helps explain soft‑power dynamics and how non‑binding norms spread through networks rather than formal treaties.
    • Selected work: Farrell & Newman (with others) on “weaponized interdependence” and networked governance; Farrell’s papers and blog posts on norms and digital governance.
  • Abraham Newman — power, economic interdependence, and institutional leverage

    • Newman focuses on how states use economic and informational interdependence to exert coercive leverage and shape institutions. His work on “weaponized interdependence” (with Farrell) shows how control over networks and chokepoints (finance, data flows, standards bodies) can become tools of state power.
    • He also examines how institutions and standards become arenas for geopolitical competition—why states push for particular technical rules, export controls, or certification regimes to secure advantage or constrain rivals.
    • Key value: clarifies how standards are not merely technical but strategic—embedded in economic statecraft, sanctions, and security policy.
    • Selected work: Farrell & Newman, “Weaponized Interdependence” (International Security, 2019); Newman’s writings on sanctions, state power, and global networks.

Together, Farrell and Newman provide a framework for seeing technology standards as instruments of geopolitical influence—produced through networks, institutional design, and economic dependencies—rather than as neutral technical artifacts. Their work is directly relevant to debates over AI governance, export controls, and international standard‑setting.

For further reading: Farrell & Newman, “Weaponized Interdependence” (Int’l Security, 2019); Henry Farrell’s academic blog posts on norms and governance; Abraham Newman’s publications on sanctions, networks, and global political economy.

Policy relevance: The selection focuses on the regulatory instruments and governance practices most likely to shape near‑term deployment and design decisions — namely the EU AI Act, U.S. agency guidance and sectoral rules, and China’s state‑led measures — because these actors set de facto global standards through market size, procurement rules, and regulatory reach. Firms and organizations responding to these frameworks will change model development (risk classification, safety testing, interpretability), disclosure and documentation (model cards, training‑data provenance), and deployment practices (pre‑deployment risk assessments, access controls, red‑teaming). Attention to soft law (OECD, NIST, ISO) and industry self‑governance is included because these norms fill gaps, influence compliance expectations, and often become inputs to binding rules. In short, these frameworks most directly affect what developers build, how deployers operate, and what policymakers prioritize — so they are the most consequential levers for near‑term governance and compliance.

References: EU AI Act (European Commission); NIST AI Risk Management Framework; OECD AI Principles; recent U.S. agency guidance (FTC, NIST) and Chinese regulatory notices.

  • David Kaye — emphasis on rights, transparency, and governance: Kaye (former UN Special Rapporteur on freedom of expression) brings expertise on how technical evaluation regimes intersect with human rights, due process, and transparency obligations. His work argues that testing and disclosure practices should protect privacy, prevent misuse of sensitive data, and enable accountability without creating surveillance risks. Cite Kaye when discussing how model testing protocols must be designed to safeguard civil liberties and ensure meaningful oversight. (See: Kaye’s UN reports and writings on AI and human rights.)

  • Nicholas Eberstadt — emphasis on national security, strategic risk, and measurement: Eberstadt’s scholarship on demographic, economic, and strategic trends informs debates about systemic and national‑security implications of powerful technologies. In standards-and-testing debates, his perspective is useful for emphasizing metrics, independent verification, and the policy consequences of capability thresholds (e.g., when a model’s performance merits export controls or stricter oversight). Cite Eberstadt when highlighting the need for rigorous, policy‑oriented measurement and the broader societal stakes of capability assessments.

Together they illustrate two complementary concerns for model testing and capabilities evaluation: protecting rights and democratic norms (Kaye) while ensuring rigorous, policy‑relevant measurement for security and governance decisions (Eberstadt). Use their work to balance human‑rights safeguards with demands for credible, independent testing regimes.

Els Torreele and Allan Dafoe are relevant selections because both focus on how societies should design institutions and international mechanisms to manage powerful, potentially disruptive technologies — precisely the governance challenge AI now poses.

  • Els Torreele — practical public‑interest institution building

    • Perspective: Torreele emphasizes public‑interest infrastructure, democratic oversight, and accountable institutions to ensure technologies serve social needs (health, equity, public goods) rather than purely commercial or state security motives.
    • Why relevant: Her work highlights the necessity of purpose‑built public institutions (e.g., for procurement, testing, funding, and stewardship) that can set priorities, fund safety and access, and enforce standards — filling gaps where markets and fragmented regulation fall short. This approach addresses capacity and equity gaps in global governance, especially for lower‑resourced countries.
    • Key implication: Effective AI governance requires investing in independent, well‑resourced international and national institutions that prioritize public goods, not just voluntary standards or ad hoc industry practices.
  • Allan Dafoe — conceptual framework for global coordination and catastrophic risk governance

    • Perspective: Dafoe studies institutional design for managing global catastrophic and strategic risks from transformative technologies, arguing for formalized international cooperation, accountability mechanisms, and capabilities for monitoring, testing, and crisis response.
    • Why relevant: He offers frameworks for when and how states should create binding institutions (treaties, inspection regimes, licensing, export controls) and how to structure them to overcome collective‑action problems and information asymmetries around capabilities and risks.
    • Key implication: For high‑risk AI, ad hoc or voluntary arrangements are insufficient; durable, enforceable international institutions are needed to coordinate testing, share risk information, and govern deployment of the most capable systems.

Together, Torreele and Dafoe complement each other: Torreele grounds governance in public‑interest institution building and equitable capacity, while Dafoe provides the strategic rationale and design principles for international, enforceable coordination to manage systemic and catastrophic risks. Their combined insights point toward a two‑track approach: build accountable public institutions domestically and multilaterally, and negotiate binding international mechanisms for the highest‑risk technologies.

Sources for further reading:

  • Allan Dafoe, “AI Governance: A Research Agenda” and related work on institutional design for catastrophic risks.
  • Els Torreele, writings and policy proposals on public interest R&D, global health governance, and technology stewardship (see her public commentary and reports on technology governance and public infrastructure).

Recent statements from the G7, OECD, and UN reflect converging but nonbinding commitments by major governments and multilateral bodies to manage AI risks while preserving benefits. Key common themes are:

  • Prioritizing safety and risk-based governance: These statements call for proportionate, risk‑based approaches that require stronger oversight for higher‑risk systems (testing, audits, red‑teaming) while avoiding overly restrictive measures for lower‑risk applications.

  • Promoting transparency and independent evaluation: They urge clearer disclosures about model capabilities, provenance, and testing, and support development of independent model evaluation, certification, or audit mechanisms to verify safety claims.

  • Encouraging international cooperation: The documents emphasize cross‑border coordination on standards, information‑sharing (including incident/near‑miss reporting), and joint research on robustness, interpretability, and societal impacts.

  • Balancing openness and security: Signatories recognize tension between scientific openness and misuse risk; they recommend calibrated measures (export controls, access restrictions, responsible disclosure practices) rather than wholesale secrecy.

  • Upholding human rights and democratic norms: Statements regularly link AI governance to human rights protections, privacy, non‑discrimination, and accountability for harms.

  • Supporting capacity building and inclusiveness: They note the need to help low‑ and middle‑income countries build regulatory and technical capacity and to include diverse stakeholders (civil society, industry, technical experts) in governance processes.

Why this matters: Although nonbinding, these coordinated political signals shape national regulation, technical standards, and industry practice. They create momentum toward shared testing regimes, disclosure norms, and mechanisms for cooperative oversight — but actual enforceable global rules remain to be negotiated.

Sources for further reading: recent G7 AI Communiqués, OECD AI Policy Observatory summaries, and UN/UNESCO AI statements and recommendations.

The Partnership on AI (PAI) is a multistakeholder organization founded by industry, academia, and civil society to study and shape best practices for AI. Its publications synthesize technical, ethical, and policy insights and aim to produce actionable guidance that can be adopted by developers, deployers, and regulators.

What PAI publishes and why it matters

  • Practical guidance: PAI issues reports, white papers, and toolkits on topics such as model cards, transparency, safety testing, red‑teaming, and risk assessment. These materials translate research and field experience into concrete practices organizations can implement.
  • Multistakeholder legitimacy: Because PAI brings together firms, researchers, and NGOs, its outputs carry cross‑sector credibility and help bridge differences between private incentives and public-interest goals.
  • Norm formation: PAI’s work shapes industry norms (soft law) by demonstrating feasible protocols for responsible development and deployment before—or alongside—formal regulation.
  • Community and capacity building: PAI runs working groups and convenings that surface use cases, share lessons from incidents, and incubate standards or prototypes (e.g., reporting templates, evaluation frameworks).
  • Policy input: PAI publications inform regulators and standards bodies by clarifying technical options and implementation tradeoffs (useful for policymakers drafting laws like the EU AI Act or for agencies designing oversight mechanisms).

Model governance guidance — main emphases

  • Risk‑based approach: Prioritize resources and controls according to model capability and deployment risk (high‑risk systems require stronger safeguards).
  • Transparency and documentation: Encourage model cards, datasheets, and disclosure about training data provenance, evaluation metrics, and known limitations to support informed use and oversight.
  • Safety testing and red‑teaming: Advocate systematic adversarial testing, scenario analysis, and external evaluation to identify failures before deployment.
  • Human oversight and accountability: Recommend clear roles/responsibilities, auditability, incident reporting, and mechanisms for remediation when harms occur.
  • Privacy and security protections: Promote data governance, differential privacy, access controls, and measures to prevent misuse or leakage of sensitive information.
  • Continuous monitoring: Stress post‑deployment monitoring, feedback loops, and update/patch processes to address emergent issues.
  • Collaboration and information sharing: Support responsible sharing of vulnerabilities, best practices, and interoperable evaluation tools across actors.

Representative PAI outputs

  • Model Card and Documentation guidance (templates and best practices)
  • Red‑teaming and adversarial testing reports
  • Governance frameworks and checklists for deployment and procurement (See Partnership on AI website for specific publications.)

Why this guidance matters for governance PAI’s model governance guidance fills the gap between high‑level principles and operational practice. It helps organizations implement responsibilities that regulators may later require, provides policymakers with technically grounded options, and advances interoperable practices that can be incorporated into standards, procurement rules, and regulatory regimes.

Selected sources: Partnership on AI publications and working groups; examples cited in OECD and EU policy discussions.

Below are concise, actionable policy options paired with the actors best placed to implement each. Options are practical, interoperable across jurisdictions, and scalable to different capabilities.

  1. Model classification and capability-based certification
  • What: Require independent testing and tiered certification based on model capabilities (e.g., compute, emergent behaviors, ability to generate disinformation or perform code/biological design).
  • Who: National regulators (tech/competition/security agencies) set rules; accredited third‑party labs perform testing; standards bodies (ISO, OECD, IEC) define technical criteria.
  • Rationale: Focuses regulatory attention where risk is highest and creates interoperable attestations for cross‑border use/export.
  1. Mandatory pre‑deployment impact assessments (AIIA)
  • What: Obligate developers and deployers of high‑risk systems to conduct and publish standardized impact assessments covering safety, privacy, discrimination, security, and societal harms.
  • Who: Legislatures/regulatory agencies mandate format and scope; firms conduct assessments; independent auditors verify completeness for high‑risk classes.
  • Rationale: Encourages risk identification early, informs procurement and public oversight, and creates accountability trails.
  1. Incident reporting and near‑miss sharing
  • What: Require timely reporting of safety incidents, misuse, and near misses to a secure national or international repository, with tiers for confidentiality vs public disclosure.
  • Who: National regulators require reporting; an international body (OECD or a UN technical forum) hosts cross‑border aggregation and anonymized sharing; industry participates via sectoral coalitions.
  • Rationale: Builds collective learning, early warning about emergent risks, and evidence for regulation; balances transparency with IP/security needs.
  1. Regulatory sandboxes and conditional authorizations
  • What: Create controlled environments where novel AI systems can be tested under regulatory supervision with informed users and monitoring.
  • Who: National agencies (financial, health, transport) run sandboxes; regional blocs coordinate mutual recognition of lessons and approvals.
  • Rationale: Lowers barriers to innovation while enabling regulators to observe real‑world impacts and refine rules.
  1. Model provenance, documentation and “model cards” mandates
  • What: Standardize and require metadata disclosures (training data provenance, capability statements, known limitations, safety evaluations) for models above a risk threshold.
  • Who: Standards bodies define schemas; national regulators mandate disclosures for market access; procurement rules require them for public contracts.
  • Rationale: Improves transparency for users, auditors, and downstream deployers; aids accountability and risk management.
  1. Export controls and usage restrictions for high‑capability models
  • What: Restrict cross‑border transfer of models, weights, or specialized tooling that enable dual‑use harms; apply licensing and end‑use controls.
  • Who: National governments coordinate multilaterally (Wassenaar-like processes, G7, OECD) for harmonized controls; customs/security agencies enforce.
  • Rationale: Mitigates proliferation of capabilities that can be misused for cyberattacks, biological design, or large‑scale disinformation.
  1. Mandatory red‑teaming and adversarial testing for high‑risk models
  • What: Require internal and external red‑teaming, with documented remediation before broad deployment.
  • Who: Firms perform tests; accredited independent red‑teams and standard test suites (via NIST/ISO) validate results; regulators set minimum requirements for high‑risk classes.
  • Rationale: Reduces unexpected failure modes and uncovers misuse vectors prior to release.
  1. Liability frameworks and clarity on accountability
  • What: Define civil and administrative liability rules for harms caused by AI (differentiating manufacturers, deployers, and operators), and safe‑harbor paths for good‑faith compliance.
  • Who: Legislatures enact laws; courts refine standards through adjudication; regulators provide guidance and enforcement.
  • Rationale: Aligns incentives for safer design and careful deployment without stifling innovation.
  1. Public procurement standards and certification requirements
  • What: Require certified safety, transparency, and impact assessments for AI used in government services.
  • Who: Governments set procurement rules; procurement agencies enforce; vendors comply to sell to public sector.
  • Rationale: Uses government buying power to raise baseline safety and set market norms.
  1. Capacity building and technical assistance for lower‑resource countries
  • What: Fund and coordinate technical help (training, labs, policy toolkits) so more countries can assess and regulate AI responsibly.
  • Who: Multilateral development banks, OECD, UN agencies, regional organizations deliver programs; high‑income states fund and mentor.
  • Rationale: Reduces governance gaps, promotes interoperable standards, and prevents regulatory arbitrage.

Implementation notes (short)

  • Layered approach: Combine voluntary standards for lower‑risk systems with mandatory rules and audits for high‑risk or high‑capability AI.
  • MutualTitle: Concrete AI Governance Options — What to Do and Who Should Act

Below recognition: are practical Encourage international policy options, brief descriptions, and the actors best mutual recognition placed to implement each of certifications one.

  1. Mandatory pre and test‑deployment impact assessments
  • results to What: Require developers/deploy reduce duplicationers to assess risks (safety, privacy, and friction fairness, security) before public release, with documentation.
  • and mitigation plans.
  • Who: National Privacy/security regulators (privacy/data balance: protection authorities, sectoral regulators), procurement Design reporting agencies for government use and sharing; companies must conduct and certify systems that assessments.
  • Why: Ident protect IPifies harms early and creates accountability and sensitive trail.
  1. data while Risk‑based model enabling oversight certification and labeling
  • What. -: Independent certification for high‑ Iterationrisk models (safety tests: Use, red‑te sandboxesaming results), plus standardized labels/model cards and phased describing capabilities, limitations, and training data rollouts provenance.
  • Who so rules: National standards bodies and certifying agencies can adapt (or delegated third alongside rapid‑party conformity assessment bodies); technical change international standards bodies (ISO, OECD).

Selected for harmonized criteria; industry references consortia to operationalize- OECD tests.

  • Why: Provides verifiable AI Principles assurance to regulators, purchasers, & AI and the public; supports cross Policy Observatory‑border interoperability.
  1. Mandatory incident reporting and
  • shared near‑miss databases
  • What EU AI: Obligate organizations to report Act ( breaches, misuse, or seriousproposal) model failures to authorities and contribute- N anonymized near‑miss dataIST AI to secure Risk Management, shared repositories.
  • Who Framework : Regulators (- UNESCOcybersecurity agencies, sectoral overse Recommendation oners) the Ethics of AI to collect

If you want reports; multilateral platforms (, IOECD, UN) or trusted intermedi can convertaries to host shared databases; industry required this into to submit.

  • Why: a one Enables collective learning, faster mitigation‑page, and evidence for policy brief policymaking.
  1. Export tailored to controls and model capability classification a specific- What: Class actor (ify models by capability and restrict export ornational regulator access to high‑capability, tech models and associated tooling that pose firm, security risks.
  • Who: or international National governments (trade and security agencies) body). coordinating via multilateral fora (Wassenaar Arrangement, G7/OECD) to align thresholds.
  • Why: Limits proliferation of dual‑use capabilities while allowing legitimate research and commerce.
  1. Regulatory sandboxes and conditional approvals
  • What: Time‑limited, supervised testing environments where companies can pilot systems under regulatory oversight and data protection safeguards.
  • Who: Regulators and innovation agencies to host sandboxes; standards bodies to set evaluation criteria.
  • Why: Balances innovation with risk control and informs rulemaking with real‑world evidence.
  1. Mandatory transparency for government use and procurement rules
  • What: Governments must disclose AI use in public services, conduct public impact assessments, and adopt procurement rules requiring vendor safety attestations.
  • Who: National and local governments, public procurement offices, audit institutions.
  • Why: Protects civil rights, promotes accountability, and incentivizes safer products.
  1. Liability rules and consumer redress mechanisms
  • What: Clarify legal responsibility for harms from AI (strict liability for certain harms, duty of care standards), and ensure accessible remedies for affected individuals.
  • Who: Legislatures to enact liability frameworks; courts to interpret; regulators to implement enforcement mechanisms.
  • Why: Creates stronger incentives for safe design and deployment.
  1. Funding and coordination for global safety research and capacity building
  • What: Public funding for foundational safety research, grants for low‑ and middle‑income countries to build regulatory capacity, and mechanisms for secure sharing of safety knowledge.
  • Who: National governments, multilateral institutions (World Bank, OECD, UN), philanthropic funders, and research consortia.
  • Why: Reduces global disparities and supports informed governance.
  1. Standards for data governance and access controls
  • What: Rules for data provenance, consent, and secure data‑sharing infrastructures for model training and evaluation.
  • Who: Data protection authorities, standards bodies (ISO, IEEE), and national legislatures.
  • Why: Protects privacy and improves auditability.
  1. Ethical review boards and corporate safety governance
  • What: Require large AI developers to maintain independent safety boards, red‑teaming teams, and internal compliance processes with whistleblower protections.
  • Who: Companies (especially those operating advanced models), guided by industry codes and regulator minimum requirements.
  • Why: Strengthens internal checks and aligns corporate incentives with public safety.

Implementation notes (short)

  • Mix of instruments: Use a layered approach—mandatory rules for high‑risk cases, standards/certification for technical assurance, and voluntary best practices for lower‑risk innovation.
  • Multilevel coordination: National laws needed for enforcement; international alignment (standards, export controls, data sharing) reduces fragmentation.
  • Phased rollout: Start with high‑risk sectors/models, pilot sandboxes/certification, then scale as methods mature.

Key sources and precedents: EU AI Act (risk‑based rules, conformity assessment), NIST AI Risk Management Framework (assessment guidance), OECD AI Principles (nonbinding standards), export‑control frameworks (for dual‑use tech).

The OECD AI Principles are a set of non‑binding, high‑level guidelines adopted by OECD members (and endorsed by many non‑members) to promote trustworthy, human‑centred AI. They focus on outcomes and governance rather than technical specs, and serve as a reference point for national policies and multilateral discussions.

Key elements

  • Principles: Five principal values — inclusive growth, human‑centred values and fairness, transparency and explainability, robustness/ safety/ security, and accountability. An additional Recommendation urges governments to implement these principles through policies and institutions.
  • Risk and rights focus: Emphasizes protecting human rights and democratic values, managing risks (safety, bias, misuse), and ensuring equitable benefits.
  • Accountability and governance: Calls for appropriate oversight, regulatory frameworks, and mechanisms for redress and liability.
  • Transparency and explainability: Encourages documentation (e.g., model cards), clear information about system capabilities and limitations, and stakeholder engagement.
  • International cooperation: Supports interoperability of standards, shared best practices, and capacity building, especially for low‑ and middle‑income countries.

Related OECD guidance and outputs

  • OECD AI Policy Observatory: A hub compiling national policies, case studies, toolkits, and data to help policymakers implement the Principles.
  • Practical guidance: Reports and toolkits on AI use in public sector, trustworthy AI assessment frameworks, data governance, and measuring AI’s economic and social impacts.
  • Measurement and monitoring: Workstreams developing indicators, risk taxonomy, and methods to evaluate policy effectiveness and AI diffusion.
  • Multistakeholder engagement: Convening governments, industry, and civil society to translate high‑level principles into concrete regulatory and procurement practices.

Why it matters

  • Normative anchor: The Principles are widely cited as the foundational soft law for AI governance, guiding national laws (and regional initiatives like the EU) and multilateral cooperation.
  • Policy translation: OECD guidance helps operationalize principles into practical tools (assessments, checklists, indicators) that regulators and organizations can adopt.
  • Global reach: Their endorsement by many countries promotes interoperability and reduces fragmentation, though they remain nonbinding and require complementary legal regimes for enforcement.

Sources: OECD AI Principles & Recommendation (2019), OECD AI Policy Observatory.

Miles Brundage (Future of Humanity Institute) focuses on how advanced AI can be misused, which governance measures could reduce those risks, and trade‑offs each option entails. Concise summary of his key points:

  • Misuse risks: Brundage highlights that increasingly capable models lower barriers for malicious actors to cause widespread harm — from automated disinformation and cyberattacks to assistance for biological or chemical wrongdoing. He stresses both near‑term (criminal/terrorist misuse, economic disruption) and long‑term systemic risks (cascading automation failures, strategic instability).

  • Export controls and access management: He argues that controlling distribution of the most capable models and critical components (model weights, high‑quality datasets, specialized compute/hardware) can reduce proliferation of dangerous capabilities. Effective controls must balance preventing misuse with avoiding unnecessary stifling of beneficial research and development. He also emphasizes practical challenges: defining capability thresholds, enforcement across jurisdictions, and the risk of driving dangerous work underground or to less regulated actors.

  • Governance options and layered approach: Brundage advocates a portfolio of measures rather than a single fix, including:

    • Technical controls (watermarking, access gating, monitoring, and red‑teaming);
    • Regulatory measures (risk‑based regulation, mandatory reporting, pre‑deployment evaluation for high‑risk models);
    • International coordination (harmonized norms, export control regimes, shared testing standards);
    • Corporate governance (safety teams, responsible disclosure practices, procurement safeguards);
    • Capacity building (support for lower‑resource countries to participate in governance). He underscores the need for adaptive, risk‑sensitive rules that evolve with capabilities.
  • Emphasis on empirical, evidence‑based policy: Brundage calls for systematic testing, model evaluation, and information sharing (incident reporting, benchmarks) so policymakers can set thresholds and tailor interventions to observed harms and capabilities rather than speculation.

  • Trade‑offs and political realism: He recognizes political and economic pressures (innovation incentives, national competitiveness, corporate interests) that complicate strict controls. Thus, he favors pragmatic, internationally coordinated measures that are targeted, enforceable, and designed to minimize negative impacts on legitimate innovation.

For further reading, see Brundage’s papers and policy work at the Future of Humanity Institute and related publications on AI governance, export controls, and risk assessment (e.g., his coauthored works on AI capabilities, governance frameworks, and policy briefs on model access and safety).

The EU AI Act is the European Commission’s flagship legislative proposal to regulate artificial intelligence through a risk‑based framework. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes duties proportional to those risks — from outright bans (e.g., certain biometric surveillance practices) to strict requirements for high‑risk systems (risk management, data quality, transparency, human oversight, conformity assessment) and lighter obligations for lower‑risk tools (transparency notices). The draft includes obligations for providers, deployers, importers and distributors, enforcement powers for national authorities, and penalties for noncompliance.

Why this selection matters

  • Comprehensive statutory model: Unlike most soft‑law approaches, the AI Act aims to be a binding, EU‑wide legal regime covering development, placing on the market, and use of AI systems.
  • Risk‑based and sector‑neutral: It targets harms rather than technologies per se and can apply across sectors (healthcare, transport, employment, law enforcement).
  • Global impact: Because of the EU’s market size and extraterritorial reach, the Act is likely to influence corporate practices and other jurisdictions’ rules.
  • Implementation challenges: Key issues include defining scope (what counts as an AI system), thresholds for “high risk,” conformity testing for complex models, balancing innovation and safety, and ensuring consistent enforcement across member states.

Primary sources

  • European Commission: Proposal for a Regulation laying down harmonised rules on AI (the “AI Act”) and subsequent legislative texts as adopted by EU institutions.

The phrase “Actionable levers: regulatory sandboxes, certification, export controls, procurement rules” highlights concrete mechanisms that policymakers and firms can use to shape AI outcomes. It matters because:

  • It moves from principle to practice: High‑level norms (safety, fairness, accountability) are necessary but insufficient. Actionable levers translate those norms into enforceable or testable interventions that change incentives and behavior.

  • They enable iterative learning:

    • Regulatory sandboxes let regulators and firms experiment with novel uses under controlled conditions, generating real‑world evidence to refine rules without freezing innovation.
    • Certification schemes and standards create measurable compliance criteria that firms can adopt and auditors can verify.
  • They create gatekeeping and incentives:

    • Export controls and classification regimes can limit cross‑border transfer of particularly powerful or dual‑use models, shaping who gains access and under what conditions.
    • Procurement rules allow governments to demand higher safety, transparency, and auditability from vendors, leveraging public spending to raise industry norms.
  • They address different points in the AI lifecycle:

    • Sandboxes and testing target development and pre‑deployment learning.
    • Certification and procurement shape deployment and market entry.
    • Export controls and reporting regimes influence distribution and externalities.
  • They are practically implementable and politically tractable: Unlike sweeping global treaties, these tools can be piloted at national or sectoral levels, scaled, harmonized, or linked across jurisdictions (e.g., mutual recognition of certifications).

In short, naming these levers signals where tangible policy action can occur now — enabling experimentation, risk management, and incremental harmonization while broader international governance evolves.

Selected references: OECD AI Policy Observatory; NIST AI Risk Management Framework; EU AI Act (proposal).

Corporate governance, safety teams, and industry norms are the private-sector backbone of AI governance. They operate where law is often absent or slow, shaping how AI is developed, tested, deployed, and remediated in practice. Briefly:

  • Corporate governance: This is the set of internal policies, oversight structures, accountability mechanisms, and decision rules that firms use to manage AI-related risks. It determines who in the company can approve model development or deployment, how tradeoffs (e.g., speed vs. safety, profit vs. privacy) are resolved, and how incidents are documented and reported. Good corporate governance translates high‑level principles into enforceable procedures and aligns incentives across engineers, managers, and boards.

  • Safety teams: Specialized groups (red teams, risk assessment units, model‑safety labs) focus on identifying, testing for, and mitigating technical and operational risks: adversarial attacks, hallucinations, scaling failures, misuse pathways, and emergent capabilities. They run stress tests, adversarial probing, interpretability analyses, and pre‑deployment reviews. Safety teams are the technical and organizational mechanism that operationalizes safety goals; their independence, resources, and authority are crucial to effectiveness.

  • Industry norms: These are informal but powerful expectations—best practices, standards, and shared tools—developed by firms, standards bodies, and multistakeholder initiatives (e.g., model cards, incident disclosure norms, red‑teaming protocols). Norms help coordinate behavior across competitors, create reputational incentives, and lower transaction costs for safer development. They also bridge gaps between jurisdictions by making certain practices de facto global.

Why this triad matters

  • Speed and adaptiveness: Private actors can move faster than regulators to implement technical fixes and iterate safety practices.
  • Fill regulatory gaps: Where law is fuzzy or absent, corporate rules and norms set de facto standards that reduce harm.
  • Scalability and diffusion: Widely adopted norms and well‑resourced safety teams can propagate safety practices across the ecosystem.
  • Limits and risks: Reliance on voluntary governance creates uneven protections, potential conflicts of interest (profit vs. public safety), and variable transparency. Hence public oversight, audits, and regulatory backstops remain necessary.

Key practical implications

  • Regulators should require transparency about governance structures and grant safety teams sufficient independence and whistleblower protections.
  • Standards bodies and multistakeholder fora should codify best practices (testing, reporting, incident sharing) so norms become interoperable.
  • Civil society and researchers must monitor adherence and push for accountability where corporate governance falls short.

Sources and further reading: OECD AI Policy Observatory; NIST AI Risk Management Framework; Partnership on AI publications on model reporting and red‑teaming; scholarly literature on corporate responsibility and safety engineering.

Joanna Bryson is a prominent researcher and commentator on AI ethics, governance, and the social impacts of artificial intelligence. Her work combines computer science, cognitive science, and political theory to address how AI systems should be designed, regulated, and integrated into society. Key contributions relevant to accountability debates include:

  • Clear stance on responsibility: Bryson emphasizes that moral and legal responsibility should attach to human actors (developers, deployers, institutions), not to AI systems themselves. She argues against treating AI as moral agents or persons; doing so risks misplacing accountability and weakening incentives for human oversight and regulation. (See Bryson, “Robots should be slaves,” 2010; later writings expanding this view.)

  • Focus on institutional design: She advocates governance that targets corporate and institutional practices—contracting, procurement, auditing, and liability regimes—so that those who create and deploy systems face clear obligations and consequences. This includes technical transparency, documentation (model cards), and auditability to enable accountability.

  • Pragmatic ethics and policy: Bryson favors practical, enforceable measures (standards, regulatory requirements) over purely aspirational principles. She has been active in policy debates and multistakeholder initiatives that translate ethical concerns into implementable governance tools.

  • Interdisciplinary approach to harms: She stresses understanding AI within social and economic contexts—how power, incentives, and organizational arrangements produce harms—so accountability mechanisms must address these systemic factors, not only technical fixes.

Relevant works and engagements: Bryson’s academic papers and essays on AI agency and responsibility; contributions to policy fora and public debates on AI regulation and ethics.

For further reading: Bryson J. J., “Robots should be slaves” (Philosophy & Technology, 2010) and her subsequent essays and talks on AI governance and responsibility.

Regulatory design concerns how societies shape rules, institutions, and processes to manage AI’s benefits and risks. Comparative approaches study how different legal systems, political economies, and cultures choose different designs and what trade‑offs those choices produce. Together they help identify which instruments are likely to work, for whom, and under what conditions.

Key dimensions to consider

  • Normative goals: Regulations reflect priorities—safety, innovation, human rights, economic competitiveness, or state security. Designs that stress one goal (e.g., fast commercial deployment) will trade off others (e.g., strict harm prevention).

  • Legal architecture: Choices include comprehensive omnibus laws (EU’s risk‑based AI Act), sectoral/agency regulation (U.S. model, relying on FTC, FDA, DoD, etc.), or administrative/state‑led controls (China). Omnibus laws offer clarity and uniformity; sectoral approaches provide flexibility and domain expertise.

  • Instruments and mechanisms:

    • Ex ante obligations (pre‑deployment testing, certification, safety cases) reduce upstream risk but raise compliance costs and possible innovation slowdowns.
    • Ex post liability and enforcement (fines, tort law) incentivize caution with lower upfront burdens but can leave harms unmitigated beforehand.
    • Soft law and standards (technical standards, voluntary codes) enable rapid iteration and industry uptake, yet lack strong enforceability.
    • Hybrid mechanisms (regulatory sandboxes, mandatory reporting + voluntary standards) try to combine agility with oversight.
  • Regulatory scope and granularity: Risk‑based frameworks calibrate obligations to system risk (high, limited, minimal). This targets resources to dangerous uses but depends on reliable risk classification and may be gamed by actors seeking lower scrutiny.

  • Institutional capacity and procedural design: Effective regimes need competent regulators, technical evaluation capacity, transparency requirements, public participation, and mechanisms for cross‑agency and international cooperation. Low capacity states often favor lighter, standards‑based approaches or adopt “exported” rules from larger markets.

Comparative trade‑offs and lessons

  • Centralization vs decentralization: Centralized, statutory regimes (EU) give clear rules and predictability across jurisdictional boundaries; decentralized, agency‑led systems (U.S.) adapt more quickly to technology change but can produce patchwork outcomes and regulatory gaps.

  • Precaution vs innovation: Precautionary designs (strict pre‑deployment controls, export limits) better contain systemic risks but may stifle beneficial innovation or drive investment to less regulated jurisdictions. Innovation‑friendly designs prioritize permissive markets plus post‑hoc remedies, risking delayed mitigation of harms.

  • Openness vs security: Democratic norms favor transparency and accountability; security‑oriented regimes may restrict disclosure and centralize oversight to manage misuse. Comparative analysis shows tensions—excessive secrecy impedes external review; excessive openness can enable misuse.

  • Global interoperability: Divergent national designs complicate cross‑border data flows, model deployment, and export controls. Harmonization via standards, mutual recognition, and multilateral agreements reduces friction but requires political convergence on core principles.

Practical implications for policy‑making

  • Adopt mixed toolkits: Combine ex ante safeguards for high‑risk systems, ex post liability for accountability, and standards/sandboxes for learning and technical alignment.

  • Build adaptive governance: Use sunset clauses, iterative rulemaking, and mandated review to keep rules aligned with rapidly evolving capabilities.

  • Invest in capacity and international coordination: Technical evaluation labs, shared testing protocols, and mutual recognition arrangements reduce fragmentation and uplift low‑capacity jurisdictions.

  • Make normative choices explicit: Regulators should transparently prioritize values (safety, rights, innovation) so trade‑offs are democratically accountable.

Selected further reading

  • European Commission, Proposal for an AI Act (risk‑based regulatory model)
  • NIST, AI Risk Management Framework (practical guidance)
  • OECD, AI Principles and Policy Observatory (comparative surveys)

(These sources outline concrete designs and allow comparative study of outcomes.)

The NIST AI Risk Management Framework (AI RMF) is a voluntary, non‑binding guidance document produced by the U.S. National Institute of Standards and Technology to help organizations identify, assess, manage, and communicate risks from artificial intelligence systems. It is designed to be flexible, technology‑neutral, and usable across sectors and organizational sizes.

Key points

  • Purpose: Encourage systematic, repeatable risk management practices for AI that improve trustworthiness while supporting innovation and interoperability.
  • Structure: Organized around four core functions — Map (identify context and risks), Measure (assess system behavior and impact), Manage (select and implement risk controls), and Govern (oversight, roles, and accountability).
  • Principles: Emphasizes transparency, fairness, robustness, safety, privacy, and human oversight, but focuses on operationalizing these concepts through risk processes rather than prescribing specific technical solutions.
  • Implementation: Offers profiles, measurement methods, playbooks, and examples to help organizations tailor practices to their risk tolerance, legal obligations, and operational context.
  • Role in governance: Serves as a common toolkit that can inform industry best practices, procurement requirements, regulatory guidance, and international standardization efforts without being a formal regulation.

Why it matters

  • Provides practical, testable steps for integrating safety and trustworthiness into AI lifecycles.
  • Helps bridge technical work (testing, metrics, documentation) with governance needs (accountability, reporting).
  • Widely referenced by U.S. agencies and industry and used as an input for policymaking and standards development internationally.

Further reading: NIST AI Risk Management Framework (official publication) and associated NIST resources and implementation guides.

Timnit Gebru and Margaret Mitchell have been prominent critics of how large technology firms conduct AI research and govern its societal impacts. Their critiques focus on three interrelated concerns:

  1. Conflicts of interest and corporate incentives
  • They argue that for-profit firms often face structural incentives (market share, proprietary advantage, and public relations) that can suppress inconvenient research findings, underfund independent safety work, or prioritize rapid deployment over harms mitigation. This creates a conflict between commercial aims and public-interest research. (See Gebru’s work on data set harms and calls for independent auditing.)
  1. Research governance and academic norms
  • Both advocate for stronger governance of AI research practices: clear standards for dataset provenance, documentation (e.g., model cards, data sheets), rigorous impact assessments, and mechanisms that preserve academic freedom and methodological transparency within corporate labs. They emphasize that robust internal review processes and external oversight are necessary to prevent harmful deployments and to ensure reproducibility and accountability.
  1. Structural protections for researchers and marginalized communities
  • Their high‑profile departures and disputes highlighted how researchers raising ethical concerns can be marginalized or dismissed. They call for protections for whistleblowers and for research agendas that center the perspectives of communities disproportionately affected by AI (racialized groups, low-income populations). This includes advocating for diversifying teams and decision-making, and for participatory approaches to assessing harms.

Policy implications they push for

  • Independent research governance: separation between commercial deployment decisions and safety/ethics evaluation; independent auditing and third‑party review.
  • Publicly accessible documentation and accountability mechanisms: mandatory disclosure of datasets, model capabilities, and risk assessments for high‑impact systems.
  • Institutional safeguards: formal channels protecting researchers who identify harms, and funding/support for community‑led studies of AI impacts.

Why it matters philosophically

  • Their critique reframes AI ethics from abstract principles to institutional ethics: the moral character of AI depends not only on algorithms but on organizational structures, incentives, and power relations that shape what research gets done, published, or suppressed. This shifts the focus from individual responsibility to collective and systemic governance.

Key sources

  • Timnit Gebru’s publications and public essays on dataset harms and ethics in AI.
  • Statements and papers by Margaret Mitchell on research governance and ethical review in AI.
  • Coverage of their departures from major labs and subsequent calls for reform (e.g., media reports and open letters).

These critiques have influenced ongoing policy debates about transparency, whistleblower protections, independent oversight, and how to embed democratic accountability into corporate AI research.

This selection was made because these authors synthesize domain‑specific expertise (public‑health, security analysis, and biosafety) to clarify a pressing governance challenge: generative AI models can lower technical barriers and accelerate biological design in ways that have plausible dual‑use harms. Key reasons for choosing them:

  • Interdisciplinary authority: WHO brings public‑health and global governance legitimacy; CSET (Center for Security and Emerging Technology) brings policy and security analysis focused on technology diffusion; biosafety experts contribute technical grounding in laboratory practice and biological risk. Their combined perspective links technical capabilities to real‑world public‑health and security consequences.

  • Concrete, policy‑relevant framing: They translate abstract “dual‑use” concerns into actionable policy topics — e.g., capability‑dependent risk assessment, need for model testing/evaluation for biohazard outputs, red‑teaming, disclosure/reporting mechanisms, and norms for access controls — which are directly useful for regulators, funders, and platform operators.

  • Evidence‑based caution: The authors emphasize plausibility without exaggeration: they document specific pathways (assistance with sequence design, protocol optimization, troubleshooting) and characterize uncertainty about scale and feasibility. That balanced approach helps policymakers weigh responses proportionate to risk.

  • Focus on governance levers: Their recommendations point to practical interventions that fit current governance tools: model and dataset governance, export‑control alignment, research oversight, industry standards, responsible disclosure, and capacity building for detection and response in public‑health systems.

  • Relevance to international coordination: Because bio‑related risks cross borders and involve health, security, and commerce, the report’s cross‑cutting recommendations help bridge gaps between AI governance forums (G7, OECD, UN) and biological safety/regulatory bodies (WHO, national health agencies).

References and further reading (selected)

  • WHO/CSET/biosafety analyses and briefs on AI and bio risks (various joint and independent reports).
  • CSET: “Biodefense in the Age of AI” and related policy notes.
  • WHO: guidance on responsible use of AI in health and risk considerations.
  • Academic reviews on dual‑use concerns and AI (e.g., Nature/Science commentaries).

If you want, I can summarize one of their specific recommendations (e.g., pre‑deployment testing for biological outputs) in 3–5 bullet points.

International coordination on AI governance sits at the intersection of shared risks and strategic rivalry. Cooperative mechanisms (OECD, G7, UN, standards bodies) create common norms, testing regimes, and information‑sharing that reduce cross‑border harms, raise baseline safety, and help smaller states build capacity. They also lower transaction costs for companies and enable interoperable regulation.

At the same time, geopolitics shapes what cooperation looks like. States with differing values, industrial strategies, and security priorities contest thresholds for export controls, classification of “capable” models, data‑flow rules, and mandatory disclosures. Major producers (EU, US, China) pursue partly divergent approaches — the EU emphasizes rights‑based, risk‑based regulation; the US prioritizes sectoral innovation plus enforcement; China combines rapid rule‑making with state control — producing fragmentation that complicates global harmonization.

The result is a pragmatic, mixed system: transnational soft law and technical standards expand baseline safety and foster collaboration, while strategic competition drives selective decoupling (export controls, domestic standards, secure supply chains) and contestation over enforcement mechanisms. Effective global governance will therefore require both technical common ground (shared testing, incident reporting, standards) and political agreements that manage strategic concerns — for example, carve‑outs for security, multilayered trust frameworks, and capacity‑building for less resourced states.

Key implication: improving AI governance depends as much on diplomacy and trust‑building among states as on technical standards — without political accommodation, technical convergence will be partial and fragile.

Sources: OECD AI Principles; UNESCO Recommendation on the Ethics of AI (2021); recent G7 and EU statements on AI governance; NIST AI Risk Management Framework.

Ruha Benjamin is a sociologist and scholar who examines how race, class, and power shape technological design, deployment, and governance. Her work argues that AI and other technologies often reproduce and amplify existing social inequalities unless governance explicitly addresses structural bias and unequal power relations. Key points from her perspective:

  • “Race after technology”: Technologies are not neutral; they encode social values and can function as “discriminatory instruments” that produce what she calls the “New Jim Code”—a term highlighting how algorithmic systems can perpetuate racialized harms under the guise of objectivity.

  • Focus on upstream dynamics: Benjamin emphasizes examining who designs technologies, whose interests they serve, and how funding, institutions, and labor conditions shape outcomes. Governance must therefore intervene not only at deployment but in design, procurement, and corporate incentives.

  • Participatory and democratic governance: She advocates for inclusive policy processes that center marginalized communities, meaningful public engagement, and mechanisms that enable affected groups to contest and shape technological decisions.

  • Redistribution and structural remedies: Beyond technical fixes (e.g., debiasing algorithms), Benjamin calls for structural reforms—accountability, regulatory measures, and social policies—that address underlying inequalities that technology alone cannot solve.

  • Imagination and alternatives: She promotes “disobedient” and creative frameworks—repair, care, and solidarity-driven design—that imagine alternative technological futures oriented toward justice rather than efficiency or profit.

Relevant works: Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (2019); essays and talks on tech justice and democratic governance.

Rights approach

  • What it emphasizes: Protecting individual and collective rights (civil, political, economic, social) — e.g., privacy, free expression, due process, non‑discrimination, and access to essential services.
  • Policy tools: Rights‑based statutes, human‑rights impact assessments, data‑protection laws (like GDPR), and judicial remedies.
  • Strengths: Clear legal obligations; aligns AI policy with established moral and legal frameworks; empowers affected people through enforceable remedies.
  • Limits: Rights language can be abstract or contested across jurisdictions; rights frameworks may struggle with collective harms and systemic effects.

Bias approach

  • What it emphasizes: Detecting, measuring, and mitigating unfair or discriminatory outcomes produced by AI systems (whether due to data, design, or deployment).
  • Policy tools: Audits, fairness metrics, dataset curation practices, model cards, requirement for disparate‑impact testing, and sectoral standards.
  • Strengths: Targets measurable harms; enables technical and procedural fixes; connects to anti‑discrimination law and compliance regimes.
  • Limits: “Bias” is multifaceted (statistical vs. substantive fairness); technical metrics can obscure value judgments; fixes can be brittle across contexts and may miss structural inequities.

Public‑interest approach

  • What it emphasizes: Governing AI to advance public goods—safety, democratic integrity, public health, equitable economic outcomes, and shared infrastructure—rather than only protecting individual rights or correcting bias.
  • Policy tools: Public‑sector procurement rules, public benefit obligations, open testing and certification, regulatory sandboxes, investment in public‑interest AI, and mandates for transparency and accountability in systems affecting public services.
  • Strengths: Enables systemic, collective remedies; aligns AI deployment with societal priorities; creates institutional capacity to steward technology at scale.
  • Limits: Requires political will and resources; risks capture by powerful actors if public interests are poorly defined; balancing innovation and public protection can be contentious.

How they interact (brief)

  • Complementary roles: Rights frameworks set legal floors; bias mitigation addresses specific discriminatory mechanisms; public‑interest policies target systemic outcomes and infrastructure. Effective governance typically combines all three: enforceable rights, technical and procedural bias controls, and institutions that steer AI toward the common good.

Further reading (select)

  • OECD AI Principles; EU AI Act drafts; UNESCO Recommendation on the Ethics of AI (2021); NIST AI Risk Management Framework.

The snapshot highlights fragmentation, capacity gaps, and uneven enforcement because these qualifiers correct common overconfidence about AI governance. They matter for three reasons:

  • Descriptive accuracy: Governance is indeed a patchwork—national laws, voluntary standards, and multistakeholder initiatives coexist without a single, binding global regime. Recognizing that helps set appropriate expectations about what rules actually apply where and to whom (OECD, EU AI Act draft).

  • Practical implications: Fragmentation and capacity differences mean important enforcement and oversight functions are uneven. Wealthier states and large firms can implement robust testing, certification, and red‑teaming; many countries and smaller actors cannot. That shapes where risks concentrate and which remedies are feasible (NIST, OECD observations).

  • Policy urgency and strategy: If governance were “solved,” attention could shift away from building capacity, harmonizing standards, and creating interoperable enforcement tools. Emphasizing gaps justifies investing in international coordination, technical assistance, sandboxes, and legally binding mechanisms where needed (G7, UN dialogues).

In short: noting fragmentation and gaps is not pessimism but a necessary, realistic baseline for designing effective, equitable next steps in AI governance.

Security and dual‑use

  • Dual‑use nature: Many AI capabilities can be used for beneficial purposes (medical diagnosis, climate modeling, automation) and harmful ones (cyberattacks, automated disinformation, biological design assistance). This “dual‑use” quality makes simple open/close choices insufficient.
  • Risks to consider: misuse by criminals or hostile states (cyber intrusion, fraud, surveillance), escalation of geopolitical tensions (autonomous weapons, information warfare), and enabling of other high‑consequence harms (assisting in biological or chemical weapon design).
  • Policy tension: Openness accelerates research and beneficial innovation; secrecy or restrictions can slow progress and disadvantage some actors. Effective policy must balance scientific collaboration, public safety, and competitive/strategic concerns.

Export controls and related measures

  • Purpose: Export controls aim to limit the transfer of sensitive AI technologies, models, datasets, and associated expertise to actors or countries that could misuse them. They are an established tool in arms control and dual‑use regimes adapted to AI.
  • Forms of controls: licensing requirements for transferring models or hardware, blacklists or restricted‑party lists, limitations on cloud services or access to advanced compute, and controls on personnel exchanges or training programs.
  • Challenges in AI context: Defining what to control is technically complex (models vs. weights vs. training data vs. algorithms), enforcing controls across decentralized cloud services and open-source releases is difficult, and controls can have unintended global economic and diplomatic effects.
  • Complementary measures: Responsible disclosure policies, model testing/classification frameworks, export‑control harmonization among like‑minded states, licensing that includes safety conditions, and capacity‑building for low‑ and middle‑income countries to reduce incentives for illicit acquisition.

Practical governance implications

  • Risk‑tiered approach: Many proposals favor classifying models by capability/risk and tailoring controls accordingly (light touch for narrow tools; stricter oversight for frontier models).
  • International coordination: Export controls are most effective when aligned across major producing states to prevent circumvention and reduce market fragmentation.
  • Combined toolkit: Controls work best alongside domestic regulation (liability, safety standards), corporate governance (red‑teaming, access controls), and international norms (transparency, incident sharing).

Selected references

  • OECD AI Policy Observatory (policy briefs on security and dual‑use)
  • Recent G7 statements and US export‑control actions on AI compute and model transfers
  • UNESCO Recommendation on the Ethics of AI (for broader normative context)

This concise reading list is tailored for policymakers, technologists, and civil‑society advocates who need high‑value, actionable sources to understand current AI governance debates and next steps. Each entry includes why it matters and how to use it.

  1. European Commission — Proposal for an EU Artificial Intelligence Act (and final text once adopted)

    • Why it matters: The EU AI Act is the most comprehensive statutory approach to date, introducing risk‑based obligations, conformity assessment, and enforcement mechanisms that will shape global regulatory expectations and supply‑chain requirements.
    • How to use it: Study the risk categories and obligations for high‑risk systems to design compliance strategies, procurement rules, and harmonized standards; use as a model for national legislation or bilateral negotiations.
    • Source: European Commission – AI Act materials and summaries (policy text + guidance).
  2. OECD — OECD AI Principles & AI Policy Observatory

    • Why it matters: Widely endorsed nonbinding principles (human‑centered values, transparency, accountability) and a practical portal comparing national policies, toolkits, and case studies.
    • How to use it: Reference for multilateral norm‑setting, baseline for domestic policy, and a resource for international coordination and technical assistance to lower‑capacity states.
    • Source: OECD AI Principles; AI Policy Observatory.
  3. NIST — AI Risk Management Framework (RMF)

    • Why it matters: Practical, voluntary framework focused on risk governance, measurement, and lifecycle management widely used by U.S. agencies and industry for operationalizing safety and accountability.
    • How to use it: Adopt or adapt RMF processes for organizational procurement, certification pilots, and integration with regulatory sandboxes.
    • Source: NIST AI RMF documentation.
  4. UNESCO — Recommendation on the Ethics of Artificial Intelligence (2021)

    • Why it matters: Global normative instrument adopted by UNESCO member states that emphasizes human rights, equity, and global inclusivity—useful for framing ethical obligations and capacity‑building needs in multilateral fora.
    • How to use it: Advocate for human‑rights based approaches in national policy, and leverage when engaging countries underrepresented in other processes.
    • Source: UNESCO Recommendation text and commentary.
  5. Partnership on AI / OpenAI / Industry white papers on model transparency & red‑teaming

    • Why it matters: Industry and multistakeholder bodies publish technical best practices (model cards, incident reporting, red‑teaming methods) that are shaping voluntary governance and possible regulatory expectations.
    • How to use it: Implement practical transparency and safety measures; cite as evidence of industry norms in regulatory debates.
    • Source: Partnership on AI publications; leading lab safety papers.
  6. G7 / OECD / UN statements on AI safety, testing, and export controls (recent communiqués)

    • Why it matters: High‑level political consensus points toward shared priorities (testing regimes, export controls for advanced models, information‑sharing), and signals likely areas for near‑term coordination.
    • How to use it: Track policy signals for international alignment, advocate for specific commitments (e.g., independent model testing), and use communiqués to coordinate domestic policy timelines.
    • Source: G7 AI Ministerial communiqués; OECD/UN press statements.
  7. Select technical primer: “On the Dangers of Stochastic Parrots” (Bender et al.) and model evaluation literature

    • Why it matters: Frames key ethical and technical risks from large language models (data provenance, scale impacts, evaluation challenges), useful for bridging technical concerns and policy choices.
    • How to use it: Inform data governance, transparency mandates, and public procurement requirements for documentation and evaluation.
    • Source: Bender et al., and follow‑up LLM evaluation studies.
  8. Legal & policy analysis briefs: Belfer/AI Council/think‑tank primers on liability, antitrust, and labor impacts

    • Why it matters: Practical analyses that translate legal doctrines to AI contexts—liability allocation, competition policy for dominant model providers, and workforce transition policies.
    • How to use it: Craft targeted legislative fixes, design enforcement strategies, and prepare impact assessments for social protections.
    • Source: Belfer Center, Centre for Data Innovation, Brookings, and similar briefs.

How to prioritize these readings

  • Policymaker: Start with the EU AI Act, OECD Principles, NIST RMF, then G7/OECD statements and legal briefs to draft enforceable, interoperable rules.
  • Technologist: Start with NIST RMF, industry red‑teaming/model transparency papers, and the technical evaluation literature.
  • Civil‑society advocate: Start with UNESCO Recommendation, OECD Principles, Bender et al., and legal/think‑tank briefs to build rights‑based advocacy and accountability demands.

Quick practical tip: Combine normative texts (OECD, UNESCO) with operational frameworks (NIST, industry red‑teaming) to design policy that is both principledInternational Coordination on AI Governance — Annotated and implement One‑Page Reading List

Purpose:able. Curated, high‑signal resources for a policymaker, technologist, or civil‑society advocate who needs concise, practical grounding in current international AI governance For all debates and options.

audiences,1) OECD AI Principles & AI Policy Observatory (OECD)

  • track the Why read evolving EU: Sets the dominant, pragmatic implementation, norms used by many governments (human‑ U.Scentered, transparent, robust) and links to country-level. legislative policy trackers.
  • Use for: Benchmark moves, and multing national proposals against widely acceptedilateral agreements nonbinding norms and finding comparative on testing policy examples.
  • Quick take: Influ and exportential soft law that controls.

informs legislation and multilateral discussionsSelected sources.

  1. EU AI Act for retrieval:
  • — European Commission (draft and European Commission explanatory materials)
  • Why read: The most comprehensive statutory approach to risk‑based AI regulation; a practical template: EU for rules AI Act, obligations, and enforcement mechanisms.
  • documents Use for: Designing- OECD risk classification, compliance: AI pathways, Principles & and supplier obligations in AI Policy domestic law or procurement Observatory .
  • Quick take- N: Shows how substantial regulatory detail can be operationalizedIST: (conformity assessment, fines, high‑ AI Riskrisk rules).
  1. NIST AI Management Framework Risk Management Framework ( -U.S. National Institute of UNESCO: Standards and Technology)
  • Why read: Recommendation on Practical, technical guidance for organizations on assessing the Ethics and managing AI risk; widely referenced by industry. of AI- Use for: (202 Developing technical standards, internal governance1) processes, and procurement- Partnership requirements.
  • Quick on AI take: and industry A flexible framework that bridges policy white papers objectives and engineering practices.
  1. UNESCO
  • Recommendation on the Ethics of Artificial Recent G Intelligence (2021)
  • Why read7/O: Global normative textECD/ emphasizing human rights, inclusion,UN commun and capacity building — often cited by lower‑income states.
  • Useiqués for:
  • International advocacy, rights‑based Bender policy framing et al, and multilateral., “ negotiations. On the- Quick take: Values‑driven complement to Dangers OECD’s pragmatic approach of St; usefulochastic Par in diplomatic contexts.

5)rots”

Recent G7 / OECD / UN statementsIf you on AI (select communiqués)

  • Why’d like read: Show current multilateral priorities (, Itesting regimes, export can convert controls, safety sharing) and political convergence this into points.
  • Use for: Anticip a printableating near‑term one‑ international commitments and aligning national policy timelines.
  • Quickpage PDF take: Indicate where global coordination is or tailor likeliest to produce joint action.

the list6) Partnership on to a AI and Model Cards / Datasheets literature specific country (technical + governance)

  • Why read or stakeholder.: Practical transparency tools developed by civil society + industry to document model capabilities and risks.
  • Use for: Crafting disclosure requirements, procurement checklists, and public reporting standards.
  • Quick take: Low‑cost, implementable transparency measures that can be scaled into regulation.
  1. Academic overview: “Governing AI: A Guide to the Ethics and Policy” (select review article or book chapter)
  • Why read: Synthesizes legal, economic, and ethical arguments and clarifies tradeoffs (innovation vs. safety; openness vs. security).
  • Use for: Building policy briefs that weigh alternatives and anticipate unintended consequences.
  • Quick take: Helpful conceptual grounding for high‑stakes decisions.
  1. Technical safety resources: OpenAI safety policy briefs; white papers on red‑teaming and model evaluation
  • Why read: Explain technical capabilities, failure modes, and recommended mitigation practices from leading developers.
  • Use for: Informing requirements for pre‑deployment testing, incident disclosure, and research funding priorities.
  • Quick take: Ground truth on what measures are feasible and where gaps remain.
  1. Reports on export controls & dual‑use risks (e.g., national export control reviews, expert analyses)
  • Why read: Clarify options for restricting model/compute transfer and the implications for trade and research.
  • Use for: Designing proportionate controls that target high‑risk capabilities without unduly blocking beneficial research.
  • Quick take: Policy tools exist but require careful calibration and international coordination.
  1. Civil‑society monitoring and litigation resources (ACLU/EDRi/Algorithmic Justice League briefs)
  • Why read: Document harms (bias, surveillance, labor impacts), public interest legal strategies, and community priorities.
  • Use for: Drafting rights‑protecting safeguards, impact assessment criteria, and enforcement mechanisms.
  • Quick take: Anchors policy in lived harms and accountability practices.

How to use this list (one‑page action steps)

  • For policymakers: Start with OECD Principles + EU AI Act for legal architecture; add NIST for operational details; consult export‑control analyses before adopting trade measures.
  • For technologists: Read NIST + model documentation (Partnership on AI, model cards) and developer safety briefs to align engineering work with regulatory expectations.
  • For civil society: Use UNESCO, civil‑society reports, and litigation resources to frame rights‑based demands and monitor implementation; leverage OECD and G7 statements to push for accountability mechanisms.

Selected sources and further pointers

  • OECD AI Principles & Policy Observatory
  • European Commission — EU AI Act materials
  • NIST AI Risk Management Framework
  • UNESCO Recommendation on the Ethics of AI (2021)
  • Recent G7, OECD, and UN communiqués on AI
  • Partnership on AI; model cards/datasheets literature
  • Select academic review article on AI governance
  • OpenAI and other developer safety white papers
  • Civil‑society advocacy and litigation briefs (ACLU, EDRi)

If you want, I can convert this into a two‑page brief with direct links and a one‑paragraph takeaway tailored to one of the three audiences (policymaker / technologist / civil‑society advocate).

Cathy O’Neil is a data scientist and public intellectual best known for critiquing the social and moral consequences of algorithmic systems. In her book Weapons of Math Destruction (2016) she argues that many large‑scale predictive models are opaque, unregulated, and socially damaging. Her core points:

  • Algorithms as amplifiers of inequality: O’Neil shows how models trained on biased or incomplete data can reproduce and magnify social injustices (e.g., in criminal justice risk scores, hiring tools, credit scoring), producing negative feedback loops that hurt already vulnerable populations.

  • Opacity and lack of contestability: She emphasizes that many algorithmic decisions are inscrutable to those affected — neither the logic nor the data are transparent — removing meaningful opportunities for redress or appeal.

  • Scale and systemic harm: O’Neil coins the term “Weapons of Math Destruction” for models that are widespread, impactful, and poorly regulated. Their scale makes individual harms cumulative and societal rather than isolated.

  • Perverse incentives and responsibility gaps: She highlights how commercial and institutional incentives (efficiency, profit, political goals) can prioritize predictive performance over fairness or dignity, creating accountability vacuums where it’s unclear who should be held responsible for harm — modelers, deployers, or policymakers.

  • Call for democratic governance and auditability: O’Neil advocates for clearer regulation, algorithmic accountability (audits, transparency), and civic engagement so that model design and deployment align with democratic values and human rights.

Relevance to AI governance: O’Neil’s work grounds normative arguments for regulation (e.g., transparency requirements, rights to explanation, independent audits) and informs policy debates about liability, public procurement standards, and protections for affected communities. Her critique remains influential in shaping both soft‑law standards and statutory proposals addressing bias, fairness, and accountability.

Selected source: Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016).

Helen Toner (Center for Security and Emerging Technology) is included because her work clearly maps practical policy levers and governance pathways for advanced AI. Key reasons for the selection:

  • Focus on actionable policy: Toner translates technical risks from frontier AI into concrete regulatory and institutional options (e.g., model evaluation, incident reporting, export controls, licensing or certification), helping bridge the gap between abstract principles and implementable measures.

  • Emphasis on governance design: She analyzes how different governance instruments (statutory rules, standards, procurement conditions, international coordination, and market interventions) interact, highlighting trade‑offs, enforcement challenges, and sequencing — useful for policymakers deciding what to adopt first.

  • Risk‑informed, multidisciplinary approach: Her work integrates technical understanding of model capabilities with political, economic, and security considerations, which is important for distinguishing near‑term misuse prevention from longer‑term alignment and systemic risks.

  • Attention to institutions and incentives: Toner examines institutional roles (agencies, standards bodies, multilateral forums) and incentive structures for firms and researchers, offering recommendations to align private incentives with public safety.

  • Influence on policy debates: CSET outputs, including Toner’s analyses, have been widely cited by governments and other policy bodies as they design testing regimes, reporting requirements, and coordination mechanisms.

Relevant examples: analyses on model evaluation and testing, proposals for oversight mechanisms and licensing frameworks, and policy briefs on export controls and international coordination. (See CSET publications and policy briefs for specific reports.)

The summary offers representative coverage because it highlights the distinct, co‑existing sources that are actually shaping AI governance today:

  • Multiple legal/regulatory centers: It names the major regional approaches (EU’s comprehensive statutory effort, the U.S. sectoral/agency model, China’s state‑led regime), which capture the dominant regulatory experiments and their different priorities (risk‑based mandates vs. innovation‑friendly, sectoral oversight vs. centralized control).

  • Soft law and standards: It notes the many nonbinding influence channels (OECD, ISO, IEEE, UNESCO, Partnership on AI, NIST), reflecting how technical norms, guidelines, and standards diffuse into policy and practice even without formal legislation.

  • Corporate governance and procurement: It includes industry practices (safety teams, red‑teaming, model cards) and the growing role of public procurement and vendor requirements — important because much real‑world risk management is implemented by firms and purchasers, not only by statutes.

  • Key policy tensions and focus areas: It lists the central fault lines policymakers face — safety/alignment, accountability/liability, civil rights, security/dual‑use, and economic/labor impacts — which explains why different instruments (laws, standards, markets) are being used in parallel.

  • International coordination and capacity gaps: It recognizes active multilateral engagement (G7, OECD, UN) alongside limits: outputs are often nonbinding, many countries lack regulatory capacity, and technical consensus (on testing, export controls, disclosure) is still forming.

Together these elements explain why governance is fragmented but substantive: regulation, voluntary standards, corporate practice, and multilateral norms are each contributing pieces of a governance architecture that is emerging in overlapping, sometimes inconsistent ways. For further reading: OECD AI Principles; EU AI Act materials (European Commission); NIST AI Risk Management Framework; UNESCO Recommendation on the Ethics of AI.

Back to Graph