A map of AI governance

February 1, 2026

If you’ve been following AI governance, you’ve seen a lot of activity. AI Safety Summits. UN panels. EU regulations. Company pledges. Open letters signed by Nobel laureates. It feels like a lot is happening.

Then you look at the numbers. Global spending on AI safety and governance: roughly $200-400 million per year. Global corporate AI investment: $252 billion in 2024. That’s a 600:1 to 1,200:1 ratio. For every dollar going toward making sure AI is safe and governed, six hundred to twelve hundred dollars go toward making it more capable.
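
For transparency about that arithmetic, a minimal sketch (the inputs are the rough estimates above, not precise figures):

```python
# Ratio of capability investment to safety/governance spending,
# using the estimates cited above.
safety_low, safety_high = 200e6, 400e6  # USD/year on safety and governance
capability = 252e9                      # USD of corporate AI investment, 2024

print(f"optimistic:  {capability / safety_high:,.0f}:1")  # ~630:1
print(f"pessimistic: {capability / safety_low:,.0f}:1")   # ~1,260:1
```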

This article maps what actually exists. Not what’s been announced, pledged, or promised — what has binding authority, enforcement mechanisms, and real impact. The picture is less reassuring than the headlines suggest.


International: wide but shallow

The UN established an AI Advisory Body in 2023, upgraded to a permanent Independent Scientific Panel on AI in August 2025 — 40 experts selected from 2,600+ candidates. The Panel is scheduled to hold its first meeting in March 2026, and its first report won’t come until late 2026. It is purely advisory: no government is obligated to act on anything it says. Influence: 4/10.

The OECD published the first intergovernmental AI principles in 2019 (updated 2024) and now tracks 800+ AI policy initiatives across 80+ countries. These principles have genuinely influenced the EU AI Act and other frameworks. But they are recommendations, not requirements. Influence: 5/10.

The AI Safety Summit series (Bletchley 2023 → Seoul 2024 → Paris 2025) has produced declarations, voluntary company commitments, and the launch of AI Safety Institutes. At Seoul, 16 companies made voluntary frontier AI safety commitments. At Paris, the framing shifted from “safety” toward “action and innovation” — a deliberate rebalancing from the safety emphasis of Bletchley. No summit has produced anything binding.

The Council of Europe AI Framework Convention is the first legally binding international AI treaty, with 44 signatories including the US, UK, EU, Canada, and Israel. It covers human rights, democracy, and the rule of law in AI. But it requires 5 ratifications to enter into force, and as of early 2026 there are zero. China and Russia are not participating, the obligations are broadly worded, and enforcement relies on national implementation. Influence: 5/10 — historically significant, practically toothless for now.

UNESCO’s AI Ethics Recommendation was adopted by 194 countries in 2021. It has a Readiness Assessment Methodology that over 50 countries have used. But no enforcement mechanism, no penalties, no binding obligations. Influence: 3/10.

The pattern: many institutions, many declarations, almost zero enforcement power. The international AI governance landscape is wide but shallow.


National: three incompatible directions

United States: the vacuum

The US — responsible for roughly 50% of frontier AI development by investment — has no binding federal AI safety regulation.

Biden’s Executive Order 14110 (October 2023) was the most comprehensive attempt: mandatory safety testing reports for large models, standards development through NIST, agency-level guidance. Trump revoked it on January 20, 2025, replacing it with an order focused on “removing barriers to American leadership.” Much of the work done under EO 14110 (NIST guidelines, agency reports) technically survives, but the mandate to continue is gone.

State-level regulation has been mixed. California’s SB 1047 — which would have required safety testing for large models — was vetoed by Governor Newsom in September 2024. Colorado passed an AI consumer protection act focused on algorithmic discrimination. The patchwork is thin.

NIST’s AI Safety Institute was established under Biden. Under Trump its status has diminished: renamed the Center for AI Standards and Innovation (CAISI), with a budget around $10 million and shrinking staff and scope.

Congress has introduced various bills (AIDA, CREATE AI Act, and others) but none have passed. There is no comprehensive federal AI legislation on the horizon.

The US is not just ungoverned — it is actively working to prevent governance at both federal and state levels. This is the single most consequential fact in the global AI governance landscape.

European Union: the lonely regulator

The EU AI Act is the only comprehensive binding AI regulation with significant penalties (up to 7% of global turnover). Its structure:

  • Prohibited practices (social scoring, untargeted facial recognition) — in force since February 2025
  • General-purpose AI model rules — August 2025
  • High-risk AI systems — delayed to late 2027 via the AI Omnibus amendment
  • Enforcement — through the EU AI Office (~60-140 staff) plus national authorities

The Act has real teeth — but faces massive implementation challenges. The AI Office is small. Member state capacity varies enormously. The AI Omnibus (November 2025) delayed deadlines and loosened some requirements in the name of “competitiveness.” The EU accounts for only 4-5% of global AI compute — raising the question of whether it has enough market power for the Brussels Effect to work on AI as it did for data privacy.

China: effective but different goals

China has built the most operationally comprehensive AI regulatory framework:

  • Algorithm recommendation rules (March 2022): mandatory filing of recommendation algorithms in a state registry
  • Deep synthesis rules (January 2023): labeling and consent requirements for synthetic media
  • Interim Measures for Generative AI (August 2023): security assessment and filing before public-facing GenAI services launch
  • AI content labeling measures (September 2025): mandatory explicit and implicit labels on AI-generated content

This is operationally effective — every public-facing GenAI service undergoes pre-market review. But the goals are primarily state control and social stability, not the kind of safety concerns (alignment, existential risk) that dominate Western AI safety discourse.

Others worth noting

UK: Shifted from its “pro-innovation” approach, rebranding its AI Safety Institute as the “AI Security Institute” (February 2025) with a narrower national security focus. Budget: £66 million, making it the best-funded safety institute globally, even as it loses its broader safety mandate.

South Korea: The AI Basic Act took effect in January 2026, making it the third major jurisdiction (after the EU and China) with comprehensive binding AI regulation. It includes mandatory risk assessments for high-impact AI.

118 countries are not part of any major AI governance initiative — mostly in Africa, Latin America, and parts of Asia. They are rule-takers, not rule-makers.


Bilateral: barely talking

US-China bilateral AI engagement is nearly nonexistent. The Biden-Xi nuclear AI statement (November 2024) — humans, not AI, should control nuclear weapons — is the only substantive agreement. There are Track II academic dialogues (Stanford-Tsinghua, Johns Hopkins-Peking) with no policy impact. The Trump administration’s approach to China combines relaxed chip export controls with aggressive tariffs — using AI governance as a competitive tool, not a cooperative framework.

These are the two countries that control ~90% of global AI compute. Their inability to coordinate on AI safety is the largest single gap in the governance landscape.

US-EU coordination exists through the Trade and Technology Council, but AI discussions have been overshadowed by data flow disagreements and DMA enforcement against US tech companies.


Civil society: underfunded but influential

A few dozen organizations do most of the world’s independent AI safety and governance work. Their combined annual budget is roughly $50-100 million — less than what a single frontier model costs to train.

Research organizations: METR (formerly ARC Evals; rigorous capability evaluations), Apollo Research (scheming detection), and Anthropic’s alignment team (constitutional AI, interpretability). These produce the measurements that governance depends on.

Policy organizations: GovAI at Oxford (compute governance, international coordination frameworks), AI Now Institute at NYU (industry accountability), Center for AI Safety (research and advocacy). GovAI’s compute governance framework is the most detailed proposal for how to actually implement AI governance through hardware.
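
To make “governance through hardware” concrete: most compute-governance proposals key off estimated training compute, as Biden’s EO 14110 did with its 10^26-operation reporting threshold. Below is a minimal sketch of that core check, using the standard ~6 × parameters × tokens approximation for dense-model training FLOPs. The threshold is real (though rescinded); the function names and example runs are illustrative:

```python
# Core primitive of compute governance: flag training runs above a
# compute threshold. EO 14110 (now rescinded) set reporting at 1e26 ops.
REPORTING_THRESHOLD_FLOP = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training compute for dense transformers:
    ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def must_report(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= REPORTING_THRESHOLD_FLOP

# Hypothetical runs:
print(must_report(params=70e9, tokens=15e12))  # 6.3e24 FLOPs -> False
print(must_report(params=2e12, tokens=20e12))  # 2.4e26 FLOPs -> True
```

The arithmetic is the easy part; verification is the hard part, which is why hardware-level proposals focus on chips and data centers, where training runs at this scale are physically observable.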

Advocacy: Future of Life Institute organized the 2023 open letter calling for a six-month pause on frontier AI training (signed by 33,000+ people, including Yoshua Bengio, Stuart Russell, and Elon Musk). The pause didn’t happen. In 2025, 400+ AI scientists and Nobel laureates signed a call for binding AI red lines by the end of 2026. PauseAI organizes public protests. These efforts raise awareness but have not yet produced policy change.

Academic centers: Stanford HAI publishes the annual AI Index (the most cited quantitative overview of AI progress). Oxford’s Future of Humanity Institute — one of the founding institutions of AI safety research — closed in 2024 due to university administrative issues. This is what “adequate institutional support” looks like in practice.


Industry self-regulation: the actual governance

In the absence of binding regulation (outside the EU and China), industry self-regulation is the de facto governance mechanism for frontier AI.

Anthropic’s Responsible Scaling Policy is the most detailed corporate safety commitment — defining AI Safety Levels (ASL-1 through ASL-4) with specific capability thresholds that trigger enhanced containment and security requirements. It’s genuinely more operationally detailed than most government policies.
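
To see the mechanism in miniature, here is a toy sketch of threshold-triggered commitments. The level name echoes the RSP, but the trigger and safeguards below are invented placeholders, not Anthropic’s actual criteria:

```python
# Toy model of an RSP-style policy: each safety level pairs a capability
# trigger with safeguards required before deployment may proceed.
# Triggers and safeguards are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class SafetyLevel:
    name: str
    trigger: str           # evaluation result that activates this level
    safeguards: list[str]  # must all be met before deployment

ASL3 = SafetyLevel(
    name="ASL-3",
    trigger="meaningful uplift on weapons or cyber evaluations",
    safeguards=["hardened model-weight security", "misuse monitoring"],
)

def may_deploy(level: SafetyLevel, met: set[str]) -> bool:
    """Deployment is gated on every safeguard of the active level."""
    return all(s in met for s in level.safeguards)

print(may_deploy(ASL3, {"hardened model-weight security"}))  # False
```

The point of encoding commitments this way is that they are checkable: either the required safeguards are in place or they are not.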

OpenAI made safety commitments but its track record raises questions. The superalignment team (announced July 2023, promised 20% of compute) was effectively dissolved by May 2024 when co-lead Jan Leike resigned, saying safety culture had “taken a backseat to shiny products.” Ilya Sutskever, co-founder and safety champion, also left. OpenAI now frames safety within its commercial product development rather than as a separate research priority.

The Frontier Model Forum — founded by Anthropic, Google, Microsoft, and OpenAI — has published research and funded evaluations but has no enforcement authority over its own members.

The fundamental problem with industry self-regulation: competitive pressure. If one company relaxes safety commitments and ships faster, others face pressure to follow. OpenAI’s trajectory — from “our mission is to ensure AI benefits all of humanity” to a for-profit restructuring — illustrates how quickly mission-driven safety can yield to commercial incentives.


The gap

All of this adds up to a governance landscape where:

~30-40% of global AI development is subject to some form of binding regulation, but most of this regulation is either not yet enforced (EU), serves different goals than safety (China), or is just beginning (South Korea). The US — responsible for ~50% of frontier AI development — has no binding federal AI safety regulation and is actively preventing it.

The ideal governance system would include:

What’s needed | What exists | Gap
Binding international treaty with enforcement | CoE Convention — no ratifications, no China/Russia | Critical
Mandatory pre-deployment testing for frontier models | Only China requires it. EU from 2026-27. US: nothing | Critical
International body with monitoring authority | UN Panel is advisory only. AISIs have no regulatory power | Critical
Compute governance (tracking large training runs) | Biden EO attempted it. Trump rescinded it | Critical
Emergency stop authority | No government has this (except China) | Critical
Adequate safety research funding | $200-400M vs $252B+ capability investment | Catastrophic
US-China cooperation framework | One symbolic statement. No working group | Critical
Democratic input into AI development | Essentially zero. Decisions made by <10 executives | Critical

The most damning number: for every $1 spent on AI safety and governance globally, $600-1,200 goes to making AI more capable. This is not a governance gap. It is a governance absence.


What this means for the happy path

In my previous article on the happy path with AI, Phase 2 argues for building regulation and international coordination with real power. This map shows how far we are from that:

  • The institutions exist but lack enforcement
  • The summits continue but are shifting from safety toward innovation
  • The only country with effective pre-market AI review (China) uses it for state control, not safety
  • The leading AI power (US) is actively deregulating
  • Civil society is doing serious work on tiny budgets
  • Industry self-regulation is better than nothing but structurally unreliable

None of this means governance is impossible. The Basel Accords for banking also started as voluntary standards before the 2008 crisis forced binding rules. Chatham House argues that binding AI governance may only become politically feasible after a crisis. The question is whether we can build enough institutional infrastructure before that crisis so that the response is rapid once political will materializes.

The institutions are being built. The question is whether they’ll have teeth before they need them.


Sources

Full research with detailed data for every organization, including budgets, staff counts, and influence ratings: