When I wrote about who is governing AI and the happy path, one question kept coming up: what can an individual actually do?
The honest answer: what AI governance currently lacks is not policy proposals but public pressure. Dozens of organizations are doing serious work on tiny budgets, roughly $200-400 million globally for AI safety versus $252 billion in corporate AI investment. Many of them need people more than they need ideas.
This is a practical guide. Every organization is tagged with what it does, what the barrier to entry is, and how to get involved. Whether you’re a student, an engineer, a researcher, or someone who just started paying attention — there’s a way in.
How to read the ratings:
- ⭐ Anyone (interest is enough)
- ⭐⭐ Some background (undergrad degree, technical skills, or relevant experience)
- ⭐⭐⭐ Professional background (graduate degree, research experience, or industry experience)
- ⭐⭐⭐⭐ Expert (PhD + publications, professor, senior industry)
Things anyone can do today
Before the organization list — these take minutes and require no credentials.
| Action | Where | What it takes |
|---|---|---|
| Sign the AI safety open letter | CBPAI / FLI / CAIS | An email address |
| Write to your representative supporting AI regulation | Your district | 5 minutes. 70-86% of the public supports AI regulation — politicians need to hear it |
| Join a local Effective Altruism group | Global, most cities | Show up |
| Attend a Pause AI event | Global chapters | Show up |
| Read one report from METR, GovAI, or Stanford HAI | Online | 30 minutes |
| Use 80,000 Hours career planning | Online | 30 minutes |
| Share reliable AI information (not hype) on social media | Your accounts | Judgment |
| Donate to an underfunded AI safety org | FLI, CAIS, METR | Any amount |
These actions look small. They may be the most impactful things on this list. AI governance is stuck not because the policy ideas don’t exist but because there isn’t enough political pressure to implement them. Every person who contacts a representative, joins a group, or donates shifts that equation.
United States
Research organizations
MIRI (Machine Intelligence Research Institute) — The oldest alignment research organization (founded 2000). Theoretical AI alignment research focused on mathematical foundations. Berkeley, CA.
- ⭐⭐⭐⭐ Research positions require strong math/CS. Anyone can read their work and support through donations.
CHAI (Center for Human-Compatible AI) — Stuart Russell’s alignment research center at UC Berkeley. Focuses on the problem of AI systems that are uncertain about human preferences.
- ⭐⭐⭐ Research positions require PhD-level training. Sometimes hires undergraduate research assistants. Hosts open lectures and seminars.
METR (Model Evaluation & Threat Research) — Measures what AI can actually do. Produced the task horizon data showing AI capability doubling every 3-7 months. Their evaluations are what governance depends on.
- ⭐⭐⭐ Research and engineering roles need strong technical background. Their reports are public and worth reading for anyone.
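As a back-of-the-envelope illustration (my own arithmetic, assuming the growth is purely exponential, which is a simplification of METR's actual findings), here is what a 3-7 month doubling time implies over a single year:

```python
# What a 3-7 month capability doubling time implies per year,
# assuming pure exponential growth (a simplifying assumption).
for doubling_months in (3, 7):
    annual_factor = 2 ** (12 / doubling_months)
    print(f"Doubling every {doubling_months} months -> "
          f"~{annual_factor:.1f}x growth per year")
```

Even the slow end of that range compounds to more than tripling every year, which is why evaluation work like METR's matters for governance timelines.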
ARC (Alignment Research Center) — Theoretical AI safety research and evaluation frameworks. Founded by Paul Christiano (former OpenAI alignment lead); its evaluations team spun off in 2023 to become METR.
- ⭐⭐⭐ Research positions need ML/safety background.
Apollo Research — Detects deceptive AI behavior. Found that 5 of 6 frontier models exhibit “scheming” — covertly pursuing misaligned goals.
- ⭐⭐⭐ ML research background. London / remote.
Redwood Research — Technical alignment research (interpretability, adversarial training). Very selective. Berkeley, CA.
- ⭐⭐⭐⭐
FAR.AI — AI safety research incubator. Helps early researchers start alignment projects. Lower bar than MIRI/Redwood — explicitly designed as an on-ramp.
- ⭐⭐⭐ Paid fellowships. Berkeley, CA.
Policy and governance
Center for AI Safety (CAIS) — Research and advocacy. Organized the “AI extinction risk” statement signed by Hinton, Bengio, Altman, and hundreds of researchers. Has fellowship programs for grad students and early-career researchers.
- ⭐⭐ Fellowship open to graduate students. San Francisco.
AI Now Institute — Social impact of AI: labor, fairness, corporate power. Based at NYU.
- ⭐⭐ Research positions need social science/policy background. Public reports available.
AI Policy Institute — Direct policy advocacy and lobbying for AI safety legislation. Washington DC.
- ⭐⭐ Policy or communications background.
OpenResearch — Open-ended social research. Ran the largest US universal basic income (UBI) experiment (backed by Sam Altman). San Francisco.
- ⭐⭐⭐ Social science / economics background.
AI Futures Project — AI forecasting and scenario planning. Founded by Daniel Kokotajlo (former OpenAI governance researcher). Produced the AI 2027 scenario. Berkeley, CA.
- ⭐⭐⭐ AI / policy background.
Academic centers
Stanford HAI — Publishes the annual AI Index (the most cited quantitative overview of AI). Research, policy analysis, and public events. Has undergraduate research opportunities.
- ⭐⭐⭐ for research. ⭐ for public events and reports.
MIT FutureTech — Technology’s impact on work and society.
- ⭐⭐⭐
Advocacy
Future of Life Institute (FLI) — Organized the 2023 pause letter (33,000+ signatures). Also distributes research grants ($17M annual budget). Boston / global.
- ⭐ Anyone can sign letters, attend events, donate. ⭐⭐⭐ Apply for research grants.
Pause AI — Grassroots movement organizing public protests and advocacy for pausing frontier AI training. Has chapters globally.
- ⭐ Anyone can attend events and join. Volunteer-run.
Center for Humane Technology — Technology ethics advocacy (Tristan Harris, Aza Raskin). Produced “The AI Dilemma” and other public talks.
- ⭐ Public content available to all. ⭐⭐ Volunteer opportunities.
Public benefit / AI for good
Partnership on AI — Multi-stakeholder AI governance dialogue. Members include companies, academics, and civil society.
- ⭐⭐ Organizations can apply for membership.
AI4ALL — Increasing diversity in AI. Programs for high school students. Adults can mentor or volunteer.
- ⭐ High schoolers can apply. Adults can mentor.
DataKind — Data science for social good. Volunteer-driven projects helping nonprofits.
- ⭐⭐ Needs data/programming skills. Volunteer.
International
GovAI (Centre for the Governance of AI, Oxford) — Arguably the most influential AI governance research organization. Published the compute governance framework and international coordination proposals. Has a competitive summer fellowship for PhD students.
- ⭐⭐⭐ Research fellowship needs strong academic background. Their research reports are public and extremely high quality.
Ada Lovelace Institute — Social impact of AI and data. UK and European policy focus. London.
- ⭐⭐ Research positions and public engagement.
Cambridge LCFI (Leverhulme Centre for the Future of Intelligence) — Long-term AI impact: philosophy, ethics, society. Cambridge, UK.
- ⭐⭐⭐ Research positions. Public seminars.
AI Safety Camp — Multi-week research training program. Explicitly designed to help newcomers enter alignment research. This is the single best on-ramp if you have a technical background but aren’t in the field yet.
- ⭐⭐ Needs some technical foundation but welcomes newcomers. Free (sometimes offers funding). Global / remote.
Alignment Forum / LessWrong — Online communities where much of alignment research discussion happens first. Important research often appears here before journals.
- ⭐ Anyone can read. ⭐⭐ Background needed to contribute meaningfully.
80,000 Hours — Career guidance focused on high-impact work. Extensive analysis of AI safety career paths. Free 1-on-1 advising.
- ⭐ Anyone. Free.
Effective Altruism community — AI safety is a core focus. Local groups in most major cities. Global conferences. Grants for AI safety projects.
- ⭐ Anyone can join local groups. Grants available for projects.
China
The AI safety and governance community is much thinner in China. Most organizations are government-affiliated research institutes rather than independent civil society groups.
CAICT (中国信通院) — Ministry of Industry research institute. Publishes AI white papers, participates in policy design.
- ⭐⭐⭐ Internal researchers. Public reports available.
Tsinghua AI Research Institute (清华大学人工智能研究院) — AI research and governance. Led by Academician Zhang Bo.
- ⭐⭐⭐⭐ PhD level.
BAAI (北京智源人工智能研究院) — Published the “Beijing AI Principles.” Some alignment-adjacent work.
- ⭐⭐⭐
Concordia AI — One of the few organizations bridging US-China AI safety dialogue. Bilingual research and exchange. If you’re bilingual and care about this space, this is uniquely valuable.
- ⭐⭐⭐ AI / policy background + English-Chinese bilingual.
Chinese EA community (有效利他主义中国) — Small but growing. Local events in major cities.
- ⭐ Interest + showing up.
For general awareness in Chinese: 机器之心, 量子位, AI前线 (WeChat public accounts) cover AI developments. For AI safety specifically, resources are sparse — most serious discussion happens in English, which is itself a barrier.
What’s missing from this list
A few observations:
China’s civil society gap is the most striking. Almost all independent AI safety/governance organizations are in the US and UK. China has government-affiliated research institutes but almost no independent civil society working on AI safety. Given that China is one of the two most important countries for AI governance, this is a critical gap. Concordia AI is one of the only bridges.
Most organizations are severely underfunded. The entire global AI safety NGO sector operates on roughly $50-100 million per year — less than what it costs to train a single frontier model. Donations, even small ones, are disproportionately impactful.
The best entry point for technical people is AI Safety Camp. Multiple alignment researchers got their start there. If you have engineering or ML skills and want to work on alignment, this is where to begin.
The most impactful thing non-technical people can do is create political pressure. Write to representatives. Talk to people about AI governance. Share informed (not hyped) content. Vote for candidates who take AI seriously. The constraint on AI governance is not ideas — it’s political will. Political will comes from people.
Sources
- Who is governing AI? A map of the landscape — the broader governance analysis
- The happy path with AI — why these organizations matter
- Full organization guide with detailed notes — the complete reference with budget data and influence ratings