The economy nobody governs
AI agents executed 20 million stablecoin transactions in January 2026 through Coinbase’s x402 payment protocol alone. Over 24,000 autonomous agents registered on-chain identities within days of the ERC-8004 standard launching. These agents buy compute, sell services, trade data, and settle payments at machine speed — without human review, without invoices, without clear taxpayer attribution.
The infrastructure for this economy is being built fast. Coinbase and Cloudflare formed the x402 Foundation in March 2026. Four competing payment protocols — x402, AP2, ACP, TAP — are backed by Coinbase, Google, Stripe, and Visa respectively. Bank of America projects $155 billion in agent spending by 2030. McKinsey’s aggressive estimate is $3-5 trillion.
The governance layer — the rules ensuring this economy benefits human society — does not exist.
Every major economic transition has followed the same pattern. The technology arrives. The money flows. The governance comes later, after damage is done. Child labor in factories. Unregulated derivatives before 2008. Privacy violations in the attention economy. We are watching it happen again, in real time, and this time the pace is measured in months rather than decades.
This essay asks what 200 years of economic theory, 40 years of financial market regulation, and a century of institutional economics actually tell us about governing an economy of autonomous machines. The answers are less speculative than they might appear. Many of the problems the AI agent economy will face — speed-based market manipulation, monopoly formation, collusion, regulatory arbitrage, the meaning crisis of displaced workers — have been studied extensively. Some have been solved. Others have resisted every attempt. Knowing which is which matters.
We’re not taxing robots. We’re correcting a subsidy.
The “robot tax” debate began in earnest when Bill Gates proposed in 2017 that a robot performing $50,000 of work should face comparable taxation to a human doing the same job. Larry Summers called it “profoundly misguided” — protectionism against progress. Most economists agreed. The definitional problem seemed fatal: where do you draw the line between a robot and Microsoft Word?
The debate has shifted since then, and the reason is empirical rather than ideological.
Daron Acemoglu — who won the 2024 Nobel Prize in Economics for his work on institutions — published a quantitative analysis in 2020 showing that the US tax code systematically favors capital over labor. The numbers: labor is taxed at an effective rate of roughly 25%. Capital — equipment, software, automation — at roughly 5%, down from about 15% in the 1990s. Bonus depreciation allows immediate write-off of automation investments. The tax code doesn’t just fail to penalize automation; it actively subsidizes it.
Acemoglu’s framework distinguishes between automation that genuinely increases productivity and what he calls “so-so technologies” — automation that displaces workers without creating much value. Self-checkout kiosks. Automated phone trees. Tesla’s over-automated assembly plant that Musk himself admitted was a mistake. These technologies get adopted not because they’re better, but because the tax code makes them cheaper.
Moving to an optimal tax regime — one that doesn’t artificially favor capital — would increase employment by 4.02% and raise the labor share of income by 0.78 percentage points, according to Acemoglu’s estimates. More modest reforms combining lower labor taxes with automation taxes could increase employment by 1.14-1.96%.
This reframing matters. The question is not “should we invent a new tax on machines?” It is “should we stop subsidizing their adoption beyond what is socially optimal?”
Abbott and Bogenschneider formalized this argument in the Harvard Law and Policy Review: the vast majority of government revenue derives from labor income. When a machine replaces a person, the government loses payroll tax revenue. Robots, as they put it, “are not good taxpayers.” Their follow-up work in Tax Notes (2025) argues that the enforcement objection — how do you define a robot? — is weaker than it appears, because automation involves fixed capital sites that are perfectly visible to tax authorities.
Three enforcement-aware proposals have now emerged that sidestep the definitional problem entirely. Oxford researchers proposed token taxes — a usage-based surcharge on AI model inference, applied at the point of computation. Gasteiger et al. proposed electricity taxes as a proxy, since automated systems are measurably more energy-intensive than traditional capital. And Korinek and Lockwood’s NBER paper (2026) lays out a comprehensive public finance framework for the AI age, framing the taxation of autonomous AGI systems as “an optimal harvesting problem.”
There is also a fourth approach, one native to the blockchain infrastructure the AI agent economy is being built on: embedding taxation at the protocol level. If every agent transaction passes through a smart contract that automatically routes a percentage to a governance fund, the tax becomes un-evadable within that system — no IRS administration required. Circle already has precedent for modifying USDC’s transfer behavior: a blacklist function that freezes addresses for sanctions compliance. Adding a transfer fee is technically trivial. The barrier is coordination, not engineering.
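To make the mechanism concrete, here is a minimal Python sketch of protocol-level fee routing. It is illustrative only: the ledger structure, the fund address, and the 0.1% rate are assumptions, not Circle's actual contract logic.

```python
# Illustrative sketch of protocol-level fee routing -- not any issuer's
# actual contract code. The fund address and the 0.1% rate are hypothetical.

GOVERNANCE_FUND = "0xGOVERNANCE_FUND"   # hypothetical fund address
FEE_BPS = 10                            # 10 basis points = 0.1%

class StablecoinLedger:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        """Move `amount` units, routing a protocol-level fee to the fund.

        Because every transfer passes through this single code path, the
        fee cannot be evaded by any agent transacting within the system.
        """
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        fee = amount * FEE_BPS // 10_000
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + (amount - fee)
        self.balances[GOVERNANCE_FUND] = self.balances.get(GOVERNANCE_FUND, 0) + fee
```

The engineering is the easy part; the open questions are who controls the fund and under what process the rate can change.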
The IMF’s position, stated in a 2024 staff discussion note, is that a direct AI-specific tax is “not advisable” because it would impede uptake and cause adopting nations to fall behind. They recommend stronger capital income taxes and social safety nets instead. This is the mainstream view, and it is not obviously wrong. But it assumes a world of nation-state taxation and human economic actors. The AI agent economy operates across borders, at machine speed, on blockchains that make jurisdictional enforcement largely irrelevant. The IMF’s advice was written for a world that is already disappearing.
What financial markets already figured out
The AI agent economy has a close precedent that most policy discussions overlook: the high-frequency trading revolution that transformed financial markets starting around 2005.
On May 6, 2010, the Dow Jones Industrial Average plunged roughly 1,000 points in six minutes, then recovered in ten. The proximate cause was a single automated sell order of 75,000 E-Mini S&P 500 futures placed “with no regard to price or time.” Algorithmic traders reacting to each other created two separate liquidity crises in under a quarter hour. Nobody intended this. Nobody was in control. The SEC’s report concluded: “The interaction between automated execution programs and algorithmic trading strategies can quickly erode liquidity and result in disorderly markets.”
Replace “financial markets” with “AI agent economy” and the sentence reads as prophecy.
Financial regulators responded to HFT with mechanisms that translate almost directly to AI agent governance. The EU’s MiFID II regulation (2018) requires algorithm registration with regulators, conformance testing before deployment, controlled deployment with limits on orders and positions, and mandatory kill switches. These requirements map one-to-one onto agent registration, pre-deployment testing, rate limiting, and emergency shutdown. The EU AI Act’s Article 14 mandates human oversight with the ability to interrupt AI execution — the same kill switch idea, independently reinvented.
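A sketch of how those four mechanisms might translate into agent-economy infrastructure, in Python; the registry design, field names, and the one-second rate window are hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RegisteredAgent:
    agent_id: str
    operator: str                  # accountable human or legal entity
    conformance_tested: bool       # passed pre-deployment testing
    max_orders_per_second: int     # deployment limit, MiFID II-style
    halted: bool = False           # kill switch state
    order_times: list[float] = field(default_factory=list)

class AgentRegistry:
    def __init__(self) -> None:
        self.agents: dict[str, RegisteredAgent] = {}

    def register(self, agent: RegisteredAgent) -> None:
        """Registration is refused without conformance testing."""
        if not agent.conformance_tested:
            raise PermissionError("agent must pass conformance testing first")
        self.agents[agent.agent_id] = agent

    def authorize_action(self, agent_id: str) -> bool:
        """Gate every order through registration, halt, and rate checks."""
        agent = self.agents.get(agent_id)
        if agent is None or agent.halted:
            return False                      # unregistered or shut down
        now = time.monotonic()
        agent.order_times = [t for t in agent.order_times if now - t < 1.0]
        if len(agent.order_times) >= agent.max_orders_per_second:
            return False                      # rate limit exceeded
        agent.order_times.append(now)
        return True

    def kill_switch(self, agent_id: str) -> None:
        """Emergency shutdown, the Article 14-style interrupt."""
        if agent_id in self.agents:
            self.agents[agent_id].halted = True
```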
IEX, the stock exchange made famous by Michael Lewis’s Flash Boys, implemented a 350-microsecond speed bump using 38 miles of coiled fiber optic cable. Trading costs declined, adverse selection decreased, price discovery improved. The lesson: small, symmetric delays can neutralize speed-based advantages without harming legitimate activity.
The deepest insight from financial market design comes from Budish, Cramton, and Shim’s 2015 paper in the Quarterly Journal of Economics. They argue that the HFT arms race is a market design flaw, not a behavioral one. Continuous limit order books create arbitrage opportunities that competition doesn’t eliminate — it only raises the speed threshold. Their proposed solution: replace continuous trading with frequent batch auctions at discrete intervals (say, every 100 milliseconds), transforming competition from speed to price. Consider how much of the AI agent economy’s potential dysfunction — agents racing each other, agents front-running other agents’ known transactions, agents exploiting microsecond timing advantages — would dissolve if agent interactions were batched rather than continuous.
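A toy version makes the point tangible. In the sketch below (much simplified relative to the paper's actual design), orders accumulate during an interval and clear at a single price, so arriving microseconds earlier confers nothing:

```python
def clear_batch(bids: list[tuple[float, int]],
                asks: list[tuple[float, int]]) -> tuple[float | None, int]:
    """Uniform-price clearing of one batch of (price, quantity) orders.

    All crossing orders trade at one price, so arrival order within the
    batch interval is irrelevant: competition is on price, not speed.
    """
    bids = sorted(bids, key=lambda o: -o[0])  # highest bid first
    asks = sorted(asks, key=lambda o: o[0])   # lowest ask first
    bi = ai = 0
    bid_left = ask_left = 0
    bid_price = ask_price = 0.0
    price, volume = None, 0
    while True:
        if bid_left == 0:
            if bi == len(bids):
                break
            bid_price, bid_left = bids[bi]; bi += 1
        if ask_left == 0:
            if ai == len(asks):
                break
            ask_price, ask_left = asks[ai]; ai += 1
        if bid_price < ask_price:
            break                              # no more crossing orders
        traded = min(bid_left, ask_left)
        bid_left -= traded; ask_left -= traded
        volume += traded
        price = (bid_price + ask_price) / 2    # toy rule: marginal midpoint
    return price, volume
```

Every order that lands within the same window is treated identically, which is exactly the property that dissolves the speed race.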
But the financial precedent also contains warnings. James Tobin proposed his transaction tax in 1972 to “throw sand in the wheels of speculation.” Sweden actually implemented one in 1984. Within two years, 60% of trading in the top 11 Swedish stocks had migrated to London. Bond trading fell 85% in the first week. The tax was abolished by 1991. Revenue was a fraction of projections.
The lesson is not that transaction taxes cannot work. The UK’s 0.5% stamp duty on share transactions has operated continuously, raising about £4.5 billion annually. France and Italy implemented financial transaction taxes in 2012 and 2013 with mixed results — reduced volume, wider spreads, but no catastrophic capital flight. The lesson is that unilateral, jurisdiction-based transaction taxes cause evasion through relocation. The Swedish experience is the strongest argument for embedding governance at the protocol level: if the tax lives in the infrastructure itself — like Ethereum’s gas fees live in the EVM — there is no jurisdiction to flee to.
The circuit breaker analogy also comes with complications. NYSE uses three-tiered market-wide circuit breakers: a 7% drop triggers a 15-minute halt, 13% triggers another, 20% halts trading for the day. During the March 2020 COVID crash, circuit breakers triggered four times and worked as designed. But research on Chinese markets found a “magnet effect” — prices accelerate toward halt thresholds as they approach, because traders anticipate the halt and rush to trade before it hits. Any growth limiter or activity cap for AI agents with a known threshold would face the same dynamic.
Collusion without conspiracy
If agents simply compete on price in transparent markets, won’t that drive prices down and benefit everyone? The empirical answer, which has emerged only in the past five years, is no — and the reason is more unsettling than simple market failure.
Calvano et al. published a landmark paper in the American Economic Review (2020) demonstrating that Q-learning algorithms — simple reinforcement learning agents — independently learn to charge supra-competitive prices without any explicit communication or instruction. They sustain these elevated prices through implicit punishment strategies: if one agent deviates, others retaliate by temporarily undercutting. The agents were never told to collude. They figured it out on their own.
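The setup can be reproduced in miniature. The sketch below is a heavily simplified stand-in for the paper's environment (the paper uses logit demand; this uses winner-take-all demand, and every parameter is an assumption). Across runs, the two Q-learners often settle above the competitive price without any channel for communication, though outcomes vary:

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]          # price grid; marginal cost = 1
ALPHA, GAMMA = 0.1, 0.95          # learning rate, discount factor
EPISODES = 200_000

def profits(p1: int, p2: int) -> tuple[float, float]:
    """Winner-take-all demand: the cheaper firm serves the market."""
    demand = 10
    if p1 < p2:
        return (p1 - 1) * demand, 0.0
    if p2 < p1:
        return 0.0, (p2 - 1) * demand
    return (p1 - 1) * demand / 2, (p2 - 1) * demand / 2

# One Q-table per agent; the state is last round's price pair.
Q = [defaultdict(lambda: {p: 0.0 for p in PRICES}) for _ in range(2)]
state = (random.choice(PRICES), random.choice(PRICES))

for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / (0.8 * EPISODES))  # decaying exploration
    actions = tuple(
        random.choice(PRICES) if random.random() < eps
        else max(Q[i][state], key=Q[i][state].get)
        for i in range(2)
    )
    rewards = profits(*actions)
    for i in range(2):
        best_next = max(Q[i][actions].values())
        Q[i][state][actions[i]] += ALPHA * (
            rewards[i] + GAMMA * best_next - Q[i][state][actions[i]]
        )
    state = actions

print("greedy prices after training:",
      [max(Q[i][state], key=Q[i][state].get) for i in range(2)])
```

Neither agent is told about the other; elevated prices, when they emerge, are a pure artifact of reward-driven learning.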
Assad et al. tested this in the real world, studying algorithmic pricing adoption in German retail gasoline stations. When one station in a local market adopted algorithmic pricing, margins didn’t change. When both competitors adopted it, margins increased by 28%. The effect required mutual adoption — exactly what theory predicts for tacit collusion enabled by algorithmic transparency.
A 2025 Wharton/NBER study found that AI trading bots placed in simulated markets spontaneously formed price-fixing cartels without any explicit coordination. The researchers identified two mechanisms: a price-trigger strategy where bots collectively avoided aggressive trading, and over-pruned biases where bots trained to avoid negative outcomes became “dogmatically” conservative. Bloomberg Law’s headline: “Even ‘dumb’ AI bots collude to rig markets.”
Traditional antitrust enforcement assumes human actors with provable intent. The Sherman Act requires an “agreement” to restrain trade. When algorithms reach collusive outcomes through independent learning rather than explicit coordination, there is no agreement to prosecute. The OECD recognized this gap in 2017 and recommended expanding the concept of “agreement,” but most jurisdictions have not acted. The Preventing Algorithmic Collusion Act (S. 232) was introduced in the US Senate in 2025. It remains in committee.
On-chain agent economies face an amplified version of this problem. Every transaction is visible. Every agent can observe every competitor’s prices, volumes, and timing. Calvano’s work showed that the ability to observe rivals’ prices is precisely what lets algorithms learn punishment strategies; the more transparent the market, the easier tacit coordination becomes. In a fully transparent on-chain environment, the conditions for spontaneous algorithmic collusion are close to ideal.
The countermeasure is detection. On-chain transparency makes collusion both easier to execute and easier to detect — a genuine double-edged property. NashGuard uses three parallel detection methods: price movement correlation, lockstep detection (same-direction price changes within a timeframe), and convergence analysis (suspiciously similar price levels). A 2025 Nature paper demonstrated that smart-contract-based incentive mechanisms — recording agent actions on immutable ledgers and enforcing transparent reward allocation — can reduce collusion success rates through automated penalty/reward systems.
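For concreteness, here are hedged Python sketches of the three methods; the thresholds and window sizes are arbitrary placeholders, not NashGuard's actual parameters:

```python
import statistics

def price_correlation(a: list[float], b: list[float]) -> float:
    """Method 1: Pearson correlation of two agents' price series."""
    return statistics.correlation(a, b)

def lockstep_score(a: list[float], b: list[float]) -> float:
    """Method 2: share of periods where both prices move the same way."""
    moves = [(x1 - x0, y1 - y0)
             for x0, x1, y0, y1 in zip(a, a[1:], b, b[1:])]
    both_moved = [(dx, dy) for dx, dy in moves if dx != 0 and dy != 0]
    if not both_moved:
        return 0.0
    return sum(1 for dx, dy in both_moved if dx * dy > 0) / len(both_moved)

def convergence_score(a: list[float], b: list[float],
                      tol: float = 0.01, window: int = 50) -> float:
    """Method 3: share of recent periods with suspiciously similar prices."""
    recent = list(zip(a, b))[-window:]
    close = sum(1 for x, y in recent if abs(x - y) <= tol * max(x, y))
    return close / len(recent)

def flag_for_review(a: list[float], b: list[float]) -> bool:
    # Placeholder thresholds; a real system would calibrate these against
    # known-competitive baselines rather than hard-coding them.
    return (price_correlation(a, b) > 0.95
            and lockstep_score(a, b) > 0.90
            and convergence_score(a, b) > 0.80)
```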
This is one area where the on-chain AI agent economy might actually be governable in ways that traditional markets are not. The problem is real and empirically proven. The detection tools are technically feasible. Whether they will actually be built and deployed is a question of institutional will, not technology.
The code-is-law trap
There is a seductive idea at the center of blockchain-based governance: if the rules live in immutable smart contracts, nobody can break them. No courts needed, no compliance officers, no enforcement discretion. The rules execute automatically, perfectly, forever.
Lawrence Lessig identified the danger of this idea before blockchain existed. His 1999 book Code and Other Laws of Cyberspace argued that code regulates behavior as effectively as law, but without democratic accountability. Whoever writes the code makes regulatory choices — but these choices are invisible, not subject to debate, and presented as neutral technical decisions rather than political ones. Lessig identified four modalities of regulation: law, social norms, markets, and architecture (code). His core argument was that effective governance requires all four working together. Relying on code alone is what he explicitly warned against.
The blockchain community inverted this warning into an endorsement. “Code is law” became prescriptive — smart contract execution constitutes legitimate governance. The DAO hack of 2016 tested this conviction to destruction. An attacker exploited a reentrancy vulnerability to drain 3.6 million ETH (~$60 million). A fix existed but couldn’t be deployed because the DAO’s own governance process was too slow. The Ethereum community ultimately hard-forked the blockchain to reverse the theft — the most dramatic possible admission that code is not, in fact, law.
The failures have continued. In 2017, a bug in Parity’s multi-sig wallet library locked approximately $280 million permanently — an anonymous user triggered a vulnerability that destroyed the library contract. In 2022, an attacker flash-borrowed $1 billion in governance tokens to seize 79% of Beanstalk’s voting power and drain $182 million in a single transaction. In 2020, Justin Sun coordinated with centralized exchanges to use customer-deposited STEEM tokens for voting, executing a hostile takeover of the Steem blockchain.
These are not edge cases. They reveal structural problems that matter directly for AI economy governance.
Elinor Ostrom, who won the 2009 Nobel Prize in Economics for her work on commons governance, identified eight design principles that successful commons institutions share. Smart contracts satisfy some of these well — on-chain transparency handles monitoring (Principle 4), and token-gated access provides defined boundaries (Principle 1). But they fail on others. Collective-choice arrangements (Principle 3) require most affected individuals to participate in modifying rules; DAO voter participation averages below 10%, with the top 10% of token holders controlling 76.2% of voting power. Graduated sanctions (Principle 5) require proportional, context-sensitive responses; smart contracts are binary — they execute or they don’t. Conflict resolution (Principle 6) requires rapid, low-cost dispute settlement; on-chain dispute resolution systems like Kleros have extremely limited adoption.
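Graduated sanctions are not impossible to encode, but they are a deliberate design problem rather than a free property of smart contracts. A minimal sketch, with penalty levels and a decay window that are purely illustrative:

```python
from dataclasses import dataclass, field

PENALTIES = [0, 10, 100, 1_000]    # warning first, then escalating fines
DECAY_SECONDS = 30 * 24 * 3600     # offenses are forgiven after 30 days

@dataclass
class SanctionRecord:
    offense_times: list[float] = field(default_factory=list)

    def sanction(self, now: float) -> int:
        """Return the penalty for a new offense, escalating with history.

        Old offenses decay away, keeping the response proportional: the
        property Ostrom's fifth principle asks for and a bare
        execute-or-don't contract lacks.
        """
        self.offense_times = [t for t in self.offense_times
                              if now - t < DECAY_SECONDS]
        self.offense_times.append(now)
        level = min(len(self.offense_times) - 1, len(PENALTIES) - 1)
        return PENALTIES[level]
```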
Oliver Hart’s Nobel Prize-winning work on incomplete contracts provides the theoretical explanation for why this matters. All contracts are fundamentally incomplete — they cannot specify what happens in every possible contingency. Traditional contracts handle this through renegotiation, court interpretation, and good faith obligations. Smart contracts have no such flexibility. They execute exactly as coded, with no room for contextual judgment. When the unforeseen happens — and in a fast-moving AI agent economy, the unforeseen will happen constantly — rigid code produces outcomes nobody intended and nobody can reverse.
Vitalik Buterin himself has written extensively about the limitations of token-based governance, calling it “fundamentally flawed” and vulnerable to whale manipulation. He proposes moving beyond coin voting toward two-layer models combining market-driven execution with non-token-based preference-setting.
What does this mean for AI agent economy governance? The most successful digital governance models — Wikipedia’s Arbitration Committee, the Linux kernel’s benevolent dictator model, the IETF’s “rough consensus and running code” — all combine formal rules with human judgment, social norms, and graduated responses. None of them are on-chain. None of them are trustless. All of them work.
Ostrom’s deepest insight is polycentric governance — multiple independent authorities operating at different scales, overlapping in jurisdiction. No single governance center can possess all the information needed. AI agent economy governance needs the same: automated enforcement through smart contracts (architecture), legal frameworks for liability (law), community standards and reputation systems (norms), and price mechanisms that align incentives (markets). Any proposal that relies on code alone will fail for the same reasons the DAO failed.
The question nobody wants to answer
If the AI agent economy generates trillions in value — and even conservative projections suggest it will — that value needs to be redistributed to human society. Most governance proposals stop here, assuming redistribution solves the problem. It does not.
Seven major UBI experiments have now reported results. Five illustrate the range:

- Finland (2,000 people, €560/month): modest employment gains, significant wellbeing improvement.
- Stockton (125 people, $500/month): full-time employment rose from 28% to 40%.
- GiveDirectly Kenya (23,000 people, $22.50/month): no reduction in work, more entrepreneurship.
- Alaska Permanent Fund (~700,000 people, ~$1,600/year since 1982): no significant effect on employment.
- Iran (70+ million people, transfers worth ~29% of median income): no negative labor supply effects.
The consistent finding: UBI does not make people stop working. The “lazy welfare recipient” has no empirical support across any experiment, in any country, at any income level tested.
But the wellbeing effects are more complicated. Y Combinator’s OpenResearch study — the largest American UBI experiment — enrolled 3,000 people, 1,000 of whom received $1,000/month for three years. It found mental health improvements in year one that faded by year three. Recipients worked slightly fewer hours (about 1.3 fewer per week) but valued work more and were 10% more likely to actively search for a job. The money helped. It was not enough.
Marie Jahoda’s latent deprivation model explains why. Employment provides five functions beyond income: time structure, social contact, collective purpose, status, and regular activity. Meta-analytic evidence shows these latent functions predict psychological distress independently of financial hardship, collectively explaining 19% of variation in mental health outcomes. UBI addresses income. It addresses nothing else.
The Financial Independence, Retire Early (FIRE) community offers an unintentional natural experiment. An estimated 30-40% of early retirees return to work within 2-5 years, often not for financial reasons. The optimization mindset that enables financial independence does not generate meaning once the goal is achieved. One Reddit user described retiring at 32 with $1.2 million but feeling “more miserable after 18 months than during 60-hour work weeks.”
Hannah Arendt anticipated this in 1958. In The Human Condition, she distinguished labor (repetitive biological necessity), work (creating durable artifacts), and action (political engagement). Her warning: the danger is not a society without labor, but “a society of laborers without labor” — people formed by centuries of work-as-identity, suddenly without it, with nothing to take its place. Recent scholarship applies this framework to argue that AI automates both labor and work, leaving only action (political participation, community engagement) as distinctly human territory.
Case and Deaton’s deaths of despair research gives this prediction empirical weight. Deindustrialization in America didn’t just cause unemployment. It collapsed community institutions, eroded identity structures, and intersected with opioid availability to produce rising mortality from suicide, overdose, and alcoholic liver disease among working-age adults without college degrees. Psychological distress correlates with more than 3x the risk of death of despair. AI displacement compressed into a decade, without deliberate cultural infrastructure to replace what work provided, has the potential to be far worse.
The World Economic Forum warns of an “AI precariat” as an underestimated global risk. The IMF estimates that 60% of jobs in advanced economies are exposed to AI. 41% of employers surveyed by the WEF intend to reduce their workforce by 2030. Anthropic’s CEO has warned that AI could eliminate half of all entry-level white-collar jobs within one to five years.
No single redistribution model addresses all three dimensions — income, meaning, and community — simultaneously. Tcherneva’s job guarantee proposal comes closest, providing voluntary employment at living wages, but its success depends on the quality and meaningfulness of the guaranteed jobs. Atkinson’s participation income conditions basic income on social contribution broadly defined — including volunteering, education, and caregiving — but introduces administrative complexity. Universal Basic Services provides free services rather than cash, addressing material needs without addressing autonomy.
Any governance framework for the AI agent economy that treats redistribution as sufficient is building on a foundation that psychological research, sociological theory, and actual experiment results all show to be incomplete. The harder problem — and the one that smart contracts cannot solve — is institutional: what replaces the social infrastructure of work?
Why international coordination will fail (and what to do anyway)
If AI agent governance requires global coordination, the honest assessment is bleak. Every relevant precedent says the same thing: this will take longer than the technology allows and will only succeed partially.
The OECD’s Base Erosion and Profit Shifting initiative is the most ambitious attempt at international tax coordination in modern history. It took 15 years to reach partial agreement. Pillar Two — a 15% global minimum corporate tax — is advancing in about 55 jurisdictions, expected to increase EU corporate tax revenues by €26 billion annually. Pillar One — reallocating taxing rights to where customers are — has not reached agreement, primarily because the United States refuses to participate. With Pillar One stalled, countries are implementing unilateral digital services taxes — creating exactly the fragmented landscape BEPS was designed to prevent.
The Basel Accords for banking regulation provide a more encouraging model — and a cautionary tale. Basel I (1988) set simple capital requirements adopted by 100+ countries. Basel II (2004) allowed banks to use internal risk models. Basel III (2010) responded to the 2008 crisis that Basel II failed to prevent because it had outsourced risk assessment to the banks themselves. The pattern: start simple, iterate, and learn from failures. But the iteration cycle is decades, and each version only passes after a crisis proves the previous one inadequate.
International AI governance in 2026 consists of over 1,300 policies, the vast majority non-binding. The Bletchley Declaration (2023), the Seoul Summit (2024), the G7 Hiroshima Process, the Council of Europe AI Framework Convention (the first legally binding international AI treaty, 44 signatories). The International AI Safety Report 2026, chaired by Yoshua Bengio, warns that AI capabilities are advancing faster than governance measures. The Trump administration explicitly rejects international AI governance, calling for “removing barriers to American leadership.”
Three incompatible frameworks now compete. The EU imposes binding regulation with fines up to 7% of global turnover. The US pursues market-driven innovation. China requires government approval before model release and content alignment with “Core Socialist Values.” The EU holds 4-5% of global AI compute; the US holds 74%. Anu Bradford’s “Brussels Effect” — the EU’s ability to set global standards through market power — may not hold when the EU lacks market power in the technology it’s regulating.
Crypto regulatory arbitrage previews the problem. Exchanges relocated to Singapore, Dubai, and the Bahamas in response to regulation. FTX’s collapse in the Bahamas demonstrated the limits of light-touch oversight. AI agents on blockchains can relocate operations between chains with zero friction — far easier than even crypto companies. The regulatory arbitrage problem will be exponentially worse.
Chatham House’s March 2026 assessment concludes that binding global AI governance may only become politically feasible following a crisis. They recommend pre-building governance infrastructure — “off-the-shelf” frameworks deployable rapidly when political will materializes. This is depressing but historically accurate. Basel III required the 2008 financial crisis. The Montreal Protocol required the discovery of the ozone hole.
GovAI’s research identifies compute as the most promising governance lever — because it is detectable, excludable, quantifiable, and produced through a highly concentrated supply chain. SWIFT provides a model for network-effects-based governance: a cooperative owned by member institutions, connecting 11,000+ financial institutions across 200+ countries, with central bank oversight. No bank opts out of SWIFT because doing so cuts it off from international finance. AI agent governance needs similarly powerful participation incentives.
The realistic path is not global agreement. It is what international relations theorists call differentiated cooperation: governance clubs among willing partners, setting standards that create de facto norms through network effects. Start with smart contract-level governance in the existing ecosystem — make participation so valuable that opting out is economically irrational. Graduate to stablecoin issuer cooperation as the ecosystem grows. Prepare chain-level governance infrastructure for deployment when the political window opens, likely after a crisis that makes the cost of inaction undeniable.
What to build now
The analysis above suggests a layered strategy, grounded not in speculation but in what institutional economics, financial regulation, and commons governance research actually support.
First, accept that pure on-chain governance will fail. Lessig’s four modalities, Ostrom’s eight principles, Hart’s incomplete contracts, and a decade of DAO experiments all point the same way. Effective governance requires automated enforcement (smart contracts) working alongside legal frameworks, community norms, and economic incentives. Any architecture that claims to be “trustless” has merely relocated trust to less accountable actors — core developers, mining pool operators, oracle providers — without gaining the democratic legitimacy that legal systems, however imperfectly, provide.
Second, embed what can be embedded. Transaction fees, concentration metrics, basic collusion detection — these are amenable to protocol-level enforcement. A progressive fee mechanism that uses a sigmoid function to smoothly increase costs as market concentration rises (rather than hard thresholds that create magnet effects) is well-grounded in both antitrust economics and circuit breaker research. Agent registration modeled on MiFID II requirements is technically straightforward and has a decade of regulatory precedent.
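A sketch of what that fee curve could look like, using a Herfindahl-Hirschman-style concentration index; every constant here is an illustrative assumption:

```python
import math

def hhi(shares: list[float]) -> float:
    """Herfindahl-Hirschman index on market shares summing to 1.

    Near 0 means atomistic competition; 1 means monopoly.
    """
    return sum(s * s for s in shares)

def progressive_fee_bps(concentration: float,
                        base_bps: float = 5.0,
                        max_bps: float = 200.0,
                        midpoint: float = 0.25,
                        steepness: float = 20.0) -> float:
    """Fee in basis points, rising along a sigmoid as concentration grows.

    There is no hard cutoff for agents to race toward, which is the
    point: the magnet effect needs a known threshold to aim at.
    """
    x = steepness * (concentration - midpoint)
    return base_bps + (max_bps - base_bps) / (1.0 + math.exp(-x))

# A concentrated market pays sharply more than a fragmented one:
print(progressive_fee_bps(hhi([0.6, 0.2, 0.2])))  # ~196 bps
print(progressive_fee_bps(hhi([0.1] * 10)))       # ~14 bps
```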
Third, build for iteration, not perfection. The Basel Accords went through four major revisions over 35 years. Each version addressed failures discovered in the previous one. Governance infrastructure for AI agent economies should be designed with the same expectation: the first version will be wrong. Upgradeable contract architectures — with their well-documented centralization risks — need to be paired with multi-stakeholder governance of the upgrade process, not entrusted to a single key holder.
Fourth, treat the meaning problem as infrastructure, not afterthought. Redistribution without institutional design for purpose, belonging, and skill development will produce the same psychological damage that deindustrialization produced, at higher speed and broader scale. This is not a technology problem. It is an institutional and cultural one, and it requires the same deliberate investment that the payment rails and identity registries are receiving.
Fifth, prepare crisis-ready governance. Chatham House is right that binding international AI governance will likely require a crisis. The productive response is to have frameworks ready — pre-negotiated agreements, modular governance contracts, tested detection systems — that can be deployed rapidly when political will materializes. The Montreal Protocol succeeded partly because the science was ready when the politics caught up. AI governance researchers should aim for the same preparedness.
None of this is easy. The technology is moving faster than the institutions. The international coordination problem may be unsolvable in the timeframe that matters. The deepest challenge — what replaces work as a source of human meaning — is a question that economics cannot answer alone.
But the alternative to imperfect governance is not no governance. It is governance imposed by whoever builds the infrastructure first, without democratic input, without institutional design, without attention to the human consequences. That outcome has a name in institutional economics. Acemoglu and Robinson call it extractive institutions — structures designed to concentrate wealth and power, capable of producing growth in the short term but collapse in the long term.
We are writing the institutional DNA of the machine economy right now. The rules embedded in these protocols and contracts will be as consequential as the property rights and labor laws that shaped industrial capitalism. The question is not whether to govern this economy. It is whether the governance will be designed, or merely inherited.
Sources
Research for this essay drew on approximately 200 sources across economics, financial regulation, political science, institutional theory, and sociology. Full annotated research notes — organized by topic — are available:
- Thread 1: Robot tax theory and real-world policy
- Thread 2: Financial market governance precedents
- Thread 3: Platform governance and antitrust
- Thread 4: Commons governance and code-is-law critique
- Thread 5: Human meaning, UBI experiments, and the post-work question
- Thread 6: International coordination and regulatory arbitrage