
Why switching AI platforms is so hard: Memory

April 1, 2026

The new lock-in

In 2025, three things happened almost simultaneously. OpenAI expanded ChatGPT’s memory to reference all past conversations. Google launched Gemini personalization powered by users’ entire Search history. Meta began using Facebook and Instagram behavioral data to personalize Meta AI responses — with no opt-out.

Progress in raw model capability is plateauing. GPT-4o, Claude 3.5 Sonnet, and Gemini 2.0 are converging on benchmark scores. The new competitive frontier is personalization: who understands the user best. And the data that feeds personalization — preferences, memories, behavioral patterns, communication styles — is being locked inside each platform’s proprietary systems.

I use multiple AI platforms daily. As CTO of a startup, I have deep conversation histories with both ChatGPT and Claude. I’ve accumulated months of memories, custom instructions, and workflow integrations. When I tried switching primary platforms, I discovered something that surprised me: the explicit memories I could see and export were the least important part. What I couldn’t take with me — the model’s implicit understanding of my technical level, my communication preferences, my project contexts — was where the real value lived. And that understanding is non-portable by any current or proposed mechanism.

This essay examines why AI preference portability is structurally harder than any previous data portability challenge, why voluntary adoption by platforms will not occur, and what conditions would need to hold for portability to succeed.


What switching cost theory predicts

The economics of switching costs have been studied for decades. Farrell and Klemperer’s comprehensive treatment in the Handbook of Industrial Organization (2007) established the core framework: when consumers face costs of switching between competing products, firms exploit that lock-in to extract surplus. Firms systematically prefer incompatibility because it’s more profitable, even when compatibility would increase total social welfare.

The predictions are specific. Incumbents with large installed bases resist compatibility because they have more users to lose than to gain. Challengers favor compatibility because it lets them attract the incumbent’s locked-in users. When both sides independently choose their compatibility strategy, the result is a Prisoner’s Dilemma: non-adoption is the dominant strategy, even though mutual adoption would leave everyone better off.

These predictions are not abstract. They describe exactly what is happening in the AI market right now.

Anthropic (the challenger) launched a memory import tool in March 2026, allowing users to bring their ChatGPT memories into Claude. It donated the Model Context Protocol (MCP) to the Linux Foundation. It publicly declared a “no lock-in” philosophy. Every move reduces switching costs from ChatGPT to Claude — exactly what Farrell and Klemperer predict a challenger would do.

OpenAI (the incumbent) has done none of this. Its data export produces a ZIP file of raw JSON that no competing platform can ingest. Users report the export is incomplete. OpenAI has issued no public response to Anthropic’s import tool. And it continues to retire popular models — GPT-4o, GPT-4.1 — forcing users onto new versions with no migration path and no choice. This is not accidental. Farrell and Klemperer’s framework doesn’t require platforms to actively block switching. Strategic neglect — failing to improve export tools, failing to standardize formats — achieves the same result.

Google occupies an interesting middle ground. It launched Google Takeout in 2011 and co-founded the Data Transfer Project in 2018. This looks like pro-portability behavior from an incumbent — until you examine the results. Fifteen years of Google Takeout have not reduced Google’s market share by a single percentage point. This is consistent with Lam and Liu’s finding, discussed below, that portability can paradoxically strengthen incumbents.


The portability illusion

AI preference data exhibits a property that distinguishes it from every previous form of digital lock-in. I call it the portability illusion: the subjective perception of low switching costs masking objectively high ones.

Traditional data lock-in — bank records in proprietary systems, social graphs stored on Facebook’s servers — is opaque. Users know they can’t see or access the data, and this opacity signals high switching costs. AI memory works differently. You can ask ChatGPT “what do you remember about me?” and get a clear, readable answer. This transparency creates a false sense that your relationship with the AI is fully portable: if you can see it, surely you can take it with you.

But what you can see is only the surface layer.

Explicit vs. derived preferences

AI personalization operates through two fundamentally different mechanisms. Explicit preferences are what you tell the AI directly: “I prefer concise responses,” “I’m vegetarian,” “I’m working on a React project.” These are stored as structured text entries and can, in principle, be exported.

Derived preferences are what the platform learns from your behavior. Over thousands of interactions, ChatGPT learns that you’re a senior software engineer who prefers Python, works in Pacific time, and responds better to direct technical answers than pedagogical explanations. This derived understanding constitutes the majority of the personalization value — and it is non-portable by any current or proposed mechanism.
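To make the two layers concrete, here is a minimal sketch of each as data. The explicit entries mirror the kind of structured text platforms expose today; the derived profile is hypothetical, since no platform exports anything like it. All field names are illustrative.

```python
# Explicit preferences: structured text entries, exportable in principle.
# (Illustrative field names, not any platform's actual schema.)
explicit_memories = [
    {"created": "2025-03-02", "text": "I prefer concise responses"},
    {"created": "2025-06-14", "text": "I'm vegetarian"},
    {"created": "2025-09-30", "text": "I'm working on a React project"},
]

# Derived preferences: what the platform infers across thousands of interactions.
# This layer lives only inside the provider's systems; the dict below is a
# hypothetical rendering of knowledge that currently has no exportable form.
derived_profile = {
    "expertise": {"software_engineering": "senior", "preferred_language": "Python"},
    "communication": {"style": "direct", "tolerates_pedagogy": False},
    "context": {"timezone": "America/Los_Angeles", "active_projects": ["React app"]},
    "confidence": 0.92,  # probabilistic and inferred, never stated by the user
}
```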

The distinction maps directly onto GDPR’s categorization of data “provided by” the data subject versus data “derived” by the data controller (Article 29 Working Party, 2017). Under current EU law, only the former falls within the scope of the data portability right under Article 20. The most valuable layer of personalization is legally unprotected.

The Anthropic experiment

Anthropic’s March 2026 memory import tool provides a natural experiment in what happens when you try to port AI preferences. The mechanism is simple: Claude generates a structured prompt, the user pastes it into ChatGPT, copies the output, and pastes it back into Claude’s import interface.

The results are instructive. Users report that imports are incomplete — one user with a ChatGPT account since 2022 received memories only from September 2024 onward. More fundamentally, the import captures what ChatGPT was told, not what ChatGPT learned. The derived personalization model — the implicit understanding of the user’s style, level, and patterns — does not transfer.

This is a one-time migration, not interoperability. After the import, your preferences are locked into Claude’s system. It’s analogous to switching banks by manually carrying paper statements, not to the continuous API-based data flow that PSD2 enables for banking.

And yet: Anthropic reported that since January 2026, free users grew 60% and paid users more than doubled. The Hacker News discussion drew 592 points and 273 comments. Users clearly want portability. The question is whether the market will provide it.


Three forces against voluntary adoption

Three independent lines of research converge on the same prediction: platforms will not voluntarily adopt preference portability.

Force 1: Lock-in profits (Farrell & Klemperer)

The basic argument is straightforward. Preference data lock-in generates surplus for platforms by making users costly to poach. Adopting portability surrenders this surplus. Each platform independently prefers non-adoption regardless of what competitors do. The Nash equilibrium is mutual non-adoption.

The payoff structure is a Prisoner’s Dilemma:

| | Entrant adopts | Entrant does not |
| --- | --- | --- |
| Incumbent adopts | Both lose lock-in surplus; competition intensifies | Incumbent loses users; entrant free-rides |
| Incumbent does not | Incumbent keeps users; entrant’s few users can leave | Status quo: maximum lock-in |

Non-adoption is a dominant strategy for the incumbent. Given the incumbent’s choice, non-adoption is also the entrant’s best response (unilateral adoption without reciprocation is the worst outcome: your users can leave but you can’t attract the incumbent’s users).
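A toy version of the game makes the dominance argument mechanical. The numbers below are illustrative only (they are not drawn from Farrell and Klemperer); what matters is their ordering, which follows the table above.

```python
# payoffs[(incumbent_action, entrant_action)] = (incumbent_payoff, entrant_payoff)
# Illustrative magnitudes; only the ordering matters.
payoffs = {
    ("adopt", "adopt"): (2, 2),   # both lose lock-in surplus; competition intensifies
    ("adopt", "hold"):  (0, 3),   # incumbent loses users; entrant free-rides
    ("hold", "adopt"):  (4, -1),  # incumbent keeps users; entrant's few users can leave
    ("hold", "hold"):   (1, 1),   # status quo: maximum lock-in
}

for entrant in ("adopt", "hold"):
    best = max(("adopt", "hold"), key=lambda a: payoffs[(a, entrant)][0])
    print(f"entrant plays {entrant}: incumbent's best response is {best}")
# Prints "hold" both times: non-adoption is dominant for the incumbent.
# Given a holding incumbent, the entrant compares -1 (adopt) with 1 (hold) and holds too.
# Yet mutual adoption at (2, 2) beats the (1, 1) status quo for both: a Prisoner's Dilemma.
```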

Force 2: Data hoarding (Jones & Tonetti)

Jones and Tonetti’s influential paper in the American Economic Review (2020) established that data is non-rival: your preference for concise responses can simultaneously inform ChatGPT, Claude, and Gemini without degradation. This means broad data sharing is socially optimal — consumer ownership of data produces near-first-best welfare outcomes.

But markets don’t reach this optimum. Firms hoard data because of what Jones and Tonetti call “creative destruction fear”: if Tesla shares its driving data with Waymo, Waymo might build a better self-driving system and displace Tesla. So Tesla hoards, even though sharing wouldn’t diminish its own data. Every firm reasons the same way. The equilibrium is universal hoarding.

AI preference data is a textbook case. Your preferences are perfectly non-rival. But OpenAI, Google, and Anthropic each store them in proprietary, incompatible systems. The socially optimal outcome — your preferences flowing freely across all platforms — is blocked by competitive incentives.

Force 3: The demand-expansion paradox (Lam & Liu)

Lam and Liu’s game-theoretic model (2020) delivers the most counterintuitive finding: data portability can strengthen incumbents rather than weaken them.

The mechanism involves two opposing forces. The switch-facilitating effect is intuitive: portable data lowers switching costs, helping entrants attract users. But there is a second force — the demand-expansion effect — that works in the opposite direction: when users know they can take their data with them, they provide more data to the current platform (since they’re not worried about being trapped). This makes the incumbent’s derived services better, deepening lock-in through the non-portable derived channel.

Under GDPR’s provided/derived distinction, only “provided” data is portable. Users provide more raw data (portable) → the platform derives better personalization (non-portable) → the user is more locked in, not less. When the value of derived services is high enough, the demand-expansion effect dominates and portability hurts entrants.
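In stylized form (my notation, not Lam and Liu’s), let $D$ be the provided data a user contributes, which portability makes transferable, and $v(D)$ the value of the services the platform derives from it, which stays behind. The entrant’s net gain from a portability mandate is roughly

$$
\Delta_{\text{entrant}} \;=\; \underbrace{s(D)}_{\text{switch-facilitating}} \;-\; \underbrace{e\left(v(D)\right)}_{\text{demand-expansion}}
$$

Portability raises $D$ twice over: more data becomes transferable, and users contribute more in the first place. When derived value $v$ grows quickly in $D$, the second term dominates and $\Delta_{\text{entrant}} < 0$; portability helps the incumbent.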

Google’s fifteen-year experience with Takeout validates this prediction. Google has offered the most comprehensive data export tools of any major platform since 2011. Its market share has not declined. Google, Facebook, Microsoft, and Twitter co-founded the Data Transfer Project in 2018 — large incumbents proactively supporting data portability, not out of altruism, but because they understood it wouldn’t hurt them.

A fourth mechanism, identified by Siciliani and Giovannetti (2019), compounds the problem. When switching costs are reduced, incumbents don’t relax — they become more aggressive in defending their user base. The analogy: a castle with high walls has relaxed guards; tear the walls down and the guards fight harder. Data portability can make incumbents more competitive, not less, squeezing entrants even further.


What history teaches

Six historical portability regimes provide empirical grounding for these theoretical predictions. I evaluated each across five dimensions: regulatory mandate, technical standardization feasibility, incumbent incentive alignment, user demand intensity, and timing relative to market concentration.

| Case | Regulatory mandate | Technical simplicity | Incumbent incentive | User demand | Timing | Outcome |
| --- | --- | --- | --- | --- | --- | --- |
| Number Portability (US, 1993) | FCC mandate | Trivial (10-digit number) | Opposed | High | Post-concentration | Success: prices fell, competition increased |
| Open Banking / PSD2 (EU, 2018) | EU Directive | Moderate (standardized APIs) | Opposed | Moderate | Post-concentration | Partial success: fintech entry +50%, limited consumer switching |
| Email / SMTP (1982+) | None needed | Simple (text protocol) | N/A (no incumbent) | N/A | Pre-concentration | Success: open by design |
| GDPR Art. 20 (EU, 2018) | Legal right exists | No format standardization | Minimal compliance | Very low | Post-concentration | Failure: 16% consistent compliance |
| Solid Pods (2016+) | None | Complex (self-hosting) | Against (conflicts with ad model) | Very low | Post-concentration | Failure: “no one has built a Solid-based platform” |
| Web3 Data Wallets (2017+) | None | Very complex (key management) | N/A | Very low | N/A | Failure: extreme user friction |

The pattern is stark. Every successful portability regime combined regulatory mandate with technical standardization feasibility. Having only one is insufficient. GDPR provides a legal right but mandates no standard format — result: 16% compliance. Solid provides a technical architecture but has no regulatory backing — result: no adoption. Number portability and PSD2 provide both — result: meaningful market impact.

Viard’s empirical study of telephone number portability (2007) provides the cleanest natural experiment. When the FCC mandated 800-number portability, both AT&T and MCI lowered prices. Larger contracts saw bigger price drops — exactly what switching cost theory predicts (deeper lock-in yields larger competitive effects when released). But the preconditions were exceptionally favorable: a phone number is a 10-digit string, perfectly standardized, trivial to transfer. AI preferences are natural language, context-dependent, and architecturally entangled with the platform’s inference systems.

The open banking case (Babina et al., 2025) offers a more realistic analogy. PSD2 mandated standardized APIs for banking data. The result was not a wave of consumers switching banks — that barely happened — but a 50% increase in fintech venture capital investment. The competitive benefit came through new market entry, not consumer switching. This distinction matters for AI: even if preference portability doesn’t cause mass migration, it could enable new categories of preference management services.


Why AI preferences are structurally harder

Every previous portability challenge involved structured, well-defined data: phone numbers, bank account balances, email messages. AI preference portability involves something fundamentally different.

The provided-derived gap

When a user tells ChatGPT “I prefer concise responses,” this is easily exportable. When ChatGPT learns from 2,000 conversations that this user is a senior engineer who prefers Python, works in PST, and responds better to direct answers than Socratic questioning — this derived understanding is the majority of the personalization value and is non-portable.

No proposed standard — not the Human Context Protocol (HCP, discussed below), not GDPR Article 20, not any technical specification — addresses how to port derived knowledge between AI systems. This would require platforms to produce “personalization summaries” or “preference profiles” that capture what the model has learned about a user in a standardized, interoperable format. It is technically possible (LLMs can self-describe their understanding) but there is zero commercial incentive to implement it.
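What producing such a summary could look like mechanically is not mysterious. A minimal sketch, assuming OpenAI’s standard chat completions client and an entirely hypothetical target schema; note that account-level memory is available in the ChatGPT product rather than the raw API, which is why Anthropic’s tool routes the same request through copy-paste.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical interoperable schema; no standard defines these fields today.
SCHEMA = """{
  "expertise": {"domain": "...", "level": "..."},
  "communication": {"style": "...", "preferred_format": "..."},
  "stable_context": ["..."]
}"""

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any model with access to the user's history
    messages=[{
        "role": "user",
        "content": "Summarize what you have learned about me as JSON matching "
                   f"this schema, including only what you are confident of:\n{SCHEMA}",
    }],
)
print(resp.choices[0].message.content)  # a portable, if lossy, derived profile
```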

Ecosystem lock-in

Even perfect preference portability would leave substantial switching costs in place. The real moats are ecosystems:

  • Google: years of Search history, Gmail context, Calendar, Maps, Photos — all feeding Gemini’s personalization
  • Meta: a social graph of 3 billion users, Instagram behavioral patterns, WhatsApp communications
  • Microsoft: M365, Graph API, Teams, Entra identity federation — bundled into E7 at $99/user/month
  • OpenAI: Custom GPTs, API integrations, conversation history spanning years

These create switching costs that no preference portability protocol can address, because they involve infrastructure rather than data.

Model retirement as forced lock-in

OpenAI retired GPT-4o and other popular models from ChatGPT in February 2026, forcing all users onto newer versions. Users who had calibrated their prompts, workflows, and expectations to specific model behaviors lost that calibration overnight. This is a form of lock-in that operates through the platform’s control of the model itself — a dimension that has no analogue in banking, telecom, or social media.


The Human Context Protocol: right diagnosis, wrong prescription

Shah et al. (2025) at Stanford’s Digital Economy Lab proposed the Human Context Protocol (HCP): a user-centric architecture for portable, interoperable preference management. HCP stores preferences in natural language with scoped access controls and revocation mechanisms. It builds on existing standards (MCP, OAuth 2.0) and has an open-source prototype.
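The paper’s core primitives (natural-language preferences, scoped access, revocation) are easy to picture. Below is a hypothetical record in that spirit; the field names are mine, not HCP’s actual schema.

```python
# A hypothetical HCP-style preference record. Illustrative fields only;
# consult the HCP prototype for the real schema.
preference = {
    "id": "pref-7f3a",
    "statement": "Prefer direct technical answers over step-by-step tutorials",
    "scopes": {
        "claude.ai": {"read": True, "expires": "2026-12-31"},
        "chatgpt.com": {"read": True, "expires": None},  # until revoked
        "meta.ai": {"read": False},                      # never shared
    },
    "provenance": "stated",  # vs. "derived": the hard case from earlier
}

def revoke(pref: dict, platform: str) -> None:
    """Withdraw a platform's access: HCP-style revocation in one line."""
    pref["scopes"][platform] = {"read": False}
```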

The diagnosis is correct. Users should own their AI preferences. Jones and Tonetti’s welfare analysis proves that consumer data ownership produces near-first-best social outcomes. The technological architecture is sound.

The prescription underestimates the adoption problem. The HCP paper acknowledges that “market forces alone may not incentivize providers to relinquish their control of user data,” but suggests that competition between new preference management firms may organically solve the problem. This optimism is inconsistent with the switching cost literature.

New preference management firms face the same chicken-and-egg problem as any multi-sided platform: without AI provider integration, the tool has no value to users; without users, AI providers have no incentive to integrate. The paper cites TCP/IP and HTML as precedents for successful standard adoption — but those standards were established before any dominant platform existed. AI preferences are being standardized after OpenAI, Google, and Meta have already built massive installed bases. The historical cases above show that this distinction is decisive.

Jeon and Menicucci’s theoretical analysis (2023) identifies one scenario where portability could benefit both consumers and platforms: when the market already has free tiers (non-negativity price constraint is binding) and competition occurs primarily through feature bundling rather than price. Under these conditions, portability reduces the need for wasteful competitive “freebies,” saving platforms money that exceeds the revenue lost from reduced lock-in. The AI market — with its free tiers and feature-based competition — may plausibly satisfy these conditions. But realizing this outcome requires coordinated adoption, which brings us back to the Prisoner’s Dilemma.


What would actually work

The comparative framework yields clear prescriptions. Effective AI preference portability requires all of the following:

1. Regulatory mandate with enforcement teeth. Modeled on PSD2, not GDPR Article 20. PSD2 requires banks to implement standardized APIs for data sharing — not merely to allow export, but to enable continuous, bidirectional data flow. GDPR Article 20 grants a right to port but mandates no format and no receiving obligation. The former produced a 50% increase in fintech entry. The latter produced 16% compliance.

2. Standardized formats for preference representation. HCP begins to address this. Natural language preference storage is a reasonable starting point. But standardization must extend to schema definitions, access scopes, revocation mechanisms, and — critically — derived preference summaries.

3. Inclusion of derived data. This is the hardest requirement. If portability covers only explicit memories while leaving derived personalization models proprietary, it will replicate GDPR Article 20’s failure pattern. Platforms would need to produce standardized “personalization profiles” that capture derived understanding. Whether this is technically feasible at meaningful fidelity is an open research question — and an area where mechanistic interpretability research could have direct policy relevance.

4. A layered regulatory strategy. Preference portability alone won’t break ecosystem lock-in. It needs to work alongside the EU’s Data Act (cloud service switching), the DMA (platform competition), and AI-specific transparency obligations. The EU’s enforcement actions against Meta — requiring it to allow third-party AI assistants on WhatsApp — suggest that regulators are beginning to recognize AI-specific lock-in as a competition concern.

5. Timing: before the market tips. Number portability was mandated when the US telecom market already had multiple large competitors. Open banking was mandated when the EU banking market was well-established. AI personalization is still in its early stages — switching costs are growing but haven’t reached the point of irreversibility. The Data Transfer Initiative warns that AI “risks trapping users in their data all over again.” The window for intervention is open, but narrowing.


The agent wildcard

One AI-specific dynamic could partially disrupt the lock-in equilibrium. As AI agents increasingly act on behalf of users across platforms — booking travel, managing calendars, drafting emails — they create functional demand for cross-platform preference consistency. An agent that works across services needs to understand the user’s preferences regardless of which platform it’s interacting with.

Anthropic’s MCP provides the tool-connection layer: a standardized way for AI to interact with external services. What remains missing is the preference layer — a standardized representation of who the user is and what they want. If agent-mediated interactions become the primary mode of AI use, platforms may face pressure to support preference portability not because users demand it, but because agents require it to function effectively.
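What a preference layer on top of MCP might look like is easy to sketch. Assuming the official Python SDK’s FastMCP helper, a user-run server could expose a profile that any connecting agent reads before acting. The resource URI and payload here are hypothetical, since no such standard exists.

```python
from mcp.server.fastmcp import FastMCP

# A hypothetical user-owned preference server. MCP standardizes how agents
# connect; nothing yet standardizes what a preference payload should contain.
mcp = FastMCP("user-preferences")

@mcp.resource("prefs://profile")
def profile() -> str:
    """Served to any agent the user has connected, regardless of vendor."""
    return (
        '{"communication": "concise, direct", '
        '"diet": "vegetarian", '
        '"timezone": "America/Los_Angeles"}'
    )

if __name__ == "__main__":
    mcp.run()  # any MCP-capable agent can now read prefs://profile
```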

This bottom-up pressure from the agent ecosystem could complement top-down regulatory pressure. But it is not a substitute. Agents themselves can be locked into platform ecosystems (OpenAI agents operating only within OpenAI’s ecosystem, Google agents within Google’s). The incentive structure doesn’t change just because the consumer’s representative is software instead of a person.


Conclusion

AI personalization is creating the deepest consumer lock-in in the history of digital technology. The switching costs are endogenous (they grow with use), asymmetrically perceived (the portability illusion), and architecturally reinforced (ecosystem dependencies, model retirement, the provided-derived gap).

Three independent theoretical frameworks — Farrell and Klemperer’s switching cost theory, Jones and Tonetti’s data nonrivalry analysis, and Lam and Liu’s demand-expansion paradox — converge on the same prediction: voluntary adoption of preference portability will not occur. Six historical case studies confirm that every successful portability regime required regulatory mandate combined with technical standardization.

The Human Context Protocol is a necessary technical contribution. It is not, by itself, sufficient. Without PSD2-style regulation adapted to AI-specific data structures — particularly the inclusion of derived preferences — the market equilibrium will remain one of data hoarding and strategic incompatibility.

The counterargument is that AI markets are still young and that competition will self-correct. Perhaps. But the empirical pattern suggests that lock-in deepens faster than policy responds. Every personalization feature launched in 2025 — OpenAI’s cross-conversation memory, Google’s Search-powered Gemini, Meta’s social data integration — increases the cost of eventual portability regulation. The longer we wait, the more expensive it gets.

The question is not whether AI preference portability is desirable. The welfare economics are clear: it is. The question is whether the political will to mandate it will arrive before the market has tipped beyond the point where regulation can meaningfully intervene. History suggests the answer is: only after a crisis.


References

Babina, T., et al. (2025). Customer data access and fintech entry: Early evidence from open banking. Journal of Financial Economics, 169, 103950.

Farrell, J., & Klemperer, P. (2007). Coordination and lock-in: Competition with switching costs and network effects. Handbook of Industrial Organization, 3, 1967-2072.

Jeon, D.-S., & Menicucci, D. (2023). Data portability and competition. European Journal of Law and Economics, 57, 145-162.

Jones, C. I., & Tonetti, C. (2020). Nonrivalry and the economics of data. American Economic Review, 110(9), 2819-2858.

Lam, W. M. W., & Liu, X. (2020). Does data portability facilitate entry? International Journal of Industrial Organization, 69, 102564.

Shah, A. V., South, T., Evans, T., Kirk, H. R., Pei, J., Trask, A., Weyl, E. G., & Bakker, M. A. (2025). Robust AI personalization controls: The Human Context Protocol. SSRN Working Paper.

Siciliani, P., & Giovannetti, E. (2019). Platform competition and incumbency advantage under heterogeneous switching cost. Bank of England Working Paper No. 839.

Syrmoudis, E., et al. (2024). Data portability between online services: An empirical analysis on the effectiveness of GDPR Art. 20. ACSAC 2024.

Viard, V. B. (2007). Do switching costs make markets more or less competitive? The case of 800-number portability. RAND Journal of Economics, 38(1), 146-163.