What writing actually is
Honestly, I’m not great at writing. English isn’t my first language, I’ve always struggled with it, and even now writing in English takes real effort. I was never someone who loved to write. After ChatGPT came out, I handed most of my writing to AI — emails, documents, LinkedIn posts. For a while I thought it was efficient. But lately I’ve started to notice I lost things along the way: the learning process, a sense of control over what I was saying, my own voice, and something harder to name — a feeling of agency. Writing is personal.
Writing is also a unique, irreplaceable way of thinking. Joan Didion said it in 1976: “I write entirely to find out what I’m thinking.”
That’s not rhetoric. Cognitive science backs it up: writing is a recursive cognitive process — plan, translate ideas into sentences, review, revise, plan again — and the cycle itself generates new ideas. You’re not writing down what’s already in your head. You’re discovering what you didn’t know you didn’t know. A blank page doesn’t reveal your writing ability. It reveals the holes in your thinking.
So writing isn’t about producing text. It’s about thinking itself. The text is a byproduct. What’s valuable is the cognitive work that happens during the process.
Ted Chiang compared ChatGPT to a “blurry JPEG of the internet” in 2023. JPEG compression preserves most of the information but loses detail. AI-generated text looks complete, fluent, grammatically correct, but it’s the statistically “most likely” text, not the “most accurate” text arrived at through thought. When you read an AI-written article, you’re reading everyone’s average opinion, not one person’s hard-won insight.
That’s why I started this website — I want to form and express my own voice and views. For every article here, the structure and core arguments are mine first; AI only helps polish the language.
“Writing” in this article covers a wide range — from business emails and technical docs to novels, poetry, and personal journals. AI affects them very differently, but some common patterns are worth exploring.
The good news first
AI writing tools are genuinely helping people, and they’re helping the people who need it most.
I’m one of them. As a non-native English speaker, I used to agonize over phrasing. Now AI helps me polish language so I can focus on the ideas. Only 5% of the world speaks English natively, but academia and business demand fluency. An Oxford researcher called this the “hidden tax on ESL speakers” — paying for proofreading that often made text worse, not better. AI-assisted writing adoption in non-English-speaking countries grew roughly 400%, far outpacing the 183% in English-speaking ones.
The benefits for disabled writers are even more direct. Google and UW’s SpeakFaster helps ALS patients type through eye-gaze tracking, cutting motor actions by 57%. LaMPost built an AI email editor for adults with dyslexia — about 20% of the population. An ADHD user diagnosed at 40 put it simply: “If I had this technology 20 years ago, my life would be completely different.”
One case stuck with me. Yale researchers analyzed over 1.1 million consumer financial complaints (published in Nature Human Behaviour) and found AI-assisted complaints succeeded about 50% of the time versus 40% for human-written ones. The earliest adopters were in areas with limited English proficiency. Three controlled experiments confirmed the mechanism: AI improved the clarity of presentation, not the factual content. Ordinary people could finally articulate their grievances clearly.
But there’s a contradiction I can’t shake. A Science Advances study found AI made each person’s stories better (26.6% better written, 22.6% more enjoyable), with the least creative writers gaining the most. Sounds great. But the same study found collective diversity dropped 10.7%. Everyone got better. Everyone got more similar. AI is an equalizer at the individual level and a homogenizer at the collective level. Both are true, and there’s no simple fix.
How AI is changing writing work
The impact depends on how much writing matters in your job.
People who write for a living
This group got hit hardest. Eight months after ChatGPT launched, freelance writing demand dropped about 30% — the steepest decline of any job category. Writing projects on Upwork fell 32% year-over-year in 2025, with entry-level availability below 9%. Blog content rates dropped from $0.15/word to $0.08. A veteran copywriter described her experience: her agency’s revenue went from $600K to under $10K. Her skills hadn’t changed. Clients just started accepting cheaper, worse, but “good enough” AI output.
The market split into two tiers. The bottom collapsed, the top thrived. Human-written content now commands a 4.7x price premium ($611/post vs $131). The Authors Guild launched a “Human Authored” certification — the organic food label of writing. What the market is eliminating is fungible text production. What it’s keeping is judgment and distinctive voice.
News and publishing haven’t been spared. Sports Illustrated published articles under fake AI authors with generated headshots. Over half of CNET’s AI articles had factual errors. On Amazon, one person published 200+ romance novels in a year using Claude. An estimated 77% of self-help books may be AI-written. Nearly 10,000 authors — including Kazuo Ishiguro — protested at the London Book Fair. Articles by fictitious AI journalists appeared in mainstream outlets. Real freelancers started offering Google Docs version histories to prove their work was human.
People who write as part of their job
Engineers writing docs, consultants writing reports, marketers sending emails, academics writing papers. For these people, AI genuinely saves time. IBM data shows technical documentation time down 59%. 87% of marketing teams use AI for email. An MIT study found professional writing tasks 40% faster, and BCG found consulting writing 25% faster and 40% higher in quality, though the same study, run with Harvard researchers, also found that on tasks beyond AI’s capability, AI users were 19 percentage points less accurate.
Academic adoption is accelerating: at least 13.5% of 2024 biomedical abstracts were processed by LLMs, 22.5% of CS paper sentences show AI modification, and 24% of corporate press releases bear AI traces.
There’s a Jevons paradox here, same as I described in my coding article. AI dropped writing costs to near zero, and the result wasn’t less writing but dramatically more. LinkedIn content grew 60% year-over-year. 74% of new web pages contain AI content. AI-generated articles exceeded human-written ones for the first time in late 2024. But reading time barely moved — Nielsen says 5% growth. More content didn’t bring more readers. It brought more noise.
A Gotham Ghostwriters survey of 1,481 writing professionals: 61% use AI, 92% of heavy users report productivity gains. But only 7% publish unedited AI text. Most treat AI as a tool, not a substitute.
AI and the joy of writing
That was writing as work. But writing has another side — it can be something you enjoy. On this side, AI’s impact is more personal.
An author of 108 novels nearly quit writing because of AI. Not because it wrote badly, but because it stole the part he loved: the drafting phase, creating something from nothing. AI shortened his favorite part and extended the part he disliked (endlessly editing AI output). His solution was to keep drafting for himself and use AI only for proofreading and cover design. Keep the joyful part.
Another writer who used AI daily for a year said opening a blank document stopped feeling exciting. The writing no longer felt like his. He could still write, he just didn’t want to. The sense of “my own voice” was gone.
Others found new possibilities through AI. A 77-year-old writer in the r/WritingWithAI community found practical advice instead of moral judgment.
Thinking about these stories, the key variable seems to be joy. Writers who thrive in the AI era protect whatever part of the process makes them happy, and outsource the rest. But when AI takes over the part you love, the loss is real.
Psychology Today’s analysis put it plainly: AI removes friction from creation, but friction is what makes creation enjoyable. Poetry is maybe the clearest example. Poet Eileen Myles: “AI poetry is always bad. AI can only regurgitate the past. Poetry needs to be new.” Sam Altman himself admits even GPT-7 might only manage “a real poet’s okay poem.” Poetry demands exactly what AI can’t do: unpredictability, personal vulnerability, deliberate rule-breaking.
Self-Determination Theory explains why this matters. People need three things from work: autonomy (I decide what to write), competence (mastery through struggle), and relatedness (connecting with readers through creation). When AI writes the draft and you just edit, all three get threatened. You shift from author to reviewer. A study of 10,131 work tasks found that tasks most associated with agency and happiness are the ones most exposed to AI automation. The paper’s title is the question itself: “Are We Automating the Joy Out of Work?”
Writing is different from most work. It’s one of the few activities where the process is the point. You don’t write to have an article. You write to write. The article is a byproduct. When AI eliminates the process, you haven’t gotten the result more efficiently. You’ve lost the reason for doing it.
AI and cognition
Everything above is about individuals and markets. What follows goes deeper, and it’s the part that unsettled me most while writing this.
The great smoothing
Stanford Daily gave it a name in March 2026: “The Great Smoothing.”
You open Gmail. The system floats a grey “Thank you for reaching out.” You didn’t decide to write that. You just hit Tab. Your phone predicts the next word before you finish typing. You paste rough text into ChatGPT and ask it to sound “more professional.”
Each one is a tiny convenience. But researchers started tracking the cumulative effects. A 2020 study found people using predictive text wrote shorter, more predictable captions with fewer unique words. AI suggests “man,” people stop writing “baseball player.” An ICLR 2024 study found essays co-written with InstructGPT were more homogenized across users, with identical five-word phrases appearing in different people’s work. People didn’t get lazy. RLHF just optimizes for everyone’s average preference.
2026 data put numbers on it: heavy LLM users dropped personal pronoun use by 50% and increased neutral language by 69%. 6,875 student essays showed quality went up but structural variance dropped 70-78%. Cornell researchers found something that bothered me: with AI assistance, an Indian participant writing about “favorite food” got recommended “pizza.” Typing “S” for Bollywood star Shah Rukh Khan, the AI autocompleted to “Shaquille O’Neal.” AI pulls the world’s writing toward one cultural default: Standard American English. Not because users chose it, but because the dropdown only offered that option.
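The mechanism behind these numbers can be sketched in a few lines. This is a toy, not any real system’s decoder: the word frequencies below are invented for illustration (echoing the “favorite food” example above), and real autocomplete is far more complex. But it shows the core statistical difference between an interface that always surfaces the single most likely continuation and a population of writers whose choices follow the actual distribution:

```python
import random
from collections import Counter

# Hypothetical next-word frequency table "learned" from many users.
next_word = Counter({"pizza": 50, "sushi": 20, "biryani": 15,
                     "pierogi": 10, "injera": 5})

def greedy(table):
    """What an autocomplete bar does: always suggest the single
    most likely continuation (the mode of the distribution)."""
    return table.most_common(1)[0][0]

def sample(table, rng):
    """What a diverse writer population does: continuations appear
    in proportion to how often people actually use them."""
    words, weights = zip(*table.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(1)
greedy_outputs = {greedy(next_word) for _ in range(1000)}
sampled_outputs = {sample(next_word, rng) for _ in range(1000)}

print(greedy_outputs)        # one answer for everyone: {'pizza'}
print(len(sampled_outputs))  # all five words survive sampling
```

Every individual Tab press is locally reasonable: “pizza” really is the most likely word. But a thousand writers each accepting the mode produce exactly one answer, while a thousand writers left alone produce the whole distribution. That is the smoothing in miniature.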
And it’s not just writing anymore. Max Planck Institute research found that after ChatGPT’s launch, speakers in academic YouTube videos used AI-favored words like “delve,” “meticulous,” and “realm” 51% more often. They weren’t reading scripts. They were just talking. But their spoken language had already been colonized by AI’s vocabulary.
Why the smoothing matters more than you think
I first thought this was a style problem — writing getting boring. But I kept pulling at it.
Language isn’t just a communication tool. It’s the infrastructure of thought. Russian distinguishes dark blue (siniy) from light blue (goluboy), English just has “blue.” A PNAS experiment proved Russian speakers discriminate blue shades faster — and the advantage disappeared under verbal interference, proving it was language helping perception. Guugu Yimithirr speakers use only cardinal directions, never left/right, and always know which direction they’re facing. Pirahã speakers have no number words — a linguist spent decades trying to teach counting, and none of 30 students learned to count to 10. Not an intelligence problem. The language simply didn’t give them that cognitive tool.
Languages aren’t different versions of the same tool. They’re different toolboxes. When AI pushes everyone’s expression toward one statistical center, what’s being compressed isn’t style. It’s cognitive diversity.
There’s an angle I didn’t expect: emotional granularity. People who can distinguish “frustrated” from “disappointed” from “agitated” from “disrespected” have significantly better mental health than people who call all of these “upset.” Lisa Feldman Barrett’s research says language doesn’t just describe emotions — it helps construct the emotional experience itself. How many kinds of “sad” your vocabulary has is how many kinds of “sad” you can feel. When AI pushes everyone toward the “safe middle ground,” it may be compressing not just text, but people’s resolution for experiencing the world.
In the 1840s, Ireland grew almost nothing but one potato variety, the lumper: all clones, no genetic variation. When blight came, every potato was equally vulnerable, and one in eight people died in three years. Diversity is resilience. Uniformity is fragility. This holds for genes, ecosystems, and cognition. Organizational research is clear: teams with high cognitive diversity show 27% better financial performance, 45% more innovation revenue.
Wittgenstein wrote: “The limits of my language mean the limits of my world.” When AI pushes everyone’s language toward one statistical center, it’s shrinking not just the boundaries of language, but of our world. PNAS research found over 75% of medicinal plant knowledge exists in only one language. When that language dies, millennia of accumulated experience vanish irreversibly.
AI’s great smoothing doesn’t ban any words. It just makes certain expressions less and less likely to be thought of, through statistical recommendation. This is subtler than censorship — you can’t feel yourself losing something. You just hit Tab, accept the “good enough” suggestion. Once doesn’t matter. Billions of people, every day, for years — that adds up to a global contraction of cognitive space.
Cognitive debt
The Wharton study published in PNAS gave a clear number. About 1,000 high school students used unrestricted ChatGPT-4. Practice scores improved 48%. When AI was removed, they scored 17% worse than students who never had access. The researchers called it “cognitive debt” — borrowed from technical debt. What you get quickly now, you repay with interest later.
MIT Media Lab’s EEG study explained it at the neural level: ChatGPT users showed the lowest engagement across all 32 brain regions, and 83% couldn’t recall key arguments from their own essays. Scientific Reports research found passive AI use (copying AI content) damaged self-efficacy, ownership, and meaning — even after returning to manual work. But active collaboration (human drafts first, AI refines) preserved all three. The order is everything: human first then AI is fine. AI first then human edits is harmful.
Writing’s cognitive debt may be worse than math’s. Writing isn’t just an output skill, it’s a thinking skill. If you’ve never struggled with a blank page, never revised an argument repeatedly, never been forced to explain something in your own words, you don’t just fail to learn writing. You fail to learn a certain kind of sustained, structured thinking. The Wharton study’s most interesting finding: a ChatGPT designed as a tutor — hints, not answers — eliminated cognitive debt entirely. Learning was preserved because difficulty was preserved.
Model collapse
The smoothing is what AI does to human language. But AI-generated content is also being fed back to train AI itself. Shumailov et al. proved in Nature (2024) that when AI models train recursively on AI-generated data, “model collapse” occurs — each generation loses the tails of the original distribution, rare expressions disappear, even under ideal conditions.
How bad? A joint study by NUS, Harvard, Stanford, Google, and Mayo Clinic tested it with clinical text: after four recursive generations, vocabulary dropped 98.9%, unique medical terms fell 66%, rare findings vanished entirely. The feedback loop is already running: 74% of new web pages contain AI content, new models train on it, lose more tails, produce more homogeneous output. 51% of internet traffic is bots. High-quality human text is projected to run out between 2026 and 2032.
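Why the tails vanish is easy to see in a toy simulation. This sketch is mine, not the study’s method: it reduces “training” to fitting word frequencies and “generation” to sampling from them, with an invented Zipf-like corpus. The key property carries over, though: a rare word that fails to appear in one generation’s output can never reappear, so the vocabulary can only shrink.

```python
import random
from collections import Counter

def train_and_resample(corpus, n_samples, generations, seed=0):
    """Repeatedly 'train' a unigram model on a corpus (take its
    empirical word frequencies), then produce the next generation's
    training corpus by sampling from that model. Absent words are
    absorbing: once a word drops out, it can never come back."""
    rng = random.Random(seed)
    vocab_sizes = []
    for _ in range(generations):
        counts = Counter(corpus)                 # "fit" the model
        words = list(counts)
        weights = [counts[w] for w in words]
        # generate the next generation's training data
        corpus = rng.choices(words, weights=weights, k=n_samples)
        vocab_sizes.append(len(set(corpus)))
    return vocab_sizes

# Zipf-like toy corpus: a few common words, a long tail of rare ones.
corpus = [f"w{i}" for i in range(200) for _ in range(200 // (i + 1))]
sizes = train_and_resample(corpus, n_samples=500, generations=8)
print(sizes)  # vocabulary size per generation: it can only shrink
```

Each generation the common words get relatively more common and the rare ones risk disappearing for good, which is the mechanism Shumailov et al. describe, stripped to its minimum.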
The smoothing, the Jevons paradox, and model collapse form one system: the smoothing is the symptom, Jevons is the amplifier (costs dropped so content exploded), model collapse is the feedback loop (smoothed content trains smoother AI). Together, they form a self-reinforcing cognitive echo chamber.
Where this leaves me
AI writing tools are genuinely useful for many things — business email, formatting, grammar, brainstorming. But I think we need to be honest about which writing is for producing text, which is for thinking, and which is for enjoyment.
Business email is for producing text. Let AI write it. Fine.
But an essay, a personal piece, an important letter, a journal entry — those are for thinking. Their value isn’t in how polished the result is. It’s in what your brain goes through during the process. Outsource those to AI, and what you save is time. What you lose is yourself.
This article started with a personal observation — I handed writing to AI and lost something. Writing it, I found that it’s not just my problem. AI compresses writers’ voices, which compresses linguistic diversity, which compresses cognitive diversity, and the smoothed content feeds back to train more smoothing. The system reinforces itself.
But responding to this isn’t just an individual responsibility.
Universities are rethinking what writing education means. Most haven’t banned AI outright — they’re distinguishing contexts. Edinburgh allows AI for brainstorming but prohibits it for submissions. MLA and CCCC published three working papers affirming teachers’ right to refuse AI in classrooms. The most effective approach isn’t catching cheaters — it’s redesigning assignments to make writing about the process again.
Publishing and academic institutions are building transparency standards. Nature requires AI use disclosure, prohibits AI authorship, and bars reviewers from uploading manuscripts to AI tools. Amazon capped daily uploads. Clarkesworld implemented detection systems against AI submission floods. Imperfect, but the direction is right: let readers know what they’re reading.
Governments are stepping in. The EU AI Act Article 50 takes effect August 2026, requiring machine-readable markers on all AI-generated content. China’s AI Content Labeling Measures have been in force since September 2025, mandating both visible labels and embedded metadata, with platforms required to verify markers and retain logs for six months.
AI companies themselves have room to do better. Most writing tools default to recommending the most common expression, but they don’t have to. The University of Salford’s inclusive prompt framework improved non-native speaker success rates by 37%. Microsoft’s Project Gecko is building AI support for low-resource languages. The World Economic Forum asked the right question: “How do we design AI agents for a world of many voices?” Tools designed to recommend the most common expression can also be designed to protect diversity.
There’s even a counter-trend worth noting: handwriting is making a comeback. The Global Wellness Summit called 2025 the year of the “great analog-ing on.” Journals, letters, fountain pens, typewriter clubs — all growing. Not anti-technology, but people finding a more intentional, private form of expression amid AI fatigue. Oxford chose “brain rot” as 2024’s word of the year. When the noise gets too loud, people reach for paper and pen.
For me personally, the answer is: when using AI, keep the part you care about most for yourself.
Writing this article was itself an example. My ideas didn’t form first and then get written down. They emerged during the writing — by translating vague intuitions into concrete sentences, I discovered what I was actually thinking. No prompt can substitute for that process.
Sources: All claims link to primary sources inline. Key studies and articles: Stanford Daily “Great Smoothing” (2026) · Scientific Reports passive vs active AI (2026) · AI writing reduces voice 50% (2026) · Student essay homogenization (2026) · Freelance writing market (Mediabistro) · Wharton/PNAS cognitive debt · Ted Chiang “Blurry JPEG” (2023) · Cornell cultural homogenization · Automating Joy Out of Work (2026) · Model collapse (Nature 2024) · Writing is thinking (Nature 2025) · MIT writing productivity (Science 2023) · BCG study (Harvard 2023) · Yale CFPB complaints (Nature Human Behaviour 2026) · AI creativity (Science Advances 2024) · SpeakFaster (Nature Comms 2024) · Non-English AI adoption +400% · Amazon AI books · Ghostwriters survey (2025) · Russian blues (PNAS 2007) · Pirahã (Everett) · Emotional granularity · Clinical text degradation · Medicinal knowledge (PNAS) · Cognitive diversity · MIT EEG · Biomedical AI vocabulary