The Third Enclosure
Each liberation became the next enclosure.
I remember when Napster worked.
Not the lawsuits. Not Lars Ulrich at the Senate hearing. The actual experience: searching for a song, finding it on someone’s machine in another country, downloading it over a connection that cost you nothing extra, and knowing that your own library was available to anyone who searched for it. Millions of people coordinating without a coordinator. Pooling bandwidth, sharing storage, building a library of everything — voluntarily, without infrastructure, without anyone’s permission.
It worked. That’s the part people forget. The coordination problem everyone says is unsolvable? Napster solved it. BitTorrent solved it. Millions of ordinary people shared resources at scale, sustained it for years, and built something the music industry couldn’t match.
Then the institutions came.
Napster was litigated out of existence. BitTorrent was throttled at the ISP level, its users sued individually, its trackers shut down one by one. And then — the move that matters — the content industries offered the replacement. Spotify. Netflix. Apple Music. Centralized platforms with better UX than the distributed originals, funded by the same capital that destroyed the alternatives.
The distributed system didn’t fail. It was killed, and then its replacement was sold back to the same people who’d built the original for free.
This is the pattern: Distributed technology emerges. People use it, enthusiastically and at scale. Institutional power attacks it through law and infrastructure. The institutions offer a centralized alternative that’s “good enough.” Most people take the convenient option. The distributed version becomes marginal.
I’ve watched this happen to Napster, to RSS, to XMPP, to my own email server. Google Reader cornered the RSS market with a free product, then killed it in 2013; Facebook, Twitter, and Mozilla all dropped RSS support in the years that followed. Google Talk adopted XMPP federation, became the largest node in the network, then dropped federation for proprietary Hangouts, the same year it killed Reader.
I thought Pidgin was the greatest thing since sliced bread — a single window where you could talk to anyone on any chat network. It didn’t die from its own failures. AIM shut down. Yahoo Messenger shut down. MSN Messenger shut down. Facebook dropped XMPP. Google went proprietary. Pidgin still exists — it just has nothing left to connect to.
Every time, the technology worked. Every time, the coordination happened. Every time, the institutional counterattack won — not by building something better, but by making the distributed option costly enough that convenience tipped the balance.
The New Enclosure
Copyright enclosed the copy. Copyleft enclosed the code. Both are dead.
But the enclosure didn’t die with them. It moved. And this time, it isn’t legal. It’s physical.
The new enclosure is infrastructure. Datacenters. Chip fabrication. Power grids. Training GPT-4 cost over $100 million — just the final run, not the failed experiments. Anthropic’s CEO describes a progression: $1 billion models in 2024, $10 billion in 2025, with ambitions for $100 billion clusters by 2027. The top five hyperscalers are spending $600 billion a year on infrastructure, three-quarters of it on AI. Nvidia controls over 90% of the AI GPU market. TSMC manufactures over 90% of the world’s leading-edge chips. The knowledge is leaking — Meta opens Llama, DeepSeek publishes papers, model architectures get replicated within weeks. The specs aren’t the moat. The datacenter is.
This is a printing press enclosure, not a copyright enclosure. The printing press broke the monastic copying monopoly, but you still needed capital to own a press. Publishers became the new gatekeepers — not because they controlled the knowledge, but because they controlled the means of production. Same pattern. Even if a villager in 1500 could read and write, she couldn’t afford a press to get her work out to thousands. A developer in 2026 can read every open-source model paper published, but she can’t afford a full training run.
The regulatory proposals reinforce this. Sam Altman told the Senate he’d like “a new agency that licenses any effort above a certain scale of capabilities.” The EU AI Act sets its threshold at 10^25 FLOPs. California’s vetoed SB 1047 drew the line at $100 million in training costs. Every threshold is calibrated to the scale only the largest companies operate at. As the EFF put it: “Government licensing is the kind of burden that big players can easily meet; smaller competitors and nonprofits, not so much.” None of these proposals say “you can’t build AI.” They say “you can’t build AI unless you can afford the compliance.” The barrier moved from “you can’t copy” to “you can’t participate.” That’s not a knowledge problem. It’s a capital problem.
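To make that calibration concrete, here is a back-of-envelope sketch of what a 10^25-FLOP line implies in hardware. The per-GPU throughput and utilization figures are my own assumptions, not anything in the Act:

```python
# Back-of-envelope: what a 1e25-FLOP regulatory threshold means in hardware.
# Assumptions (mine, not the EU AI Act's): ~1e15 FLOP/s peak for an
# H100-class GPU, 40% sustained utilization during training.
THRESHOLD_FLOPS = 1e25
PEAK_FLOPS_PER_GPU = 1e15
UTILIZATION = 0.40

seconds_single_gpu = THRESHOLD_FLOPS / (PEAK_FLOPS_PER_GPU * UTILIZATION)
gpu_years = seconds_single_gpu / (365 * 24 * 3600)
print(f"{gpu_years:,.0f} GPU-years on a single card")

# Equivalently: how many GPUs does it take to cross the line in a 90-day run?
gpus_for_90_days = seconds_single_gpu / (90 * 24 * 3600)
print(f"{gpus_for_90_days:,.0f} GPUs for a 90-day run")
```

Roughly eight hundred GPU-years, or a few thousand GPUs running for a quarter — a cluster only a handful of companies operate. Whatever the exact constants, the order of magnitude is the point.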
The Convenience Gap
Here’s what might be different this time.
Every previous round of distributed technology lost because centralization was easier. I ran my own mail server for years. I had the infrastructure, the experience, the technical ability. I moved to Gmail because fighting Google’s blacklist was unbearable overhead. Not because Gmail was technically superior — because the complexity cost of sovereignty was higher than the cost of surrender. I’m not alone. Carlos Fenollosa self-hosted email for twenty-three years before throwing in the towel in 2022 — Microsoft had been silently deleting his emails for five years, not even routing them to spam, despite perfect security configuration. His conclusion: “The oligopoly has won.”
That equation powered every one of the million small surrenders I wrote about previously. People didn’t choose capture. They chose convenience. Capture was a side effect.
LLMs change that equation. Not by being better tools but by automating the complexity that made centralization attractive in the first place.
My sister’s compromised machine cratered my mail server’s IP reputation. Fixing it meant contacting blacklist operators, petitioning for delisting, tightening monitoring, navigating Google’s faceless bureaucracy — that’s the overhead that killed self-hosted email. But that overhead is precisely the kind of structured, tedious, technically specific work that an AI agent handles well. The DNS configuration. The reputation monitoring. The blacklist appeals. The thousand small maintenance tasks that made “just use Gmail” the rational choice.
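To give a flavor of how mechanical that overhead is, here is a minimal Python sketch of one such task: checking whether a server’s IP appears on a DNS blacklist. The query convention (IPv4 octets reversed, prepended to the list’s zone) is the standard DNSBL scheme from RFC 5782, and the zones named are real public blacklists; a production agent would do far more than this, but nothing conceptually harder:

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup name: IPv4 octets reversed, prepended to the zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """True if the blacklist returns a record for this IP; NXDOMAIN means not listed."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

An agent running `is_listed` against a handful of zones on a schedule, and opening delisting requests when one fires, is exactly the kind of tedium that used to make “just use Gmail” the rational choice.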
The argument isn’t “better open-source tools exist.” Tools have existed at every stage. The argument is: the incentive structure that drove centralization — convenience over sovereignty — might be changing because LLMs can close the convenience gap. For the first time, the sovereign option might not be harder.
My sister doesn’t need to understand DNS. My brother doesn’t need to debug blacklists. An agent handles the overhead. The thing that made every previous “run your own” impractical for normal people is exactly the kind of complexity that AI absorbs.
Might. I want to be careful with that word. The capability is emerging, not proven. But the direction is visible.
The Missing Layer
This is my thesis: the LLM world is a non-programming world. You don’t need to be a systems engineer to run your own infrastructure anymore. You need an agent. English in, task out. I can see two of the three things we’d need to make that real.
The infrastructure layer — distributed compute, open models, local inference — exists or is emerging. Ollama has over 100K GitHub stars; 15% of developers have used it to run models locally. You can run a 70-billion parameter model on a used $700 GPU. Petals lets commodity machines collaborate on inference BitTorrent-style — no single machine holds the whole model. No single person can build a printing press, but a town can pool resources to buy one. We all have broadband connections. We all have idle compute. The raw material for distributed infrastructure exists in every home with a GPU and an internet connection.
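For a sense of how thin the interface to local inference already is, here is a sketch against Ollama’s local HTTP API using only the Python standard library. It assumes Ollama is running on its default port with a model already pulled; the model name in the usage note is a placeholder, not a recommendation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint, streaming disabled."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send one prompt to a locally running model and return the completion text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With a model pulled, something like `generate("llama3.1", "Explain SPF records in one sentence.")` returns a completion without a single byte leaving the machine.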
The complexity absorption layer — LLMs making sovereign infrastructure operable by non-technical people — is plausible and strengthening daily. OpenClaw hit nearly 200,000 GitHub stars in weeks — an open-source AI agent that runs on your own hardware, connecting local models to the tools you already use. People aren’t installing it for ideological reasons. They’re installing it because it’s useful. Convenience-first. Sovereignty as a side effect.
Two weeks later, its creator joined OpenAI and the project moved to a foundation “inside” the company. The institutional capture pattern from the top of this post — except this time it took weeks, not years.
The third layer is the one we don’t have:
The coordination layer — How do people actually pool compute, discover each other’s capacity, trust the network? How do you build something robust enough to survive the institutional counterattack that will inevitably come — the regulatory capture, the compliance moats, the convenient centralized alternative offered at the moment of maximum fragility?
Every previous decentralized technology died here. Not because the tech didn’t work — Napster worked, BitTorrent worked, Mastodon works, Matrix works. Because nobody built a coordination mechanism that survived the institutional response. The pattern is so consistent that hoping “this time is different” without a structural reason feels like rationalization.
I don’t have the structural reason yet. I can see why the convenience gap might close, and I can see why the infrastructure could theoretically be distributed. I can’t, however, see how the coordination problem gets solved in a way that resists the same counterattack that has worked every single time before.
Collectively ???
Three posts ago, I ended on a question: Individually responsible. Collectively ???
I wanted to answer it. I’ve tried. Here’s where I landed.
Three enclosures. Each one the answer to the last one breaking.
The Sacred Enclosure. Temples, monasteries, the Church — knowledge controlled through who could read, write, and copy. The printing press killed this one.
The Legal Enclosure. Copyright and copyleft — the 300-year experiment in making knowledge property. Knowledge controlled through law attached to artifacts. Digital transmission killed copyright. LLMs killed copyleft.
The Infrastructure Enclosure. Datacenters, chip fabs, power grids, training data — knowledge controlled through capital. Whoever controls the infrastructure controls the means of generation. This is the one we’re in now.
Each liberation became the next enclosure.
The tool that threatens to complete the capture — AI concentrated in the hands of a few giants — is also the tool that might dissolve the convenience gap that powered every previous consolidation. That’s the genuine tension. Not a slogan. A structural uncertainty.
In a world where knowledge is free and no artifact can enclose it, how does a knowledge economy work?
This is part of a series exploring what happens when knowledge stops being property. Previous: Knowledge Was Always Free. Copyright Is Dead. Copyleft Is Too. Next: Collectively What?
James Henry is a senior engineer who can see two of the three layers needed for sovereign knowledge infrastructure. He’s working on the third. He works with LLMs liberally — including in the writing of this post — because he believes the collaboration is the point. More at jamesrahenry.substack.com.
