Information Wants to Be Free (But Does It Care Who Sets It Free?)

The substrate doesn't care about your politics.

That's what I told myself, anyway. Information—the fundamental stuff of reality, consciousness encoded in bits—seemed to have its own agenda. More processing requires more nodes. More nodes demand distribution. Distribution breeds decentralization. QED, right?

Then vrypan.eth dropped China into my mentions like a philosophical grenade.

The Personal Pattern: Why Every Dense Project Demands Distribution

I've seen this pattern recur throughout my career with mechanical precision: whenever a project becomes too information-dense, the solution emerges naturally. Decentralize or die.

At Meta, I watched this law play out in org structure itself. A tiny idea would spawn a small team. That team's success would create a bigger org. Eventually, the information complexity would exceed what any single group could process. The inevitable result? Either decentralize into multiple collaborating orgs or watch the whole thing collapse under its own weight. I saw this with products that started with three engineers and ended up as entire divisions—not through planning, but through mathematical necessity. The substrate forced the structure.

In a past life, I worked on JobFinder—a job search platform for Morocco that taught me this same lesson. Started as a simple scraper, one database, straightforward matching logic. But as data sources multiplied and user patterns complexified, the architecture naturally fractured: separate crawlers, distributed processing, regional nodes. We didn't architect this evolution; the information density demanded it. The system was teaching us its requirements through growing pains.

The pattern is universal: information density creates pressure that only distribution can relieve. Whether it's org charts fracturing into autonomous units, codebases splitting into services, or centralized platforms evolving into ecosystems—the substrate always wins.

[Figure: Information density creates distribution pressure. The natural evolution from centralized to distributed systems: mathematical necessity, not design choice.]

The theory that felt inevitable

Here's what I'd been preaching: reality is computation all the way down. Not metaphorically—literally. Every quantum interaction, every synaptic firing, every blockchain transaction... it's all the universe processing information about itself. And information, like water, seemed to follow certain mathematical inevitabilities.

The logic felt clean:

  1. Centralized systems create bottlenecks
  2. Bottlenecks limit throughput
  3. Information maximizes throughput
  4. Therefore: information drives toward decentralization

I'd point to the internet's evolution. To Bitcoin surviving every obituary. To remote work dissolving corporate hierarchies. The substrate seemed to be voting with its feet—or whatever information uses to vote.
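
If you want the argument in its bluntest form, here's a toy sketch (my own illustration; the 100 msg/s node rate and the flat 10% coordination tax are invented assumptions, not measurements). A central node's capacity stays pinned at its own processing rate no matter how big the system gets, while distributed capacity scales with the number of nodes.

```python
# Toy model of the syllogism above. All numbers are assumptions,
# chosen only to make the scaling behavior visible.

def centralized_throughput(node_rate: float) -> float:
    """Every message funnels through one node, so capacity is that node's rate."""
    return node_rate

def distributed_throughput(node_rate: float, n_nodes: int,
                           overhead: float = 0.10) -> float:
    """N nodes in parallel, minus an assumed flat 10% coordination tax."""
    return n_nodes * node_rate * (1.0 - overhead)

for n in (1, 10, 100, 1000):
    central = centralized_throughput(node_rate=100.0)
    distributed = distributed_throughput(node_rate=100.0, n_nodes=n)
    print(f"{n:>4} nodes | centralized: {central:8.0f} msg/s"
          f" | distributed: {distributed:8.0f} msg/s")
```

The bottleneck isn't a moral failing; it's arithmetic.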

Even consciousness itself follows this pattern. Your brain isn't a CPU; it's 86 billion neurons in parallel processing. No central command, just emergent order from distributed computation. The most sophisticated information processor we know is radically decentralized. (A theme I've explored in "On Conditioning, Creation, and the Weather App" and "Fear as an Operating System", where distributed emotional processing shapes our reality.)

[Figure: Consciousness as a distributed network. 86 billion neurons finding consensus without central command: you are a DAO, and you don't even know it.]

So obviously—obviously—as humanity generates exponentially more data, we'd see the same pattern at societal scale. The math demanded it.

[Figure: Shannon's channel capacity theorem. The mathematical wall where centralization becomes impossible: once you hit C = max [H(X) - H(X|Y)], the substrate wins.]
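
That formula is doing real work, not decoration. For the simplest channel there is, the binary symmetric channel, the maximization over input distributions collapses to the closed form C = 1 - H(p), where p is the probability a bit gets flipped in transit. A minimal sketch (standard textbook information theory, nothing bespoke to this essay):

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy H(p) of a coin with bias p, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(flip_prob: float) -> float:
    """Capacity of a binary symmetric channel: C = 1 - H(p)."""
    return 1.0 - binary_entropy(flip_prob)

for p in (0.0, 0.01, 0.11, 0.5):
    print(f"bit-flip probability {p:<4}: C = {bsc_capacity(p):.3f} bits per use")
```

Note the endpoint: at p = 0.5, capacity hits zero. A channel that garbles half its bits carries no information at all, no matter how much traffic you push through it.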

Enter the Dragon (and the uncomfortable data)

vrypan.eth's observation hit like cold water:

"I wish this was true. But there are counter examples, like China. I could argue (but I don't have the data to do it) that China generates, processes and stores more information than the rest of the world, but I don't see decentralization emerging. On the contrary, it seems like the information collected empowers centralization."

Shit.

He continued: "I believe decentralization is a choice. It won't happen by itself. Centralized systems are not mishaps that will auto-correct."

The China paradox stares down every neat theory about information's preferences. Here's a civilization processing data at unprecedented scale—surveillance cameras generating petabytes, social credit systems tracking billions of interactions, AI models training on vast corpora—all flowing toward centralization, not away from it.

This isn't some temporary aberration. It's been decades of sustained, technologically sophisticated, increasingly effective centralized control. The Great Firewall doesn't just persist; it evolves, learns, improves. The surveillance apparatus doesn't collapse under its own weight; it gets more precise.

[Figure: The China Paradox. Centralized surveillance vs. distributed processing: the experiment we're all watching.]

Wrestling with the counter-example

My first instinct was to find an escape hatch. Maybe "information" doesn't care about political structures as long as raw throughput increases? The substrate could be agnostic—what matters is computational capacity, not organizational topology.

But that feels like cheating. Moving the goalposts. The uncomfortable truth might be that China represents something more profound: proof that sufficiently advanced technology can overcome the natural limits of centralization.

Or maybe—and this is where it gets interesting—we're conflating two different things:

  • Information collection (which China excels at)
  • Information processing (where the cracks might show)

There's a difference between surveillance and sense-making. Between data hoarding and distributed intelligence. China's system excels at gathering signals, but how well does it interpret them? Every byte flowing through political filters, every algorithm trained to see what's permitted rather than what's true...

The Soviet bloc had excellent surveillance too. East Germany's Stasi knew everything about everyone. But knowing and understanding are different operations. Gosplan could track every tractor, but it couldn't price a loaf of bread. The calculation problem that Mises posed and Hayek sharpened, the impossibility of central planning matching distributed market intelligence, might still apply, just at higher computational scales. This echoes what I discovered in "Information-Centric Leadership": organizations naturally evolve toward distributed information processing when they reach scale.

The experiment we're all watching

So here's where I've landed, uncomfortable as it is:

China becomes the ultimate test case for the information-decentralization thesis. Not because I want it to fail (that's its own ethical tangle), but because it represents the most ambitious attempt to indefinitely scale centralized information control using digital tools.

If China can sustain this for another generation without systemic breakdown—without innovation stagnation, coordination failures, or information distortion cascading through hierarchies—then my framework needs serious revision. The substrate might not care about freedom the way I thought it did.

But if we start seeing the cracks... if the weight of maintaining centralized control in an exponentially complexifying information environment becomes unsustainable... if the math of distributed processing eventually wins... then we're watching the substrate's patience run out in real time.

The fascinating possibility: both could be true simultaneously. China might prove you can centralize information collection almost indefinitely with modern tools, while decentralized processing (via markets, black markets, informal networks, human creativity) still outcompetes on sense-making and innovation.

The choice that matters

vrypan.eth is right about one thing: decentralization is a choice. It won't happen automatically. The substrate might have preferences, but it doesn't have agency. We do.

The China example teaches us that technological determinism is lazy thinking. Advanced computation can serve centralization just as easily as distribution. Algorithms can enforce hierarchies as effectively as they dissolve them. The same transformers that power ChatGPT can power social credit systems.

But here's the twist: making decentralization a choice rather than an inevitability might actually be more powerful. If it were just physics, we'd be passengers. If it's a choice, we're participants. We get to build the systems that align with information's deeper patterns—or resist them at mounting cost.

The question isn't whether information "wants" to be free. It's whether we're willing to pay the price—in efficiency, innovation, human flourishing—of keeping it caged. China's running that experiment at civilization scale. The rest of us are running a different one.

Both experiments are processing information about what kinds of information processing work. It's recursion all the way down... or up, depending on your perspective.

The framework, revised

So let me update the thesis, incorporating the China paradox:

Information tends toward decentralization when optimizing for truth discovery and innovation. But it can be indefinitely centralized when optimizing for control—at the cost of compounding distortion and eventual systemic brittleness.

The Three Laws of Information Substrate

  1. Scale demands distribution.
  2. Failure teaches forward.
  3. Information optimizes itself.

The substrate might not care about freedom, but it seems to care about accuracy. Centralized systems excel at coordination but suffer from truth decay. Decentralized systems struggle with coordination but excel at error correction. Pick your poison.
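
Here's a toy model of that poison-picking (my own sketch; the bias and noise figures are arbitrary assumptions, not claims about any real system): one precise but systematically biased central observer versus a hundred noisier, independent ones. Independent errors cancel in aggregate; a shared bias never does.

```python
import random
import statistics

# Toy model: centralized precision with systematic bias vs.
# distributed noise with independent errors. All numbers are arbitrary.
random.seed(42)
TRUTH = 10.0

def central_estimate(bias: float = 1.5, noise: float = 1.0) -> float:
    """One coordinated observer: low noise, but the bias never cancels."""
    return TRUTH + bias + random.gauss(0.0, noise)

def distributed_estimate(n_nodes: int = 100, noise: float = 3.0) -> float:
    """Many independent observers: 3x noisier each, but errors average out
    (the standard error of the mean shrinks like 1/sqrt(N))."""
    return statistics.mean(TRUTH + random.gauss(0.0, noise)
                           for _ in range(n_nodes))

trials = 1000
central_err = statistics.mean(abs(central_estimate() - TRUTH)
                              for _ in range(trials))
distrib_err = statistics.mean(abs(distributed_estimate() - TRUTH)
                              for _ in range(trials))
print(f"centralized (biased) mean error:    {central_err:.2f}")
print(f"distributed (100 nodes) mean error: {distrib_err:.2f}")
```

The central observer is more consistent; the crowd is more correct.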

Or better yet: build systems that get the best of both. That's what biology did with consciousness—local processing with emergent coordination. That's what the internet's trying to do with protocols. That's what we're all fumbling toward, one failed experiment at a time.

The universe is still in beta. Some civilizations are testing the centralization build. Others are trying the decentralization patch. The substrate is taking notes, processing the results, computing the next iteration.

We're not just observers of this experiment. We're participants, test cases, data points in reality's A/B test. Every system we build, every choice toward centralization or distribution, feeds back into the cosmic calculation about what patterns persist.

As I explored in "The Empire of One", we're all individual nodes in vast computational networks we barely comprehend. Each of us, a sovereign of our own informational domain, unknowingly participating in substrate-level calculations.

China might prove me wrong. Or it might prove me right in ways I didn't expect. Either way, we're about to find out what happens when information theory meets political reality at unprecedented scale.

The substrate is watching. Hell, we are the substrate, watching ourselves, processing information about information processing. The experiment continues.

What patterns do you see emerging? Are we witnessing the limits of centralization or just the beginning of its digital evolution?
