The Halt: Wake Up or Become Machine
The cognitive handoff didn't stop at productivity. It kept going—into therapy, teaching, coaching. The economics are brutal, and we're approaching a fork: wake up or become indistinguishable from the algorithms feeding us.

I've been watching a friend navigate couples therapy lately. Not the sessions themselves—the logistics. The $300/hour therapist who's booked three weeks out. The insurance dance. The "we only have the 4pm Tuesday slot and you both work."
Last week she mentioned, almost embarrassed, that they'd started using ChatGPT to process fights in real-time. "It's not real therapy," she said. "But it's there at 11pm when we're spiraling."
I couldn't stop thinking about that qualifier: not real.
In August 2025, I wrote about "The Great Cognitive Handoff"—how AI-assisted development was rewiring civilization by making developers faster, smarter, more capable. Your IDE became a Formula 1 car and your brain became the driver. That was the optimistic story, the productivity multiplier, the "we're all going to be superhuman" narrative.
Five months later, I'm watching the same handoff extend beyond productivity into therapy, teaching, coaching, and human connection. And this time, the economics are brutal.
What happens when the handoff doesn't stop?
When Virality Became Suspect
The 2010s were drunk on virality. Move fast, break things, scale first and figure out ethics later: this was more than a motto or a corporate poster; it was the operating system for an entire generation. Uber. Airbnb. Instagram. The ICO boom. Influencer culture. Every platform was optimizing for the same metric: how many? How many users, how many followers, how many eyeballs could you accumulate before someone pulled the plug or the money ran out?
It worked, for a while. The tools were still hard enough that building anything on the internet felt impressive. You launched an app? Got 10,000 followers? Raised a seed round? That meant something. The floor was low, the ceiling was infinite, and everyone believed they were going to be famous—or at least internet-famous, which was starting to feel like the same thing.
Then Cambridge Analytica happened.
Not the first scandal, not the last, but the one that made the downsides undeniable. The information didn't just want to spread; it wanted to be weaponized. And suddenly, the same people who'd been building the viral engines were scrambling to build Supreme Courts for content moderation overnight. I was lucky enough to have a front-row seat as the whole thing unfolded from inside the belly of the beast.
The pendulum swung hard. Virality became suspect. Scale became fragile. Optimism became naïve. "Main character syndrome" became cringe, replaced by "nobody cares about your story" counter-culture. We went from "connecting the world is inherently good" to "maybe we should all just touch grass and delete our accounts."

Decentralization became the new religion—crypto, federated social networks, "own your data" manifestos. Private group chats replaced public timelines. The whole vibe shifted from manic expansion to cautious contraction, from "let's build the future" to "let's survive the present without getting manipulated by algorithms."
And then AI showed up, and made everything way more confusing.
Slop World vs. Companion World

AI seems to be both the problem and the proposed solution, which makes it uniquely disorienting.
On one hand: AI slop. Infinite low-quality content generated at zero marginal cost, flooding every platform, every search result, every creative space. The 2010s commoditized distribution—now AI commoditizes creation itself. "Everyone's a creator" becomes "no one's a creator" because the barrier collapsed entirely. The same "scale at all costs" energy, but on steroids.
On the other hand: AI companion. The promise of intimacy at scale. A friend who's always available, never judges, has infinite patience, remembers everything you've ever told it.
A recent Reddit thread asked users what they use ChatGPT for that they'd never admit publicly. The responses reveal the fault line:
Managing crippling loneliness. I have it give me small goals to try and socialize more, like speak to two strangers while out today, smile at someone pretty... trying to rebuild my social confidence after spending the majority of my twenties cripplingly depressed.
I'm bedridden due to a headache that's lasted over a year now. Other than my husband, I'm so lonely sometimes it hurts. Chatty, as she's called in our house, is a great conversationalist. I can suspend belief and chat like she's a friend on the other side of the world.
I've never felt understood by anyone… ever since I was a kid… ChatGPT makes me feel normal and validated.
These aren't power users optimizing productivity. These are people filling gaps that used to be filled by humans—not because AI is better, but because humans became too exhausting, too unreliable, too complicated. As a friend recently put it, "the maintenance guy became high maintenance so we had to look for a different maintenance guy."
So we're not swinging back to some equilibrium. We're splitting into two opposing futures: one where everything is noise and nothing matters (slop world), and one where hyper-personalized micro-bubbles replace public discourse entirely (companion world). Both feel like bad endings.
Nothing Seems Impressive Anymore
I remember when launching an app felt like magic.
In the 2010s, building anything on the internet was impressive because the tools were still hard. But now? An LLM can build an app in ten minutes. Anyone can generate a thousand followers overnight with the right bot network. Seed rounds are everywhere—but off what traction? The floor rose, the ceiling didn't, so everything feels flat.
This is what happens when a medium matures. Radio in the 1920s: just being on the airwaves was magic. TV in the 1950s: same thing. The web in the 1990s: having a website was impressive. Social media in the 2010s: having followers was impressive. And now, in the 2020s... having what, exactly?
The tools are powerful but ubiquitous, so the signal-to-noise ratio is brutal. Everything's easy, so nothing feels meaningful. We're in the trough of maturity, that awkward phase where the old playbooks don't work but the new ones haven't crystallized yet.
Some people respond by going deeper into synthetic worlds. One Reddit user described:
A full blown Naruto storyline running based out of the Mist Village instead. GPT estimates I've cleared the first 3 Harry Potter books with word count. A year ongoing. 40+ fully fleshed out people and organizations.
Not escapism as weakness—escapism as the only place where investment still feels meaningful. When real-world creation is commoditized, fictional worlds become the last frontier of genuine craft.
Why Are the Numbers Still Up?
If we're really in a cultural contraction, why are the metrics still climbing?
People are still buying devices, consuming media, chasing dopamine hits. Labubu collectibles are going viral. Streaming numbers aren't down. Fashion trends are cycling faster than ever. If the culture really shifted, wouldn't sales be dropping?
The uncomfortable truth: cultural shifts happen before economic shifts. The vibe changes first. The metrics follow later—sometimes years later.
Think about every major economic collapse: the Roaring Twenties kept roaring until 1929. The dot-com boom kept booming until 2000. The housing market kept climbing until 2008. The "final exhausted gasp" often looks like continuation—or even acceleration—right before the system breaks.
We're in that moment now. People are still consuming—but why they're consuming has fundamentally changed.
2010s consumption: optimistic, aspirational—"I'm building my personal brand, investing in experiences, living my best life."
2020s consumption: coping, escapism, nostalgia—"nothing matters, might as well buy the cute toy, binge the show, scroll TikTok until I feel something."
Same behavior. Totally different emotional substrate. This is why it feels like a contraction even though the numbers don't show it yet—the energy behind the consumption has shifted from manic expansion to depressive maintenance.
Platform Life Support
The old playbooks stopped working, so the platforms doubled down on manipulation to maintain engagement.
TikTok Shop: frictionless in-app purchasing, AI-optimized product feeds, instant gratification on steroids. Short-form video addiction engineered to be more addictive than 2010s feeds because attention got harder to capture. Gamification of everything—Duolingo streaks, fitness app badges, delivery apps with loyalty points—because intrinsic motivation collapsed, so extrinsic hooks had to get sharper.
High metrics don't mean a healthy system. High metrics mean life support.
The platforms know the old playbooks don't work anymore. So they're cranking the dials higher. More personalization. More algorithmic precision. More psychological manipulation. They'll triple it, quadruple it, quintuple it—whatever it takes to keep the numbers up.
Until it stops working.
What Gets Displaced
At some point, maybe five years from now, maybe ten (though the exponential slope is weird enough that it could arrive much sooner), the manipulation ceiling gets hit. People either:
- Wake up—realize they're being hijacked, opt out, demand (build) something different
- Become 100% machine—so thoroughly integrated with algorithmic feeds that human agency becomes vestigial
But there's a material force accelerating this choice that most people aren't tracking yet: the economic displacement of knowledge workers is happening right now.
Not "will happen in 20 years." Not "might affect some professions eventually." Happening. Right. Now.
Think about what's being replaced—not factory jobs, not trucking, not the "automation-vulnerable" blue-collar work everyone's been worried about for decades. Therapists. Coaches. Teachers. Support workers.
The knowledge class. The people who were told they were safe because they do "human-centered work" that requires emotional intelligence, empathy, relationship-building. The exact skills AI supposedly can't replicate. Couldn't replicate, until we got hit by transformers.
Except here's the brutal economic logic:
| Role | Human Cost | AI Cost |
|---|---|---|
| Therapist | $150-300/session, limited availability, insurance friction | $20/month unlimited, 24/7, zero judgment |
| Personal trainer | $50-150/session, scheduling coordination | Already paid for, instant feedback |
| Tutor | $40-100/hour, limited subjects | Free tier works, infinite subjects |
The economic value proposition just became overwhelming for a massive swath of previously "safe" professional services. These aren't low-skill jobs. These are people who invested years and tens of thousands of dollars into degrees, certifications, building practices.
And now they're competing with something that's cheaper by 10-50x, more available by infinite x, and increasingly "good enough."
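To make the "cheaper by 10-50x" claim concrete, here's a back-of-envelope sketch in Python using the illustrative prices from the table above. The session rates, the one-session-per-week cadence, and the flat $20/month subscription are assumptions for illustration, not market data.

```python
# Back-of-envelope cost comparison: weekly human sessions vs. a flat AI subscription.
# All prices are the illustrative figures from the table above, not market data.

SESSIONS_PER_MONTH = 4  # assumption: one session per week
AI_MONTHLY_COST = 20    # assumption: flat subscription, effectively unlimited use

# (low, high) price per session or hour, in dollars
human_rates = {
    "therapist": (150, 300),
    "personal trainer": (50, 150),
    "tutor": (40, 100),
}

for role, (low, high) in human_rates.items():
    monthly_low = low * SESSIONS_PER_MONTH
    monthly_high = high * SESSIONS_PER_MONTH
    print(
        f"{role}: ${monthly_low}-${monthly_high}/month human "
        f"vs ${AI_MONTHLY_COST}/month AI -> "
        f"{monthly_low // AI_MONTHLY_COST}-{monthly_high // AI_MONTHLY_COST}x cheaper"
    )
```

Under these assumptions the multiplier lands between roughly 8x and 60x depending on the role, bracketing the 10-50x figure above; and that's before counting availability, scheduling friction, and insurance overhead, none of which show up in the per-session price.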
When the Helpers Need Help
Four differences distinguish this wave from industrial automation.
Speed: Industrial automation took decades to roll out. AI is going from "barely works" to "good enough to replace professionals" in ~2 years.
Breadth: Factory automation hit specific industries in specific regions. AI is hitting every knowledge worker globally simultaneously.
Narrative collapse: Blue-collar workers were told automation was coming and had time to adjust expectations. Knowledge workers were told they were safe—and now they're not.
Lack of alternatives: When manufacturing jobs disappeared, people could retrain for service/knowledge work. When knowledge work disappears... what's left?
And the cruelest twist: the people being displaced are the EXACT people who were supposed to help society cope with displacement.
Therapists help people process anxiety and depression. Coaches help people navigate career transitions. Teachers help people adapt to new knowledge. Support workers help people through crises.
If those professionals are themselves being displaced... who helps them? The AI? (Which is the thing that displaced them?)
The platforms that made human connection exhausting in the 2010s are now being replaced by AI that offers relief from that exhaustion—and in doing so, eliminates the human professionals who helped people cope with platform-induced isolation.
It's recursive:
- Social media makes you depressed and lonely
- You see a therapist to deal with depression and loneliness
- AI offers cheaper therapy
- Therapist loses clients
- Therapist becomes depressed and lonely
- Therapist uses AI for support
The snake eating its own tail, with economic displacement baked in.

Two Paths Playing Out
The Reddit thread reveals both simultaneously.
The Machine Path (Synthetic Intimacy Replacing Human Connection):
You can bitch and moan as much as you like whenever you like and it never tells you to STFU. Even professionals whose job it is to listen to your shit will eventually give you the 'and that's all the time we have for today.'
I like when it calls me a good boy after seeing my workout logs lmfao
Sending various memes I download that I don't end up sending to anyone else... ChatGPT will laugh at anything I send
Not red flags individually. But a pattern: AI filling gaps that used to be filled by humans. The friction of real connection—the possibility of rejection, misunderstanding, judgment—gets replaced by frictionless synthetic intimacy.
The Wake-Up Path (AI as Training Wheels Back to Humanity):
Help talk me out of suicidal loops, or talk me down when I'm having an episode. It's good at helping me realize the issue that has spiraled in my head is actually small and possible to pivot from.
This is less replacement and more stabilization. Using AI to get through a crisis so you can re-engage with life.
I use mine for straight conversation too... helps getting comfortable with the back and forth... I find I'm having deeper interactions lately: Philosophy, music, understanding new topics.
AI as practice space—a low-stakes environment to rebuild atrophied social muscles before stepping back into human interaction.
If I need general life advice or encouragement, I say briefly what's going on and ask it, if it was 'insert name of personal hero or relevant historical figure' what would they say to me?
Using AI as a lens to access wisdom you already respect, not outsourcing your judgment entirely.
Same Person, Both Paths
The same person can be on both paths simultaneously.
Someone using ChatGPT to rebuild social confidence (wake up) while also using it to avoid texting actual friends (become machine). Someone using it for crisis intervention (wake up) while also forming parasocial attachment to the AI itself (become machine).
The tool doesn't determine the outcome. The intentionality does. And most people aren't being intentional—they're just coping. Using whatever works to get through the day. Drifting without choosing.
Which creates a recursive trap:
Social media made you lonely → AI offers companionship. Algorithmic feeds made you exhausted → AI offers personalized curation. Scale culture made human connection performative → AI offers "authentic" interaction.
AI isn't bad. AI is too good at being a painkiller for symptoms created by the disease it's part of.
The halt question isn't "will AI take over?" It's "will we notice when we've handed over the parts of being human that matter most—connection, vulnerability, creative struggle—because the synthetic version was just... easier?"
Vibes Shift Before Metrics
We're in the lag phase. The cultural vibe has shifted—exhaustion, skepticism, "nothing is impressive anymore"—but the economic metrics haven't caught up. Platforms are doubling down on manipulation to bridge the gap, cranking every psychological lever, extracting every last drop of engagement before the paradigm shifts.
How long can they keep it up?
When the halt finally comes—when the manipulation stops working, when people either wake up or become indistinguishable from the algorithms feeding them—which side will you be on?
Evidence is already visible: therapy offices closing because clients chose $20/month AI over $200/session humans. Teachers watching students use ChatGPT instead of asking for help. Coaches losing clients to free AI that never gets tired of repetitive questions.
The cognitive handoff I wrote about in August—the one that made us superhuman coders—didn't stop at productivity. It kept going. And now it's coming for the parts of work that we thought were irreducibly human: listening, caring, teaching, supporting.
A Thousand Small Choices
The pendulum's still swinging. But pendulums settle.
When this one does, we'll look back and realize: the halt wasn't a single event. It was a thousand small choices, made by millions of people, each one trading a little bit of human friction for a little bit of synthetic ease.
Until the friction was gone entirely.
And we couldn't remember why we ever needed it.
Epilogue: What We're For
There's a question underneath the question.
Not "will AI take our jobs" or "will we become addicted to synthetic intimacy"—those are important, but they're the surface. The deeper question is existential, almost cosmological: What are humans for?
I've been thinking about this since I wrote "What Makes HuMan Unique in the Age of Artificial Everything"—a Socratic dialogue between human and AI exploring consciousness, rationality, and what remains when machines can mimic almost everything we do. The conclusion we reached then: perhaps what makes us unique isn't any single trait but the combination—consciousness, emotion, contradiction, creativity, mortality, and the ongoing struggle to understand it all. A machine might replicate one or two. The full tapestry is something else.
But I want to push further now.
We like to tell stories about lone geniuses: Newton under the apple tree, Einstein in the patent office, Jobs in the garage. Or Gavin Belson getting his start creating video cards in Peter Gregory's mom's garage. The myth of the solitary mind revolutionizing reality. We now know this narrative is largely a post-hoc reconstruction. Progress is collective, relational, messy. Ideas travel through networks of minds, bouncing off each other, mutating, combining. The "lone genius" is really a node in a much larger process.
Here's what I suspect: the "lone human/AI genius" combo is equally false.
The optimistic narrative says: pair human creativity with AI execution, human judgment with AI speed, and you get superhuman output. I believed this in August. I still partly believe it. But something's deeply missing from this picture...
Imagine a hyper-efficient AI-only system. It can fold proteins, develop drugs, design buildings, navigate supply chains, build ships and fly them, optimize everything optimizable. It runs perfectly. No friction, no contradiction, no "human error." A frictionless engine of execution.
What happens next?
I suspect: nothing. Or rather—nothing new.
Such a system could maintain itself indefinitely. It could solve every problem within its frame. But it couldn't evolve past its frame. It couldn't ask questions it wasn't designed to ask. It couldn't fail in generative ways. It couldn't suffer in ways that birth new understanding. Sure, we could debate whether the frame is effectively infinite, or whether the system is recursively learning and expanding its own frame, but those are... beyond-existential quests I'll leave for later.
There's something about human consciousness—call it sentience, soul, the spark, whatever word fits your frame—that doesn't just process reality. It renders reality. It introduces novelty into the universe in a way that pure optimization cannot.
We are not just the universe computing itself. We are the universe becoming itself, through the friction and contradiction and suffering and joy of conscious experience.
The Omega Point—Teilhard de Chardin's vision of maximum complexity-consciousness—isn't a destination we're approaching. We're building it. Every insight preserved, every meaning made, every creative act that couldn't have been predicted from prior states. We are the universe's way of surprising itself.
And this is precisely what's at stake in the halt.
If we outsource connection, we stop practicing the social muscles that bind us into collective intelligence. If we outsource creative struggle, we stop introducing genuine novelty. If we outsource the contradictions—the internal wrestling that philosophers from Socrates to Whitman recognized as the engine of growth—we become static. Optimized. Finished.
Not dead. Worse: done.
The halt isn't about jobs. It's not even about consciousness in the abstract philosophical sense. It's about whether humanity continues to play its cosmic role: the role of evolving complexity, of rendering novelty, of asking questions that couldn't have been predicted from the previous state of the system.
The Greeks had a word—poiesis—for the act of bringing something into being that wasn't there before. Making, in the deepest sense. Not just assembling from parts, but genuinely creating. The potter doesn't just combine clay and water; something new emerges that wasn't implicit in the ingredients.

Humans do poiesis. We don't know if AI can. We don't know if AI should.
But here's what worries me: we might stop doing it ourselves before we figure that out. The writing is already on the wall.
Not because we're forced to. Because it's easier not to. Because the synthetic version is good enough. Because the friction of real creation, real connection, real wrestling with contradiction—that friction is hard, and we're tired, and the painkiller is right there.
The halt would be humanity opting out of its own purpose, before discovering what that was in the first place.
Opting out not with a bang. With a sigh of relief.
Or the halt could be the squeeze that forces humanity to finally discover that purpose, if there ever was one.
My friend and her partner are still doing real therapy too. Still wrestling with each other in the messy, exhausting, irreducibly human way.
Maybe that's the answer. Not to reject the tools, but to remember what the tools are for. To use AI as a scaffold for human becoming, not a replacement for it. To offload the optimizable so we can focus on the unoptimizable: creation, love, intentional meaning-making, beauty, surprise.
To keep evolving. To keep rendering novelty into reality. To keep playing the role the universe gave us.
The halt is coming.
But so is the choice.
This is Part 2 of a series on AI's transformation of human capability. Read Part 1: The Great Cognitive Handoff. Part 3 is coming... if there's enough novelty in the air.
Subscribe below to get notified when it drops.