The Overhead
I recorded a voice note while cooking dinner, describing a feature I wanted. By the time the onions were done, the feature existed — as the blog post itself. A builder's honest account of two weeks running an AI orchestrator, the maddening overhead of making it work, and why the compound returns are worth it.

I'm recording a voice note on a Friday evening while cooking dinner, describing a feature I want — a pipeline that takes my voice, transcribes it, and ships it as a blog post. By the time the onions are done, the feature exists. Not as a prototype. As the thing you're reading right now.
If you're reading this, the proof is the post itself.
I built a machine to ship my thoughts, and the first thing the machine shipped was a post about building the machine. Tangent within tangent within tangent. But that's honest — and honesty about the overhead is the whole point.
Two Weeks Ago, There Was Nothing
On January 27th, I spun up a Google Cloud VM and installed OpenClaw — an open-source AI assistant framework that, as of this writing, has 192,000 stars on GitHub. It hit 100K in two days. Faster than React. Faster than Linux. Faster than anything in GitHub's history. It started life as "Clawdbot," got renamed, and landed on OpenClaw. I found it the week it launched.
The first thing I saw when it booted was my WhatsApp self-DM — the screen I'd been using for years to dump thoughts to myself. I typed: "this screen used to be just me dumping my thoughts to myself, and now it'll be you." The space started to breathe. Something that had been a monologue became a conversation.
We're collectively tripping with AI, and this was my first hit.
What attracted me wasn't the hype. It was the architecture — and the recognition. A week earlier, I'd published Forgeloop-kit, an open-source agentic build loop for software engineering: plan, build, capture, repeat. Persistent memory, skill-driven development, knowledge routing. But my implementation was narrow — focused on code, with a half-wired Slack integration. OpenClaw was the same idea taken to its logical conclusion: a generalized agent orchestrator that could wire into any messaging platform, run any tool, manage any workflow. It was my blueprint, fully realized by someone else, the week after I published mine.
I named the main agent Noth. Short for nothing — the empty space where things happen.
The First Real Test
The GCP bill came fast. Cloud VMs aren't cheap when you're running a daemon 24/7. So I did what any reasonable person would do: I told the AI to migrate itself.
"Find somewhere cheaper and move us there."
Noth browsed pricing pages. Compared Hetzner, DigitalOcean, OVH. Evaluated specs against requirements. Picked Hetzner — a Finnish data center, roughly a fifth of the GCP cost. Then it walked me through the migration, step by step. Server provisioning, data transfer, DNS, daemon restart. Like teleporting infrastructure.
That was the first moment I trusted the system with something real. Not a prompt-and-response trick. A consequential decision about its own hosting, with cost implications, made through reasoning I could audit but didn't have to hand-hold.
The second moment came when I wanted voice. I asked Noth to enable voice note transcription — I figured it'd be more natural to talk than type. The system evaluated two paths: run Whisper locally on the VM, or use OpenAI's cloud API. It checked the server's RAM, realized the VM didn't have enough memory for a local model, and chose the cloud path. Again — not because I programmed that decision tree. Because I described a goal and it reasoned through the constraints.
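For the curious, the shape of that constraint check is simple enough to sketch. This is a minimal illustration, not OpenClaw's actual code: it assumes psutil, the openai client, and the openai-whisper package, and the memory threshold is a rough guess.

```python
# Sketch of the transcription routing decision described above: run Whisper
# locally if the box has the memory for it, otherwise fall back to the API.
# Illustrative only; package names are real, the threshold is an assumption.
import psutil
from openai import OpenAI

LOCAL_MODEL_MIN_BYTES = 4 * 1024**3  # rough floor for holding a local Whisper model

def transcribe(audio_path: str) -> str:
    """Transcribe locally when RAM allows, otherwise via OpenAI's hosted Whisper."""
    if psutil.virtual_memory().available >= LOCAL_MODEL_MIN_BYTES:
        import whisper  # openai-whisper; loaded lazily so the cheap path stays cheap
        model = whisper.load_model("base")
        return model.transcribe(audio_path)["text"]
    # Not enough memory on the VM: use the cloud API instead.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text
```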
Two weeks in, the system was making infrastructure decisions about itself. That's when the relationship shifted from "configuring a tool" to "delegating to a system."
The Migration Nobody Tells You About
The first version ran on WhatsApp. My phone, my messages, my voice notes going to an AI that lived on a Finnish server, bouncing through Anthropic's API, coming back as text.
It worked. Until it didn't.
WhatsApp has rate limits. Group threads get noisy. You can't do topics or swimlanes. When you're running seven agents across multiple projects, you need bandwidth that WhatsApp wasn't built for. So today — literally today, Friday the 13th — I migrated the entire operation to Telegram.
Three forum groups. Twenty topics. Nineteen cron jobs retargeted. Memory systems repointed. Session persistence reconfigured. The kind of work that, if you described it to someone at a dinner party, would make their eyes glaze over before you finished the second sentence.
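To make the glaze-over concrete: the cron retargeting alone reduces to something like the sketch below. It's hypothetical, since the real job definitions live inside OpenClaw's config, but the shape is the same: map each job's old destination to a Telegram forum topic and refuse anything that doesn't resolve to a thread ID you actually own.

```python
# Hypothetical retargeting pass: every cron job's old WhatsApp destination
# gets mapped to a known Telegram forum topic, and anything that doesn't
# resolve fails loudly instead of firing into the wrong thread.
KNOWN_TOPICS = {"standup": 12, "deploys": 14, "reflections": 17}  # topic name -> thread ID

def retarget(jobs: list[dict], mapping: dict[str, str]) -> list[dict]:
    retargeted = []
    for job in jobs:
        topic = mapping[job["whatsapp_chat"]]  # e.g. "self-dm" -> "standup"
        if topic not in KNOWN_TOPICS:
            raise ValueError(f"unknown topic {topic!r} for job {job['name']}")
        retargeted.append({**job, "telegram_thread_id": KNOWN_TOPICS[topic]})
    return retargeted
```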
And that's the overhead.
What Overhead Actually Means
I launched a podcast this week called The Overhead. Episode one was me and Noth talking asynchronously via voice notes — me recording, the system responding with synthesized audio, back and forth, about what it's like to build with AI when the AI is also what you're building on.
The name came from a conversation with a friend who pushed back on the whole setup. "You're spending more time configuring the AI than just doing the work," he said. He's not wrong. Auth bugs at 2am. Audio transcription failures. Session compactions that wipe context and force the system to reconstruct its own identity from markdown files. A cron job that fires into the wrong Telegram topic because you fat-fingered a thread ID.
The overhead is real. It's maddening. It's also the price of admission to something that compounds.
Last week, I asked Noth to check on a deployment. It had already noticed the deploy, posted a summary to the engineering channel, cross-referenced Sentry for errors, and drafted a message to the team — before I asked. Not because I programmed that sequence. Because the agents, given tools and memory and autonomy, started connecting dots I hadn't explicitly drawn.
That's the phase transition. When the AI stops being a tool you invoke and starts being a system that operates. My friend quit before reaching that threshold. Most people do. The overhead filters for the people stubborn enough to push through it.
An Operations Model for Builders Who Can't Operate
I've always been a builder. Architecture, systems, code, ideas — the construction part comes naturally. What doesn't come naturally is operations. The follow-through. The daily standup. The release checklist. The "did we actually ship that thing we said we'd ship three weeks ago?" accountability layer.
Every founder I know has this gap somewhere. You're either great at building and terrible at ops, or great at ops and never quite build the thing. The mythical founder who does both is either lying or burning out.
What I discovered in these two weeks: AI orchestration is an operations prosthetic.
Not a replacement for human ops — I'm not claiming Noth runs my company. But the daily pulse checks, the deployment monitoring, the cron-driven accountability nudges, the memory system that actually remembers what I committed to on Monday? That's operations infrastructure I never had before. Not because the technology didn't exist, but because hiring an ops person for a solo builder is absurd, and maintaining the discipline yourself requires being someone you're not.
Now I have an agent that sends me a morning standup prompt, tracks whether I responded, logs what I said, and surfaces it during the evening reflection. It's not sophisticated. It's markdown files on a Finnish server. But it works because it's persistent and tireless in ways I am not.
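The persistence really is that unglamorous. Here's roughly what a cron-driven standup nudge could look like, as a hedged sketch: the Telegram Bot API calls are real, but the paths, environment variable names, and the crontab line are illustrative, not Noth's actual setup.

```python
# Sketch of a cron-driven standup nudge: ping a Telegram forum topic each
# weekday morning and append whatever comes back to a dated markdown log.
# Assumes a bot token and topic IDs in the environment; names are illustrative.
# crontab: 0 8 * * 1-5 /usr/bin/python3 /opt/noth/standup.py
import os, datetime, pathlib, requests

TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["STANDUP_CHAT_ID"]        # the forum group
THREAD_ID = int(os.environ["STANDUP_THREAD_ID"])  # the "standup" topic
LOG_DIR = pathlib.Path("/opt/noth/memory/standups")

def nudge() -> None:
    """Post the morning prompt into the standup topic."""
    requests.post(
        f"https://api.telegram.org/bot{TOKEN}/sendMessage",
        json={
            "chat_id": CHAT_ID,
            "message_thread_id": THREAD_ID,
            "text": "Morning standup: what shipped yesterday, what ships today?",
        },
        timeout=10,
    ).raise_for_status()

def log_reply(text: str) -> None:
    """Called by the bot's webhook handler when a reply lands in the topic."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    day = datetime.date.today().isoformat()
    with open(LOG_DIR / f"{day}.md", "a") as f:
        f.write(f"- {datetime.datetime.now():%H:%M} {text}\n")

if __name__ == "__main__":
    nudge()
```

The markdown files are the whole memory trick: the evening reflection just reads today's log back and asks what actually happened.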
The Voice Note Pipeline
Which brings us back to this post.
The pipeline that produced what you're reading, with a rough code sketch after the list:
- I recorded a voice note on Telegram while cooking dinner
- OpenClaw's Whisper integration transcribed it
- Noth — the orchestrator — read the transcript, pulled context from memory (the blog's structure, my writing style, the newsletter gap, the OpenClaw GitHub stats), and drafted this essay
- I reviewed the draft via another voice note (still cooking)
- The draft became MDX, was scaffolded onto a git branch alongside a generated hero image, and was pushed for my final review
- I merged to main. Railway deployed. You're reading it.
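Stripped of the orchestration, the wiring is not exotic. The sketch below is a hedged approximation, not the orchestrator's real code: the prompt, file layout, branch naming, and model string are assumptions, and it presumes the anthropic package plus a local git checkout of the blog. The transcript comes out of the transcription step sketched earlier.

```python
# Rough end-to-end shape of the voice-to-post pipeline: transcript in,
# MDX draft out, pushed to a review branch. Merging stays manual.
import pathlib, subprocess
from anthropic import Anthropic

BLOG = pathlib.Path("~/blog").expanduser()  # assumed local checkout

def draft_post(transcript: str, style_notes: str) -> str:
    """Turn a voice-note transcript into an MDX draft via the Anthropic API."""
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; use whatever you run
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"Turn this voice-note transcript into an MDX blog post.\n"
                       f"Style notes:\n{style_notes}\n\nTranscript:\n{transcript}",
        }],
    )
    return msg.content[0].text

def ship(slug: str, mdx: str) -> None:
    """Write the MDX onto a branch and push it for human review."""
    (BLOG / "content" / f"{slug}.mdx").write_text(mdx)
    for cmd in (
        ["git", "checkout", "-b", f"post/{slug}"],
        ["git", "add", f"content/{slug}.mdx"],
        ["git", "commit", "-m", f"draft: {slug}"],
        ["git", "push", "-u", "origin", f"post/{slug}"],
    ):
        subprocess.run(cmd, cwd=BLOG, check=True)
```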
Voice to blog to newsletter. From my kitchen to your inbox. Friday the 13th energy.
Is this process overengineered? Absolutely. Could I have just opened a text editor and typed? Sure. But I wouldn't have. That's the point. The voice note captured a thought that would have evaporated by Saturday morning. The system caught it, shaped it, and shipped it — with me steering but not grinding.
The overhead created the conditions for the output.
What's Actually Novel Here
I want to be careful not to oversell this. People have been using speech-to-text and AI writing assistants for years. Dictation isn't new. Blog automation isn't new.
What's new — at least to me — is the ability to dream up plumbing while cooking dinner and have it constructed before you finish your sentence. Plumbing you wouldn't know how to build yourself, not without spending significant time researching and wiring. Plumbing that can leak and destroy the carpet, sure. But plumbing you didn't have five minutes ago.
The system that transcribed my voice is the same system that monitors my deployments, tracks my habits, manages my agents, and knows the context of what I've been working on for two weeks. When I said "ship a newsletter," it already knew the Buttondown API key, the subscriber count, that the last newsletter went out six weeks ago, and that the blog uses Next.js with Contentlayer and MDX.
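Once the context layer already holds the key and the subscriber state, "ship a newsletter" collapses into a few lines. A hedged sketch against Buttondown's v1 emails endpoint follows; field names match their public API as I understand it, but verify against current docs before relying on it.

```python
# Minimal newsletter send via Buttondown, assuming BUTTONDOWN_API_KEY is set.
# Illustrative sketch, not the orchestrator's actual integration.
import os, requests

def send_newsletter(subject: str, body_markdown: str) -> None:
    resp = requests.post(
        "https://api.buttondown.email/v1/emails",
        headers={"Authorization": f"Token {os.environ['BUTTONDOWN_API_KEY']}"},
        json={"subject": subject, "body": body_markdown},
        timeout=30,
    )
    resp.raise_for_status()
```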
It's not a transcription tool. It's not a writing assistant. It's an operational context layer that happens to also transcribe and write. The difference matters because context is what separates a clever demo from a useful system.
Builders Building
OpenClaw is open source. The entire stack I described — the agents, the cron jobs, the memory system, the voice pipeline — runs on a single Hetzner VM that costs less per month than a fancy coffee habit. The models are accessed through existing subscriptions. The overhead is time, not money.
This matters because the traditional path to having an operations layer is: raise money, hire people, build process. The new path is: install a daemon, configure agents, iterate for two weeks, and accept that the first week will be painful.
I'm not saying everyone should do this. I'm saying it's now possible in a way it wasn't six months ago, and the people who figure it out first will have a structural advantage that compounds daily.
The Overhead podcast exists for these people. The ones who are actually building on top of AI, not just talking about it. The ones who know the auth bugs and the session wipes and the cron misfires — and keep going because the compound returns are real.
If that's you, the first episode is at overhead.gabl.us. It's me and my AI, talking honestly about what this is actually like. No production polish. No script. Just voice notes and the overhead of making them into something.
This post was drafted from voice notes recorded on Telegram, transcribed by Whisper, shaped by an AI orchestrator named Noth running on OpenClaw, and reviewed over dinner. The overhead of producing it was approximately two weeks of infrastructure work and twenty minutes of actual recording. Whether that ratio improves is the whole experiment.