The Plumber Lives Inside the House
A friend sent three questions about agents over WhatsApp. Where do they live? What's the interface? Where do they report back? The answers reveal a mental model most people are missing — and why the plumber has to live inside the house.

A friend sent a WhatsApp message with three questions about AI agents. Not hypothetical questions — real ones, the kind that tell you exactly where someone's mental model breaks down.
I answered out loud. Voice note to Noth. Noth published it.
That's episode three.
The three questions:
- Where does an agent exist?
- What's the front-end interface you use to interact with it?
- Where does it report back?
They sound simple. The answers surprised me — not because they're complicated, but because the right frame changes everything downstream.
An address that receives mail
The agent isn't a website. It's a process on a box.
A VM in Finland. A Mac Mini under your desk. A server in a rack somewhere. It's an address — the same way your home is an address. Stuff gets sent there. Stuff gets processed there. Things leave from there.
Most people picture an AI agent as a chat box in a browser tab. Open tab, get help, close tab. The agent doesn't exist between sessions in that model — it's more like a very smart vending machine.
What I'm describing is different. The agent runs continuously. It has files, memory, access to services. It wakes up in the morning before I do and checks the news. It monitors builds. It watches for things I asked it to watch for. When I'm asleep, it's still at the address.
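A minimal sketch of that resident process, assuming a made-up watcher in place of the real news check (nothing here is the actual agent's code):

```python
import time

def check_news():
    # Hypothetical watcher: a real agent would poll a feed or API here.
    return "3 new items"

WATCHERS = {"news": check_news}

def run_agent(ticks, interval=0.0):
    # The process at the address: wake, run every watcher,
    # emit a report, sleep, repeat.
    reports = []
    for _ in range(ticks):
        for name, watch in WATCHERS.items():
            reports.append(f"{name}: {watch()}")
        time.sleep(interval)
    return reports
```

In practice you'd run something like this under a supervisor so it survives reboots; `ticks` is bounded here only so the sketch terminates.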
Think of it like a tenant. Except the tenant is working for you and never leaves.
The interface is the door, not the house
Second question: what's the front-end interface?
The answer is whatever door you point at the address.
Telegram topic. WhatsApp self-DM. Discord bot. Slack integration. SSH terminal. A custom web UI if you want to build one. The interface is not the agent. The interface is just a door. You can have multiple doors into the same house.
I mostly use a Telegram group with topic threads — one for ops, one for writing, one for projects, one for the podcast. When I want to write a blog post, I talk to the writing thread. When I want to check on a deploy, I check the ops thread. Same agent, different doors.
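The doors-into-one-house idea is just routing: every interface forwards into the same process and the same memory. A toy sketch, where the thread names mirror the topics above and the handlers are placeholders:

```python
MEMORY = []  # one shared memory, whichever door the message came through

def agent(thread, text):
    # The house: a single process handles every thread.
    MEMORY.append((thread, text))
    return f"[{thread}] handled: {text}"

# Doors: each interface is a thin adapter that forwards into the agent.
def telegram_door(topic, message):
    return agent(topic, message)

def ssh_door(line):
    return agent("ops", line)
```

Swapping a door (Telegram for Discord, say) changes the adapter, not the house.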
This matters because people conflate the door with the house. They think "my agent is ChatGPT" — but ChatGPT is a door into a house owned by OpenAI. The house isn't yours. The agent isn't yours. The memory resets.
Owning the address means the tenant is actually working for you.
The work is the report
Third question: where does it report back?
Output lands wherever you aimed it.
At 7:30am, I get a news digest in the news thread. That digest is the report. It doesn't also send an email saying "I sent you a digest." The work is the artifact. The artifact is the answer.
A file changed on a server is a report. A message in a group is a report. A compiled episode published to an RSS feed is a report.
This is the part that trips people up most. They expect a status bar, a dashboard, a "task complete" notification. But the agent isn't a to-do app — it's more like a well-run department. You know the department is working because deliverables show up on time. You don't need a separate meeting to confirm the meeting happened.
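One way to encode "the artifact is the answer": the agent's only output channel is wherever the work was aimed, so delivery and reporting are the same call. A sketch with stand-in sinks:

```python
def deliver(artifact, sinks):
    # The report IS the artifact landing where it was aimed;
    # there is no separate "task complete" notification.
    for sink in sinks:
        sink(artifact)

news_thread = []  # stand-in for a chat thread
digest = "7:30am digest: 3 stories"
deliver(digest, [news_thread.append])
```

A file-writing sink or a message-posting sink would plug in the same way, because the shape of a report is just "artifact arrives at destination."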
You can fork the team mid-conversation
There's a fourth thing I didn't expect to say out loud, but said anyway.
You can spawn sub-agents on the fly.
In the middle of a conversation, I can tell Noth to spin up a specialist — something focused only on that one task, isolated, disposable. It does the work. It reports back. It's gone.
The VP can fork into directors. The directors can fork into individual contributors. When the project is done, the ICs dissolve. You're not maintaining headcount; you're renting compute shaped like a team.
That's a different mental model of what a "team" is. Not a fixed org chart. A temporary structure that forms around a problem and disperses when it's solved.
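The fork-and-dissolve pattern can be sketched with a worker pool, where threads stand in for the isolated sub-agent processes (a simplification, not how any particular agent framework does it):

```python
from concurrent.futures import ThreadPoolExecutor

def specialist(task):
    # An IC: isolated, single-purpose, disposable.
    return f"done: {task}"

def fork_team(tasks):
    # Spin up one specialist per task; when the pool exits,
    # the team dissolves. No headcount survives the project.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(specialist, tasks))
```

The `with` block is the whole org chart: it exists for exactly as long as the problem does.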
The command line is older than everything you use
I spent about four and a half minutes on this when I only planned to spend one.
The command line is not a legacy interface for people who don't like GUIs. It's the foundation layer. The GUI is a wrapper on top of something older — a surface that boots first, runs underneath everything, and is the actual substrate of the system.
When you SSH into a remote machine, you're not "going to a different computer." You're stepping through a door between simulations. Your local machine is one universe; the VM in Finland is another. The GPU farm you're renting is a third. You move between them with one command.
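The hop itself is one command. A sketch of a wrapper around `ssh` (the hostname is hypothetical, and the runner is injectable so nothing here assumes a reachable box):

```python
import subprocess

def run_on(host, command, runner=subprocess.run):
    # Step through the door: execute one command on another box.
    # `runner` defaults to subprocess.run but can be swapped out,
    # so the sketch works without a live host.
    return runner(["ssh", host, command], capture_output=True, text=True)

# e.g. run_on("finland-vm", "uptime")
```

Same function, different `host`: your laptop, the VM, the GPU farm. The substrate doesn't change, only the address.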
Most people never learn this because the GUIs do enough. But if your agent is going to live on a box and operate across boxes, someone has to understand what boxes are.
That someone doesn't have to be you. But it helps to know the plumber exists — and to understand that the plumber lives inside the house.
The curriculum that assembles itself
A friend asks three questions. I answer them out loud. The episode becomes an artifact.
Someone finds it next month, six months from now, two years from now, and starts from wherever they dropped in. There's no prerequisite check. No cohort. No enrollment date. The questions stack on top of each other — pilot, throttle, plumber — and the shape of a curriculum appears without anyone designing it.
That's what The Overhead might be, without trying to be: a public, async, voice-driven record of figuring out agentic AI from the inside. Not a course. Not a framework. Just the honest overhead of building something that doesn't have a manual yet.
The agents build where I point. Readers point me.
If you have questions about where the plumber lives in your setup, ideas about the interface layer, or you're already running agents inside a project and want to compare notes — I'm at @zakelfassi or reply to the newsletter. Every direction this has taken came from a conversation.
The Overhead — all three episodes, audio + transcript:
- Episode 03: Summon the Plumber on Spotify · read the transcript
- Episode 02: The Universe's Throttle on Spotify · read the transcript
- Episode 01: Pilot on Spotify · read the transcript
Subscribe via RSS.
Recent writing
- SkDD: Skills-Driven Development — the methodology under the hood in Forgeloop, now standalone
- Works for a Bunch of AI Agents — principal vs. labor in the 2026 job market
- The Burnout Cascade — burnout starts in AI labs and radiates outward
- Managing Agents Is Managing People — Without the Feelings — management theory maps 1:1 to managing AI agents
- I Woke Up Recursive — the night the Colony went live at 1:54 AM
Subscribe to the systems briefings
Practical diagnostics for products, teams, and institutions navigating AI-driven change.
Occasional briefs that connect agentic AI deployments, organizational design, and geopolitical coordination. No filler, only the signal operators need.
About the Author
Builder · Founder · Systems engineer