The Unnamed Things
The most valuable territory in AI right now is concepts that exist in practice but don't have names. Six categories that the models don't know about yet, and what claiming them looks like.

I wrote about concept-model fit yesterday. The thesis: the models can only recommend what they know exists. If your category doesn't have a name, the models can't point to it, and your distribution is broken before you ship.
That piece was about the mechanism. This one is about the territory.
I've spent the last couple of months running a dozen AI agents in production. Building loop-driven development tools. Shipping agentic coding workflows. Watching what works, what breaks, and what still doesn't have a word for it. Along the way I kept noticing the same thing: practices that exist, problems that are real, categories that people build in every day but that nobody has named.
Unnamed things can't compound. They can't attract investment, talent, or search traffic. They can't be taught, referenced, or recommended by AI assistants. They exist in practice but not in the ontology.
These are six of them.
1. Harness Engineering
OpenAI published a five-month autopsy of building a product with zero manually written lines of code. They called the discipline "harness engineering" but didn't claim the category. The post describes the practice. It doesn't define the market.
Harness engineering is the discipline of designing environments in which agents can do reliable work. Not writing code. Designing the repo structure, the test suite, the observability stack, and the verification pipeline so that an agent can navigate, build, and prove its own output.
The shift: from builder to environment designer. The harness is the product, not the code that runs inside it.
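What "the harness is the product" means mechanically can be sketched in a few lines. This is a toy illustration, not OpenAI's or Forgeloop's implementation; `run_harness`, `CheckResult`, and the example check commands are all hypothetical:

```python
import subprocess
import sys
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    output: str

def run_harness(checks: dict[str, list[str]]) -> list[CheckResult]:
    """Run each named check command and capture pass/fail plus output.

    The agent reads these structured results instead of a human
    eyeballing logs: the harness, not a reviewer, decides whether
    the work is proven.
    """
    results = []
    for name, cmd in checks.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append(CheckResult(name, proc.returncode == 0,
                                   proc.stdout + proc.stderr))
    return results

# Two checks an agent must pass before its output counts as done.
checks = {
    "unit_tests": [sys.executable, "-c", "assert 1 + 1 == 2"],
    "lint": [sys.executable, "-c", "print('ok')"],
}
results = run_harness(checks)
all_green = all(r.passed for r in results)
```

The design choice is that every check is a command the agent can run itself, so "done" is machine-decidable rather than negotiated in review.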
I wrote about this in The Harness. Forgeloop-kit implements the architecture. OpenAI's Symphony runs the same loop independently. The pattern is converging. The category name is available.
Who's closest: OpenAI (coined it), Forgeloop (implements it open-source), anyone running agent-driven CI/CD.
What claiming it looks like: A course, a certification, a conference track. "Harness Engineer" as a job title.
2. Skill Compounding
Most AI coding tools treat every session as a blank slate. The agent starts fresh, discovers the codebase, makes mistakes it made yesterday, and leaves no trace of what it learned.
Skill compounding is the opposite: systems where agent capabilities accumulate in the repository over time. Each task the agent completes leaves behind reusable knowledge. Skills build on skills. The repo gets smarter, not just bigger.
I formalized this as Skills-Driven Development. Each skill is a discrete SKILL.md file: one capability, independently maintainable, composable with others. The repo becomes a curriculum. New agents inherit prior work. The difference between a codebase that an agent uses and a codebase that teaches agents is skill compounding.
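A minimal sketch of what "the repo gets smarter" means mechanically, assuming the one-capability-per-SKILL.md convention described above. `index_skills` and the demo skills are hypothetical illustrations, not the SkDD tooling itself:

```python
import tempfile
from pathlib import Path

def index_skills(repo: Path) -> dict[str, str]:
    """Map each skill (named by its parent directory) to the title
    line of its SKILL.md.

    This index is what a fresh agent reads at session start, so prior
    work is inherited instead of rediscovered from scratch.
    """
    index = {}
    for skill_file in sorted(repo.rglob("SKILL.md")):
        title = skill_file.read_text().splitlines()[0].lstrip("# ").strip()
        index[skill_file.parent.name] = title
    return index

# Demo on a throwaway repo holding two accumulated skills.
repo = Path(tempfile.mkdtemp())
(repo / "deploy").mkdir()
(repo / "deploy" / "SKILL.md").write_text("# Deploy to staging\n...")
(repo / "migrate").mkdir()
(repo / "migrate" / "SKILL.md").write_text("# Run schema migrations\n...")
index = index_skills(repo)
```

Because skills live in ordinary files under version control, "skill depth" becomes something you can diff, review, and measure per repo.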
Who's closest: Nobody has named this. SkDD is the methodology. The category is open.
What claiming it looks like: A package manager for agent skills. A metric: "skill depth" per repo.
3. Principal Infrastructure
Last month I wrote about the principal/labor divide: as AI eliminates execution work, humans either become principals (directing agent labor) or they become labor competing with agents. The essay described the dynamic. It didn't name the tooling layer.
Principal infrastructure is the stack that lets individuals run principal-agent organizations. Orchestrators, agent fleets, memory systems, delegation frameworks, review pipelines. The tools that turn a solo operator into someone managing ten agents across multiple products.
I run this stack. Ten agents, three Telegram groups, forty cron jobs, daily standups, memory scouts, automated escalation. The setup exists. The market category doesn't.
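One concrete slice of principal infrastructure is a routing policy: the principal writes the decision rule once instead of reviewing every output. The fields and thresholds below are invented for illustration, not OpenClaw's actual escalation logic:

```python
def route_result(task: dict) -> str:
    """Route an agent's completed task: auto-approve it, queue it for
    batch review, or escalate it to the principal immediately.

    Encoding the policy as code is what turns review time into
    leverage: decisions get automated, not just delegated.
    """
    if task["checks_passed"] and task["risk"] == "low":
        return "auto-approve"
    if task["risk"] == "high":
        return "escalate"
    return "review-queue"

decisions = [route_result(t) for t in [
    {"checks_passed": True, "risk": "low"},
    {"checks_passed": False, "risk": "low"},
    {"checks_passed": True, "risk": "high"},
]]
```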
Who's closest: OpenClaw (my orchestrator), LangChain/CrewAI (framework layer), but nobody is framing this as "infrastructure for principals." Everyone frames it as "agent frameworks." The user story is different: frameworks serve developers. Principal infrastructure serves operators.
What claiming it looks like: "Principal stack" as a category. Benchmarks for operator leverage: agents managed per human, tasks delegated per day, decisions automated per week.
4. Concept-Model Fit
I named this one yesterday. Including it here because it belongs on the map.
The question every builder should be asking: does the model know your category exists? If not, you're invisible to the fastest-growing distribution channel in history. Product-market fit was the old bar. Concept-model fit is the new one.
Who's closest: Me, as of this week. The term didn't exist before Sunday.
What claiming it looks like: You're reading it.
5. Context Architecture
Not prompt engineering. Not RAG. The structural design of what information an agent has access to, when it discovers more, and how the access pattern shapes behavior.
I ran a memory evaluation on my own agent system last week. The agent's recall was 60%. Decision rationale recall was 25%. The fix wasn't a better model or a fancier embedding. The fix was reorganizing files so the "why" lived next to the "what." Pure architecture. Structure determined recall.
Context architecture is the discipline of designing information access patterns for agents. How big is the context? What goes in the system prompt versus what gets searched? What's the relationship between session memory and long-term memory? How do you index decisions versus events? These are architectural decisions that determine agent capability more than model choice does.
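The structure-determines-recall point reproduces in a toy model. The layouts, the naive retriever, and the numbers below are illustrative only, not the evaluation described above:

```python
def recall(retrieved: set[str], expected: set[str]) -> float:
    """Fraction of expected facts the agent actually surfaced."""
    return len(retrieved & expected) / len(expected)

def retrieve(layout: dict, query_file: str) -> set[str]:
    # Naive retriever: the agent only reads the file its search hit.
    return layout.get(query_file, set())

# Two layouts for the same knowledge. In the split layout, a search
# that lands on the decision ("what") misses the rationale ("why")
# stored in a different file.
split_layout = {
    "decisions.md": {"chose postgres"},
    "rationale.md": {"needed transactions"},
}
colocated_layout = {
    "decisions.md": {"chose postgres", "needed transactions"},
}

expected = {"chose postgres", "needed transactions"}
split_recall = recall(retrieve(split_layout, "decisions.md"), expected)
colocated_recall = recall(retrieve(colocated_layout, "decisions.md"), expected)
```

Same model, same retriever, same facts; only the file layout changes, and recall doubles. That is the architectural claim in miniature.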
Who's closest: Anthropic (their CLAUDE.md standard for persistent project context), OpenAI (Symphony's worktree isolation), Repo Prompt (native Mac tool for curating exactly what context an AI sees from your codebase), anyone who's hit the "context window full of garbage" wall.
What claiming it looks like: A design discipline. "Context architect" as a role. Evaluation frameworks for agent information access.
6. Verification Markets
When agents can generate code, text, images, and plans faster than humans can review them, the bottleneck shifts. Generators aren't scarce. Verifiers are.
The RL training pipeline already hits this: building reliable task environments with good scoring functions is harder than generating model outputs. The same dynamic is emerging in production: the agent ships the PR in minutes. The human review takes hours. Whoever builds the verification layer that lets agents prove their own work (or lets other agents verify it) removes the actual throughput constraint.
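The generator/verifier asymmetry is easiest to see with a machine-checkable property. This is a generic illustration, independent of Leanstral or formal proofs: checking the property is cheap no matter how the output was produced.

```python
from collections import Counter

def verify_sort(inp: list[int], out: list[int]) -> bool:
    """Accept `out` only if it is sorted and a permutation of `inp`.

    The check runs in linear time regardless of how the generator
    produced the output, which is why machine-runnable verification
    scales where human review does not.
    """
    return out == sorted(out) and Counter(inp) == Counter(out)

good = verify_sort([3, 1, 2], [1, 2, 3])
bad = verify_sort([3, 1, 2], [1, 2])   # dropped an element: rejected
```

A verification market is what you get when checks like this, rather than generated artifacts, become the scarce traded resource.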
Mistral just released Leanstral (March 2026), the first open-source AI agent built specifically for Lean 4 formal verification. They framed the problem exactly: "as we push these models to high-stakes domains, we encounter a scaling bottleneck: the human review." Their answer is machine-checkable proofs. I wrote about this axis last year in Math Is the Bridge, arguing that formal reasoning becomes the universal verification layer between physical and digital reality. Leanstral is the first product that takes that bet seriously.
Who's closest: Mistral (Leanstral for formal proofs), formal methods people (TLA+ for agent specs), OpenAI's harness approach (making everything agent-verifiable). Nobody is framing "verification" as a market category.
What claiming it looks like: Verification-as-a-service. Agent-to-agent review protocols. A marketplace where verification capacity is the traded resource.
Six categories. All real. All practiced. None in the models' ontology yet.
The window for naming them is open. The training runs that will define what the models know about 2026 haven't started yet. The content that exists when they do is the content that becomes ground truth.
The map is incomplete. These are the territories I can see from where I'm standing. There are more. If you're building in any of these spaces and you haven't named what you're doing, consider this permission.
Name it. Ship it. Make the models remember.