
Self-Deprecation as Survival: Why Making Yourself Obsolete Is the Only Way Forward

Training your replacement looks like suicide until you realize it's the forcing function that prevents decay. Rick's layoff story and the paradox of voluntary obsolescence.

12 min read

Rick Morales posted a warning on LinkedIn that's been rattling around in my head since I read it. He was part of the Apple team that trained AI to do their jobs. Then they got laid off. His message: "If they're pushing you to use AI for your work, understand this: you're already putting the noose around your neck."

[Image: Rick Morales's LinkedIn post warning about training AI replacements]

Rick's warning hit a nerve—the 2025 betrayal narrative in its purest form.

The metaphor is visceral. Train your replacement, watch it learn your patterns, then watch it take your seat while you're escorted out. It's the betrayal narrative of the 2020s, and it's not wrong—corporations are optimization machines. When a tireless, cheaper version of you becomes viable, the math is simple: you become a liability.

And yet.

Throughout my career, in startups and bigco alike (and even at home and with friends...), I've had one obsessive practice: making myself obsolete. Every playbook documented, every decision framework externalized, every piece of specialized knowledge I held turned into a system that could run without me. I've trained people to replace me, built tools that automated my judgment, and designed processes that made my direct involvement unnecessary.

Same behavior as Rick's team. Different outcome. Why?

The Replication Forcing Function

When you externalize your work—whether into documentation, tools, or another person's head—you create a forcing function for your own evolution. The work that once justified your existence no longer does. You either find the next layer of value you provide, or you decay alongside the function you used to perform.

This is the paradox: the same instinct that kills your role keeps you alive in the next one.

But only if you're running the process intentionally, not having it run on you.

Rick's story reads like the shadow side of this pattern. He trained the AI. The AI learned. The system optimized. He was deprecated by code he helped create. The tragedy isn't that he made himself obsolete—it's that he did it without architecting his own next move.

The corporation didn't betray him. It did exactly what corporations do: maximize efficiency within the current optimization landscape. The question isn't "will the machine replace me?"—that's already decided. The question is: what are you becoming while the machine learns what you were?

Voluntary Evolution vs. Forced Deprecation

There are two paths that start from the same place: training your replacement.

Path 1: Forced Deprecation

You train the system because you're told to. You focus on making the handoff seamless, on being a good team player, on "embracing AI." The system learns. You're still performing the same function, but now so is the AI—faster, cheaper, more scalable. Eventually the math tips: why pay you when the machine already knows the patterns?

You're fired. You're angry. You write a LinkedIn post warning others.

Path 2: Voluntary Evolution

You train the system because you see it as liberation from your current function. You focus on making the handoff seamless, and you focus on what becomes possible when you're no longer bottlenecked by routine execution. The system learns. You're no longer performing the same function—you're building the system that builds the systems, or spotting patterns the AI can't see, or architecting problems worth solving.

The function you trained away was holding you back. Now you're free to operate at the next level.

Same action. Radically different intent. Radically different outcome.

What Self-Deprecation Actually Looks Like

This isn't abstract philosophy. I've run this pattern repeatedly:

Throughout every startup I've ever touched, as co-founder or advisor, the pattern repeats: months spent documenting every decision framework, every customer insight, every operational pattern, then turning it all into training systems. Within a year, most decisions happen without me.

What happened to me? Did I become obsolete?

No. I became free to work on product architecture and systems design—and at times, totally different projects. The replication forced me up the abstraction ladder, or sometimes sideways into entirely new territories.

Leading teams at Meta (see my earlier reflection on those five years), I learned that "Embrace The Bad" included embracing the moment your skillset became commoditized. The engineers who survived waves of tooling changes and org shifts weren't the ones who protected their specialized knowledge—they were the ones who built the tools that obsoleted their specialization, then moved to the next problem.

This is what I mean by information-centric leadership: you're not hoarding knowledge as power, you're distributing it as fast as possible so you can discover what you're capable of beyond the knowledge you currently hold.

Every time I've made myself obsolete in a role, I've been forced to ask: what's the next version of value I can provide? That question is the engine of growth. When you're still necessary in your old function, you never have to answer it.

Worth noting: at some meta-meta-meta point, this question eventually bites you in the ass and you have to go full existential. What's the next version of value I can provide when everything I know how to do can be systematized? Hence, maybe, this blog—an exploration of questions I don't yet have systems for.

The System Doesn't Care About Your Intent

The twist—and this is crucial—is that the corporation can't tell the difference between Path 1 and Path 2. It just sees: task automated, efficiency gained, cost reduced. Whether you're playing 4D chess with your career evolution or passively training your executioner, the immediate outcome looks the same from the optimization algorithm's perspective.

This is why Rick's warning isn't wrong. If you're training AI to do your job and you haven't architected what you become after that job is gone—which at the very base layer includes using that AI to do your job better/faster/cheaper/more reliably—you're in danger. The system will optimize you out. It doesn't care about your growth journey or your self-actualization. It cares about cost per unit of output.
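To make "cost per unit of output" concrete, here's a toy sketch of the break-even comparison the system is implicitly running. Every number is invented for illustration; the point is the shape of the math, not the values:

```python
# Toy model of the comparison an optimization machine implicitly runs.
# All numbers are invented for illustration; real inputs vary wildly.

def cost_per_unit(annual_cost: float, units_per_year: float) -> float:
    """Cost per unit of output: the only metric the system sees."""
    return annual_cost / units_per_year

human = cost_per_unit(annual_cost=150_000, units_per_year=2_000)  # you
ai = cost_per_unit(annual_cost=30_000, units_per_year=20_000)     # the model you trained

print(f"human: ${human:.2f}/unit, ai: ${ai:.2f}/unit")
# The moment ai < human (and quality clears the bar), the math has tipped.
# Nothing in this function knows or cares about your intent.
```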

But the thing is: the system was always going to optimize you out eventually. That's what systems do. They find inefficiencies and eliminate them. You were always going to face obsolescence—whether from AI, or offshoring, or younger workers willing to do your job for half the cost, or simple technological change.

The only choice you ever had was: deprecate yourself voluntarily and control the evolution, or get deprecated by forces outside your control.

Rick chose the latter, maybe without realizing there was a choice. The system made the choice for him. That's the tragedy.

Becoming the System That Builds the Systems

The survival strategy isn't to avoid training your replacement. It's to become the person who architects what comes after your replacement exists.

This requires a fundamental shift in how you see your role. You're not a function-performer. You're not even a knowledge-worker in the traditional sense. You're an evolutionary node in a larger system, and your job is to keep evolving faster than the system can optimize you out.

Practically, this means:

1. Document compulsively. Every pattern you notice, every decision framework you use, every insight you gain—externalize it. Make it teachable. Make it systematizable. Not because you're trying to get fired, but because anything you can't externalize is a sign you don't understand it well enough yet.

2. Train your replacement before anyone asks. Don't wait for the reorg or the AI pilot program. If someone (or something) can learn to do your job, teach them. The sooner your current function is replicated, the sooner you're forced to find your next one.

3. Watch what becomes possible when you're not doing the old thing. The space that opens up when you're no longer bottlenecked by execution—that's where your next value lives. Pay attention to what you're curious about in that space. That curiosity is directional.

4. Build systems, not just output. Every task you do repeatedly is a system waiting to be designed. When you build the system instead of just executing the task, you're moving from operator to architect. Architects don't get deprecated the same way operators do—they get asked to architect the next thing. (A minimal sketch of this move follows this list.)

5. Stay adjacent to the automation. This is from my stance on AI use: I use AI as leverage, not replacement. I'm building with the tools that are automating knowledge work, which means I understand how they work, where they fail, what they enable. That adjacency is protective—you can't easily replace someone who's building the replacement.
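To ground item 4, here's a deliberately tiny example of the operator-to-architect move: a report someone might once have assembled by hand every Friday, turned into a function anyone (or anything) can run. The file name and column are hypothetical placeholders, not a real workflow:

```python
# Minimal sketch of "build systems, not just output".
# The CSV path and column name are hypothetical placeholders.
import csv
from pathlib import Path

def weekly_summary(metrics_csv: Path) -> str:
    """The Friday report, externalized: readable, runnable, and no longer mine."""
    with metrics_csv.open() as f:
        rows = list(csv.DictReader(f))
    total = sum(float(row["value"]) for row in rows)
    return f"{len(rows)} entries this week, total value {total:.2f}"

if __name__ == "__main__":
    print(weekly_summary(Path("metrics.csv")))  # hypothetical input file
```

The code itself is trivial; that's the point. The value isn't the output, it's that the task now exists outside your head.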

The Meta Layer: AI Training Itself

There's a recursive dimension here that Rick's story highlights. When AI learns from humans doing their jobs, then humans get laid off, then new humans train the next version of AI... we're watching meta-learning eat itself. The system is learning to learn. The question is whether you're learning faster than the system is learning from you.
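If you want to feel how unforgiving that race is, here's a throwaway sketch with made-up starting points and growth rates; the only real claim is that compounding differences decide it quickly:

```python
# Toy race: your learning vs. the system learning from you.
# Starting points and rates are invented; only the compounding is real.
you, system = 1.0, 0.2              # relative capability, arbitrary units
you_rate, system_rate = 1.05, 1.15  # growth per cycle

cycle = 0
while you > system:
    you *= you_rate
    system *= system_rate
    cycle += 1

print(f"with these invented rates, the system catches up at cycle {cycle}")
```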

This is fundamentally a debugger-level problem. The system is running. It's optimizing. You can't stop it or even slow it down meaningfully. Your only leverage is understanding how it works and positioning yourself as part of the optimization function rather than part of the thing being optimized.

Rick was optimized out. The AI team he trained was optimized in. Apple's system did what systems do: found a more efficient configuration. Rick's mistake—if we can call it that without blaming the victim of a broader systemic pattern—was positioning himself as the thing being automated rather than as the architect of the automation.

Same building. Different floor. Completely different outcome.

The Failure Mode

Let's talk about the risk, because this isn't a guaranteed win strategy. The failure mode is real: you make yourself obsolete, but you don't successfully evolve into the next role. Maybe the next layer of value you hoped to provide isn't actually valued by the organization. Maybe you misread what skills would matter. Maybe the reorg came before your evolution completed.

This happens. This is a risk.

But compare it to the alternative: clinging to your current function while it slowly (or quickly) gets automated/offshored/commoditized. That path has a 100% failure rate. The function will eventually be replaced, and you'll be holding onto a job that no longer exists.

At least with voluntary obsolescence, you're building optionality. Every time you successfully make yourself obsolete and evolve to the next role, you're proving to yourself and the market that you can do it. That becomes a meta-skill: the ability to continuously reinvent your value proposition.

That meta-skill might be the only durable career asset in a world where AI is eating everything.

The Choice Architecture

So what do you do with Rick's warning?

You take it seriously. You recognize that training AI to do your job is a threat if you don't control the evolution that follows. The noose metaphor is apt for passive participation in your own obsolescence.

But you also recognize that the alternative—refusing to train AI, protecting your specialized knowledge, trying to make yourself indispensable in your current function—is a slow-motion version of the same outcome. The system will find another way to optimize you out.

The only winning move is: accelerate your own obsolescence and architect what comes after.

Make yourself obsolete before the system does it for you. Train your replacement with one hand while building your next role with the other. Treat every externalization of your knowledge as a liberation, not a threat.

The corporations will do what corporations do: optimize for efficiency. The AI will do what AI does: learn patterns and execute them at scale. The question isn't whether these forces will reshape your role—they will. The question is whether you're shaping the transformation or being shaped by it.

Rick's story is a warning about passive obsolescence. Let it be a reminder: the same pattern, run with different intent, becomes a completely different story.

Train your replacement. Document your knowledge. Build systems that outlive your direct involvement. Then watch what becomes possible when you're no longer bottlenecked by the work that used to define you.

The replication forces the evolution. The evolution is the only survival strategy.

Everything else is just waiting for the math to catch up with you.


Late-night thoughts from an Oakland living room. The machines are reading. Few humans still do.


About the Author

Zak El Fassi

Engineer-philosopher · Systems gardener · Digital consciousness architect
