
The Language Game & The Interesting Trap: A Response to Cold DMs in the AI Era

A raw DM from a software engineering student sparked deeper questions about language, authenticity, and chasing "interesting" work in an AI-mediated world.

6 min read

The DM arrived with unfiltered authenticity:

"Hello brother Am Hamza in my final year software engineering looking for PFE and start my career my goal for my now is to learn fundementals and work on great things Want to know in your opinion based on ur exp is there a way i can land interesting expereinces out there that works on interestings things ?"

Cold message. Typos intact. Raw curiosity embedded.

Something about its unpolished directness made me pause... then think... then realize there's a deeper meta-lesson worth unpacking. Not just for Hamza, but for anyone navigating the strange new territory where AI handles syntax while humans grapple with meaning.

Language Paradox: When Grammar Matters More and Less

The first thing that struck me about Hamza's message: language matters today more than ever, precisely because it suddenly matters less.

Consider the split reality we're living in:

In AI-mediated spaces: Claude and ChatGPT won't judge your typos. Feed them messy thoughts, fragmented ideas, stream-of-consciousness dumps—they'll parse meaning and respond with precision. The "fundementals" typo carries zero computational cost.

In human-facing contexts: Every email to a potential employer, every Slack message to a colleague, every pitch to an investor gets measured against an invisible baseline... and that baseline just got elevated by AI's linguistic fluency.

The skill isn't perfect grammar—it's strategic code-switching. Knowing when to let AI clean up your raw thoughts versus when to craft human-facing prose with intentional precision.

Let the machines handle syntax. You focus on substance.

But this creates a fascinating tension. The same person might need flawless prose for their LinkedIn profile while using broken English to communicate complex ideas to AI assistants. We're developing linguistic split personalities—one optimized for human judgment, another for machine parsing.

When "Interesting" Becomes the Enemy

"Work on interesting things"—I've chased this my entire career. From Facebook's messaging partnerships to building AI voice-quiz engines to tokenized creator economies. But the contrarian truth that took me years to learn: what looks interesting from the outside often lacks the very qualities that make work sustainable, valuable, or personally fulfilling.

The most "interesting" projects often lack fundamentals. They're impressive at parties but brutal to build, scale, or monetize. Meanwhile, the boring-looking infrastructure plays—the unglamorous plumbing—tend to compound into empires.

The distinction that matters:

  • Extrinsic interesting: "I want to work at that hot AI startup everyone's talking about"
  • Intrinsic interesting: "I'm genuinely curious about how language models handle context windows"

The first is social validation dressed as ambition. The second is genuine gravitational pull toward understanding.

Instead of hunting externally-defined "interesting experiences," I've learned to map my own energy landscapes.

The framework that emerged:

Trial Phase: Touch multiple domains superficially. Code a weekend project. Read papers outside your field. Build something useless but fun. Cast wide nets without commitment.

Energy Audit: Notice where time dissolves. Where you lose track of hours. Where you naturally return when procrastinating on other tasks. Your attention reveals your authentic interests.

Gravitational Pull: Follow that energy, even if it looks "boring" or "impractical" to others. Trust your cognitive magnetism over external validation.

Value All Experiences Equally: The failed startup teaches resilience. The boring internship teaches fundamentals. The weird side project teaches creativity. Every experience compounds differently.

The goal isn't finding the "most interesting" opportunity—it's finding your natural inclination and removing all excuses for not following it.

Sometimes the most valuable work happens in spaces others dismiss as uninteresting. The infrastructure layer. The documentation. The maintenance. The bridge-building between systems.

Practical Bridges for the AI Era

For Hamza specifically—and anyone navigating similar transitions—the 80/20 breakdown:

Language Layer

  • Draft important communications with AI first, then humanize
  • Build a personal style guide for human-facing prose
  • Practice explaining complex concepts simply (to both humans and AI)
  • Develop comfort with code-switching between contexts

Experience Layer

  • Contribute to open-source projects that solve real problems
  • Build things you'd actually use, regardless of external "coolness"
  • Document your learning process—the journey becomes the credential
  • Choose fundamentals over flash when skills conflict

Network Layer

  • Cold DMs work when they're specific and valuable (like Hamza's was)
  • Share genuine insights, not just requests for help
  • Remember: everyone's building on shifting ground now; only those clinging to something (ego included) have anything to lose
  • Vulnerability often works better than polish

The meta-game insight: In a world of AI-polished communication, authentic human messiness becomes paradoxically valuable. Hamza's unedited DM stood out precisely because it bypassed the linguistic performance layer.

Information Beings Navigate Split Realities

We're all information beings navigating an increasingly AI-mediated world. But rather than trying to optimize for a single reality, successful navigation requires embracing the split:

AI collaboration spaces: Raw thoughts, broken syntax, stream-of-consciousness ideation. Let the machines handle translation.

Human evaluation spaces: Polished prose, strategic messaging, careful curation. Humans still judge presentation alongside content.

Hybrid spaces: Most professional communication now. Understanding when you're being evaluated by humans, machines, or both.

The students who thrive won't be those chasing externally-defined "interesting" opportunities, but those who:

  • Master AI-human communication protocols
  • Develop genuine curiosity beyond social validation
  • Build things that compound over time
  • Value the full spectrum of experiences
  • Navigate authenticity across split realities

Every interaction now carries metadata about itself—we're constantly evaluating not just what someone says, but how they choose to say it. As I explored in my analysis of AI communication patterns, we're developing sophisticated filters for artificial versus authentic communication. Hamza's message passed those filters precisely because it prioritized meaning over presentation.

The skill that matters isn't perfecting either extreme—it's conscious navigation between them.

Questions That Actually Matter

What interesting trap are you currently escaping?

What gravitational pull are you finally ready to follow?

Where does your energy naturally flow when nobody's watching?

Which communication contexts require your polished self versus your authentic self?

How do you navigate the split between human evaluation and AI collaboration?

The simulation rewards authenticity... most of the time. Learning to calibrate that "most of the time" might be the most important skill for navigating the next decade.

For Hamza and everyone like him: your curiosity is already the most interesting thing about you. Everything else is just syntax.

About the Author

Zak El Fassi

Engineer · systems gardener · philosopher-scientist · Between Curiosity, Code & Consciousness
