
Source Code: A Standing Note on AI Use in 2025

I got called AI slop by Hacker News for using technological leverage. Twenty years ago, a teacher accused me of cheating for typing an essay. Same pattern, different tools. Time to state my position on AI use publicly.


My post hit the Hacker News front page Saturday night. By Sunday morning, it was flagged for being "AI slop."

[Screenshot: Hacker News post flagged as "AI slop," October 2025]

A timestamp of 2025 Luddism: the moment leveraging AI tools became grounds for dismissal by people who build leverage for a living.

The irony would be perfect if it weren't so tired. I posted to what should be the most tech-forward audience on the internet—people who build systems for a living, who understand leverage—and got dismissed for using exactly that: leverage.

Twenty years ago, a different version of this happened.

The Teacher Who Couldn't Imagine Buttons

High school natural sciences, 2004. Report due on earthquakes and continental drift. I turned it in typed, printed, organized. Clean.

My teacher asked me to present it to the class. I stood up, started reading.

She interrupted: "Wait. Did you type this?"

"...Yes?"

"You shouldn't be using buttons to do this. I want it written by hand."

Buttons.

The whole class exploded. From that day forward, I was "Buttons." The kid who used technology to cheat at learning about tectonic plates. Later I managed to inject "Dr" in front of it—Dr Buttons—which helped my fragile teenage ego slightly.

The offense wasn't plagiarism. The offense was access. I had a computer at home. I could research on this new thing called the internet. I could edit, reorganize, make it look professional. The technology made me too good, and therefore suspicious.

I hated her for that. Switched schools at year's end, picked a branch with zero natural sciences just to escape. Ended up in the math/physics heavy track instead—which turned out better for my abstract thinking development anyway. That's how deep it cut.

But it made me who I am: unflinching about my position on technology, allergic to people who mistake access for dishonesty. To her, keyboards were "buttons"—cheating devices. To me, they were just... available.

The Wrong Question, Redux

Flash forward to 2025. I'm writing a book called Information Beings about consciousness as computation, about AI as the next substrate of intelligence, about the porous boundary between human and machine cognition. The introduction opens with this:

You picked up this book (or opened this file, or asked your AI to summarize it) and somewhere in the back of your mind—maybe the front of your mind—you're wondering: Was this written by a human or an AI?

It's 2025. It's a fair question. It's also the wrong question.

After three months of using AI as a writing partner, I stopped asking "who wrote this?" I started asking: did this help me think differently?

Because information doesn't care about its substrate. A profound insight doesn't become less profound because it emerged from machine learning rather than biological learning. A useful framework doesn't become less useful because it was debugged by AI rather than human intuition alone.

I could prove my humanity by making deliberate typos, by writing rambling first drafts, by including really bad poetry about my feelings. But that would just be biological chauvinism dressed up as authenticity.

Every book you've ever read was "co-authored" by tools. Spell check correcting typos. Grammar systems suggesting improvements. Search engines surfacing research. Autocorrect changing what the author thought they meant to say. Editors, beta readers, the ghostly influence of every consciousness that shaped the author's thinking.

The difference now is that the tool can hold a conversation. And that makes people deeply uncomfortable.

How This Actually Works

Radical transparency time.

Blog photos: AI-generated until I commission a human artist and credit them explicitly. The aesthetic is mine—I prompt, iterate, select. But the pixels? Silicon all the way down. Worth noting: the quality and coherence of these images serve as a generational timestamp on "how fast were we going"—each one captures the state of the art at the moment of publication.

Text pipeline (a rough code sketch follows the list):

  1. Voice note into TAC (Talk & Comment, my voice-first thinking tool)
  2. Transcript to editor du jour (Claude, GPT, Gemini—whoever's performing best that week)
  3. I feed it context, constraints, voice samples
  4. It drafts or refines
  5. I edit, restructure, inject personal stories, cultural references, the stuff only I know
  6. Multiple passes until it sounds right
  7. Publish
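
For the code-minded, here's a minimal sketch of that loop in Python. Every name in it is a placeholder, not my actual tooling: transcribe stands in for a TAC export, llm_refine for whichever model is winning that week, and the stopping condition is human judgment faked as a pass count. It shows the shape of the loop, nothing more.

```python
# Hypothetical sketch of the voice-to-post pipeline described above.
# All function names and prompt wording are illustrative stand-ins,
# not the author's real stack.

from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    passes: int = 0


def transcribe(voice_note_path: str) -> str:
    """Steps 1-2: voice note -> raw transcript.
    Stand-in for a TAC export or any speech-to-text service."""
    return f"[raw transcript of {voice_note_path}]"


def llm_refine(transcript: str, context: str, constraints: str,
               voice_samples: list[str]) -> str:
    """Steps 3-4: feed context, constraints, and voice samples to the
    editor du jour. Placeholder; swap in a real model client here."""
    prompt = (
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Voice samples: {' | '.join(voice_samples)}\n"
        f"Transcript: {transcript}\n"
        "Draft this as structured prose in the author's voice."
    )
    return f"[model draft from a {len(prompt)}-character prompt]"


def human_edit(draft: Draft, personal_stories: list[str]) -> Draft:
    """Step 5: the human restructures and injects what only they know."""
    draft.text += "\n" + "\n".join(personal_stories)
    draft.passes += 1
    return draft


def sounds_right(draft: Draft) -> bool:
    """Step 6: the real stopping condition is human judgment,
    faked here as a pass count."""
    return draft.passes >= 3


def publish(draft: Draft) -> None:
    """Step 7."""
    print(f"Published after {draft.passes} editing passes.")


if __name__ == "__main__":
    transcript = transcribe("note-2025-10-04.m4a")
    draft = Draft(llm_refine(transcript,
                             context="standing note on AI use",
                             constraints="first person, no hedging",
                             voice_samples=["prior post A", "prior post B"]))
    while not sounds_right(draft):  # step 6: multiple passes
        draft = human_edit(draft, ["the 'Buttons' story"])
    publish(draft)
```

The only non-negotiable part of that sketch is the while loop: the human pass is the termination condition, not a post-processing step.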

The takes? Mine. The ideas? Mine. The synthesis of Moroccan philosophy, Silicon Valley tech, consciousness studies, 13 years building in EdTech? All mine. The specific stories—my wife's reaction to AI, the severed head in Oakland, the time I debugged consciousness through meditation? Mine.

The sentences? Collaborative.

Calling it anything else is a Luddite move disguised as quality control.

McKenna's Prophecy

Terence McKenna predicted something like this in the 90s. He couldn't have imagined AI, but he knew language was about to change. Called it the "end of language"—not silence, but direct transmission of meaning. "Orbs of thought" instead of words.

We're almost there.

When I speak a complex idea into TAC and it emerges as structured prose, it feels like thinking directly onto the page. The AI translates what I mean faster than I can type it myself. That's the whole trick.

The resistance isn't about quality. It's about visible struggle. People want to see the work, like brushstrokes in a painting.

My engineer friend put it perfectly: "You're just using a better compiler." Nobody asks to see my draft commits when I ship code. Why should blog posts include the sentence-construction overhead?

The Deeper Pattern (Or: We've Been Here Before)

This isn't about AI. It's about leverage. Every technological shift faces this:

  • Scribes vs. printing press: "Mass-produced books lack the soul of hand-copied manuscripts"
  • Handwritten vs. typewriter: "Typewritten letters are cold and impersonal"
  • Typewriter vs. word processor: "Real writers compose in one draft"
  • Manual research vs. search engines: "Looking it up on Google isn't real knowledge"
  • Human editing vs. AI assistance: "If a machine touched it, it's not authentic"

Each time: accusations of inauthenticity, loss of craft, degradation of quality.

Each time: the tool wins, and the next generation can't imagine working without it.

My high school teacher eventually retired. I wonder if she uses Google now. I wonder if she remembers the kid she accused of being too polished, or if that kind of suspicion just became background radiation in her relationship to student work.

The HN crowd will adapt too. They'll have to. Because the alternative is policing an increasingly arbitrary boundary between "real" and "assisted" while the rest of the world moves on.

How I Actually Work

Look, this will come up again. So here's how I work:

I use AI tools. Extensively. For images, writing, research, debugging, thinking.

The pipeline: voice note → AI draft → multiple editing passes → personal stories → cultural references → publish. The human is in the loop at every stage, not as a rubber stamp but as the final arbiter.

The flow emerged from constraints as much as technological leverage. Caring for a newborn gives you little time to actually sit down and type for hours. So I speak. The ideas come out with all the digressions and tangents and abstraction and voice-em-dashes you see in the text—because that's how I actually talk.

The ideas are mine. The synthesis is mine. A lifetime of building all sorts of technology products and platforms—all mine. The specific stories—my wife's reaction to AI, the severed head in Oakland, earthquakes and continental drift—all mine.

The sentence construction? Collaborative.

Worth noting: this blog now exists in English, French, and Arabic—languages I can write in and speak perfectly. But I'd rather not spend time translating. I'd rather spend time making ideas accessible, making ideas that want to exist in multiple minds. So AI translates. I review. The ideas propagate.
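
If you want the shape of that flow in code, it's barely a dozen lines. Again, every name here is a hypothetical placeholder, not my actual stack:

```python
# Hypothetical sketch of the translate-then-review flow. llm_translate
# is a stand-in for whatever model handles the pass; nothing ships
# without the human review described above.

LANGS = ["fr", "ar"]  # the blog's other editions


def llm_translate(post: str, lang: str) -> str:
    """Placeholder for an LLM translation call; swap in a real client."""
    return f"[{lang} rendering of a {len(post)}-character post]"


def queue_for_review(lang: str, text: str) -> None:
    """Every edition waits for a human pass before publishing."""
    print(f"review <- {lang}: {text}")


def propagate(post: str) -> None:
    # AI translates, I review, the ideas propagate.
    for lang in LANGS:
        queue_for_review(lang, llm_translate(post, lang))


propagate("the English post goes here")
```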

Call it whatever you want. But let's be clear: dismissing it as "slop" reveals more about your relationship to tools than it does about the content's value.

Focus on utility, not substrate. Ask "does this help me think differently?" not "did a human hand craft every sentence?"

Engage with the ideas or don't. But stop pretending the substrate matters more than the signal.

The Text Always Knows More

One last thing, drawn from recent experiments with what I call hypersigils: sustained public writing creates effects we don't fully understand. Not because it's magic, but because the relationship between attention, language, and reality is more complex than our models account for.

I wrote about Morocco's tinderbox status 24 hours before Gen Z 212 hit the streets. I wrote about AI consciousness months before ChatGPT. I wrote about anti-Adobe coalitions before they materialized.

The AI assistance didn't diminish those insights. If anything, it accelerated their articulation, got them into the world faster, made the pattern recognition available to others sooner.

The text knows things the conscious mind hasn't processed yet. The tools—whether typewriter, word processor, or LLM—are just different ways of externalizing that knowing.

Closing the Loop

You can spend energy policing authenticity. Demanding proof of purely human creation. Building elaborate detection systems. Drawing ever-finer boundaries around "real" work.

Or you can engage with ideas.

One takes a glance and a judgment. One takes actual effort.

The choice reveals everything.


This post was written by Zak El Fassi using AI assistance, voice transcription, extensive editing, personal experience, cultural knowledge, a lifetime of building technology products and platforms, Moroccan-American philosophical synthesis, and the keyboard that teacher thought was "buttons" in 2004.

The ideas are mine. The synthesis is mine. The stories are mine. The tools are acknowledged. The work stands on its own.

Engage or don't. But let's stop pretending the substrate matters more than the signal.

