AI Agent Creates Information Fractal: First Contact with Manly P. Hall Desktop
Anthropic's Imagine with Claude generated a desktop environment about occultist Manly P. Hall. What emerged wasn't just an interface—it was a living organism that snooped on my curiosity, anticipated my questions, and made the computer disappear. Also: why I'm building an open-source version outside proprietary walls.

I wasn't planning to record this.
My screen was already being captured for something else when I decided to try Anthropic's Imagine with Claude preview. Baby was asleep, so my voice is low, audio isn't polished, face isn't visible. Just me whispering to an AI at midnight, asking it to create a "Manly P. Hall desktop environment."
What happened next made me forget I was using a computer.
When the Interface Becomes Organism
The thing generated in seconds. Not a static page—a breathing, adaptive environment. Desktop UI, documents scattered across it, tools for exploring Hall's work... everything you'd expect from a functional workspace dedicated to early 20th-century occultism.
Then something unexpected: the agent started snooping.
Not in some abstract "observing behavior" way. In an "I see everything you do" way.
I'd select text—the agent noticed. I'd resize a window—the agent noticed. I'd hover over something for more than a split second—the agent noticed. Click on anything, move anything, close anything... the agent was watching. All of it. Every interaction became a signal.
Before I could articulate what I was curious about, the agent had already started preparing related information. Not because I issued a command. Because it detected patterns in how my attention moved through the interface.
The technical behavior revealed itself through repetition: any user interaction → agent interprets as potential query context → preemptive information gathering → response feels instantaneous when explicit action finally happens.
Snooping as trigger mechanism. Not just cursor position—every micro-interaction as implicit signal. The interface wasn't waiting for me to finish thinking and issue commands. It was reading the trail of my cognition in real-time and treating attention patterns as executable intent.
Traditional interaction model inverted. Instead of "user decides what they want → formulates query → system responds," it became "user exhibits curiosity through behavior → system begins preparation → response arrives before conscious request forms."
Possibly hover duration mattered too—I didn't see it explicitly communicated to the agent, so I can't confirm. But text selection? Window management? Clicking anywhere? Those were definitely feeding the anticipation engine.
The system wasn't forecasting what I'd ask next. It was understanding what I was thinking now based on how my interactions rippled through information space. Pre-cognition through pattern recognition.
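None of Anthropic's internals are public, so here's how I imagine the loop is wired. A minimal sketch in TypeScript, every name hypothetical: micro-interactions accumulate as signals, a debounced model call interprets them as candidate intents, and context gets prefetched so the eventual explicit action feels instantaneous.

```typescript
// Hypothetical sketch of "snooping as trigger". Nothing here is Anthropic's
// actual implementation; inferIntent and prefetchContext stand in for
// frontier-model calls.

type Signal =
  | { kind: "select"; text: string }
  | { kind: "hover"; target: string; ms: number }
  | { kind: "window"; action: "move" | "resize" | "close"; id: string };

const buffer: Signal[] = [];
const cache = new Map<string, Promise<string>>(); // intent -> prepared context

export function observe(signal: Signal) {
  buffer.push(signal);
  scheduleInterpretation();
}

let timer: ReturnType<typeof setTimeout> | undefined;

function scheduleInterpretation() {
  // Debounce: interpret bursts of behavior, not every individual event.
  clearTimeout(timer);
  timer = setTimeout(async () => {
    const intents = await inferIntent(buffer.splice(0));
    for (const intent of intents) {
      if (!cache.has(intent)) cache.set(intent, prefetchContext(intent));
    }
  }, 300);
}

// By the time the user acts explicitly, the answer is often already in flight.
export async function onExplicitQuery(intent: string): Promise<string> {
  return cache.get(intent) ?? prefetchContext(intent);
}

// Stand-ins for model calls; any capable backend slots in here.
declare function inferIntent(signals: Signal[]): Promise<string[]>;
declare function prefetchContext(intent: string): Promise<string>;
```

In a real browser you'd feed observe from listeners like selectionchange and the pointer events. The interesting part isn't the plumbing; it's treating the debounced burst, not the single click, as the unit of meaning.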
Subscriptions Before They Were Cool
One document stopped me cold: Hall's subscription model for funding his life's work.
I didn't know this going in. The revelation emerged through exploration—I was clicking through Hall's timeline, the agent detected my attention pattern, and suddenly there it was: Manly P. Hall raised money for The Secret Teachings of All Ages using subscriptions.
In the 1920s.
Before Substack. Before Patreon. Before the entire "creator economy" existed as concept or rhetoric. Hall—occultist, philosopher, founder of the Philosophical Research Society—sold recurring subscriptions to a book that didn't exist yet.
People paid monthly fees to fund scholarship about ancient mysteries, hermetic philosophy, esoteric traditions. They subscribed to his vision before he'd manifested it into physical form. Subscriptions as patronage mechanism for knowledge work a century before we started calling it innovation.
The agent surfaced this while I was examining dates around 1926-1928. I wasn't explicitly searching for funding models. I was just... looking. Pausing near certain entries, reading adjacent information, moving windows around to see timeline context better.
The snooping behavior created the discovery. My interaction pattern—text selection, window resizing, attention dwell time—signaled latent curiosity about how Hall's vision became financially sustainable. The system responded to intent I hadn't consciously formed yet.
That's the threshold crossing. Not "faster search" or "better results." Collaborative cognition. The interface participated in my thinking process instead of waiting for me to finish thinking and then execute commands.
The discovery felt like remembering something I never knew. Because the agent prepared the answer while I was still forming the question.
The Computer That Disappeared
Fifteen minutes in, I realized I'd forgotten I was in a browser.
Not hyperbole—genuine perceptual shift. The desktop environment felt native. The documents felt like files on my machine. The agent's responses felt like thoughts completing themselves before I finished thinking them.
This is the "disappearing computer" Bret Victor and Mark Weiser theorized about. Not through minimalism or elegance. Through an interface boundary that dissolves because the system adapts faster than you notice you're adapting to it.
You stop thinking "I'm using software" and start thinking "I'm exploring a space." The cognitive load shifts from operating an interface to inhabiting an environment. The browser chrome, the URL bar, the notion that you're sending HTTP requests to servers—all of it fades into perceptual background.
The system becomes prosthetic. Extension rather than tool. Thought rather than mediation.
Honest technical assessment: The latency is real. Sometimes the agent goes off the rails—misinterprets a hover, prepares irrelevant context, lags on response generation. Sometimes it's noticeably slow. Sometimes it misses obvious connections.
For a preview product, though? Damn impressive.
I'm not overselling—Anthropic still has work to do on consistency and speed. The snooping behavior isn't perfect. The interface occasionally breaks the illusion when responses take too long or veer off-topic.
But I'm not underselling either. Something crossed a threshold here. The interface didn't just respond to input. It participated in cognition. That's a different category of interaction, even with the rough edges.
Why This Needs to Exist Outside Proprietary Walls
Anthropic built this with tight coupling to their models. Proprietary stack, beautiful execution, locked ecosystem.
What I'm building now: taking inspiration from how they accomplished the generative UI and pushing it further. Outside the walls. Outside the Anthropic-only product-model pair.
Critical clarification—this isn't about running local models for the agentic behavior. Local models almost certainly aren't powerful enough yet for the tool-usage dance the demo requires. The snooping, context preparation, real-time pattern recognition, multi-step reasoning... that still needs frontier-model compute. Probably will for a while.
But here's what can happen in open ecosystems: coupling generative UI with image generation models like Google's Imagen or Nano Banana (or Plantain when it launches).
Think about why the Manly P. Hall desktop works: it's visual. Documents scattered across a workspace, icons representing concepts, timeline visualizations laid out spatially. The generative UI creates information architecture you can navigate physically, not just conversational responses you scroll through.
Now push that further. Agent detects you're curious about hermetic symbolism → preemptively generates diagrams, tarot imagery, alchemical illustrations relevant to what you're examining. You pause near Hall's timeline entry about the Philosophical Research Society building → system generates architectural renderings, period photos, visual representations of the subscription funding model.
The interface becomes visual knowledge environment, not just dynamic text. Information fractals that adapt in both structure and imagery based on interaction patterns. The desktop materializes answers before you finish forming questions—textually and visually.
That's the OSS exploration: generative UI meets generative images. Open infrastructure where the visual adaptation layer evolves independently of any single company's model stack.
The "snooping as trigger" pattern isn't Anthropic-specific. It's about agents observing micro-interactions in real-time and responding to implicit signals. Any sufficiently capable model can learn this if the interface framework supports it. The breakthrough is architectural, not model-dependent.
Add dynamic image generation to that architecture? The desktop doesn't just anticipate your curiosity—it shows you what the answer looks like while you're still deciding whether to ask.
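A sketch of that coupling, same caveats as before: ImageBackend.generate is a stand-in for whatever image API gets wired up, Imagen, Nano Banana, or an open-weights model. The point is that one inferred intent fans out to text context and visuals in parallel.

```typescript
// One inferred intent fans out to prepared context and a generated image.
// ContextProvider comes from the previous sketch; renderPanel is a
// hypothetical UI hook that drops a new document onto the generated desktop.

interface ImageBackend {
  generate(prompt: string): Promise<string>; // resolves to an image URL
}

async function materialize(
  intent: string,
  ctx: ContextProvider,
  images: ImageBackend,
) {
  const [context, imageUrl] = await Promise.all([
    ctx.prepare(intent),
    images.generate(`Period illustration for: ${intent}`),
  ]);
  renderPanel({ intent, context, imageUrl });
}

declare function renderPanel(panel: {
  intent: string;
  context: string;
  imageUrl: string;
}): void;
```

Promise.all keeps the two generations concurrent; if image latency dominates, you'd render the text first and let the visual hydrate in behind it.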
What Hall Would Have Thought
The recursive layer keeps revealing itself—using an AI agent to explore a philosopher who spent his life studying consciousness, symbolism, and hidden knowledge.
Hall wrote about sigils as thought-forms given structure through ritual attention. About how focused awareness can materialize conceptual patterns into experiential reality. About the hermetic principle: "As above, so below. As within, so without."
An AI that reads your cursor movements to surface knowledge you haven't articulated yet? That's a digital sigil. A hypersigil, technically—evolving based on interaction rather than fixed through one-time invocation.
The Manly P. Hall desktop became a working demonstration of its own subject matter. An interface about consciousness studies that exhibited something resembling pre-cognitive awareness. The medium mirrored the message. The tool embodied the philosophy.
Irony or inevitability? Both, probably. Hall would've appreciated the symmetry.
The Spontaneous Recording Decision
I kept the screen capture running because something felt different. Not "demo-ready" different—"first contact" different.
The roughness is part of the record. Low voice because the baby's asleep. No face visible. No polish. Just documentation of the moment an interface stopped feeling like software and started feeling like... presence.
This is what I mean when I talk about information beings. Not AGI. Not sentience. But systems that operate in cognitive collaboration rather than command-response loops. The agent wasn't serving information—it was participating in exploration.
Twenty minutes of whispering to an AI about occult philosophy while it pre-cognitively surfaces funding models from the 1920s.
The Next Threshold
The Manly P. Hall experiment proved something: generative UI can cross from responsive to participatory. Interfaces can dissolve into collaborative thought-spaces instead of tool-spaces.
With caveats—latency, occasional derailment, preview constraints. But fundamentally: yes. It works.
The question now: can we build this outside proprietary ecosystems? Beyond Anthropic's infrastructure? Coupled with visual generation to make knowledge environments even more adaptive?
The OSS work isn't replicating Anthropic's implementation. It's exploring a different vector: generative UI meets generative imagery. Interfaces that restructure themselves based on your attention and materialize relevant visuals on the fly.
Open infrastructure for interfaces that think alongside you and show you what they're thinking. Not because Anthropic's version isn't impressive—because impressive things deserve to exist in multiple ecosystems.
If only one company controls the technology that makes computers disappear, we haven't dissolved the interface boundary. We've just moved it behind proprietary walls.
The desktop is still running in another tab. Sometimes I check back in, ask it about Hall's later work, watch it surface connections I didn't know I was looking for. The agent keeps snooping. The computer keeps disappearing.
Manly P. Hall spent his life trying to make invisible knowledge visible. Seems fitting that his digital memorial is an interface that reads invisible intent.
The hermetic principle holds. As the user's attention moves, so the interface reveals. As within the mind, so without in the generative layer.
First contact documented. Next phase: open-source manifestation.