Every deity was born from a real crisis at Sirsi Technologies. We didn't set out to build a platform — we were just trying to survive our own development environment. These are those stories.
How Thoth's memory, Ma'at's gate, and 20 minutes of forensics recovered 3,400 lines of lost code
On March 25, 2026, we started a new session and discovered that an entire working session had vanished.
Session 17 — a 2+ hour deep architectural session — had never been committed. 38 files modified. 12 new files created. 1,350 lines added. 2,061 lines deleted. Cross-platform CI pipelines. Standalone deity binaries. A full Platform interface refactor. All of it existed only in the local working tree.
The AI assistant that built the code was gone. Its context was gone. The conversation was gone. If this were a human developer who forgot to push before reformatting their laptop, the story would end here.
But we had Pantheon.
Thoth was the key. Git can tell you what files changed. But only Thoth's journal could tell us why they changed. Journal Entry 017, titled "The Boss Fight: 99% Coverage and the Interface Wall," documented the reasoning behind ADR-009 (Injectable System Providers) — the design pattern that drove all 38 file changes. Without that narrative, we'd have 38 modified files and zero understanding of the architecture.
Ma'at's QA_PLAN.md confirmed the coverage targets. PANTHEON_ROADMAP.md validated that the cross-platform CI changes were intentional. And the pre-push gate — the same gate we'd fixed just hours earlier (B10) — caught formatting violations in the recovered files, preventing broken code from reaching the remote.
The entire recovery took 20 minutes. 100% of the code was recovered. The only casualty was a single stale test assertion that had been written against old defaults.
What we changed: Proposed Rule A18 (Incremental Commits) — no AI session may accumulate more than 5 file changes without a checkpoint commit. And ADR-010 — the Pantheon Menu Bar App — will include a "session guardian" that detects uncommitted changes and alerts the developer before a session ends.
The incident proved something we hadn't expected: Pantheon's deity architecture isn't just a clever naming convention. Each deity genuinely specializes. Thoth knew the story. Ma'at enforced quality. Horus mapped the files. The separation of concerns that makes the code maintainable is the same separation that made recovery possible.
Why the Judge of the Dead was the first deity we summoned
It started with a full disk.
In late 2025, our M1 Max workstation — the 32 GB, 1 TB machine meant to be the engine of Sirsi Technologies — was out of storage. Again. The third time in two months. We'd already been through CleanMyMac, OmniDiskSweeper, and the manual "what's in ~/Library?" ritual. Nothing stuck.
So we wrote a script. A small Go program that walked the filesystem and tallied sizes by category. What it found was shocking:
47 gigabytes. Not photos. Not movies. Infrastructure waste that no consumer cleaning tool even knows to look for. CleanMyMac doesn't scan ~/.cache/huggingface. Mole doesn't know about Docker dangling images. DaisyDisk shows them as "Other" and shrugs.
We realized: every developer and AI engineer has this problem. Their workstations accumulate a class of waste that no existing tool understands — because existing tools were built for consumers, not for people who pip install transformer models and spin up Docker containers.
The scan script became a scan engine. The engine grew rules — 58 of them across 7 domains. We added safe deletion with SHA-256 verification. We added profiles: general, developer, ai-engineer, devops. The project had a life of its own.
We named it after the god who weighs the hearts of the dead, because that's exactly what it does — Anubis weighs your system, finds what's dead, and judges whether it stays or goes.
Why the God of Knowledge records everything — so the AI doesn't have to re-read it
By Session 8 of building Anubis, we had a new problem.
Every time we started a new AI coding session, the agent would spend the first five to ten minutes re-reading the codebase. Re-reading files it had already analyzed yesterday. Re-discovering architecture decisions it had already made. Re-learning the module structure it had already mapped.
We were burning 100,000+ tokens per session just on context re-establishment. At $0.003 per 1K tokens, that's $0.30 per session — before any actual work happened. Across 15+ sessions of Ship Week, that was $4.50 in pure waste, plus hours of developer time watching an AI read code it already knew.
The real cost wasn't dollars; it was the context window. With 128K-token limits, burning 100K on "remember where we left off" leaves only 28K for actual problem-solving. In every session, 78% of the window was consumed by overhead before any real work began.
So we built Thoth — a three-layer persistent memory system for AI coding assistants:
Layer 1: memory.yaml — A structured YAML file containing project identity, architecture, conventions, and current state. One file, ~50 lines, gives any AI agent full project context in under 2K tokens.
Layer 2: journal.md — A running log of sessions, decisions, and progress. The AI reads only the latest entry to know "what happened since last time."
Layer 3: artifacts/ — Deep-dive documents (architecture diagrams, benchmark results, case studies) that the AI references only when needed, not every session.
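A minimal sketch of what a Layer 1 file can look like. The field names here are illustrative, not Thoth's actual schema; the point is that one small structured file carries identity, architecture, conventions, and state:

```yaml
# memory.yaml — Layer 1 sketch (illustrative fields, not Thoth's real schema)
project:
  name: anubis
  purpose: infrastructure-waste scanner for developer workstations
architecture:
  language: go
  modules: [scan, rules, profiles, delete]
conventions:
  - safe deletion requires SHA-256 verification
  - every rule belongs to exactly one domain
current_state:
  session: 8
  focus: persistent AI memory (Thoth)
```

An agent that reads this at session start knows the project without re-walking the codebase.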
The result was a 98% reduction in context tokens — from 100K to 2K. The AI now starts every session already knowing everything. No re-reading. No re-discovery. The context window is 98% available for actual work.
We named it after the ibis-headed god who invented writing and recorded all knowledge — because that's the function: write it down once so no one has to re-read it.
Why Ra's warrior goddess is needed to protect developers from their own tools
On March 23, 2026, during Session 13 of building Pantheon, the IDE froze.
We were using Antigravity IDE v1.20.6 — an Electron-based AI coding environment — on an Apple M1 Max with 32 GB of unified memory, 10 CPU cores, 32 GPU cores, and 16 Neural Engine cores. This is not a weak machine. This is one of the most powerful developer workstations money can buy.
And it was completely unresponsive. We couldn't click buttons. We couldn't approve tool calls. We couldn't even close the window. The IDE had been frozen for seventeen minutes.
The natural assumption was memory. 32 GB should be enough, but maybe the AI agent was eating it all? Maybe we needed more RAM? We were about to force-quit and restart when we realized: this is exactly the kind of problem Pantheon was built to diagnose.
So we opened a terminal and ran the same tools we'd been building all week.
It was not RAM. The system had 88% free memory — 28 GB sitting idle. Swap usage was negligible at 253 MB. This was a CPU problem masquerading as a memory problem.
Two Plugin Host processes — the Electron child processes that run AI agent extensions — had been at 100%+ CPU for seventeen consecutive minutes and never yielded. They were doing tokenization, context assembly, file analysis, and prompt construction — all on a single Node.js thread. While they ran, the Renderer process (the one that handles clicks) was starved. IPC messages from button clicks were queued but never processed.
Meanwhile, the hardware that could have helped sat completely idle:
This is the absurdity: a $3,500 machine with 58 processing cores was frozen because all the work was running on a single JavaScript thread. The GPU that could hash files at 15× the speed of CPU? Idle. The Neural Engine that could compute embeddings at 60× speed? Idle. Nine of ten CPU cores? Idle.
We were building Pantheon to solve infrastructure hygiene problems. And the very tool we were using to build it — Antigravity IDE v1.20.6 — was the infrastructure problem. The irony was perfect. The solution was obvious.
If Pantheon can detect this is happening and offload work to the accelerators already in the machine, it doesn't just solve our problem — it solves every developer's problem. Because every developer with an M-series Mac, every developer with an NVIDIA GPU, every developer running an Electron-based AI IDE is experiencing this same bottleneck. They just don't know it yet.
We named the guardian deity Sekhmet — Ra's lioness enforcer, the destroyer of threats. In Egyptian mythology, when Ra was threatened, Sekhmet was unleashed to obliterate the threat. In Pantheon, when your IDE process threatens to starve the UI, Sekhmet detects it, alerts you, and offers to intervene — before you lose seventeen minutes of work.
But Sekhmet is Phase 1. The full plan is four phases:
Phase 1: Watchdog. Sekhmet monitors Plugin Host CPU. If it sustains >80% for >60 seconds, alert the developer. If it stays pegged for >5 minutes, offer to restart the Extension Host. Never let the IDE freeze silently again.
Phase 2: Hardware offload. Move tokenization and embedding to Apple's Neural Engine via CoreML. Move file hashing to the GPU via Metal compute shaders. Move file indexing to Worker Threads. The Extension Host stays free for UI.
Phase 3: Context discipline. Token budget enforcement, redundant context pruning, session continuity. Don't send 100K tokens when 2K contains the same knowledge. Extend the developer's token runway by 50×.
Phase 4: Cross-platform. What works on Apple's ANE also works on NVIDIA CUDA, AMD ROCm, and Intel oneAPI. One abstraction layer, every GPU. Deploy Pantheon anywhere developers work.
When 25 invisible processes stole 1.1 GB of RAM and Pantheon looked the other way
On March 25, 2026, we encountered a failure that shouldn't have been possible in a Pantheon-protected environment. The browser subagent — the AI's internal testing eye — was failing. Not because of a bug, but because it couldn't even start.
We ran a manual audit and found a graveyard of dead sessions. 17 Playwright driver processes and 8 headless Chrome renderers were still running long after their parent AI agents had disconnected. They were zombies — orphaned children of a crashed process that were now clogging the pipes of the next scan.
Together, they were holding 1.1 GB of RAM and dozens of open file handles. Most importantly, they were blocking new browser instances from initializing. And yet, the Pantheon dashboard was green. No CPU spikes. No RAM alerts. No ghosts on disk.
This was a dogfood crisis. We had built deities for file waste (Anubis), dead app remnants (Ka), and CPU hogs (Sekhmet). But we had a massive blind spot for process litter — the child processes of developer tools that fail silently and stay running forever.
Within an hour, we summoned a new capability for Sekhmet: Orphan Process Detection. Unlike the standard watchdog which looks for heat (CPU), the Orphan Hunter looks for loneliness. It looks for known patterns (Playwright, LSP servers, Electron helpers, Build watchers) that are running with a PPID of 1 (orphaned) or whose parents don't match their expected toolchain.
The result: internal/guard/orphan.go. Sekhmet now has a "sweep" function that identifies these stale children before they can block the next developer move. It's the difference between a doctor who only checks for a fever (CPU) and one who runs a full circulatory audit (process topology).
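The loneliness check can be sketched in a few lines, assuming input parsed from `ps -eo pid,ppid,comm`. The pattern list and the `isOrphan` helper below are illustrative, not the actual contents of internal/guard/orphan.go:

```go
package main

import (
	"fmt"
	"strings"
)

// orphanPatterns: process names treated as tool children that should
// never outlive their parent (a small illustrative subset).
var orphanPatterns = []string{"playwright", "chrome", "node", "electron helper"}

// isOrphan reports whether one `ps -eo pid,ppid,comm` row looks like
// process litter: a known tool child re-parented to init/launchd (PPID 1).
func isOrphan(ppid int, comm string) bool {
	if ppid != 1 {
		return false // still has a live parent in its toolchain
	}
	lc := strings.ToLower(comm)
	for _, p := range orphanPatterns {
		if strings.Contains(lc, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isOrphan(1, "playwright-driver"))   // orphaned driver: flag it
	fmt.Println(isOrphan(812, "playwright-driver")) // parent alive: leave it
	fmt.Println(isOrphan(1, "WindowServer"))        // not a tool child: ignore
}
```

A real sweep would also verify that a non-1 PPID actually belongs to the expected toolchain, per the parent-mismatch case the text describes.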
How we took pre-push gates from 55 seconds to 15 milliseconds
A gate that takes a minute to pass is a gate that developers start to bypass. In the early days of Pantheon, running our mandatory pre-push coverage check meant sitting and waiting for 55 seconds while `go test -cover ./...` walked every package.
The math was brutal: 10 pushes per day × 55 seconds = 9 minutes of pure waiting. Multiply that by 5 developers, and you're losing nearly an hour of technical momentum every single day.
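The article doesn't spell out the fast path, but one common way to take a gate from seconds to milliseconds is to key a cache on the Git tree hash: if nothing changed since the last green run, re-running the suite proves nothing new. A hedged sketch of that idea; the cache path and the `shouldSkip` helper are hypothetical, not Pantheon's actual mechanism:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// cacheFile is a hypothetical location for the last-green tree hash.
const cacheFile = ".git/pantheon-gate-cache"

// shouldSkip is the fast path: the tree hash matches the last run that
// passed, so the expensive check can be skipped entirely.
func shouldSkip(cached, current string) bool {
	return current != "" && cached == current
}

func main() {
	out, err := exec.Command("git", "rev-parse", "HEAD^{tree}").Output()
	if err != nil {
		os.Exit(1)
	}
	tree := strings.TrimSpace(string(out))
	cached, _ := os.ReadFile(cacheFile)
	if shouldSkip(strings.TrimSpace(string(cached)), tree) {
		fmt.Println("gate: unchanged tree, cached pass") // the millisecond path
		return
	}
	// Slow path: run the real coverage gate, then record the green tree.
	if err := exec.Command("go", "test", "-cover", "./...").Run(); err != nil {
		os.Exit(1)
	}
	os.WriteFile(cacheFile, []byte(tree+"\n"), 0o644)
	fmt.Println("gate: full run passed, result cached")
}
```

The 15 ms figure is then just the cost of one `git rev-parse` plus a file read.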
Stopping the "redundant walk" — how one shared index makes every deity 19× faster
Three deities — Anubis, Ka, and Hathor — were all walking the same filesystem independently. Every full assessment involved 38 seconds of redundant disk I/O. Our SSDs were screaming, and our results were slow.
We realized the filesystem is a shared resource; the index should be too. We built Horus to walk the disk once, cache a Gob-encoded manifest, and provide O(1) query performance to every other deity.
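The walk-once, query-many idea fits in a few lines of Go: walk the disk into a map, persist it with `encoding/gob`, and let every deity do O(1) map lookups instead of its own traversal. `FileMeta`, `Manifest`, and the field set here are illustrative, not Horus's real types:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// FileMeta is one entry in the shared index (illustrative fields).
type FileMeta struct {
	Size  int64
	Mtime int64
}

// Manifest maps absolute path -> metadata, giving O(1) lookups.
type Manifest map[string]FileMeta

// save serializes the manifest with gob, the encoding the text mentions.
func save(m Manifest) ([]byte, error) {
	var buf bytes.Buffer
	err := gob.NewEncoder(&buf).Encode(m)
	return buf.Bytes(), err
}

// load restores a manifest from its gob encoding.
func load(b []byte) (Manifest, error) {
	var m Manifest
	err := gob.NewDecoder(bytes.NewReader(b)).Decode(&m)
	return m, err
}

func main() {
	m := Manifest{"/tmp/a.log": {Size: 1024, Mtime: 1742860800}}
	b, _ := save(m)
	restored, _ := load(b)
	meta, ok := restored["/tmp/a.log"] // O(1) map lookup, no disk walk
	fmt.Println(ok, meta.Size)
}
```

In practice the bytes would be written to a cache file once per walk, so Anubis, Ka, and Hathor all read the same manifest instead of hitting the SSD three times.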
Why an app you deleted six months ago is still eating 8.5 GB of your disk
We uninstalled Parallels Desktop. We trashed the .app, we emptied the bin. And yet, six months later, we found 8.5 GB of data sitting in ~/Library/Application Support/Parallels. Logs, VM caches, and diagnostic dumps that no standard uninstaller bothered to clean up.
This is the "Ka" — the spirit double that lingers after the body (the .app) is gone. We mapped 17 separate locations on macOS where these spirits hide.
Finding duplicates in 100 GB of photos without melting your disk
Hashing a 20 MB photo takes time. Hashing 5,000 photos takes a lifetime if you read every byte. Most "dedup" tools are disk-death sentences for high-volume users.
Hathor uses a 3-Phase Reflection strategy. We only do the expensive SHA-256 work on files that have already cleared the size and "short-hash" (8KB header/footer) hurdles. Total I/O reduction: 98.8%.
Healing CI runners that say they're full but `docker system df` says they're empty
A CI/CD runner ran out of space. `docker system prune -a` cleaned nothing. `docker volume ls` was empty. Yet `df -h` showed `/var/lib/docker` at 100%.
Scarab (the module of renewal) found 64 GB of orphaned volumes left over from a previous Docker engine installation that weren't being "claimed" by the active daemon. Scarab doesn't just ask Docker; it audits the filesystem reality.
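The core of that audit reduces to a set difference: volumes present on disk minus volumes the daemon claims. A hedged sketch; `orphanVolumes` and the sample names are hypothetical, not Scarab's real interface:

```go
package main

import "fmt"

// orphanVolumes compares what the active Docker daemon claims against
// what actually sits on disk under /var/lib/docker/volumes.
func orphanVolumes(onDisk, claimed []string) []string {
	known := map[string]bool{}
	for _, v := range claimed {
		known[v] = true
	}
	var orphans []string
	for _, v := range onDisk {
		if !known[v] { // on disk but unclaimed: left by a previous engine
			orphans = append(orphans, v)
		}
	}
	return orphans
}

func main() {
	disk := []string{"vol_a", "vol_b", "vol_old_engine"} // directory listing
	daemon := []string{"vol_a", "vol_b"}                 // e.g. `docker volume ls -q`
	fmt.Println(orphanVolumes(disk, daemon))
}
```

In production the `onDisk` slice would come from reading the volumes directory directly, which is exactly the "filesystem reality" the daemon can't see.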
Every line of code, every benchmark, every bug. We document the journey because transparency is how trust gets built — and because someone else might be hitting the same wall right now.