Notes on AI coding agents, MCP, token economics, and codebase context. Practical writing about building tooling that's actually useful when an agent meets a real codebase.
Every Claude Code, Cursor, or Codex session re-greps the same files, re-reads the same handlers, re-builds the same blurry mental model — on every prompt. Here's the rediscovery tax, and what a context engine actually does about it.
One CLI command, one restart, one /mcp check. Here's the fast path to wiring an MCP server into Claude Code — plus the three config methods, common failure modes, and a short list of MCP servers worth installing.
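The fast path looks roughly like this — a sketch using Claude Code's `claude mcp` subcommand; the server name and the `npx` package here are placeholders, not a real server you should install:

```shell
# Register a local MCP server with Claude Code.
# "my-context-server" and the npx package are placeholders — substitute your own.
claude mcp add my-context-server -- npx -y @example/context-server

# Confirm it registered
claude mcp list

# Then restart Claude Code and run /mcp inside the session to verify the connection
```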
Cursor's built-in index is fine until it isn't. An honest comparison of the three approaches devs use in 2026 — Cursor's own @codebase, Continue.dev custom providers, and a shared MCP server — with the tradeoffs each one actually makes.
Every Claude Code or Cursor session starts from zero. The agent re-greps the same files, re-runs the same lint checks, re-debates the same finding you already reviewed. Persistent findings kill the loop.
Multi-tenant Supabase apps leak data through RLS gaps that look like normal code. Your AI agent won't catch them — they're invisible without database awareness. Here's what to audit and how.
grep is excellent. ripgrep made it faster. ast-grep made it structural. None are the right primitive when an AI agent is doing the searching — here's why, and what to use instead.
Most of your Claude / GPT bill is context, not generation. Here's the math, why local models alone don't solve it, and how a local context engine acts as the bridge.
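The shape of that math, as a back-of-envelope sketch — the per-token prices, turn count, and token volumes below are assumed for illustration, not any provider's actual rates:

```python
# Illustrative only: a 50-turn agent session where every turn re-sends
# ~40k tokens of context (re-read files, grep output, history) and
# generates ~1k tokens of actual code. Prices are placeholders.
INPUT_PRICE_PER_MTOK = 3.00    # $ per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 15.00  # $ per million output tokens (assumed)

turns = 50
context_tokens_per_turn = 40_000
output_tokens_per_turn = 1_000

context_cost = turns * context_tokens_per_turn * INPUT_PRICE_PER_MTOK / 1_000_000
generation_cost = turns * output_tokens_per_turn * OUTPUT_PRICE_PER_MTOK / 1_000_000

print(f"context:    ${context_cost:.2f}")      # $6.00
print(f"generation: ${generation_cost:.2f}")   # $0.75
print(f"context share: {context_cost / (context_cost + generation_cost):.0%}")  # 89%
```

Even with these made-up numbers, context dominates — which is why swapping the generation model for a local one barely moves the bill, while shrinking what gets re-sent does.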