agentmako is a local-first MCP server for AI coding agents — Claude Code, Codex CLI, Cursor, Cline, and any other MCP client. It indexes the repo, tracks diagnostics, snapshots your Postgres or Supabase schema, and returns typed context packets — so your agent stops rediscovering everything from scratch on every turn.
Mako sits between your coding agent and your repo as an MCP server. It doesn't write code — it returns the context the model would otherwise invent.
Reads the repo. Writes a typed local snapshot.
Your agent calls one tool. Mako returns ranked context.
Stale or contradicted evidence is labeled, not silently served.
The agent edits with cited context. Diagnostics gate the commit.
Same model, same prompts. The difference is the context layer underneath.
Requires Node ≥ 20. Everything else runs locally — index, store, MCP transport.
Same stdio config everywhere. Tools surface as mcp__mako-ai__*.
Claude Code:

```shell
$ claude mcp add mako-ai agentmako mcp
```

Codex CLI (`~/.codex/config.toml`):

```toml
[mcp_servers.mako-ai]
command = "agentmako"
args = ["mcp"]
```

Cursor (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "mako-ai": {
      "command": "agentmako",
      "args": ["mcp"]
    }
  }
}
```

Tools are only as useful as the agent's choice to call them. Drop our CLAUDE.md / AGENTS.md into your project root so the agent knows when to reach for Mako.
## Mako MCP Usage

This repo has `mako-ai` registered in `.mcp.json` as:

```json
{ "mako-ai": { "command": "agentmako", "args": ["mcp"] } }
```

In Claude Code, Mako tools usually appear as `mcp__mako-ai__<toolName>`. The examples below use the bare tool name for readability.

### Operating Model

Mako is a deterministic project context engine, not a replacement for normal coding discipline. Use it to narrow the work: relevant files, symbols, routes, schema objects, findings, freshness, and risks. Then use normal reads, edits, tests, and shell commands to implement and verify.

Prefer Mako before broad grep/file walking when the question is about project structure, cross-file impact, database usage, routing, auth, or known findings. Prefer `live_text_search` or shell `rg` when you need exact current disk text after edits.

Mako has two evidence modes:

- Indexed/Reef evidence: fast and structured, but tied to the last index or persisted fact snapshot.
- Live evidence: current filesystem or live database. Use this when line numbers, edited files, or recently created files matter.

Do not treat answer stability as freshness. A stable indexed answer can still be stale relative to disk. Check `project_index_status`, per-evidence freshness fields, or `live_text_search` before relying on exact lines after edits.
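As a concrete illustration of the staleness check described above, an agent turn might pair an indexed lookup with a freshness probe. The tool name comes from this repo's tool list; the argument and response fields below are hypothetical, not Mako's actual schema:

```json
{
  "call": { "name": "project_index_status", "arguments": {} },
  "result": {
    "indexed_at": "2024-01-01T12:00:00Z",
    "dirty_files": ["src/auth/session.ts"],
    "stale": true
  }
}
```

If `stale` is true for the files in play, fall back to `live_text_search` or shell `rg` before trusting exact line numbers.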
Six clusters cover the whole agent loop — from "what's relevant?" to "did it ship?".
`context_packet` · `reef_scout` · `cross_search` · `imports_impact` · `file_findings` · `verification_state` · `db_table_schema` · `db_rls` · `route_trace` · `auth_path` · `lint_files` · `git_precommit_check`

The most-asked questions from devs first wiring up an MCP server. The full bank lives at the agentmako FAQ.
An open standard from Anthropic that lets AI coding agents call external tools over a stdio interface. An MCP server exposes tools — search, schema, diagnostics — that any compliant client (Claude Code, Codex CLI, Cursor, Cline) can call.
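On the wire, MCP tool calls are JSON-RPC 2.0 messages exchanged over stdio. A `tools/call` request to one of Mako's tools looks roughly like this (the method and envelope follow the MCP spec; the argument names are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "db_table_schema",
    "arguments": { "table": "users" }
  }
}
```

The client handles this framing for you — from the agent's side it is just a tool invocation.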
It has no memory across turns: on every prompt it re-greps and re-reads files it touched 30 seconds ago. Mako indexes the repo once and serves typed context packets on demand.
Free, Apache-2.0, zero telemetry. Everything runs locally; data sits in `.mako-ai/` under each project. Outbound traffic only when you wire a model provider or connect Postgres / Supabase.
RAG fuzzy-matches text chunks. Mako stores a typed graph — files, symbols, routes, schema, RLS — and does deterministic lookups. Lower tokens, higher accuracy when fresh, and freshness itself is labeled.
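The difference is easiest to see in the shape of the evidence: a RAG hit is an untyped text chunk with a similarity score, while a typed lookup returns a structured record with provenance and an explicit freshness label. This is an illustrative sketch, not Mako's actual packet format:

```json
{
  "kind": "route",
  "id": "GET /api/users/:id",
  "handler": "src/routes/users.ts#getUser",
  "rls": ["users_select_own"],
  "freshness": { "indexed_at": "2024-01-01T12:00:00Z", "status": "fresh" }
}
```

A record like this can be cited and staleness-checked per field — something a bag of similarity-ranked chunks cannot offer.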
Tools are only as useful as the agent's choice to call them. Drop the CLAUDE.md / AGENTS.md template into your project root — it tells the agent when to reach for which Mako tool. Without it, agents default to grep.