agentmako FAQ

agentmako is a local-first MCP server for AI coding agents. Below are common questions about MCP servers, codebase context, agentmako specifics, troubleshooting, and how Mako compares to alternatives. Jump to a section: MCP fundamentals · Codebase context · agentmako specifics · Troubleshooting · Comparison.

MCP fundamentals

What is an MCP server?

MCP (Model Context Protocol) is an open standard introduced by Anthropic in late 2024 that lets AI coding agents connect to external tools and data sources via a standardized stdio interface. An MCP server exposes tools (file search, database queries, schema introspection) that any compliant client (Claude Code, Codex CLI, Cursor, Cline, Continue.dev) can call.

Before MCP, every coding agent had to invent its own context-gathering mechanism. Now a single tool — like agentmako — can serve every agent. You install one server, and all your AI tools get the same enriched view of your codebase.

Full MCP explainer →

What is the Model Context Protocol?

Model Context Protocol (MCP) is an open protocol from Anthropic, released November 2024, that standardizes how AI coding agents call external tools. Locally it runs JSON-RPC over stdin/stdout, so any process that speaks MCP can serve any agent that speaks MCP. The full spec is at spec.modelcontextprotocol.io.

Practically: MCP turns "AI agent + custom tooling" into a plug-and-play ecosystem. You install agentmako once, and Claude Code, Codex, Cursor, and Cline all gain access to its code-intelligence tools without you wiring each one separately.
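The wire format is small enough to sketch. Below is a minimal Python illustration of the JSON-RPC framing a client writes to a server's stdin; tools/list and tools/call are methods from the MCP spec, and context_packet is one of agentmako's tools (the task string is made up):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Frame a JSON-RPC 2.0 message the way MCP clients write to stdin."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover the server's tools, then call one.
list_req = jsonrpc_request(1, "tools/list")
call_req = jsonrpc_request(2, "tools/call", {
    "name": "context_packet",
    "arguments": {"task": "find the auth callback"},
})
```

Every MCP server, agentmako included, answers messages of exactly this shape — which is why one server can serve every client.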

How do I add an MCP server to Claude Code?

Claude Code reads MCP servers from ~/.claude.json or a project-scoped .mcp.json. Two ways:

CLI (fastest):

claude mcp add mako-ai agentmako mcp

Manual JSON:

{
  "mcpServers": {
    "mako-ai": {
      "command": "agentmako",
      "args": ["mcp"]
    }
  }
}

Restart Claude Code, then run /mcp inside the session to confirm. Tools surface as mcp__mako-ai__<toolName>.

Full Claude Code setup →

How do I add an MCP server to Cursor?

Cursor reads MCP servers from ~/.cursor/mcp.json (global) or .cursor/mcp.json (per-project). Add:

{
  "mcpServers": {
    "mako-ai": {
      "command": "agentmako",
      "args": ["mcp"]
    }
  }
}

Open Cursor Settings → Features → MCP to verify the server shows green. Composer (⌘+I) and Agent mode both see the tools after a restart.

For best results, add a Project Rule at .cursor/rules/mako.mdc that instructs Cursor to call mako-ai tools before grep. Full setup: Cursor MCP setup with agentmako.

How do I add an MCP server to Codex CLI?

Codex CLI uses TOML, not JSON. Add to ~/.codex/config.toml:

[mcp_servers.mako-ai]
command = "agentmako"
args = ["mcp"]

Restart any running codex sessions. Verify by asking Codex to list its tools — mako-ai__* should appear. Works with both codex (interactive TUI) and codex exec (one-shot).

Full Codex CLI setup →

How do I add an MCP server to Cline (VS Code)?

Cline is a VS Code extension. Open the Cline panel, click the MCP Servers icon, choose Configure MCP Servers, paste:

{
  "mcpServers": {
    "mako-ai": {
      "command": "agentmako",
      "args": ["mcp"],
      "disabled": false,
      "autoApprove": ["context_packet", "reef_scout", "repo_map", "cross_search"]
    }
  }
}

Save — Cline reloads MCP servers automatically. The autoApprove list lets safe read-only tools run without permission prompts.

Full Cline setup →

Can I run an MCP server locally?

Yes — most MCP servers, including agentmako, run entirely on your machine via stdio. There's no hosted service, no remote API, no auth tokens. Your code never leaves your laptop.

agentmako specifically stores everything in local SQLite under each project's .mako-ai/ directory. The MCP transport is stdin/stdout between your AI coding agent (e.g., Claude Code) and the local agentmako mcp process. Apache-2.0 licensed.

Is MCP free?

The Model Context Protocol is open and free — Anthropic published the spec under MIT. You're free to write or use any MCP server, and most existing ones (including agentmako) are free + open source.

You may still pay for the AI agent itself (Claude Code uses your Anthropic API quota; Cursor has paid plans) but MCP servers themselves are not a separate billing line.

Codebase context for AI agents

Why does my AI coding agent rediscover the codebase every prompt?

Without a persistent context layer, agents have no memory of your project across turns. Each prompt, they start fresh — running grep, walking directories, re-reading files they read 30 seconds ago. This burns tokens (high cost) and loses signal (the agent forgets what it learned).

The fix is a context engine like agentmako that indexes your repo once and serves typed context packets on demand. The agent calls one tool (context_packet) and gets back ranked files, active findings, schema-aware hints, and a "read next" pointer. Subsequent turns build on durable findings stored across sessions.

What is a context packet? →

How do I give my AI coding agent project context?

Three layers, in order of effectiveness:

  1. Project instructions file — CLAUDE.md, AGENTS.md, .cursorrules. Read by the agent on every turn. Use it for "always remember this about my project" rules.
  2. MCP server with codebase intelligence — agentmako indexes your repo, tracks symbols/routes/schema, returns deterministic context. The agent calls context_packet instead of grepping.
  3. Per-prompt context — explicit @file mentions or pasted code. Cheapest but only as good as your guesses about what's relevant.

Combining all three: start every project with a CLAUDE.md template, install agentmako once, and stop pasting code into prompts. See the agentmako CLAUDE.md template.

How do I make Claude Code aware of my Postgres schema?

Out of the box, Claude Code knows nothing about your database. Two options:

  1. Paste schema in CLAUDE.md — works for small projects, gets stale fast.
  2. MCP server with live DB inspection — agentmako connects to Postgres / Supabase and exposes db_table_schema, db_rls, db_rpc tools. Claude Code calls them directly when it needs to know about a table or RLS policy. Always current.

Setup: agentmako connect . (interactive — wires Postgres or Supabase) or agentmako connect . --db-env DATABASE_URL --yes for CI. Full guide: database-aware edits with agentmako.

What's the best way to give an AI coding agent codebase context?

The most effective stack as of 2026:

  1. A CLAUDE.md / AGENTS.md at project root with project rules (auth model, route patterns, schema constraints).
  2. An MCP server like agentmako that indexes the codebase and returns ranked context on demand.
  3. Per-prompt @file mentions only when needed.

This keeps the agent's per-turn token spend low while ensuring it has access to the right context. The CLAUDE.md tells the agent how to think; the MCP server lets the agent look things up without rediscovering everything from scratch.

How do I stop my AI agent from hallucinating routes / files / functions?

Hallucination usually comes from the agent guessing instead of looking. Solutions:

  1. Index your codebase so the agent has a typed graph to query. agentmako's repo_map, cross_search, and route_trace give the agent ground truth instead of letting it pattern-match from memory.
  2. Use freshness-aware tools — agentmako labels evidence as live, fresh_indexed, stale, historical, contradicted, or unknown. The agent won't return a stale answer as a confident one.
  3. Require citations in your CLAUDE.md — instruct the agent to include file:line references for every claim. If it can't cite, it's hallucinating.

How do I make Cursor / Claude Code understand my whole codebase?

There's no magic. You need to give them a query interface to your codebase, not a "complete dump" — codebases are too big for context windows. The right architecture:

  • Index, don't dump. Use a tool like agentmako that builds a typed local index of files, symbols, routes, imports, schema. The agent queries the index per-question.
  • Rank, don't list. Tools like context_packet return ranked candidates — top 5 files, not all 500. Token-efficient.
  • Track changes. When you edit a file, the index needs to know. agentmako handles this via working_tree_overlay (light) or project_index_refresh (full re-index).

The bottom line: AI agents understand a codebase the same way a new engineer does — by querying targeted parts of it, not by reading everything.
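The "index, then rank" idea fits in a few lines of Python. This is a toy scorer over a hypothetical in-memory index — agentmako's real ranking is far more involved — but it shows why the agent gets back five candidates instead of five hundred:

```python
def rank_files(index, query_terms, top_k=5):
    """Score each file by naive term overlap with its indexed symbols.

    `index` maps file path -> set of symbol names. The scorer is a toy
    stand-in for a real relevance model: +1 per query term that appears
    (case-insensitively) in any of the file's symbols.
    """
    scores = {
        path: sum(
            1 for term in query_terms
            if any(term.lower() in sym.lower() for sym in symbols)
        )
        for path, symbols in index.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [p for p in ranked if scores[p] > 0][:top_k]

# Hypothetical three-file index; only the auth file matches both terms.
index = {
    "src/auth/callback.ts": {"handleAuthCallback", "exchangeCode"},
    "src/routes.ts": {"registerRoutes"},
    "src/db/schema.ts": {"users", "sessions"},
}
top = rank_files(index, ["auth", "callback"])
```

Non-matching files never reach the agent's context window, which is where the token savings come from.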

agentmako specifics

What is agentmako?

agentmako is an open-source (Apache-2.0) MCP server that gives AI coding agents typed, deterministic context about your codebase. Instead of letting Claude Code, Codex, Cursor, or Cline grep blind, agentmako indexes your repo into local SQLite and returns ranked "context packets" — the right files, routes, schema info, and prior findings — for any coding task.

Local-first, no hosted service. Works with any MCP-compatible agent. Install with npm install -g agentmako.

Is agentmako free?

Yes. agentmako is open source under Apache-2.0. There's no paid tier and no hosted service. The codebase is at github.com/drhalto/agentmako and the npm package is published as agentmako.

How does agentmako work?

Three-layer architecture:

  1. Indexer — walks your repo, parses TS/JS/TSX/JSX with AST, extracts symbols/routes/imports, snapshots Postgres schema (if connected). Stores everything in local SQLite under each project's .mako-ai/.
  2. Reef Engine — durable fact + finding layer. Tracks freshness, accumulates findings across sessions, labels evidence as live, fresh_indexed, stale, historical, contradicted, or unknown.
  3. MCP server — agentmako mcp exposes 85 typed tools (context_packet, cross_search, route_trace, db_rls, etc.) that any MCP-compliant agent harness can call.

When your AI agent asks "where is the auth callback?", it calls context_packet and gets back: target file + line, related routes, active findings, schema touch points, and a recommended next tool. Deterministic. Roughly 600 tokens vs. 12k for blind grep.
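Illustratively, a packet for that question might look like this — the field names here are a sketch, not agentmako's exact output schema:

```json
{
  "target": { "file": "src/auth/callback.ts", "line": 42 },
  "related_routes": ["GET /auth/callback"],
  "findings": ["token refresh handled separately; see prior session finding"],
  "schema_touchpoints": ["sessions"],
  "next_tool": "route_trace"
}
```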

Does agentmako work offline?

Mostly yes. The indexer and MCP server run locally — no network needed for code intelligence. The exceptions:

  • Database tools (db_table_schema, db_rls) need network to your Postgres / Supabase, of course.
  • npm install -g agentmako itself needs network the first time.

After install, agentmako connect, agentmako mcp, and all read-only tools work offline.

What languages does agentmako support?

Strong support: TypeScript, JavaScript, TSX, JSX. AST tooling, structural search, and import-graph traversal all work natively.

Partial support: Python, Go, Rust (text-based search via cross_search and live_text_search works; AST parsing and the import graph are TS/JS-first).

For schema awareness: PostgreSQL (including Supabase). Other databases are not currently first-class.

What's the difference between agentmako and a regular RAG / vector search?

RAG retrieves text chunks via fuzzy semantic similarity. agentmako stores a typed graph — files, symbols, routes, imports, schema, RLS, findings — and does deterministic lookups. Differences:

  • Storage: RAG uses a hosted vector DB; agentmako uses local SQLite.
  • Index type: RAG is semantic embeddings; agentmako is a typed graph (symbols, routes, imports, schema).
  • Lookup: RAG is fuzzy similarity; agentmako is deterministic (this route → this handler → this file).
  • Freshness: RAG re-embeds everything; agentmako labels per-evidence freshness.
  • Token cost: RAG returns text chunks (high); agentmako returns structured pointers (low).

RAG is good when you have unstructured text. Code is highly structured — file paths, imports, type signatures — and a typed graph beats embeddings on accuracy and token cost.
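The lookup difference in miniature, as toy Python (the route table is hypothetical):

```python
# Hypothetical route table of the kind a typed index stores.
routes = {
    "GET /auth/callback": ("src/auth/callback.ts", "handleAuthCallback"),
    "POST /api/orders": ("src/orders/create.ts", "createOrder"),
}

def trace(route):
    """Deterministic lookup: same question, same answer, or an explicit miss.

    Contrast with embedding similarity, where a near-miss query can still
    retrieve a plausible-looking but wrong chunk.
    """
    return routes.get(route)  # no similarity threshold, no fuzzy matches

hit = trace("GET /auth/callback")
miss = trace("GET /auth/callbak")  # typo -> explicit None, not a guess
```

An explicit miss is a feature: the agent knows to ask again or search live, rather than run with a confident wrong answer.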

Does agentmako support Supabase?

Yes, first-class. agentmako reads your Postgres connection (Supabase or otherwise), snapshots the schema, and exposes:

  • db_table_schema — columns, indexes, FKs, RLS, triggers
  • db_rls — RLS state and policies
  • db_rpc — function signatures and source
  • tenant_leak_audit — multi-tenant RLS posture audit

Connect interactively with agentmako connect . or via --db-env DATABASE_URL --yes in CI. Secrets live in your OS keychain by default.

Does agentmako send my code anywhere?

No. agentmako does not collect telemetry of any kind — no analytics, no usage pings, no error reporting endpoint. Source code, schema snapshots, tool runs, and Reef Engine facts are stored in local SQLite under each project's .mako-ai/ directory.

Outbound network requests only happen when:

  1. You configure a model provider (Anthropic, OpenAI, Ollama, LMStudio, etc.) for the optional harness. The harness then talks to that provider on your behalf.
  2. You explicitly use Supabase or Postgres tooling that connects to your own database.

Database credentials live in your OS keychain via @napi-rs/keyring. Project config files store keychain references, not plaintext URLs. Full policy: PRIVACY.md.

How do I uninstall agentmako?

Remove the binary:

npm uninstall -g agentmako

To remove indexed data, delete the project state directories:

rm -rf ~/.mako-ai
# from each project root:
rm -rf .mako-ai

Your projects themselves are untouched.

To detach a single project without uninstalling: agentmako project detach from the project root.

Troubleshooting

Why does my MCP server fail to start in Claude Code?

Most common causes:

  1. agentmako not on PATH — Claude Code launched before the install finished. Run which agentmako (or where agentmako on Windows). If empty: npm install -g agentmako again, fully quit Claude Code, reopen.
  2. No project attached — agentmako mcp needs a project. Run agentmako connect . from the project directory first.
  3. Permissions — on macOS, Gatekeeper may require you to approve the binary the first time. Run agentmako --version from a terminal once.

Diagnose by running the server manually: agentmako mcp — if it fails standalone, the same error will hit Claude Code.

Why does Cursor not see my MCP tools?

In Cursor Settings → Features → MCP, look for the server. Status meanings:

  • Green — connected, tools available.
  • Red — server failed to start (click for error).
  • Missing — config file isn't being read. Check the path: .cursor/mcp.json (per-project) or ~/.cursor/mcp.json (global).

If green but tools aren't called: Cursor's auto-tool-selection is conservative. Add an explicit Project Rule at .cursor/rules/mako.mdc instructing Cursor to call mako-ai tools before grep.

My agent isn't calling Mako tools — what do I do?

Three layers:

  1. Make sure tools are registered — run /mcp in Claude Code or check Cursor MCP settings. If tools aren't listed, the server isn't connected.
  2. Tell the agent to use them — add agentmako's CLAUDE.md template to your project root. Without explicit instructions, agents default to grep.
  3. Be explicit in prompts — for the first few turns, prompt: "Use the mako-ai context_packet tool first." Once the agent sees the tool's value, it tends to keep reaching for it.

Why is agentmako returning stale results?

The Reef Engine labels every piece of evidence with a freshness tag: live, fresh_indexed, stale, historical, contradicted, or unknown. If you're getting stale answers:

  1. Check project_index_status — is the index dirty?
  2. Use live_text_search instead of cross_search if you need post-edit accuracy.
  3. Run project_index_refresh with mode: "if_stale" to refresh.
  4. Run project_index_refresh with mode: "force" only if the index seems wrong, not just stale.
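The if_stale vs. force distinction in steps 3–4 can be sketched as tool-call arguments (the helper function is illustrative, not part of agentmako):

```python
def refresh_args(force=False):
    """Build arguments for a project_index_refresh tool call.

    "if_stale" re-indexes only when the index is marked dirty; "force"
    rebuilds unconditionally, so reserve it for an index that looks wrong.
    """
    return {
        "name": "project_index_refresh",
        "arguments": {"mode": "force" if force else "if_stale"},
    }
```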

Full freshness model: How Mako tracks freshness.

I'm getting "no project attached" errors

You haven't run agentmako connect . from the project directory. Each project is registered separately so Mako knows which SQLite database to read.

cd /path/to/your/project
agentmako connect . --no-db
agentmako doctor

agentmako doctor confirms attachment.

Comparison & context

agentmako vs. raw grep — what's the difference?

Grep is fine for "find this exact string." agentmako is for "what files matter for this task?"

  • Speed: grep is fast for small repos; agentmako is indexed and fast for any size.
  • Type-aware: grep is not; agentmako is (AST-parsed).
  • Knows imports: grep doesn't; agentmako does (imports_impact).
  • Knows routes: grep doesn't; agentmako does (route_trace).
  • Knows schema: grep doesn't; agentmako does (db_table_schema).
  • Returns ranked context: grep doesn't; agentmako does (context_packet).
  • Tokens per agent turn: grep ~5-15k; agentmako ~600.

agentmako vs. Cursor's built-in indexing?

Cursor has a proprietary embedding-based index for Composer. agentmako is complementary, not competing:

  • Cursor's index — semantic, hosted, optimized for Cursor's UI. You can't inspect it or reuse it elsewhere.
  • agentmako — typed graph, local, MCP-exposed. Same context available in Cursor and Claude Code, Codex, Cline simultaneously. You own the data.

Most users run both. Cursor's UI for code search + agentmako for cross-tool agent context.

Why MCP and not just direct API calls?

Three reasons MCP wins for this use case:

  1. One server, every client. Build agentmako once, it works in Claude Code, Codex, Cursor, Cline, Continue.dev, and everything else with MCP support — no per-client adapters.
  2. The agent picks the tool. With direct APIs, you'd write integration glue per agent. With MCP, the agent itself decides when to call which tool, based on the schemas you expose.
  3. stdio is simple. No auth, no HTTP, no rate limits. The agent spawns the process, talks to it via stdin/stdout, kills it when done.
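That spawn-and-talk loop is small enough to sketch in Python. Here `cat` stands in for a real MCP server so the round-trip is visible; an actual client would spawn `agentmako mcp` and parse real results:

```python
import json
import subprocess

# stdio transport: spawn the server, write one JSON-RPC message per line
# to its stdin, read responses from its stdout. `cat` simply echoes our
# request back, standing in for a real server process.
proc = subprocess.Popen(
    ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
out, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(out)  # here: the echoed request; really: the tool list
```

When the agent exits, the child process goes with it — no sockets to close, no tokens to revoke.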