
Giving Cursor real codebase context: 3 approaches compared

Cursor's built-in index is fine until it isn't. Here's an honest comparison of the three approaches devs use in 2026 — Cursor's own @codebase, Continue.dev's custom providers, and a shared MCP server — with the tradeoffs each one actually makes.

Setup

Cursor, out of the box, indexes your codebase using its own embedding-based system. For most tasks that's fine — Composer searches files, Agent mode walks the tree, you ask questions and get answers.

But Cursor's index is private to Cursor. If you also use Claude Code on the same project, or Codex CLI, or Cline, none of them benefit from what Cursor learned. Each agent rediscovers the codebase independently.

If you've felt that — three different agents, three rediscoveries per task — there are three approaches devs have settled on as of 2026. None are perfect. Here's the honest tradeoff between them.

Approach 1: Lean on Cursor's built-in index

What it is: Composer's @codebase mention. Agent mode's automatic file walking. Cursor decides what's relevant, you trust it.

When this is the right answer:

  • You only use Cursor. Not Claude Code, not Codex, not anything else.
  • Your project is < ~5k files. The built-in index handles small repos fine.
  • You don't need cross-session memory — every conversation starts fresh.

The tradeoffs:

It's a black box. You can't inspect what Cursor indexed or what it's choosing to read. When the agent makes a wrong call, you can't see why.

It's per-Cursor. If you switch to Claude Code on the same repo for a different task, that agent has zero context.

It's tied to hosted infrastructure. Cursor's index lives on their servers, not your machine. If you're on a project where data residency matters, that can be a hard no.

Bottom line: best for solo Cursor users on small-to-medium projects who don't care about portability. Sub-optimal once you use multiple agents or have privacy constraints.

Approach 2: Continue.dev with custom context providers

What it is: Continue.dev is an open-source extension for VS Code and JetBrains that supports custom context providers. You can wire in a vector DB, a custom retrieval function, or a documentation source.

When this is the right answer:

  • You're on Continue.dev specifically (not Cursor).
  • You want to mix multiple context sources: code + docs + Slack history + Notion.
  • You're comfortable wiring config in YAML and managing your own retrieval layer.
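
To make "custom context provider" concrete: in Continue, a provider is a plain TypeScript object with a title and a getContextItems function that returns snippets for the model. The sketch below declares the interfaces inline so it stands alone; the shape follows Continue's documented CustomContextProvider, but treat the field details and the docsProvider itself as illustrative, not a drop-in config.

```typescript
// Minimal sketch of a Continue-style custom context provider.
// Interfaces are declared inline here for self-containment; in a real
// setup they come from Continue's own typings.
interface ContextItem {
  name: string;
  description: string;
  content: string;
}

interface CustomContextProvider {
  title: string;
  displayTitle: string;
  description: string;
  getContextItems(query: string): Promise<ContextItem[]>;
}

// Hypothetical provider that surfaces internal docs for a query.
const docsProvider: CustomContextProvider = {
  title: "internal-docs",
  displayTitle: "Internal Docs",
  description: "Search our internal API documentation",
  getContextItems: async (query: string) => {
    // In practice this would hit a vector DB or a docs API;
    // here it returns a canned item for illustration.
    return [
      {
        name: `docs: ${query}`,
        description: "Matched documentation section",
        content: `## ${query}\n\n(retrieved documentation text)`,
      },
    ];
  },
};
```

The point is that the retrieval layer is yours: anything you can fetch in that function (code, docs, Slack, Notion) becomes context the model can use.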

The tradeoffs:

Setup is steeper. Custom context providers require config and often code. Compared to "install one binary," it's more work.

Continue.dev itself is great but doesn't run inside Cursor. If you're on Cursor, this isn't your tool.

The context providers are largely RAG-style (semantic similarity). For prose-heavy sources that's fine. For code, embedding-based retrieval underperforms graph-based retrieval — code structure is too important to flatten into vectors.

Bottom line: best for Continue.dev users who want flexibility. Not a Cursor solution.

Approach 3: An MCP server for code intelligence

What it is: A separate process that exposes typed tools to whatever agent you're using, via the Model Context Protocol. Cursor, Claude Code, Codex CLI, and Cline all support MCP. The same server feeds all four.
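
Under the hood, MCP is JSON-RPC 2.0 over stdio or HTTP: a "typed tool" call is a tools/call request naming the tool and its arguments. Here's a sketch of the envelope a client sends; the jsonrpc/method/params shape comes from the MCP spec, while the specific tool name and arguments are illustrative.

```typescript
// Sketch of the JSON-RPC 2.0 envelope an MCP client sends for a tool call.
// The envelope shape follows the MCP spec; the tool name and arguments
// below are illustrative, not any particular server's API.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

const req = buildToolCall(1, "context_packet", {
  query: "trace manager dashboard role check failure",
});
```

Because every MCP-capable agent speaks this same protocol, one server really can feed Cursor, Claude Code, Codex CLI, and Cline without per-tool glue.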

For codebase context specifically, the MCP server you want is one that indexes code as a graph (files → symbols → routes → imports → schema), not as embeddings. Several exist; the one I'll walk through here is agentmako because that's what I've been using and have data on.
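
To make "index as a graph, not as embeddings" concrete, here's a toy version of the idea (an illustration, not agentmako's actual data model): typed nodes for files, symbols, routes, and tables, with typed edges, so a question like "what touches the user_roles table?" is a deterministic traversal rather than a similarity search.

```typescript
// Toy code graph: typed nodes and edges, queried by traversal.
// This illustrates the graph-index idea; it is not agentmako's internals.
type NodeKind = "file" | "symbol" | "route" | "table";

interface GraphNode {
  id: string;
  kind: NodeKind;
}

interface Edge {
  from: string;
  to: string;
  rel: "defines" | "imports" | "serves" | "queries";
}

class CodeGraph {
  private nodes = new Map<string, GraphNode>();
  private edges: Edge[] = [];

  addNode(id: string, kind: NodeKind) {
    this.nodes.set(id, { id, kind });
  }

  addEdge(from: string, to: string, rel: Edge["rel"]) {
    this.edges.push({ from, to, rel });
  }

  // Deterministic: the same query returns the same answer every time.
  dependentsOf(id: string): string[] {
    return this.edges
      .filter((e) => e.to === id)
      .map((e) => e.from)
      .sort();
  }
}

const g = new CodeGraph();
g.addNode("app/dashboard/layout.tsx", "file");
g.addNode("user_roles", "table");
g.addNode("/dashboard", "route");
g.addEdge("app/dashboard/layout.tsx", "user_roles", "queries");
g.addEdge("app/dashboard/layout.tsx", "/dashboard", "serves");
```

A vector index can only say "these files look similar to your query"; the graph can say "this file queries that table and serves that route," which is the kind of fact a coding agent actually needs.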

When this is the right answer:

  • You use Cursor + at least one other agent (Claude Code, Codex, Cline).
  • You care about determinism — same query, same answer, every time.
  • You have a Postgres / Supabase database and want the agent to know about it.
  • You want context that persists across sessions.

The tradeoffs:

Adds a dependency. You install one more thing (npm install -g agentmako). For most devs that's fine; for some it's friction.

Cursor's built-in @codebase is still better for some queries — exact-string searches inside Cursor's own UI feel natural. The MCP server doesn't replace it; it complements it.

If you don't write CLAUDE.md / .cursorrules instructions to tell the agent when to call MCP tools, the agent will default to grep. The tool is only as useful as your prompt discipline.

Bottom line: best for cross-tool dev workflows. The setup is one-time; the payoff is every session afterward.

What the MCP-based setup actually looks like in Cursor

If you want to try the MCP-based approach with Cursor specifically, here's the 5-minute setup.

1. Install agentmako:

npm install -g agentmako

2. Attach your project:

cd /path/to/your/project
agentmako connect . --no-db
agentmako doctor

agentmako doctor should show all green. The first connect indexes your repo (10–30 seconds).

3. Wire MCP into Cursor.

Per-project (recommended) — create .cursor/mcp.json at your project root:

{
  "mcpServers": {
    "mako-ai": {
      "command": "agentmako",
      "args": ["mcp"]
    }
  }
}

Or global at ~/.cursor/mcp.json if you want Mako everywhere.

4. Verify in Cursor.

Open Cursor Settings → Features → MCP. You should see mako-ai with a green dot. If red, click for the error.

5. (Critical) Add a Project Rule.

Create .cursor/rules/mako.mdc:

---
description: Use agentmako before grep
alwaysApply: true
---

When the user asks a question that requires locating code in this
repo, call the mako-ai server's context_packet or reef_scout tool
BEFORE issuing any grep / file reads.

For natural-language repo questions, prefer reef_scout. For concrete
coding tasks, prefer context_packet.

Only fall back to grep if Mako returns no candidates.

Without this rule, Cursor will default to its own search. The rule tells it to consult Mako first.

What you'll see in practice

I ran the same task on the same Next.js + Supabase repo using each approach. The task: "Trace why the manager dashboard layout fails to load for users in the 'instructor' role."

With Cursor's built-in @codebase:

  • Agent searches for "manager", "dashboard", "instructor"
  • Reads 5 files, three of them mostly irrelevant
  • Eventually finds the role check
  • Total tokens: ~9,200
  • Time: ~45 seconds

With Continue.dev + a vector DB context provider:

  • Agent gets back ~6 candidate files from semantic search
  • Reads the top 3
  • Finds the role check
  • Total tokens: ~5,800
  • Time: ~30 seconds
  • (Setup time before this run: ~2 hours configuring providers)

With agentmako + Cursor:

  • Agent calls context_packet "trace manager dashboard role check failure"
  • Gets back: target file (the layout), routes hit, related schema (users, user_roles), and a finding saved from an earlier session noting that role checks were brittle here
  • Agent reads 1 file, makes the fix
  • Total tokens: ~1,400
  • Time: ~12 seconds

The MCP approach won by 4–7x on tokens and 2.5–4x on time on this task. It's not always that dramatic — for tasks that genuinely require broad search, the gap narrows. But for the common "find the right code and edit it" task, the difference is real.
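
Those multipliers come straight from dividing the run numbers above; a quick sanity check:

```typescript
// Token and time ratios from the three runs above (numbers from this post).
const runs = {
  cursorBuiltin: { tokens: 9200, seconds: 45 },
  continueRag: { tokens: 5800, seconds: 30 },
  mcpGraph: { tokens: 1400, seconds: 12 },
};

// Ratio versus the MCP run, rounded to one decimal place.
const tokenRatio = (run: { tokens: number }) =>
  Math.round((run.tokens / runs.mcpGraph.tokens) * 10) / 10;

const timeRatio = (run: { seconds: number }) =>
  Math.round((run.seconds / runs.mcpGraph.seconds) * 10) / 10;

// vs Cursor built-in: 6.6x tokens, 3.8x time
// vs Continue + RAG: 4.1x tokens, 2.5x time
```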

Honest where each approach loses

Cursor built-in loses when your codebase is large or you use multiple agents. It also loses on tasks that need DB schema awareness — Cursor doesn't know about your tables.

Continue.dev loses if you're not already on Continue.dev. Adopting a whole new extension just for its context layer isn't worth the setup overhead.

MCP-based (agentmako) loses when:

  • You need Cursor-UI-specific features (Cursor's diff view, Composer's inline edits). The MCP server is a backend; it doesn't change the UI.
  • The query is genuinely about exact-string occurrences (find every console.log in TypeScript files). For that, ripgrep or Cursor's text search wins.
  • Your project is brand new and tiny (< 100 files). The indexing overhead isn't worth it for a project you can hold in your head.

Recommendation

If you're on Cursor and only Cursor: built-in is fine until it isn't. You'll know when. Token costs spiking, agent making confident wrong calls, multiple-second pauses while it grep-walks.

If you also use Claude Code, Codex, or Cline: install an MCP server. Same setup serves all of them. The agentmako-style typed-graph approach beats embedding-based RAG for code, by a lot, in my testing.

If you're a Continue.dev shop already: stick with Continue.dev. Its custom context provider system is genuinely flexible. agentmako + Continue is also a valid stack — Continue can call MCP servers too — but the marginal value over Continue's native providers is smaller than over Cursor's.

The takeaway

The TL;DR for Cursor users: if you're spending 5+ seconds per prompt watching Cursor walk files, that's the rediscovery tax. There's a fix, and it doesn't require leaving Cursor — it just lives one layer deeper.

Want this for your codebase?

agentmako is local-first, Apache-2.0, and works with every MCP-compatible coding agent.

Read the docs →