Cursor argues coding agents work best with explicit plans, tight context management, and verifiable goals (tests/linters). It details how to extend Cursor agents with Rules, Skills, hooks, and parallel worktrees.
What actually happened
Cursor documented internal best practices for using its coding agent effectively.
It frames agent performance as a product of the “agent harness”: instructions, tools, and user messages.
It recommends planning-first workflows (Plan Mode) and restarting from the plan when execution drifts.
It outlines customization via persistent Rules, on-demand Skills, and hooks that automate loops (a minimal hooks config sketch follows this list).
It describes parallel and cloud agent workflows (worktrees, multi-model runs, remote sandboxes + PRs).
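For concreteness, a minimal hooks configuration might look like the sketch below. Only the "version": 1 field comes from the material summarized here; the file location (.cursor/hooks.json), the afterFileEdit event name, and the script path are assumptions for illustration, so verify them against Cursor's hooks documentation.

```json
{
  "version": 1,
  "hooks": {
    "afterFileEdit": [
      { "command": "./.cursor/hooks/format.sh" }
    ]
  }
}
```

The intent is that each event maps to a command the editor runs automatically, which is how hooks close small feedback loops (formatting, audits) without extra prompting.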
Key numbers
Plan Mode toggle: Shift+Tab
Plans can be saved to .cursor/plans/
Rules live in .cursor/rules/
Commands can be stored in .cursor/commands/
Hooks config uses "version": 1
Example agent loop caps iterations at MAX_ITERATIONS = 5 (sketched below)
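To make the MAX_ITERATIONS = 5 figure concrete, here is a minimal sketch of that kind of verification loop: run a verifiable check, hand the failure output back to the agent, and stop after five attempts. This is not Cursor's published script; run_agent is a hypothetical stand-in for however you invoke the agent, and pytest stands in for whatever test or lint command defines "done".

```python
import subprocess

MAX_ITERATIONS = 5  # stop after five agent attempts instead of looping forever


def run_tests() -> subprocess.CompletedProcess:
    # Any verifiable goal works here: a test suite, a linter, a type checker.
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)


def run_agent(prompt: str) -> None:
    # Hypothetical stand-in: replace with your actual agent invocation.
    print(f"[agent] would be asked to fix:\n{prompt[:500]}")


def main() -> None:
    for attempt in range(1, MAX_ITERATIONS + 1):
        result = run_tests()
        if result.returncode == 0:
            print("Tests pass; stopping.")
            return
        print(f"Attempt {attempt}/{MAX_ITERATIONS}: tests failing, asking the agent to fix them.")
        # Feed back only the failure output to keep the context tight.
        run_agent("The test suite is failing. Fix the code.\n\n" + result.stdout[-4000:])
    print("Iteration budget exhausted; revisit the plan instead of patching further.")


if __name__ == "__main__":
    main()
```

The cap matters because an agent that keeps "fixing" a failing suite tends to drift; once the budget is spent, the better move, per the guidance below, is to go back to the plan.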
Why this was hard
Different frontier models prefer different workflows and tools (e.g., grep vs dedicated search).
Long conversations accumulate noise; agents can lose focus after many turns/summarizations.
Irrelevant context (too many tagged files) can confuse what the agent should prioritize.
AI-generated code can look correct while being subtly wrong, increasing review burden.
How they solved it
Use an agent harness that standardizes instructions/tools per supported model.
Start with Plan Mode: the agent researches the codebase, asks clarifying questions, produces a plan with file paths, and waits for approval before implementing.
Store plans as editable Markdown; refine the plan and rerun instead of patching a bad implementation (a minimal plan skeleton follows this list).
Let the agent pull context via grep + semantic search; tag exact files only when known.
Use @Branch for situational context and @Past Chats to selectively import prior work.
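As a concrete illustration of the plan-first workflow, a saved plan can be a small Markdown file like the sketch below. The .cursor/plans/ location comes from the summary above; the file name, feature, and paths inside it are hypothetical.

```markdown
<!-- Hypothetical example: .cursor/plans/add-rate-limiting.md -->
# Plan: add rate limiting to the public API

## Open questions
- Per API key or per client IP?

## Steps
1. Add middleware in src/middleware/rate_limit.ts
2. Wire it into src/server.ts
3. Add tests in tests/rate_limit.test.ts

## Verification
- Test suite passes, including a test for the 429 response
```

Because the plan is plain Markdown, a bad run is fixed by editing these steps and rerunning the agent rather than by patching the generated code.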