openclaw-cursor-cli
An OpenClaw plugin that adds a cursor-cli CLI backend and provider, so OpenClaw can route model calls through the local cursor-agent binary (using your Cursor subscription).
Pattern: structurally identical to OpenClaw's built-in claude-cli backend — a thin wrapper that spawns the Cursor IDE's official CLI in headless -p --output-format stream-json mode and pipes the result back into OpenClaw.
Highlights
- All Cursor models (`auto`, `composer-2`, GPT-5.x, Claude 4.5/4.6/4.7, Gemini 3.x, Grok, Kimi, …) usable via `cursor-cli/<base>`.
- OpenClaw's unified `--thinking off|low|medium|high|xhigh|max` flag is translated at runtime to the corresponding Cursor model id (`-thinking-high`, `-extra-high`, etc.). The catalog stays small (~13 user-facing base ids); per-effort variants are derived.
- Model catalog and thinking-level rules are fetched live from `cursor-agent models` and cached — no hardcoded lists to drift when Cursor adds models.
- A single `/cursor-models refresh` slash command (in supported channels) or `bash scripts/refresh-models.sh` (shell) to re-sync after Cursor updates.
Requirements
- OpenClaw >= 2026.5.7
- `cursor-agent` installed on PATH and logged in:

cursor-agent login          # opens browser OAuth, once
cursor-agent status         # should show ✓ Logged in as ...
cursor-agent -p "say hi"    # smoke test — must return text
Install
Via ClawHub (recommended)
openclaw plugins install clawhub:@jeehou/openclaw-cursor-cli
From a local checkout (development)
git clone https://github.com/jeehou/openclaw-cursor-cli.git
openclaw plugins install ./openclaw-cursor-cli
Post-install (both routes)
# 1. Verify it loaded cleanly. Your existing default model is untouched.
openclaw plugins doctor # "No plugin issues detected."
openclaw plugins inspect cursor-cli # cli-backend + text-inference both registered
openclaw models # Default unchanged
# 2. Fetch the live Cursor catalog and family rules.
bash ~/.openclaw/extensions/cursor-cli/scripts/refresh-models.sh
# 3. Allowlist the families you want to use (one entry per cursor-cli/<base>).
# OpenClaw's agent requires explicit allowlist entries in agents.defaults.models.
openclaw config set 'agents.defaults.models["cursor-cli/auto"]' '{}'
openclaw config set 'agents.defaults.models["cursor-cli/claude-opus-4-7"]' '{}'
openclaw config set 'agents.defaults.models["cursor-cli/gpt-5.5"]' '{}'
openclaw config set 'agents.defaults.models["cursor-cli/claude-4.6-sonnet"]' '{}'
# (Use bracket notation for ids containing dots, otherwise `.` is parsed as path separator.)
After step 2, `openclaw models list --provider cursor-cli` shows every family with the correct context window (1024k / 256k / 195k).
Usage
Per-call (recommended — zero default change)
openclaw agent --local --agent main \
--model cursor-cli/claude-opus-4-7 \
--thinking high \
--message "design a worker pool" \
--json
The --thinking flag is automatically translated:
| --thinking | claude-opus-4-7 → cursor model id |
|---|---|
| off | claude-opus-4-7-medium |
| low | claude-opus-4-7-thinking-low |
| medium | claude-opus-4-7-thinking-medium |
| high | claude-opus-4-7-thinking-high |
| xhigh | claude-opus-4-7-thinking-xhigh |
| max | claude-opus-4-7-thinking-max |
(The mapping for every family is in `~/.openclaw/extensions/cursor-cli/model-rules.json` after a refresh. View it with `bash scripts/refresh-models.sh --dry-run` if curious.)
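A minimal sketch of the lookup behind this translation, assuming the cache shape `rules.families[family][level]` shown in the Architecture section (the inline `rules` object here is illustrative, not the real cache file):

```javascript
// Illustrative subset of the cache normally read from
// ~/.openclaw/extensions/cursor-cli/model-rules.json.
const rules = {
  families: {
    "claude-opus-4-7": {
      off: "claude-opus-4-7-medium",
      low: "claude-opus-4-7-thinking-low",
      medium: "claude-opus-4-7-thinking-medium",
      high: "claude-opus-4-7-thinking-high",
      xhigh: "claude-opus-4-7-thinking-xhigh",
      max: "claude-opus-4-7-thinking-max",
    },
  },
};

// Resolve an OpenClaw model ref plus a --thinking level to a concrete cursor id.
function resolveCursorModel(rules, modelRef, thinking) {
  const family = modelRef.replace(/^cursor-cli\//, "");
  const levels = rules.families[family];
  if (!levels) return family;        // unknown family: pass the base id through
  return levels[thinking] ?? family; // unknown level: fall back to the base id
}
```

For example, `resolveCursorModel(rules, "cursor-cli/claude-opus-4-7", "high")` yields `"claude-opus-4-7-thinking-high"`, matching the table above.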
Make Cursor your default
openclaw models set cursor-cli/claude-opus-4-7
# revert any time:
openclaw models set deepseek/ali-deepseek-v4-pro
Refreshing when Cursor adds new models
Whenever Cursor ships new model ids, run one of:
Shell:
bash ~/.openclaw/extensions/cursor-cli/scripts/refresh-models.sh
Chat slash command (in channels that support plugin slash commands — telegram / discord / feishu / mattermost):
/cursor-models refresh
Both routes:
- Run `cursor-agent models` and parse all 100+ Cursor model ids.
- Group ids into families (auto, gpt-5.5, claude-opus-4-7, …) and infer per-family suffix patterns automatically.
- Build a thinking-level → real cursor id mapping for each family.
- Persist the catalog to `models.providers.cursor-cli` (your OpenClaw config).
- Persist the family-rules cache to `~/.openclaw/extensions/cursor-cli/model-rules.json` (read by the plugin's `resolveExecutionArgs` hook at runtime).
Idempotent — re-running just updates the same files. OpenClaw backs up openclaw.json to .bak before every config write.
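The grouping step can be sketched as follows. The suffix list is an assumption pieced together from the variants mentioned in this README, not Cursor's actual output, and the real refresh script may infer suffixes differently:

```javascript
// Known per-effort suffixes, longest first so "-thinking-high" wins over "-high".
const EFFORT_SUFFIXES = [
  "-thinking-xhigh", "-thinking-max", "-thinking-high",
  "-thinking-medium", "-thinking-low", "-extra-high",
  "-thinking", "-high", "-medium", "-low",
];

// Group raw cursor model ids into { familyBaseId: [variantSuffix, ...] }.
// The empty suffix "" marks the bare base id.
function groupFamilies(ids) {
  const families = {};
  for (const id of ids) {
    const suffix = EFFORT_SUFFIXES.find((s) => id.endsWith(s)) ?? "";
    const base = suffix ? id.slice(0, -suffix.length) : id;
    (families[base] ??= new Set()).add(suffix);
  }
  return Object.fromEntries(
    Object.entries(families).map(([base, set]) => [base, [...set].sort()])
  );
}
```

With input like `["gpt-5.5", "gpt-5.5-high", "claude-opus-4-7-thinking-high", "auto"]` this yields three families, which is how 100+ raw ids collapse into ~13 user-facing base ids.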
Other /cursor-models sub-commands (slash only)
| Sub-command | Description |
|---|---|
| `/cursor-models` or `/cursor-models status` | Show cache file path, age, family count, cursor id count. |
| `/cursor-models refresh` | Re-fetch and rebuild as described above. |
| `/cursor-models list` | Print every cached family with its off and high mapping. |
| `/cursor-models help` | Show usage. |
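A hypothetical sketch of the dispatch behind these sub-commands; only the sub-command set comes from the table above, while the handler names and calling convention are assumptions, not OpenClaw's actual `registerCommand` API:

```javascript
// Route "/cursor-models <sub>" to a handler. No argument means "status";
// unknown sub-commands fall through to "help", matching the table above.
function dispatchCursorModels(args, handlers) {
  const sub = args[0] ?? "status";
  switch (sub) {
    case "status":  return handlers.status();  // cache path, age, counts
    case "refresh": return handlers.refresh(); // re-fetch and rebuild
    case "list":    return handlers.list();    // families with off/high mapping
    case "help":
    default:        return handlers.help();    // usage text
  }
}
```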
Plugin config (optional)
In ~/.openclaw/openclaw.json:
{
plugins: {
entries: {
"cursor-cli": {
enabled: true,
config: {
// Override the cursor-agent binary path (default: "cursor-agent" on PATH)
command: "/home/joe.hou/.local/bin/cursor-agent",
// Extra args appended to every cursor-agent invocation
extraArgs: []
}
}
}
}
}
Uninstall / Rollback
# Light: disable, keep installed
openclaw plugins disable cursor-cli
# Full uninstall
openclaw plugins uninstall cursor-cli
# Also drop the provider config + rules cache the plugin wrote:
openclaw config unset 'models.providers.cursor-cli'
rm -f ~/.openclaw/extensions/cursor-cli/model-rules.json
Other providers (deepseek, modelhub, openai-codex, …) are never touched by any of this.
Architecture
openclaw agent --local --model cursor-cli/claude-opus-4-7 --thinking high --message "…"
↓
OpenClaw resolves model ref → provider=cursor-cli
↓
Routes to CLI backend "cursor-cli" (registered by this plugin)
↓
plugin.resolveExecutionArgs reads ~/.openclaw/extensions/cursor-cli/model-rules.json,
looks up rules.families["claude-opus-4-7"]["high"] → "claude-opus-4-7-thinking-high"
↓
Spawns: cursor-agent -p --output-format stream-json --stream-partial-output \
--force --trust [--resume <prev-id>] "<prompt>" --model claude-opus-4-7-thinking-high
↓
Parses NDJSON stream via OpenClaw's claude-stream-json dialect:
{type:"system",subtype:"init",session_id} → session bind
{type:"thinking",subtype:"delta",text} → (dropped — Claude dialect doesn't know)
{type:"assistant",message:{content:[{text}]}} → assistant text
{type:"result",subtype:"success",result,...} → finalize
↓
Returns to OpenClaw as a normal assistant turn (session_id persisted for resume)
No HTTP, no OAuth, no protobuf reverse engineering. All auth, rate limiting, and billing remain with the official cursor-agent CLI.
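A minimal sketch of how the four event types above can be folded into a turn. The event shapes are taken from the diagram; the reducer itself is illustrative, not OpenClaw's actual `claude-stream-json` dialect implementation:

```javascript
// Fold one parsed NDJSON event into an accumulating turn state.
function reduceEvent(turn, ev) {
  switch (ev.type) {
    case "system":
      if (ev.subtype === "init") turn.sessionId = ev.session_id; // session bind
      return turn;
    case "thinking":
      return turn; // dropped: the Claude dialect doesn't know this event type
    case "assistant":
      for (const part of ev.message?.content ?? [])
        if (part.text) turn.text += part.text; // assistant text
      return turn;
    case "result":
      if (ev.subtype === "success") turn.done = true; // finalize
      return turn;
    default:
      return turn;
  }
}

// Drive the reducer over a whole NDJSON stream (one JSON object per line).
function parseStream(ndjson) {
  const turn = { sessionId: null, text: "", done: false };
  for (const line of ndjson.split("\n").filter(Boolean))
    reduceEvent(turn, JSON.parse(line));
  return turn;
}
```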
File layout
openclaw-cursor-cli/
├── package.json
├── openclaw.plugin.json # manifest (id, providers, cliBackends, commandAliases)
├── index.js # definePluginEntry: registerCliBackend + registerProvider + registerCommand
├── src/
│ └── refresh-models.mjs # parse cursor-agent → infer family rules → write user config + cache
├── scripts/
│ └── refresh-models.sh # shell entry point (used by install + manual refresh)
└── README.md
Limitations
- Streaming: uses `--output-format stream-json --stream-partial-output` and the `claude-stream-json` jsonl dialect (Cursor's NDJSON is event-compatible with Claude Code's). Multi-turn session resume via `--resume <id>` works.
- `thinking` events from Cursor are silently dropped by the `claude-stream-json` parser (Claude's dialect doesn't know that event type). Final `assistant` and `result` lines are parsed correctly — only intermediate reasoning text is invisible.
- Token usage counters are not surfaced into OpenClaw's `agentMeta.usage` under stream-json. Switch `output: "jsonl"` → `"json"` in `index.js` if you need accurate accounting.
- `cursor-agent` runs in `--force --trust` headless mode, which lets it execute its own tools (write, shell) inside the current workspace. This mirrors `claude-cli` behavior — only use it in directories you trust.
- Per-`-fast` variants (e.g. `gpt-5.5-high-fast`) are not exposed in the catalog yet. Add a separate `--fast` flag axis if needed.
- External plugins can't register top-level `openclaw <cmd>` CLI subcommands in OpenClaw 2026.5.7 — that's why refreshing from the shell goes through `scripts/refresh-models.sh` rather than e.g. `openclaw cursor-models refresh`. The `/cursor-models` slash command is the in-chat equivalent.