# Thalamus (plugin shim)
This is the OpenClaw plugin shim for Thalamus. It is small on purpose: it declares the plugin manifest, validates the config stanza, and exists so OpenClaw can wire Thalamus through its plugin system. The actual work, the MCP server and the cognitive routing layer, lives in the separate `openclaw-thalamus` npm package, which is declared as a dependency here and is installed automatically when you install this plugin.

If you only want the MCP server and do not care about the OpenClaw plugin manifest, install `openclaw-thalamus` directly via npm and skip this package.
## What gets installed
```
clawhub:openclaw-thalamus-plugin        ← this package (the shim)
└── npm dep: openclaw-thalamus ^1.0.1   ← the MCP server, CLI, dashboard
```
## Install (recommended)

```shell
openclaw plugins install clawhub:openclaw-thalamus-plugin
```
OpenClaw fetches the shim, then runs `npm install` for the declared dependency, which pulls `openclaw-thalamus@^1.0.1` from npm. After the install, you have three entry points:

- `openclaw-thalamus`: the CLI (`health`, `context`, `benchmark` subcommands)
- `openclaw-thalamus-mcp`: the MCP server binary
- `openclaw-thalamus-dashboard`: the optional HTTP dashboard
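To confirm the install landed, you can check that the three entry points resolve on your `PATH`. A quick shell sketch (exact locations depend on your npm prefix):

```shell
# Report which of the three openclaw-thalamus entry points are on PATH.
for bin in openclaw-thalamus openclaw-thalamus-mcp openclaw-thalamus-dashboard; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "found: $bin"
  else
    echo "missing: $bin (check your npm global bin directory)"
  fi
done
```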
## Wire the MCP server
Add the following to `~/.openclaw/openclaw.json` (merging with any keys already present):

```json
{
  "mcpServers": [
    { "name": "thalamus", "command": "openclaw-thalamus-mcp" }
  ]
}
```
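If your `openclaw.json` already has other keys, blindly appending text will corrupt the JSON. One way to merge the entry safely is with `jq` (an illustration, not part of the plugin; requires `jq` on your PATH):

```shell
# Append a thalamus entry to mcpServers, creating the array if absent,
# and leave all other keys in the config untouched.
cfg="$HOME/.openclaw/openclaw.json"
jq '.mcpServers = ((.mcpServers // []) + [{"name": "thalamus", "command": "openclaw-thalamus-mcp"}])' \
  "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
```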
Restart the gateway:

```shell
systemctl --user restart openclaw-gateway
```
The MCP tools `thalamus_route`, `thalamus_resolve`, `thalamus_search`, `thalamus_search_with_vector`, `thalamus_promote_packet`, and `thalamus_telemetry` should now appear in the OpenClaw tool registry.
## Plugin config

The `openclaw.plugin.json` schema accepts the following config under `plugins.entries.openclaw-thalamus-plugin.config`:
| Field | Default | Purpose |
|---|---|---|
| `encoderHost` | `127.0.0.1` | Host of the local Qwen3 encoder daemon, if running |
| `encoderPort` | `28760` | Port of the encoder daemon |
| `vectorStorePath` | `~/.openclaw/state/thalamus/vectors` | FAISS + BBQ codebook directory |
| `packetStorePath` | `~/.openclaw/state/thalamus/packets` | Content-addressed packet store |
| `dashboardEnabled` | `false` | Enable the HTTP telemetry dashboard |
| `dashboardPort` | `28761` | Dashboard HTTP port |
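Putting the table together, a full stanza might look like the sketch below. The nesting follows the `plugins.entries.openclaw-thalamus-plugin.config` path given above; every value shown is the default from the table, so in practice you would include only the fields you change:

```json
{
  "plugins": {
    "entries": {
      "openclaw-thalamus-plugin": {
        "config": {
          "encoderHost": "127.0.0.1",
          "encoderPort": 28760,
          "vectorStorePath": "~/.openclaw/state/thalamus/vectors",
          "packetStorePath": "~/.openclaw/state/thalamus/packets",
          "dashboardEnabled": false,
          "dashboardPort": 28761
        }
      }
    }
  }
}
```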
The shim itself does not start any processes. OpenClaw spawns the MCP server based on the `mcpServers` entry in `openclaw.json`. The encoder daemon (CPU Qwen3 via llama.cpp) is started separately if you want it; see the upstream `openclaw-thalamus` README for `scripts/install-encoder.sh`.
## Honest measurements
The directly measured numbers from a Pi 5 production run live in `BENCHMARKS.md` in the upstream repo:
- Protocol-level token compression (the @-code layer): 19.1% reduction on Captain spawn context (`spawn_context_tokens` 68 to `compact_context_tokens` 55, three rows in `run_telemetry.jsonl`).
- BBQ codebook on 99,823 vectors: mean cosine 0.978, p10 0.985, p50 0.999.
- Qwen3-Embedding-0.6B Q4_0 GGUF warm latency: about 167 ms p50 on Pi 5 CPU.
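The 19.1% figure in the compression bullet is just the ratio of the two token counts (68 down to 55), which a one-liner verifies:

```shell
# (68 - 55) / 68 = 0.191..., i.e. a 19.1% reduction
awk 'BEGIN { printf "%.1f%%\n", (68 - 55) / 68 * 100 }'
```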
The combined "packet handoff plus protocol" figure quoted in earlier README drafts is a single-machine directional signal, not a benchmark: there was no side-by-side run against a naive transcript-paste baseline. Take that one with a grain of salt; the 19.1% figure has a real before-and-after on the same workload.
## License

MIT. Source code for both the shim and the MCP server lives in the same upstream repo: https://github.com/msbel5/openclaw-thalamus