@lumin-io

OpenClaw Diagnostics

Full-fidelity Lumin observability for OpenClaw — captures prompts, responses, tool I/O, and reasoning traces via OpenClaw's typed-hook API.

Current version: v0.1.2
Tags: code plugin · community · source-linked

@lumin-io/openclaw-diagnostics

Full-fidelity Lumin observability for OpenClaw — every prompt, response, tool I/O, and reasoning trace, captured per turn.

License: Apache 2.0

OpenClaw ships native OpenTelemetry through the bundled diagnostics-otel plugin. That gets you provider, model, token counts, and span structure — but not the actual prompt or reply text. The plugin exposes a captureContent flag, but as of OpenClaw 2026.5.x the runtime never populates the underlying event fields the exporter tries to read, so the flag is effectively a no-op.

This plugin uses a different surface — OpenClaw's typed-hook API (api.on('llm_input', …) / api.on('llm_output', …)) — which does carry full content at runtime. On every turn it builds a Lumin SpanInput and POSTs it to your local Lumin instance. The agent never blocks on Lumin; failures are swallowed with a short timeout.
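In rough outline, the wiring looks like the sketch below. The event field names (runId, text, model) and the SpanInput shape are assumptions for illustration, not the plugin's actual source:

```typescript
// Hypothetical sketch of the typed-hook wiring described above.
type LlmEvent = { runId: string; model?: string; text?: string };

function toSpanInput(event: LlmEvent, project: string) {
  return {
    // Trace-ID derivation simplified here; the real plugin derives a
    // deterministic ID from runId (see Caveats below).
    trace_id: event.runId,
    input: event.text ?? "",
    metadata: { openclaw: { model: event.model ?? "unknown" }, project },
  };
}

function register(api: { on(name: string, cb: (e: LlmEvent) => void): void }) {
  api.on("llm_input", (event) => {
    const span = toSpanInput(event, "openclaw");
    // Fire-and-forget: the agent never waits on Lumin, and failures are
    // swallowed after a short timeout.
    fetch("http://localhost:8000/v1/spans", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Lumin-Project": "openclaw",
      },
      body: JSON.stringify(span),
      signal: AbortSignal.timeout(5000),
    }).catch(() => {
      /* Lumin unreachable: drop the span silently */
    });
  });
}
```

The key property is that the POST is never awaited on the agent's turn path, so a slow or absent Lumin instance costs nothing.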

What you get

Per turn:

  • Prompt text (just this turn's user message — not the whole replayed history)
  • Assistant reply text
  • Reasoning trace for thinking-emitting models (gpt-oss, o-series, Claude with extended thinking) — surfaced under metadata.openclaw.content.thinking
  • Token usage (input / output)
  • Model + provider + harness ID
  • Trace ID stitched from OpenClaw's diagnostic context (so the typed-hook span fuses with whatever else you've ingested via OTel for the same run)
  • Lightweight summary in metadata: history_message_count, system_prompt_chars, images_count
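Concretely, a single emitted span might look roughly like this; every field name and value here is illustrative, reconstructed from the list above rather than taken from the plugin's actual schema:

```json
{
  "trace_id": "derived-from-runId",
  "name": "openclaw.turn",
  "input": "user prompt text for this turn",
  "output": "assistant reply text",
  "metadata": {
    "openclaw": {
      "model": "gpt-oss-120b",
      "provider": "openai",
      "content": { "thinking": "reasoning trace, if the model emits one" },
      "history_message_count": 12,
      "system_prompt_chars": 4096,
      "images_count": 0
    }
  },
  "usage": { "input_tokens": 512, "output_tokens": 128 }
}
```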

Installation

openclaw plugins install @lumin-io/openclaw-diagnostics

Then enable conversation access for the plugin in your ~/.openclaw/openclaw.json — non-bundled plugins are conversation-gated by default, so without this the typed hooks register silently and you'll see nothing:

{
  "plugins": {
    "entries": {
      "lumin-diagnostics": {
        "hooks": { "allowConversationAccess": true }
      }
    }
  }
}

Restart the gateway:

openclaw daemon restart

You should see this line in the gateway log on startup:

lumin-diagnostics: subscribed to llm_input + llm_output → http://localhost:8000/v1/spans (project=openclaw)

Configuration

All optional, set under plugins.entries.lumin-diagnostics.config in openclaw.json:

  • host (default: http://localhost:8000, or the LUMIN_HOST env var): Lumin API base URL.
  • project (default: openclaw): Sent as X-Lumin-Project so the agent grid groups OpenClaw runs together.
  • captureSystemPrompt (default: false): Whether to write the full system prompt to metadata.openclaw.content.system_prompt. Off by default — system prompts are often large and rarely actionable. The character count is captured either way.
  • maxContentChars (default: 32768): Per-attribute content cap. Truncated values are tagged …(truncated).
  • timeoutMs (default: 5000): HTTP timeout for the POST to Lumin.

Example:

{
  "plugins": {
    "entries": {
      "lumin-diagnostics": {
        "hooks": { "allowConversationAccess": true },
        "config": {
          "host": "http://my-lumin-host:8000",
          "captureSystemPrompt": true,
          "maxContentChars": 65536
        }
      }
    }
  }
}
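The maxContentChars cap described above can be sketched as a one-liner; the helper name and exact suffix handling are assumptions based on the configuration table, not the plugin's source:

```typescript
// Illustrative content cap: values longer than maxChars are cut and tagged
// with the "…(truncated)" suffix described in the configuration table.
function capContent(value: string, maxChars: number = 32768): string {
  if (value.length <= maxChars) return value;
  return value.slice(0, maxChars) + "…(truncated)";
}
```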

Why a plugin instead of just OTel?

Two reasons:

  1. Content fidelity. The OTel exporter ships fine for structure + sizes but doesn't get prompts or replies through (see the upstream bug note above). The typed-hook API does, and stays compatible across OpenClaw releases.
  2. Composability. This plugin runs alongside diagnostics-otel — both can be enabled at the same time. OTel ships your spans to Honeycomb / Datadog / etc. with structure + sizes; this plugin ships full-content spans to your local Lumin for debugging. They don't conflict.

Compatibility

  • OpenClaw: ≥ 2026.4.25 (uses the typed-hook API surface). On older releases that predate api.on(...), the plugin logs a single warn line and does nothing; no errors are thrown.
  • Lumin API: any version with POST /v1/spans — i.e. all current versions.

Caveats

  • Trace ID stitching. OpenClaw's typed hooks and its OTel exporter sometimes run under different runWithDiagnosticTraceContext envelopes, so the trace IDs Lumin sees from the two rails may not match. The plugin always emits a deterministic trace ID derived from the OpenClaw runId, so two ingests of the same run idempotently land on the same trace.
  • History is summarized, not embedded. Each turn captures only the current user prompt — the full conversation history (which OpenClaw replays to the model on every turn) is referenced by count, not embedded, so trace size doesn't grow linearly with conversation length. If you need the full conversation, use Lumin's /sessions view, which already groups turns by session_id.
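The deterministic trace-ID derivation mentioned above could be sketched like this; hashing the runId with SHA-256 is an assumption about the scheme, not the plugin's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Derive a stable trace ID from the OpenClaw runId, so repeated ingests of
// the same run land on the same Lumin trace. The SHA-256 scheme and the
// 32-hex-char length are assumptions for illustration.
function traceIdFromRunId(runId: string): string {
  return createHash("sha256").update(runId).digest("hex").slice(0, 32);
}
```

Because the ID is a pure function of runId, re-running an ingest is idempotent: the same run always maps to the same trace.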

Source + issues

amitbidlan/zistica-lumin (source tag: feat/openclaw-diagnostics-plugin)

License

Apache 2.0

Source & versions

Source repository

amitbidlan/zistica-lumin

Source commit

014976ba3d413c7969755b7c9d9c6fd0bb097eae

Install command

openclaw plugins install clawhub:@lumin-io/openclaw-diagnostics

Metadata

  • Package name: @lumin-io/openclaw-diagnostics
  • Created: 2026/05/06
  • Updated: 2026/05/06
  • Executable code:
  • Source tag: feat/openclaw-diagnostics-plugin

Compatibility

  • Built against OpenClaw: 2026.5.4
  • Plugin API range: >=2026.5.4
  • Tag: latest
  • File count: 7