# Levea Agentic Video Editor for OpenClaw
Connect OpenClaw agents to the Levea video editor API. The skill exposes one primary free-form tool, `autonomous_edit`, plus a deterministic allowlist of tools for callers that already know the exact structured edit to run.
## What Users Can Ask
- "Make this clip vertical for TikTok, remove silences, add bold captions, and export."
- "Turn this long video into five 15-30 second viral clips."
- "Key out the green screen, add a studio background, and keep the speaker centered."
- "Add B-roll over the product mention and duck the music under speech."
- "Export this project as an MP4."
## Search Terms This Skill Covers
This listing is intentionally written for the phrases OpenClaw and npm users are likely to search:
- OpenClaw AI video editor, OpenClaw video editing skill, ClawHub video editor
- natural-language video editing, agentic video editing, AI video automation
- viral clip generator, short-form video clips, TikTok video editor, Instagram Reels editor, YouTube Shorts editor
- auto captions, caption generator, subtitles, video captions, social media captions
- vertical video, 9:16 reframe, multi-platform export, MP4 export
- chroma key, green screen removal, background removal, video background replacement
- B-roll generation, motion tracking, audio cleanup, silence removal, voiceover, background music
Recommended ClawHub publish tags:

```shell
clawhub skill publish . --tags latest,video,ai-video-editor,openclaw-skill,viral-clips,captions,vertical-video,chroma-key,audio-cleanup,export
```
## API Surface

Base URL: `{ADSCENE_API_URL}`

For production Livecore, set:

```shell
export ADSCENE_API_URL="https://api.livecore.ai"
```

The full OpenClaw execute URL is:

```
https://api.livecore.ai/api/v1/misc/openclaw/v1/execute
```

Use https://studio.livecore.ai/ to sign up, log in, and generate an OpenClaw API key from the editor account UI.

Do not set `ADSCENE_API_URL` to `https://studio.livecore.ai` or to the in-product editor route `/api/v1/misc/editor/`. Studio is the user-facing app; OpenClaw requests should go to the API-key route below on `https://api.livecore.ai`.
| Endpoint | Purpose |
|---|---|
| `POST /api/v1/misc/openclaw/v1/execute` | Run `autonomous_edit` or a deterministic tool |
| `GET /api/v1/misc/openclaw/v1/jobs/{jobId}` | Poll async render or generation jobs |
| `GET /api/v1/misc/openclaw/v1/tools` | List available tool names |
| `GET /api/v1/misc/openclaw/v1/health` | Health check |
Auth header:

```
Authorization: Bearer {ADSCENE_API_KEY}
```
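A common flow is to fire an edit, then poll the jobs endpoint until the work settles. A minimal polling loop as a Python sketch; note the terminal status names (`completed`, `failed`, `cancelled`) are assumptions, not documented values, and `fetch_status` is a caller-supplied function that GETs `/api/v1/misc/openclaw/v1/jobs/{jobId}` with the auth header and returns the parsed JSON body:

```python
import time

# Assumed terminal statuses -- adjust to whatever the API actually reports.
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def poll_job(fetch_status, job_id, interval=2.0, timeout=300.0):
    """Poll until the job reaches a terminal status or the timeout expires.

    fetch_status(job_id) performs the authenticated GET and returns a dict
    with at least a "status" field.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_status(job_id)
        if job.get("status") in TERMINAL_STATUSES:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} not finished after {timeout}s")
        time.sleep(interval)
```

Injecting `fetch_status` keeps the loop independent of any particular HTTP client and easy to test.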
## Primary Tool

Use `autonomous_edit` for natural-language video edits. The backend classifies intent, plans canonical edit steps, executes them through the same safety gates as the in-product editor, verifies the result, and queues an export when needed.
```shell
curl -sS -X POST "$ADSCENE_API_URL/api/v1/misc/openclaw/v1/execute" \
  -H "Authorization: Bearer $ADSCENE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "autonomous_edit",
    "params": {
      "prompt": "Make this a TikTok-ready viral clip: vertical reframe, add bold captions, remove silences, and export."
    },
    "project_id": "my-project"
  }'
```
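The same request can be assembled with Python's standard library. A sketch; the payload shape simply mirrors the curl example above, and nothing beyond that is assumed:

```python
import json
import urllib.request

def build_execute_request(base_url, api_key, tool, params, project_id=None):
    """Build an authenticated urllib Request for the execute endpoint."""
    body = {"tool": tool, "params": params}
    if project_id is not None:
        body["project_id"] = project_id
    return urllib.request.Request(
        url=f"{base_url}/api/v1/misc/openclaw/v1/execute",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is then a one-liner:
# with urllib.request.urlopen(build_execute_request(...)) as resp:
#     result = json.load(resp)
```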
## Deterministic Tools
Use these when the caller already has structured parameters and wants to skip intent planning.
| Tool | Canonical action | Use when |
|---|---|---|
| `read_scene` | `READ_SCENE` | Inspect the current timeline |
| `read_media` | `QUERY_ASSETS` | List gallery assets |
| `read_visual` | `READ_VISUAL` | Analyze frames |
| `query_transcript` | `QUERY_TRANSCRIPT` | Search transcript text or timestamps |
| `scene_update` | `UPDATE_LAYER` | Mutate known layer properties |
| `scene_insert` | `CREATE_LAYER` | Add a video, audio, text, image, shape, or adjustment layer |
| `scene_timing` | `SCENE_TIMING` | Trim, retime, or reposition a layer |
| `scene_mask` | `APPLY_MASK` | Apply chroma, luma, alpha, or depth masking |
| `chroma_key` | `APPLY_MASK` | Green-screen or blue-screen keying |
| `split_screen` | `SPLIT_SCREEN` | Top-bottom, left-right, grid, or picture-in-picture layout |
| `caption_compose` | `GENERATE_CAPTIONS` | Generate captions from transcript |
| `media_treat` | `COLOR_GRADE` | Apply color treatment |
| `scene_track` | `TRACK_MOTION` | Face or object tracking |
| `clean_audio` | `CLEAN_AUDIO` | Remove silence, breaths, or filler words |
| `audio_mix` / `audio_mixing` | `AUDIO_MIXING` | Ducking, normalize, denoise, or EQ |
| `voiceover_add` | `GENERATE_VOICEOVER` | Generate voiceover |
| `music_generate` | `GENERATE_MUSIC` | Generate background music |
| `export_video` | `EXPORT_VIDEO` | Render MP4 output |
Unknown tool names return `UNKNOWN_TOOL` with a hint to use `autonomous_edit`.
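A client can mirror that fallback locally and avoid a round trip for misspelled tool names. A sketch; the allowlist is transcribed from the table above, and `resolve_tool` is an illustrative helper, not part of the API:

```python
# Tool names from the deterministic-tools table above.
DETERMINISTIC_TOOLS = {
    "read_scene", "read_media", "read_visual", "query_transcript",
    "scene_update", "scene_insert", "scene_timing", "scene_mask",
    "chroma_key", "split_screen", "caption_compose", "media_treat",
    "scene_track", "clean_audio", "audio_mix", "audio_mixing",
    "voiceover_add", "music_generate", "export_video",
}

def resolve_tool(name):
    """Return a known tool name, falling back to autonomous_edit.

    Mirrors the server's UNKNOWN_TOOL hint on the client side.
    """
    if name == "autonomous_edit" or name in DETERMINISTIC_TOOLS:
        return name
    return "autonomous_edit"
```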
## Response Expectations

JSON responses include:

- `success`, `status`, and `message`
- `scene` when an edit changed the project
- `videoUrl` when export output is already available
- `jobId`, `activeTasks`, or `pendingAsyncJobs` for async render/generation work
- `workingMemory` when a plan is awaiting approval
- `verificationPassed` and `verificationIssues` when verification ran

The execute endpoint also supports SSE with `Accept: text/event-stream` or `?stream=true`.
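When streaming, events arrive as standard SSE `data:` lines. A minimal parser sketch; it assumes each event's `data:` payload is a JSON object, which matches the JSON responses above but is otherwise an assumption about the stream format:

```python
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from an SSE text-line stream.

    Accumulates `data:` lines until a blank line ends the event, per the
    server-sent events framing rules.
    """
    data_lines = []
    for raw in lines:
        line = raw.rstrip("\n")
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            yield json.loads("\n".join(data_lines))
            data_lines = []
```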
## Configuration

Set the API URL and key through OpenClaw skill configuration or the process environment. Create the key from the Studio app at https://studio.livecore.ai/, then use `https://api.livecore.ai` as the API URL. Keep keys out of code and shared logs.
```json
{
  "skills": {
    "entries": {
      "openclaw_ai_video_editor": {
        "enabled": true,
        "env": {
          "ADSCENE_API_URL": "https://api.livecore.ai",
          "ADSCENE_API_KEY": "your-openclaw-api-key"
        }
      }
    }
  }
}
```
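In a standalone script, the same two variables can be read from the process environment. A small sketch; `load_config` is an illustrative helper, not part of the skill:

```python
import os

def load_config():
    """Read skill configuration from the environment."""
    return {
        # The production API URL is a sensible default.
        "api_url": os.environ.get("ADSCENE_API_URL", "https://api.livecore.ai"),
        # Fail fast with a KeyError if the key was never configured.
        "api_key": os.environ["ADSCENE_API_KEY"],
    }
```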
## Safety Model
- API-key authentication is required for execute and job polling.
- Requests are rate-limited per API key.
- Destructive actions require explicit confirmation parameters.
- Mutating tools auto-queue an export only when the scene actually changed and no export is already queued.
- Read-only tools never auto-export.
## Supported Media
- Video input: MP4, MOV, WebM, gallery IDs, HTTP/HTTPS URLs, or supported source URLs
- Image input: JPG, PNG, WebP
- Audio input: MP3, WAV, M4A, AAC
- Output: MP4 for export, ZIP for multi-clip or multi-platform bundles
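Before sending a media reference, a client may want a rough guess at what kind of input it is. A sketch based on the formats listed above; `classify_input`, the `.jpeg` alias, and the bare-token-means-gallery-ID fallback are assumptions, and the API's actual detection rules may differ:

```python
from pathlib import PurePosixPath

VIDEO_EXTS = {".mp4", ".mov", ".webm"}
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}
AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".aac"}

def classify_input(ref):
    """Rough client-side guess at what kind of media reference this is."""
    if ref.startswith(("http://", "https://")):
        return "url"
    ext = PurePosixPath(ref).suffix.lower()
    for kind, exts in (("video", VIDEO_EXTS),
                       ("image", IMAGE_EXTS),
                       ("audio", AUDIO_EXTS)):
        if ext in exts:
            return kind
    # Bare tokens with no recognized extension are treated as gallery IDs.
    return "gallery_id"
```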
## Support
For setup help, integration questions, or issue reports, contact:
brajendrak00068@gmail.com