@waisimon

Playwright Scraper Skill

Playwright-based web scraping OpenClaw Skill with anti-bot protection. Successfully tested on complex sites like Discuss.com.hk.

Current version: v1.2.0 · 24K total installs

name: playwright-scraper-skill
description: Playwright-based web scraping OpenClaw Skill with anti-bot protection. Successfully tested on complex sites like Discuss.com.hk.
version: 1.2.0
author: Simon Chan

Playwright Scraper Skill

A Playwright-based web scraping OpenClaw Skill with anti-bot protection. Choose the best approach based on the target website's anti-bot level.


🎯 Use Case Matrix

| Target Website | Anti-Bot Level | Recommended Method | Script |
| --- | --- | --- | --- |
| Regular sites | Low | web_fetch tool | N/A (built-in) |
| Dynamic sites | Medium | Playwright Simple | scripts/playwright-simple.js |
| Cloudflare-protected | High | Playwright Stealth | scripts/playwright-stealth.js |
| YouTube | Special | deep-scraper | Install separately |
| Reddit | Special | reddit-scraper | Install separately |

📦 Installation

cd playwright-scraper-skill
npm install
npx playwright install chromium

🚀 Quick Start

1️⃣ Simple Sites (No Anti-Bot)

Use OpenClaw's built-in web_fetch tool:

# Invoke directly in OpenClaw
Hey, fetch me the content from https://example.com

2️⃣ Dynamic Sites (Requires JavaScript)

Use Playwright Simple:

node scripts/playwright-simple.js "https://example.com"

Example output:

{
  "url": "https://example.com",
  "title": "Example Domain",
  "content": "...",
  "elapsedSeconds": "3.45"
}

3️⃣ Anti-Bot Protected Sites (Cloudflare etc.)

Use Playwright Stealth:

node scripts/playwright-stealth.js "https://m.discuss.com.hk/#hot"

Features:

  • Hide automation markers (navigator.webdriver = false)
  • Realistic User-Agent (iPhone, Android)
  • Random delays to mimic human behavior
  • Screenshot and HTML saving support
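The techniques in this list can be outlined as follows. This is an illustrative sketch, not the actual scripts/playwright-stealth.js: `hideAutomationMarkers` takes the navigator object as a parameter so it can be exercised outside a browser, whereas in the real script the same statement would run inside `page.addInitScript()` against the page's own `navigator`, and the User-Agent string is an assumption.

```javascript
// Make navigator.webdriver report false, as a real (non-automated) browser does.
// Inside Playwright this would be injected before page load, e.g.:
//   await page.addInitScript(() => {
//     Object.defineProperty(navigator, 'webdriver', { get: () => false });
//   });
function hideAutomationMarkers(nav) {
  Object.defineProperty(nav, 'webdriver', { get: () => false });
  return nav;
}

// A plausible real-device User-Agent (assumed; the shipped script may differ).
const IPHONE_UA =
  'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) ' +
  'AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1';

// Random human-like pause: an integer number of milliseconds in [minMs, maxMs).
function humanDelay(minMs = 500, maxMs = 2000) {
  return minMs + Math.floor(Math.random() * (maxMs - minMs));
}
```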

4️⃣ YouTube Video Transcripts

Use deep-scraper (install separately):

# Install deep-scraper skill
npx clawhub install deep-scraper

# Use it
cd skills/deep-scraper
node assets/youtube_handler.js "https://www.youtube.com/watch?v=VIDEO_ID"

📖 Script Descriptions

scripts/playwright-simple.js

  • Use Case: Regular dynamic websites
  • Speed: Fast (3-5 seconds)
  • Anti-Bot: None
  • Output: JSON (title, content, URL)

scripts/playwright-stealth.js

  • Use Case: Sites with Cloudflare or anti-bot protection
  • Speed: Medium (5-20 seconds)
  • Anti-Bot: Medium-High (hides automation, realistic UA)
  • Output: JSON + Screenshot + HTML file
  • Verified: 100% success on Discuss.com.hk

🎓 Best Practices

1. Try web_fetch First

If the site doesn't have dynamic loading, use OpenClaw's web_fetch tool—it's fastest.

2. Need JavaScript? Use Playwright Simple

If you need to wait for JavaScript rendering, use playwright-simple.js.

3. Getting Blocked? Use Stealth

If you encounter 403 or Cloudflare challenges, use playwright-stealth.js.

4. Special Sites Need Specialized Skills

  • YouTube → deep-scraper
  • Reddit → reddit-scraper
  • Twitter → bird skill

🔧 Customization

All scripts support environment variables:

# Set screenshot path
SCREENSHOT_PATH=/path/to/screenshot.png node scripts/playwright-stealth.js URL

# Set wait time (milliseconds)
WAIT_TIME=10000 node scripts/playwright-simple.js URL

# Enable headful mode (show browser)
HEADLESS=false node scripts/playwright-stealth.js URL

# Save HTML
SAVE_HTML=true node scripts/playwright-stealth.js URL

# Custom User-Agent
USER_AGENT="Mozilla/5.0 ..." node scripts/playwright-stealth.js URL
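A script honoring these variables would presumably parse them along these lines. This is a sketch: the variable names match the examples above, but the defaults are assumptions.

```javascript
// Read scraper settings from environment variables, with assumed defaults.
function readConfig(env = process.env) {
  return {
    headless: env.HEADLESS !== 'false',              // headful only on HEADLESS=false
    waitTime: parseInt(env.WAIT_TIME || '5000', 10), // milliseconds
    saveHtml: env.SAVE_HTML === 'true',
    screenshotPath: env.SCREENSHOT_PATH || null,
    userAgent: env.USER_AGENT || undefined,          // undefined = Playwright default
  };
}
```

The resulting settings would then be threaded into calls such as `chromium.launch({ headless: config.headless })` and `browser.newContext({ userAgent: config.userAgent })`.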

📊 Performance Comparison

| Method | Speed | Anti-Bot | Success Rate (Discuss.com.hk) |
| --- | --- | --- | --- |
| web_fetch | ⚡ Fastest | ❌ None | 0% |
| Playwright Simple | 🚀 Fast | ⚠️ Low | 20% |
| Playwright Stealth | ⏱️ Medium | ✅ Medium | 100% |
| Puppeteer Stealth | ⏱️ Medium | ✅ Medium-High | ~80% |
| Crawlee (deep-scraper) | 🐢 Slow | ❌ Detected | 0% |
| Chaser (Rust) | ⏱️ Medium | ❌ Detected | 0% |

🛡️ Anti-Bot Techniques Summary

Lessons learned from our testing:

✅ Effective Anti-Bot Measures

  1. Hide navigator.webdriver — Essential
  2. Realistic User-Agent — Use real devices (iPhone, Android)
  3. Mimic Human Behavior — Random delays, scrolling
  4. Avoid Framework Signatures — Crawlee, Selenium are easily detected
  5. Use addInitScript (Playwright) — Inject before page load

❌ Ineffective Anti-Bot Measures

  1. Only changing User-Agent — Not enough
  2. Using high-level frameworks (Crawlee) — More easily detected
  3. Docker isolation — Doesn't help with Cloudflare

🔍 Troubleshooting

Issue: 403 Forbidden

Solution: Use playwright-stealth.js

Issue: Cloudflare Challenge Page

Solution:

  1. Increase wait time (10-15 seconds)
  2. Try headless: false (headful mode sometimes has higher success rate)
  3. Consider using proxy IPs
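Combining points 2 and 3 in Playwright might look like the sketch below. Note that `PROXY_SERVER` is a hypothetical variable name introduced here for illustration, not one the skill documents.

```javascript
// Assemble Playwright launch options for a hardened run:
// headful mode plus an optional proxy.
function challengeLaunchOptions(env = process.env) {
  return {
    headless: false, // headful sometimes passes Cloudflare where headless fails
    proxy: env.PROXY_SERVER ? { server: env.PROXY_SERVER } : undefined,
  };
}

// Usage sketch (requires playwright to be installed):
//   const { chromium } = require('playwright');
//   const browser = await chromium.launch(challengeLaunchOptions());
```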

Issue: Blank Page

Solution:

  1. Increase waitForTimeout
  2. Use waitUntil: 'networkidle' or 'domcontentloaded'
  3. Check if login is required
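Steps 1 and 2 could be combined as in the sketch below; the 5000 ms fallback in `waitBudget` is an assumption, not a documented default.

```javascript
// Pick a wait budget in milliseconds, falling back when WAIT_TIME is unset
// or not a positive number.
function waitBudget(env = process.env) {
  const ms = Number(env.WAIT_TIME);
  return Number.isFinite(ms) && ms > 0 ? ms : 5000;
}

// Navigate, wait for the network to go quiet, then give client-side
// rendering extra time before reading the page.
async function robustGoto(page, url, env = process.env) {
  await page.goto(url, { waitUntil: 'networkidle', timeout: 60000 });
  await page.waitForTimeout(waitBudget(env));
}
```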

📝 Memory & Experience

2026-02-07 Discuss.com.hk Test Conclusions

  • ✅ Pure Playwright + Stealth succeeded (5s, 200 OK)
  • ❌ Crawlee (deep-scraper) failed (403)
  • ❌ Chaser (Rust) failed (Cloudflare)
  • ❌ Puppeteer standard failed (403)

Best Solution: Pure Playwright + anti-bot techniques (framework-independent)


🚧 Future Improvements

  • Add proxy IP rotation
  • Implement cookie management (maintain login state)
  • Add CAPTCHA handling (2captcha / Anti-Captcha)
  • Batch scraping (parallel URLs)
  • Integration with OpenClaw's browser tool

📚 References

Security Scan

Status: suspicious

OpenClaw Analysis (model: gpt-5-mini, verdict: suspicious)

The skill's code and instructions match a Playwright-based scraper, but the registry metadata omits required runtime dependencies (Node/Playwright) and the SKILL.md encourages anti-bot evasion tactics (proxies/CAPTCHA services) — a coherent scraper but with metadata and disclosure gaps you should understand before installing.

Confidence: medium

VirusTotal

Type: OpenClaw Skill
Name: playwright-scraper-skill
Version: 1.2.0

The skill is designed for web scraping, including bypassing anti-bot measures using Playwright. While the core functionality and documentation are aligned with its stated purpose, the `scripts/playwright-stealth.js` file uses the `--no-sandbox` and `--disable-setuid-sandbox` flags when launching Chromium. Disabling the browser's sandbox is a security risk, as it can make the system more vulnerable if the browser itself is compromised by malicious web content. Although this is a common practice in web scraping for compatibility, it represents a risky capability without clear malicious intent from the skill author, thus classifying it as suspicious.

Metadata

  • Author: @waisimon
  • Created: 2026/02/07
  • Updated: 2026/04/13
  • Versions: 1
  • Comments: 0
  • Scanned: 2026/02/11

Runtime Requirements

No runtime requirements are listed in the official public data.