Adds `webclaw_fetch::Fetcher` trait. All 28 vertical extractors now
take `client: &dyn Fetcher` instead of `client: &FetchClient` directly.
Backwards-compatible: FetchClient implements Fetcher, blanket impls
cover `&T` and `Arc<T>`, so existing CLI / MCP / self-hosted-server
callers keep working unchanged.
Motivation: the production API server (api.webclaw.io) must not do
in-process TLS fingerprinting; it delegates all HTTP to the Go
tls-sidecar. Before this trait, exposing /v1/scrape/{vertical} on
production would have required importing wreq into the server's
dep graph, violating the CLAUDE.md rule. Now production can provide
its own TlsSidecarFetcher implementation and pass it to the same
dispatcher the OSS server uses.
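The shape of that indirection can be sketched roughly as follows. This is a simplified, synchronous stand-in (the real trait is async via `async_trait`, and method bodies here are illustrative, not the actual implementation):

```rust
use std::sync::Arc;

// Simplified sketch: the real Fetcher also exposes fetch_with_headers and
// cloud(), and all methods are async.
trait Fetcher {
    fn fetch(&self, url: &str) -> String;
}

// Stand-in for the real wreq-backed client.
struct FetchClient;

impl Fetcher for FetchClient {
    fn fetch(&self, url: &str) -> String {
        format!("local:{url}")
    }
}

// Blanket impls so `&T` and `Arc<T>` are also Fetchers; this is what lets
// existing callers keep passing what they already hold.
impl<'a, T: Fetcher + ?Sized> Fetcher for &'a T {
    fn fetch(&self, url: &str) -> String {
        (**self).fetch(url)
    }
}

impl<T: Fetcher + ?Sized> Fetcher for Arc<T> {
    fn fetch(&self, url: &str) -> String {
        (**self).fetch(url)
    }
}

// Extractors now take `&dyn Fetcher`; `&FetchClient` and `&Arc<dyn Fetcher>`
// both coerce automatically.
fn extract(client: &dyn Fetcher, url: &str) -> String {
    client.fetch(url)
}
```

The key property is the last function: a caller holding a concrete `FetchClient`, or a shared `Arc<dyn Fetcher>`, passes a plain `&` reference and the coercion does the rest.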
Changes:
- New `crates/webclaw-fetch/src/fetcher.rs` defining the trait plus
blanket impls for `&T` and `Arc<T>`.
- `FetchClient` gains a tiny impl block in client.rs that forwards to
its existing public methods.
- All 28 extractor signatures migrated from `&FetchClient` to
`&dyn Fetcher` (sed-driven bulk rewrite, no semantic change).
- `cloud::smart_fetch` and `cloud::smart_fetch_html` take `&dyn Fetcher`.
- `extractors::dispatch_by_url` and `extractors::dispatch_by_name`
take `&dyn Fetcher`.
- `async-trait 0.1` added to webclaw-fetch deps (Rust 1.75+ has
native async-fn-in-trait but dyn dispatch still needs async_trait).
- Version bumped to 0.5.1, CHANGELOG updated.
Tests: 215 passing in webclaw-fetch (no new tests needed — the existing
extractor tests exercise the trait methods transparently).
Clippy: clean workspace-wide.
Changelog
All notable changes to webclaw are documented here. Format follows Keep a Changelog.
[0.5.1] — 2026-04-22
Added
- `webclaw_fetch::Fetcher` trait. Vertical extractors now consume `&dyn Fetcher` instead of `&FetchClient` directly. The trait exposes three methods (`fetch`, `fetch_with_headers`, `cloud`) covering everything extractors need. Callers that already held a `FetchClient` keep working unchanged: `FetchClient` implements `Fetcher`, blanket impls cover `&T` and `Arc<T>`, so `&client` coerces to `&dyn Fetcher` automatically.

  The motivation is the split between OSS (wreq-backed, in-process TLS fingerprinting) and the production API server at api.webclaw.io (which cannot use in-process fingerprinting per the architecture rule, and must delegate HTTP through the Go tls-sidecar). Before this trait, adding vertical routes to the production server would have required importing wreq into its dependency graph, violating the separation. Now the production server can provide its own `TlsSidecarFetcher` implementation and pass it to the same extractor dispatcher the OSS server uses.

  Backwards compatible. No behavior change for CLI, MCP, or OSS self-host.
Changed
- All 28 extractor `extract()` signatures migrated from `client: &FetchClient` to `client: &dyn Fetcher`. The dispatcher functions (`extractors::dispatch_by_url`, `extractors::dispatch_by_name`) and the cloud escalation helpers (`cloud::smart_fetch`, `cloud::smart_fetch_html`) follow the same change. Tests and call sites are unchanged because `&FetchClient` auto-coerces.
[0.5.0] — 2026-04-22
Added
- 28 vertical extractors that return typed JSON instead of generic markdown. New `webclaw_fetch::extractors` module with one extractor per site. Dev: reddit, hackernews, github_repo / github_pr / github_issue / github_release, crates_io, pypi, npm. AI/ML: huggingface_model, huggingface_dataset, arxiv, docker_hub. Writing: dev_to, stackoverflow, youtube_video. Social: linkedin_post, instagram_post, instagram_profile. Ecommerce: shopify_product, shopify_collection, ecommerce_product (generic Schema.org), woocommerce_product, amazon_product, ebay_listing, etsy_listing. Reviews: trustpilot_reviews, substack_post. Each extractor claims a URL pattern via a public `matches()` fn and returns a typed JSON payload with the fields callers actually want (title, price, author, rating, review count, etc.) rather than a markdown blob.
- `POST /v1/scrape/{vertical}` on `webclaw-server` for explicit vertical routing. Picks the parser by name, validates that the URL plausibly belongs to that vertical, and returns the same shape as `POST /v1/scrape` but typed. 23 of 28 verticals also auto-dispatch from a plain `POST /v1/scrape` because their URL shapes are unique enough to claim safely; the remaining 5 (`shopify_product`, `shopify_collection`, `ecommerce_product`, `woocommerce_product`, `substack_post`) use patterns that non-target sites share, so callers opt in via the `{vertical}` route.
- `GET /v1/extractors` on `webclaw-server`. Returns the full catalog as `{"extractors": [{"name": "...", "label": "...", "description": "...", "url_patterns": [...]}, ...]}` so clients can build tooling / autocomplete / user-facing docs off a live source.
- Antibot cloud-escalation for 5 ecommerce + reviews verticals. Amazon, eBay, Etsy, Trustpilot, and Substack (as HTML fallback) go through `cloud::smart_fetch_html`: try local fetch first; on bot-protection detection (Cloudflare challenge, DataDome, AWS WAF "Verifying your connection", etc.) escalate to `api.webclaw.io/v1/scrape`. Without `WEBCLAW_API_KEY` / `WEBCLAW_CLOUD_API_KEY` the extractor returns a typed `CloudError::NotConfigured` with an actionable signup link. With a key set, escalation is automatic. Every extractor stamps a `data_source: "local" | "cloud"` field on the response so callers can tell which path ran.
- `cloud::synthesize_html` for cloud-bypassed extraction. `api.webclaw.io/v1/scrape` deliberately does not return raw HTML; it returns a parsed bundle (`structured_data` JSON-LD blocks + `metadata` OG/meta tags + `markdown`). The new helper reassembles that bundle into a minimal synthetic HTML doc (JSON-LD as `<script>` tags, metadata as OG `<meta>` tags, markdown in a `<pre>`) so existing local parsers run unchanged across both paths. No per-extractor code-path branches are needed for "came from cloud" vs "came from local".
- Trustpilot 2025 schema parser. Trustpilot replaced their single-Organization + aggregateRating shape with three separate JSON-LD blocks: a site-level Organization (Trustpilot itself), a Dataset with a csvw:Table `mainEntity` carrying the per-star distribution for the target business, and an aiSummary + aiSummaryReviews block with the AI-generated summary and recent reviews. The parser walks all three, skips the site-level Org, picks the Dataset by `about.@id` matching the target domain, parses each csvw:column for rating buckets, computes weighted-average rating + total from the distribution, extracts the aiSummary text, and returns recent reviews with author / country / date / rating / title / text / likes.
- OG-tag fallback in `ecommerce_product` for sites with no JSON-LD and sites with JSON-LD but empty offers. Three paths now: `jsonld` (Schema.org Product with offers), `jsonld+og` (Product JSON-LD plus OG product tags filling in missing price), and `og_fallback` (no JSON-LD at all; build a minimal payload from `og:title`, `og:image`, `og:description`, `product:price:amount`, `product:price:currency`, `product:availability`, `product:brand`). `has_og_product_signal()` gates the fallback on `og:type=product` or a price tag so blog posts don't get mis-classified as products.
- URL-slug title fallback in `etsy_listing` for delisted / blocked pages. When Etsy serves a placeholder page ("etsy.com", "Etsy - Your place to buy...", "This item is unavailable"), humanise the URL slug (`/listing/123/personalized-stainless-steel-tumbler` becomes "Personalized Stainless Steel Tumbler") so callers always get a meaningful title. The shop name also falls through `offers[].seller.name` then top-level `brand`, because Etsy uses both schemas depending on listing age.
- Force-cloud-escalation in `amazon_product` when local HTML lacks Product JSON-LD. Amazon A/B-tests JSON-LD presence. When local fetch succeeds but has no `Product` block and a cloud client is configured, the extractor force-escalates to the cloud, which reliably surfaces title + description via its render engine. Added an OG meta-tag fallback so the cloud's synthesized HTML output (OG tags only, no Amazon DOM IDs) still yields title / image / description.
- AWS WAF "Verifying your connection" detector in `cloud::is_bot_protected`. Trustpilot serves a ~565-byte interstitial with an `interstitial-spinner` CSS class. The detector now fires on that pattern, with a `< 10_000`-byte size gate to avoid false positives on real articles that happen to mention the phrase.
Changed
- `webclaw-fetch::FetchClient` gained an optional `cloud` field via `with_cloud(CloudClient)`. Extractors reach it through `client.cloud()` to decide whether to escalate. `webclaw-server::AppState` reads `WEBCLAW_CLOUD_API_KEY` (preferred) or falls back to `WEBCLAW_API_KEY` only when inbound auth is not configured (open mode).
- Consolidated `CloudClient` into `webclaw-fetch`. Previously duplicated between `webclaw-mcp/src/cloud.rs` (302 LOC) and `webclaw-cli/src/cloud.rs` (80 LOC). Single canonical home with a typed `CloudError` (`NotConfigured`, `Unauthorized`, `InsufficientPlan`, `RateLimited`, `ServerError`, `Network`, `ParseFailed`) whose variants `Display` with actionable URLs; a `From<CloudError> for String` bridge keeps pre-existing CLI / MCP call sites compiling unchanged during migration.
Tests
- 215 unit tests passing in `webclaw-fetch` (100+ new, covering every extractor's matcher, URL parser, JSON-LD / OG fallback paths, and the cloud synthesis helper). `cargo clippy --workspace --release --no-deps` clean.
[0.4.0] — 2026-04-22
Added
- `webclaw bench <url>` — per-URL extraction micro-benchmark (#26). New subcommand. Fetches a URL once, runs the same extraction pipeline as `--format llm`, and prints a small ASCII table comparing raw-HTML tokens vs. llm-output tokens, bytes, and extraction time. Pass `--json` for a single-line JSON object (stable shape, easy to append to ndjson in CI). Pass `--facts <path>` with a file in the same schema as `benchmarks/facts.json` to get a fidelity column ("4/5 facts preserved"); URLs absent from the facts file produce no fidelity row, so uncurated sites aren't shown as 0/0. v1 uses an approximate tokenizer (`chars/4` for Latin text, `chars/2` when CJK dominates) — off by ±10% vs. a real BPE tokenizer, but the signal ("the LLM pipeline dropped 93% of the raw bytes") is the point. Output clearly labels counts as `≈ tokens` so nobody confuses them with a real tiktoken run. Swapping in `tiktoken-rs` later is a one-function change in `bench.rs`. Adding this as a `clap` subcommand rather than a flag also lays the groundwork for future subcommands without breaking the existing flag-based flow — `webclaw <url> --format llm` still works exactly as before.
- `webclaw-server` — new OSS binary for self-hosting a REST API (#29). Until now, docs/self-hosting promised a `webclaw-server` binary that only existed in the hosted-platform repo (closed source). The Docker image shipped two binaries while the docs advertised three, which sent self-hosters into a bug loop. This release closes the gap: a new crate at `crates/webclaw-server/` builds a minimal, stateless axum server that exposes the OSS extraction pipeline over HTTP with the same JSON shapes as api.webclaw.io. Endpoints: `GET /health`, `POST /v1/{scrape,crawl,map,batch,extract,summarize,diff,brand}`. Run with `webclaw-server --port 3000 [--host 0.0.0.0] [--api-key <bearer>]` or the matching `WEBCLAW_PORT` / `WEBCLAW_HOST` / `WEBCLAW_API_KEY` env vars. Bearer auth is constant-time (via `subtle::ConstantTimeEq`); open mode (no key) is allowed on 127.0.0.1 for local development.

  What self-hosting gives you: the full extraction pipeline, crawler, sitemap discovery, brand/diff, LLM extract/summarize (via Ollama or your own OpenAI/Anthropic key). What it does not give you: anti-bot bypass (Cloudflare, DataDome, WAFs), headless JS rendering, async job queues, multi-tenant auth/billing, domain hints, and proxy routing — those require the hosted backend at api.webclaw.io and are intentionally not open-source. The self-hosting docs have been updated to reflect this split honestly.
- The `crawl` endpoint runs synchronously and hard-caps at 500 pages / 20 concurrency. No job queue, no background workers — a naive caller can't OOM the process. `batch` caps at 100 URLs / 20 concurrency for the same reason. For unbounded crawls use the hosted API.
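The approximate tokenizer heuristic from the `bench` bullet can be sketched as follows (illustrative only; the actual `bench.rs` internals may differ, including the exact CJK ranges and rounding):

```rust
/// Approximate token count: chars/4 for Latin-dominant text, chars/2 when
/// CJK characters make up more than half the input. Rounds up.
fn approx_tokens(text: &str) -> usize {
    let total = text.chars().count();
    if total == 0 {
        return 0;
    }
    // Count CJK ideographs, kana, and hangul (a coarse, illustrative set).
    let cjk = text
        .chars()
        .filter(|&c| matches!(c as u32, 0x4E00..=0x9FFF | 0x3040..=0x30FF | 0xAC00..=0xD7AF))
        .count();
    let divisor = if cjk * 2 > total { 2 } else { 4 };
    (total + divisor - 1) / divisor // ceiling division
}
```

A ±10% error versus a real BPE tokenizer is acceptable here because the output is only used for a relative "bytes dropped" signal, and is labeled `≈ tokens`.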
Changed
- Docker image now ships three binaries, not two. `Dockerfile` and `Dockerfile.ci` both add `webclaw-server` to `/usr/local/bin/` and `EXPOSE 3000` for documentation. The entrypoint shim is unchanged: `docker run IMAGE webclaw-server --port 3000` Just Works, and the CLI/URL pass-through from v0.3.19 is preserved.
Docs
- Rewrote `docs/self-hosting` on the landing site to differentiate OSS (self-hosted REST) from the hosted platform. Added a capability matrix so new users don't have to read the repo to figure out why Cloudflare-protected sites still 403 when pointing at their own box.
Fixed
- Dead-code warning on `cargo install webclaw-mcp` (#30). `rmcp` 1.3.x changed how the `#[tool_handler]` macro reads the `tool_router` struct field — it now goes through a derived trait impl instead of referencing the field by name, so rustc's dead-code lint no longer sees it. The field is still essential (dropping it unregisters every MCP tool), just invisible to the lint. Annotated with `#[allow(dead_code)]` and a comment explaining why. No behaviour change. Warning disappears on the next `cargo install`.
[0.3.19] — 2026-04-17
Fixed
- Docker image can be used as a `FROM` base again. v0.3.13 switched the Docker `CMD` to `ENTRYPOINT ["webclaw"]` so that `docker run IMAGE https://example.com` would pass the URL through as expected. That change trapped a different use case: downstream Dockerfiles that `FROM ghcr.io/0xmassi/webclaw` and set their own `CMD ["./setup.sh"]` — the child's `./setup.sh` became the first arg to `webclaw`, which tried to fetch it as a URL and failed with `error sending request for uri (https://./setup.sh)`. Both `Dockerfile` and `Dockerfile.ci` now use a small `docker-entrypoint.sh` shim that forwards flags (`-*`) and URLs (`http://`, `https://`) to `webclaw`, but `exec`s anything else directly. All four use cases now work: `docker run IMAGE https://example.com`, `docker run IMAGE --help`, child-image `CMD ["./setup.sh"]`, and `docker run IMAGE bash` for debugging. Default `CMD` is `["webclaw", "--help"]`.
[0.3.18] — 2026-04-16
Fixed
- UTF-8 char boundary panic in `webclaw-core::extractor::find_content_position` (#16). After rejecting a match that fell inside image syntax (`![alt](src)`), the scan advanced `search_from` by a single byte. If the rejected match started on a multi-byte character (Cyrillic, CJK, accented Latin, emoji), the next `markdown[search_from..]` slice landed mid-char and panicked with `byte index N is not a char boundary; it is inside 'X'`. Repro was `webclaw https://bruler.ru/about_brand -f json`. Now advances by `needle.len()` — always a valid char boundary, and faster because it skips the whole rejected match instead of re-scanning inside it. Two regression tests cover multi-byte rejected matches and all-rejected cycles in Cyrillic text.
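The fix can be sketched like this (hypothetical names and a deliberately naive image-syntax check; the real rejection logic is more involved):

```rust
/// Simplified stand-in for the image-syntax rejection: treat a match as
/// "inside an image" when it falls between "![" and the next ")".
fn inside_image_syntax(md: &str, pos: usize) -> bool {
    md[..pos]
        .rfind("![")
        .map_or(false, |open| !md[open..pos].contains(')'))
}

/// Find the first occurrence of `needle` that is not inside image syntax.
fn find_outside_images(markdown: &str, needle: &str) -> Option<usize> {
    let mut search_from = 0;
    while let Some(rel) = markdown[search_from..].find(needle) {
        let pos = search_from + rel;
        if !inside_image_syntax(markdown, pos) {
            return Some(pos);
        }
        // Was `pos + 1`, which could land mid-char on multi-byte text and
        // panic on the next slice. `pos + needle.len()` is always a char
        // boundary (the byte right after a full match) and skips the whole
        // rejected match.
        search_from = pos + needle.len();
    }
    None
}
```

With the old `pos + 1` advance, a rejected Cyrillic match like `бренд` (2-byte chars) would put `search_from` in the middle of `б` and the next slice would panic.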
[0.3.17] — 2026-04-16
Changed
- `webclaw-fetch::sitemap::parse_robots_txt` now does proper directive parsing. The previous `trimmed[..8].eq_ignore_ascii_case("sitemap:")` slice couldn't handle "Sitemap :" (space before colon) from bad generators, didn't strip inline `# ...` comments, and would have returned empty/garbage values if a directive line had no URL. Now splits on the first colon, matches any-case `sitemap` as the directive name, strips comments, and requires the value to contain `://` before accepting it. Eight new unit tests cover case variants, space-before-colon, inline comments, non-URL values, and non-sitemap directives.
- `webclaw-fetch::crawler::is_cancelled` uses `Ordering::Acquire` (was `Relaxed`). Technically equivalent on x86/arm64 for single-word loads, but the explicit ordering documents the synchronization intent for readers and the compiler.
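The new directive parsing can be sketched per-line like this (an illustrative reduction; the real `parse_robots_txt` walks the whole file and collects every match):

```rust
/// Parse one robots.txt line; return the sitemap URL if the line is a
/// well-formed Sitemap directive.
fn parse_sitemap_line(line: &str) -> Option<String> {
    let line = line.split('#').next().unwrap_or("");   // strip inline comments
    let (name, value) = line.split_once(':')?;          // split on FIRST colon
    if !name.trim().eq_ignore_ascii_case("sitemap") {   // any-case, tolerates "Sitemap :"
        return None;
    }
    let value = value.trim();
    // Require a scheme so empty or garbage values are rejected.
    value.contains("://").then(|| value.to_string())
}
```

Splitting on the first colon only (not every colon) is what keeps the `https://` in the value intact.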
Added
- `webclaw-mcp` caches the Firefox `FetchClient` lazily. Tool calls that repeatedly request the Firefox profile without cookies used to build a fresh reqwest pool + TLS stack per call; a single `OnceLock` keeps the client alive for the life of the server. Chrome (default) and Random (by design per-call) are unaffected.
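The `OnceLock` pattern looks roughly like this (with a hypothetical stand-in for the real `FetchClient`, whose construction is the expensive part):

```rust
use std::sync::OnceLock;

// Stand-in for the real client: constructing it builds a connection pool and
// TLS stack, which is exactly what we want to do only once.
struct FetchClient {
    profile: &'static str,
}

impl FetchClient {
    fn new(profile: &'static str) -> Self {
        FetchClient { profile }
    }
}

// One cached Firefox client for the life of the process; first caller pays
// the construction cost, every later caller gets the same &'static reference.
static FIREFOX: OnceLock<FetchClient> = OnceLock::new();

fn firefox_client() -> &'static FetchClient {
    FIREFOX.get_or_init(|| FetchClient::new("firefox"))
}
```

`OnceLock::get_or_init` is thread-safe, so concurrent tool calls racing on first use still construct exactly one client.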
[0.3.16] — 2026-04-16
Hardened
- Response body caps across fetch + LLM providers (P2). Every HTTP response buffered from the network is now rejected if it exceeds a hard size cap. `webclaw-fetch::Response::from_wreq` caps HTML/doc responses at 50 MB (before the allocation pays for anything, and as a belt-and-braces check after `bytes().await`); `webclaw-llm` providers (anthropic / openai / ollama) cap JSON responses at 5 MB via a shared `response_json_capped` helper. Previously an adversarial or runaway upstream could push unbounded memory into the process. Closes the DoS-via-giant-body class of bugs noted in the audit.
- Crawler frontier cap (P2). After each depth level the frontier is truncated to `max(max_pages × 10, 100)` entries, keeping the most recently discovered links. Dense pages (tag clouds, search results) used to push the frontier into the tens of thousands even after `max_pages` halted new fetches, keeping string allocations alive long after the crawl was effectively done.
- Glob pattern validation (P2). User-supplied `include_patterns` / `exclude_patterns` passed to the crawler are now rejected if they contain more than 4 `**` wildcards or exceed 1024 chars. The backtracking matcher degrades exponentially on deeply-nested `**` against long paths; this keeps adversarial config files from weaponising it.
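The frontier cap can be sketched like this (hypothetical function name; the real crawler operates on its own frontier type):

```rust
/// Truncate the crawl frontier to max(max_pages * 10, 100) entries after a
/// depth level, keeping the most recently discovered links.
fn cap_frontier(frontier: &mut Vec<String>, max_pages: usize) {
    let cap = (max_pages * 10).max(100);
    if frontier.len() > cap {
        // Drop the oldest entries; the tail holds the newest discoveries.
        let excess = frontier.len() - cap;
        frontier.drain(..excess);
    }
    // Release the over-allocation so the memory actually comes back.
    frontier.shrink_to_fit();
}
```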
Cleanup
- Removed blanket `#![allow(dead_code)]` in `webclaw-cli/src/main.rs`. No dead code surfaced; the suppression was obsolete.
- `.gitignore`: replaced overbroad `*.json` with specific local-artifact patterns. The previous rule would have swallowed `package.json` / `components.json` / `.smithery/*.json` if they were ever modified.
[0.3.15] — 2026-04-16
Fixed
- Batch/crawl no longer panics on semaphore close (P1). Three `permit.acquire().await.expect("semaphore closed")` call sites in `webclaw-fetch` (`client::fetch_batch`, `client::fetch_and_extract_batch_with_options`, the `crawler` inner loop) now surface a typed `FetchError::Build("semaphore closed before acquire")` or a failed `PageResult` instead of panicking the spawned task. Under normal operation nothing changes; under shutdown-race or adversarial runtime state, the caller sees one failed entry in the batch instead of losing the task silently to the runtime's panic handler. Surfaced by the 2026-04-16 workspace audit.
[0.3.14] — 2026-04-16
Security
- `--on-change` command injection closed (P0). The `--on-change` flag on `webclaw watch` and its multi-URL variant used to pipe the whole user-supplied string through `sh -c`. Anyone (or any LLM driving the MCP surface, or any config file parsed on the user's behalf) that could influence the flag value could execute arbitrary shell. The command is now tokenized with `shlex` and executed directly via `Command::new(prog).args(args)`, so metacharacters like `;`, `&&`, `|`, `$()`, `<(...)`, and env expansion no longer fire. A `WEBCLAW_ALLOW_SHELL=1` escape hatch is available for users who genuinely need pipelines; it logs a warning on every invocation so it can't slip in silently. Surfaced by the 2026-04-16 workspace audit.
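The safe-execution pattern can be sketched like this; a naive whitespace splitter stands in for `shlex` to keep the example self-contained, and `run_on_change` is a hypothetical name:

```rust
use std::process::Command;

/// Tokenize the command and exec it directly instead of `sh -c`, so shell
/// metacharacters become literal arguments. (The real code uses shlex, which
/// also handles quoting; split_whitespace here is a simplification.)
fn run_on_change(cmd: &str) -> std::io::Result<std::process::ExitStatus> {
    let mut parts = cmd.split_whitespace();
    let prog = parts.next().expect("empty --on-change command");
    // `;`, `&&`, `$()` etc. are now plain argv entries, not shell syntax.
    Command::new(prog).args(parts).status()
}
```

With this shape, `notify-send done; rm -rf /` runs `notify-send` with the literal arguments `done;`, `rm`, `-rf`, `/` rather than two shell commands.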
[0.3.13] — 2026-04-10
Fixed
- Docker CMD replaced with ENTRYPOINT: both `Dockerfile` and `Dockerfile.ci` now use `ENTRYPOINT ["webclaw"]` instead of `CMD ["webclaw"]`. CLI arguments (e.g. `docker run webclaw https://example.com`) now pass through correctly instead of being ignored.
[0.3.12] — 2026-04-10
Added
- Crawl scope control: new `allow_subdomains` and `allow_external_links` fields on `CrawlConfig`. By default crawls stay same-origin. Enable `allow_subdomains` to follow sibling/child subdomains (e.g. blog.example.com from example.com), or `allow_external_links` for full cross-origin crawling. Root domain extraction uses a heuristic that handles two-part TLDs (co.uk, com.au).
[0.3.11] — 2026-04-10
Added
- Sitemap fallback paths: discovery now tries `/sitemap_index.xml`, `/wp-sitemap.xml`, and `/sitemap/sitemap-index.xml` in addition to the standard `/sitemap.xml`. Sites using WordPress or non-standard sitemap locations are now discovered without needing external search.
[0.3.10] — 2026-04-10
Changed
- Fetch timeout reduced from 30s to 12s: prevents cascading slowdowns when proxies are unresponsive. Worst-case per-URL drops from ~94s to ~25s.
- Retry attempts reduced from 3 to 2: combined with shorter timeout, total worst-case is 12s + 1s delay + 12s = 25s instead of 30s + 1s + 30s + 3s + 30s = 94s.
[0.3.9] — 2026-04-04
Fixed
- Layout tables rendered as sections: tables used for page layout (containing block elements like `<p>`, `<div>`, `<hr>`) are now rendered as standalone sections instead of pipe-delimited markdown tables. Fixes Drudge Report and similar sites where all content was flattened into a single unreadable line. (by @devnen in #14)
- Stack overflow on deeply nested HTML: pages with 200+ DOM nesting levels (e.g., Express.co.uk live blogs) no longer overflow the stack. Two-layer fix: a depth guard in markdown.rs falls back to iterator-based text collection at depth 200, and `extract_with_options()` spawns an 8 MB worker thread for safety on Windows. (by @devnen in #14)
- Noise filter swallowing content in malformed HTML: `<form>` tags are no longer unconditionally treated as noise — ASP.NET page-wrapping forms (>500 chars) are preserved. A safety valve prevents unclosed noise containers (header/footer with >5000 chars) from absorbing entire page content. (by @devnen in #14)
Changed
- Bold/italic block passthrough: `<b>` / `<strong>` / `<em>` / `<i>` tags containing block-level children (e.g., Drudge wrapping columns in `<b>`) now act as transparent containers instead of collapsing everything into inline bold/italic. (by @devnen in #14)
[0.3.8] — 2026-04-03
Fixed
- MCP research token overflow: research results are now saved to `~/.webclaw/research/` and the MCP tool returns file paths + findings instead of the full report. Prevents "exceeds maximum allowed tokens" errors in Claude/Cursor.
- Research caching: the same query returns its cached result instantly without spending credits.
- Anthropic rate-limit throttling: 60s delay between LLM calls in research to stay under Tier 1 limits (50K input tokens/min).
Added
- `dirs` dependency for `~/.webclaw/research/` path resolution.
[0.3.7] — 2026-04-03
Added
- `--research` CLI flag: run deep research via the cloud API. Prints the report to stdout and saves the full result (report + sources + findings) to a JSON file. Supports `--deep` for longer reports.
- MCP extract/summarize cloud fallback: when no local LLM is available, these tools now fall back to the cloud API instead of erroring. Set `WEBCLAW_API_KEY` for automatic fallback.
- MCP research structured output: the research tool now returns structured JSON (report + sources + findings + metadata) instead of raw text, so agents can reference individual findings and source URLs.
[0.3.6] — 2026-04-02
Added
- Structured data in markdown/LLM output: `__NEXT_DATA__`, SvelteKit, and JSON-LD data now appear as a `## Structured Data` section with a JSON code block at the end of `-f markdown` and `-f llm` output. Works with `--only-main-content` and all other flags.
Fixed
- Homebrew CI: formula now updates all 4 platform checksums after Docker build completes, preventing SHA mismatch on Linux installs (#12).
[0.3.5] — 2026-04-02
Added
- `__NEXT_DATA__` extraction: Next.js pages now have their `pageProps` JSON extracted into `structured_data`. Contains prices, product info, page state, and other data that isn't in the visible HTML. Tested on 45 sites — 13 now return rich structured data (BBC, Forbes, Nike, Stripe, TripAdvisor, Glassdoor, NASA, etc.).
[0.3.4] — 2026-04-01
Added
- SvelteKit data island extraction: extracts structured JSON from `kit.start()` data arrays. Handles unquoted JS object keys by converting to valid JSON before parsing. Data appears in the `structured_data` field.
Changed
- License changed from MIT to AGPL-3.0.
[0.3.3] — 2026-04-01
Changed
- Replaced custom TLS stack with wreq: migrated from webclaw-tls (patched rustls/h2/hyper/reqwest) to wreq by @0x676e67. wreq uses BoringSSL for TLS and the http2 crate for HTTP/2 fingerprinting — both battle-tested with 60+ browser profiles.
- Removed all `[patch.crates-io]` entries: consumers no longer need to patch rustls, h2, hyper, hyper-util, or reqwest. Just depend on webclaw normally.
- Browser profiles rebuilt on wreq's Emulation API: Chrome 145, Firefox 135, Safari 18, Edge 145 with correct TLS options (cipher suites, curves, GREASE, ECH, PSK session resumption), HTTP/2 SETTINGS ordering, pseudo-header order, and header wire order.
- Better TLS compatibility: BoringSSL handles more server configurations than patched rustls (e.g. servers that previously returned IllegalParameter alerts).
Removed
- webclaw-tls dependency and all 5 forked crates (webclaw-rustls, webclaw-h2, webclaw-hyper, webclaw-hyper-util, webclaw-reqwest).
Acknowledgments
- TLS and HTTP/2 fingerprinting powered by wreq and http2 by @0x676e67, who pioneered browser-grade HTTP/2 fingerprinting in Rust.
[0.3.2] — 2026-03-31
Added
- `--cookie-file` flag: load cookies from JSON files exported by browser extensions (EditThisCookie, Cookie-Editor). Format: `[{name, value, domain, ...}]`.
- MCP `cookies` parameter: the `scrape` tool now accepts a `cookies` array for authenticated scraping.
- Combined cookies: `--cookie` and `--cookie-file` can be used together and merge automatically.
[0.3.1] — 2026-03-30
Added
- Cookie warmup fallback: when a fetch returns an Akamai challenge page, automatically visits the homepage first to collect `_abck` / `bm_sz` cookies, then retries the original URL. Enables extraction of Akamai-protected subpages (e.g. fansale ticket pages) without JS rendering.
Changed
- Fixed HTTP header wire order (accept/user-agent were in wrong positions) and added H2 PRIORITY flag in HEADERS frames.
- `FetchResult.headers` now uses `http::HeaderMap` instead of `HashMap<String, String>` — avoids per-response allocation, preserves multi-value headers.
[0.3.0] — 2026-03-29
Changed
- Replaced primp with webclaw-tls: switched to custom TLS fingerprinting stack.
- Browser profiles: Chrome 146 (Win/Mac), Firefox 135+, Safari 18, Edge 146 — captured from real browsers.
- HTTP/2 fingerprinting: SETTINGS frame ordering and pseudo-header ordering based on concepts pioneered by @0x676e67.
Fixed
- HTTPS completely broken (#5): primp's forked rustls rejected valid certificates (UnknownIssuer on cross-signed chains like example.com). Fixed by using native OS root CAs alongside Mozilla bundle.
- Unknown certificate extensions: servers returning SCT in certificate entries no longer cause TLS errors.
Added
- Native root CA support: uses OS trust store (macOS Keychain, Windows cert store) in addition to webpki-roots.
- HTTP/2 fingerprinting: SETTINGS frame ordering and pseudo-header ordering match real browsers.
- Per-browser header ordering: HTTP headers sent in browser-specific wire order.
- Bandwidth tracking: atomic byte counters shared across cloned clients.
[0.2.2] — 2026-03-27
Fixed
- `cargo install` broken with primp 1.2.0: added missing `reqwest` patch to `[patch.crates-io]`. primp moved to reqwest 0.13, which requires a patched fork.
- Weekly dependency check: CI now runs every Monday to catch primp patch drift before users hit it.
[0.2.1] — 2026-03-27
Added
- Docker image on GHCR: `docker run ghcr.io/0xmassi/webclaw` — auto-built on every release
- QuickJS data island extraction: inline `<script>` execution catches `window.__PRELOADED_STATE__`, Next.js hydration data, and other JS-embedded content
Fixed
- Docker CI now runs as part of the release workflow (was missing, image was never published)
[0.2.0] — 2026-03-26
Added
- DOCX extraction: auto-detected by Content-Type or URL extension, outputs markdown with headings
- XLSX/XLS extraction: spreadsheets converted to markdown tables, multi-sheet support via calamine
- CSV extraction: parsed with quoted field handling, output as markdown table
- HTML output format: `-f html` returns sanitized HTML from the extracted content
- Multi-URL watch: `--watch` now works with `--urls-file` to monitor multiple URLs in parallel
- Batch + LLM extraction: `--extract-prompt` and `--extract-json` now work with multiple URLs
- Scheduled batch watch: watch multiple URLs with aggregate change reports and per-URL diffs
[0.1.7] — 2026-03-26
Fixed
- `--only-main-content`, `--include`, and `--exclude` now work in batch mode (#3)
[0.1.6] — 2026-03-26
Added
- `--watch`: monitor a URL for changes at a configurable interval with diff output
- `--watch-interval`: seconds between checks (default: 300)
- `--on-change`: run a command when changes are detected (diff JSON piped to stdin)
- `--webhook`: POST JSON notifications on crawl/batch complete and watch changes. Auto-formats for Discord and Slack webhooks
[0.1.5] — 2026-03-26
Added
- `--output-dir`: save each page to a separate file instead of stdout. Works with single-URL, crawl, and batch modes
- CSV input with custom filenames: `url,filename` format in `--urls-file`
- Root URLs use `hostname/index.ext` to avoid collisions in batch mode
- Subdirectories created automatically from URL path structure
[0.1.4] — 2026-03-26
Added
- QuickJS integration for extracting data from inline JavaScript (NYTimes +168%, Wired +580% more content)
- Executes inline `<script>` tags in a sandboxed runtime to capture `window.__*` data blobs
- Parses Next.js RSC flight data (`self.__next_f`) for App Router sites
- Smart text filtering rejects CSS, base64, file paths, and code — only keeps readable prose
- Feature-gated with the `quickjs` feature flag (enabled by default, disable for WASM builds)
[0.1.3] — 2026-03-25
Added
- Crawl streaming: real-time progress on stderr as pages complete (`[2/50] OK https://... (234ms, 1523 words)`)
- Crawl resume/cancel: `--crawl-state <path>` saves progress on Ctrl+C and resumes from where it left off
- MCP server proxy support via `WEBCLAW_PROXY` and `WEBCLAW_PROXY_FILE` env vars
Changed
- Crawl results now expose visited set and remaining frontier for accurate state persistence
[0.1.2] — 2026-03-25
Changed
- Default TLS profile switched from Chrome145/Win to Safari26/Mac (highest pass rate across CF-protected sites)
- Plain client fallback: when impersonated TLS gets connection error or 403, automatically retries without impersonation (fixes ycombinator.com, producthunt.com, and similar sites)
Fixed
- Reddit scraping: use plain HTTP client for the `.json` endpoint (TLS fingerprinting was getting blocked)
Added
- YouTube transcript extraction infrastructure in webclaw-core (caption track parsing, timed text XML parser) — wired up when cloud API launches
[0.1.1] — 2026-03-24
Fixed
- MCP server now identifies as `webclaw-mcp` instead of `rmcp` in the MCP handshake
- Research tool polling caps at 200 iterations (~10 min) instead of looping forever
- CLI returns non-zero exit codes on errors (invalid format, fetch failures, missing LLM)
- Text format output strips markdown table syntax (`| --- |` pipes)
- All MCP tools validate URLs before network calls with clear error messages
- Cloud API HTTP client has 60s timeout instead of no timeout
- Local fetch calls timeout after 30s to prevent hanging on slow servers
- Diff cloud fallback computes actual diff instead of returning raw scrape JSON
- FetchClient startup failure logs and exits gracefully instead of panicking
Added
- Upper bounds: batch capped at 100 URLs, crawl capped at 500 pages
[0.1.0] — 2026-03-18
First public release. Full-featured web content extraction toolkit for LLMs.
Core Extraction
- Readability-style content scoring with text density, semantic tags, and link density penalties
- Exact CSS class token noise filtering with body-force fallback for SPAs
- HTML → markdown conversion with URL resolution, image alt text, srcset optimization
- 9-step LLM text optimization pipeline (67% token reduction vs raw HTML)
- JSON data island extraction (React, Next.js, Contentful CMS)
- YouTube transcript extraction (title, channel, views, duration, description)
- Lazy-loaded image detection (data-src, data-lazy-src, data-original)
- Brand identity extraction (name, colors, fonts, logos, OG image)
- Content change tracking / diff engine
- CSS selector filtering (include/exclude)
Fetching & Crawling
- TLS fingerprint impersonation via Impit (Chrome 142, Firefox 144, random mode)
- BFS same-origin crawler with configurable depth, concurrency, and delay
- Sitemap.xml and robots.txt discovery
- Batch multi-URL concurrent extraction
- Per-request proxy rotation from pool file
- Reddit JSON API and LinkedIn post extractors
LLM Integration
- Provider chain: Ollama (local-first) → OpenAI → Anthropic
- JSON schema extraction (structured data from pages)
- Natural language prompt extraction
- Page summarization with configurable sentence count
- PDF text extraction via pdf-extract
- Auto-detection by Content-Type header
MCP Server
- 8 tools: scrape, crawl, map, batch, extract, summarize, diff, brand
- stdio transport for Claude Desktop, Claude Code, and any MCP client
- Smart Fetch: local extraction first, cloud API fallback
CLI
- 4 output formats: markdown, JSON, plain text, LLM-optimized
- CSS selector filtering, crawling, sitemap discovery
- Brand extraction, content diffing, LLM features
- Browser profile selection, proxy support, stdin/file input
Infrastructure
- Docker multi-stage build with Ollama sidecar
- Deploy script for Hetzner VPS