- Add GeminiCliProvider: shells out to `gemini -p` with --output-format json, injection-safe prompt passing, MCP server suppression via temp workdir, 6-slot concurrency semaphore, 60s subprocess deadline
- Add --llm-provider, --llm-model, --llm-base-url CLI flags for per-call overrides
- Provider chain: Gemini CLI → OpenAI → Ollama → Anthropic
- Move LLM timing to dispatch layer (`LLM: Xs` on stderr)
- Default Ollama model: qwen3:8b → qwen3.5:9b (benchmark shows better schema extraction)
- Add `noxa mcp` subcommand
- Add docs/reports/llm-benchmark-2026-04-11.md (Gemini vs qwen3.5:4b vs qwen3.5:9b)
- Bump version 0.3.11 → 0.4.0

Co-authored-by: Claude <claude@anthropic.com>
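Illustrative use of the new per-call override flags (only --llm-provider, --llm-model, and --llm-base-url come from this change; the subcommand and URL below are placeholders, not confirmed by the commit):

  noxa fetch https://example.com --llm-provider gemini --llm-model gemini-2.5-pro
  noxa fetch https://example.com --llm-provider ollama --llm-base-url http://localhost:11434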
# Secrets, URLs, and path overrides only; everything else goes in config.json
# See config.example.json for the full list of configurable defaults.

# Cloud API key (required for --cloud / --research)
NOXA_API_KEY=

# Single proxy URL (or use NOXA_PROXY_FILE for pool rotation)
NOXA_PROXY=

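# Example value (hypothetical proxy; user:pass-in-URL auth is an assumption,
# not confirmed by the docs):
# NOXA_PROXY=http://user:pass@proxy.example.com:8080
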
# Proxy pool file path for rotating proxies
NOXA_PROXY_FILE=

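# Example value (path is illustrative; a one-proxy-URL-per-line file format is
# an assumption -- check the project docs):
# NOXA_PROXY_FILE=./proxies.txt
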
# Webhook URL for completion notifications
NOXA_WEBHOOK_URL=

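# Example value (hypothetical endpoint; the notification payload is defined by
# noxa and not documented here):
# NOXA_WEBHOOK_URL=https://hooks.example.com/noxa/done
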
# LLM provider configuration and backend defaults
# NOXA_LLM_PROVIDER=gemini
# NOXA_LLM_MODEL=gemini-2.5-pro
# Ollama or OpenAI-compatible endpoint:
# NOXA_LLM_BASE_URL=
# GEMINI_MODEL=gemini-2.5-pro
# OLLAMA_HOST=http://localhost:11434
# OLLAMA_MODEL=qwen3.5:9b
# OLLAMA_HEALTH_TIMEOUT_MS=2000
# OPENAI_API_KEY=
# ANTHROPIC_API_KEY=

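# Example override: route calls to a local Ollama instead of the Gemini CLI.
# (The provider name "ollama" is an assumption inferred from the provider
# chain; verify it against the CLI's accepted values.)
# NOXA_LLM_PROVIDER=ollama
# NOXA_LLM_MODEL=qwen3.5:9b
# NOXA_LLM_BASE_URL=http://localhost:11434
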
# Optional: path to a non-default config file (default: ./config.json)
# NOXA_CONFIG=/path/to/my-config.json