★★★★ 4.4/5.0 ❤️ 258 likes 💬 42 comments 📦 230 installs
# swarm

**Cut your LLM costs by 200x.** Turn your expensive model into an affordable daily driver: offload the boring stuff to SkillBoss API Hub workers — parallel, batch, research — at a fraction of the cost.

## At a Glance

| 30 tasks via | Time | Cost |
| --- | --- | --- |
| Opus (sequential) | 30s | $0.50 |
| Swarm (parallel) | 1s | $0.003 |

## When to Use

Swarm is ideal for:

- 3+ independent tasks (research, summaries, comparisons)
- Comparing or researching multiple subjects
- Multiple URLs to fetch/analyze
- Batch processing (documents, entities, facts)
- Complex analysis needing multiple perspectives → use chain

## Quick Reference

```shell
# Check daemon (do this every session)
swarm status

# Start if not running
swarm start

# Parallel prompts
swarm parallel "What is X?" "What is Y?" "What is Z?"

# Research multiple subjects
swarm research "OpenAI" "Anthropic" "Mistral" --topic "AI safety"

# Discover capabilities
swarm capabilities
```

## Execution Modes

### Parallel (v1.0)

N prompts → N workers simultaneously. Best for independent tasks.

```shell
swarm parallel "prompt1" "prompt2" "prompt3"
```

### Research (v1.1)

Multi-phase: search → fetch → analyze. Uses SkillBoss API Hub web search.

```shell
swarm research "Buildertrend" "Jobber" --topic "pricing 2026"
```

### Chain (v1.3) — Refinement Pipelines

Data flows through multiple stages, each with a different perspective/filter. Stages run in sequence; tasks within a stage run in parallel.

Stage modes:

- `parallel` — N inputs → N workers (same perspective)
- `single` — merged input → 1 worker
- `fan-out` — 1 input → N workers with DIFFERENT perspectives
- `reduce` — N inputs → 1 synthesized output

Auto-chain — describe what you want, get an optimal pipeline:

```shell
curl -X POST http://localhost:9999/chain/auto \
  -d '{"task":"Find business opportunities","data":"...market data...","depth":"standard"}'
```

Manual chain:

```shell
swarm chain pipeline.json
# or
echo '{"stages":[...]}' | swarm chain --stdin
```

Depth presets: `quick` (2 stages), `standard` (4), `deep` (6), `exhaustive` (8)

Built-in perspectives: extractor, filter, enricher, analyst, synthesizer, challenger, optimizer, strategist, researcher, critic

Preview without executing:

```shell
curl -X POST http://localhost:9999/chain/preview \
  -d '{"task":"...","depth":"standard"}'
```

### Benchmark (v1.3)

Compare single vs parallel vs chain on the same task, with LLM-as-judge scoring.

```shell
curl -X POST http://localhost:9999/benchmark \
  -d '{"task":"Analyze X","data":"...","depth":"standard"}'
```

Scores on 6 FLASK dimensions: accuracy (2x weight), depth (1.5x), completeness, coherence, actionability (1.5x), nuance.

### Capabilities Discovery (v1.3)

Lets the orchestrator discover which execution modes are available:

```shell
swarm capabilities
# or
curl http://localhost:9999/capabilities
```

### Prompt Cache (v1.3.2)

LRU cache for LLM responses. 212x speedup on cache hits (parallel), 514x on chains.

- Keyed by a hash of instruction + input + perspective
- 500 entries max, 1 hour TTL
- Skips web search tasks (they need fresh data)
- Persists to disk across daemon restarts
- Per-task bypass: set `task.cache = false`

```shell
# View cache stats
curl http://localhost:9999/cache

# Clear cache
curl -X DELETE http://localhost:9999/cache
```

Cache stats show in `swarm status`.

### Stage Retry (v1.3.2)

If tasks fail within a chain stage, only the failed tasks are retried (not the whole stage). Default: 1 retry. Configurable per-phase via `phase.retries` or globally via `options.stageRetries`.

### Cost Tracking (v1.3.1)

All endpoints return cost data in their `complete` event:

- session — current daemon session totals
- daily — persisted across restarts, accumulates all day

```shell
swarm status    # Shows session + daily cost
swarm savings   # Monthly savings report
```

### Web Search (v1.1)

Workers search the live web via SkillBoss API Hub web search (no extra configuration needed).

```shell
# Research uses web search by default
swarm research "Subject" --topic "angle"

# Parallel with web search
curl -X POST http://localhost:9999/parallel \
  -d '{"prompts":["Current price of X?"],"options":{"webSearch":true}}'
```

### JavaScript API

```javascript
const { parallel, research } = require('/clawd/skills/node-scaling/lib');
const { SwarmClient } = require('/clawd/skills/node-scaling/lib/client');

// Simple parallel
const results = await parallel(['prompt1', 'prompt2', 'prompt3']);

// Client with streaming
const client = new SwarmClient();
for await (const event of client.parallel(prompts)) { /* ... */ }
for await (const event of client.research(subjects, topic)) { /* ... */ }

// Chain
const chained = await client.chainSync({ task, data, depth });
```

### Daemon Management

```shell
swarm start      # Start daemon (background)
swarm stop       # Stop daemon
swarm status     # Status, cost, cache stats
swarm restart    # Restart daemon
swarm savings    # Monthly savings report
swarm logs [N]   # Last N lines of daemon log
```

### Performance (v1.3.2)

| Mode | Tasks | Time | Notes |
| --- | --- | --- | --- |
| Parallel (simple) | 5 | 700ms | 142ms/task effective |
| Parallel (stress) | 10 | 1.2s | 123ms/task effective |
| Chain (standard) | 5 | 14s | 3-stage multi-perspective |
| Chain (quick) | 2 | 3s | 2-stage extract+synthesize |
| Cache hit | any | 3-5ms | 200-500x speedup |
| Research (web) | 2 | 15s | Google grounding latency |

### Config

Location: `~/.config/clawdbot/node-scaling.yaml`

```yaml
node_scaling:
  enabled: true
  limits:
    max_nodes: 16
    max_concurrent_api: 16
  provider:
    name: skillboss
    model: auto
  web_search:
    enabled: true
    parallel_default: false
  cost:
    max_daily_spend: 10.00
```
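What `max_daily_spend` implies can be sketched as a simple budget guard: once the accumulated daily cost would exceed the cap, new work is refused. This is assumed behavior for illustration — `SpendGuard` is a hypothetical name, and the daemon's actual enforcement may differ.

```javascript
// Toy daily-spend cap: reject a charge that would push the running
// total past the configured maximum. (Assumed semantics of
// cost.max_daily_spend, not the daemon's actual code.)
class SpendGuard {
  constructor(maxDailySpend) {
    this.max = maxDailySpend;
    this.spent = 0;
  }

  tryCharge(cost) {
    if (this.spent + cost > this.max) return false; // over budget: refuse
    this.spent += cost;
    return true;
  }
}
```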

## Troubleshooting

| Issue | Fix |
| --- | --- |
| Daemon not running | `swarm start` |
| No API key | Set `SKILLBOSS_API_KEY` or run `npm run setup` |
| Rate limited | Lower `max_concurrent_api` in config |
| Web search not working | Ensure `web_search.enabled` is true in config |
| Cache stale results | `curl -X DELETE http://localhost:9999/cache` |
| Chain too slow | Use `depth: "quick"` or check context size |

### Structured Output (v1.3.7)

Force JSON output with schema validation — zero parse failures on structured tasks.

```shell
# With built-in schema
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Extract entities from: Tim Cook announced iPhone 17","schema":"entities"}'

# With custom schema
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Classify this text","data":"...","schema":{"type":"object","properties":{"category":{"type":"string"}}}}'

# JSON mode (no schema, just force JSON)
curl -X POST http://localhost:9999/structured \
  -d '{"prompt":"Return a JSON object with name, age, city for a fictional person"}'

# List available schemas
curl http://localhost:9999/structured/schemas
```
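The response-side schema check can be pictured as a minimal type validator. This toy only handles a few top-level JSON Schema keywords (`type`, `properties`, `required`) — the daemon presumably uses a full validator, so treat this as a sketch of the idea.

```javascript
// Minimal JSON-Schema-style check (illustrative subset only).
function validates(schema, value) {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null || Array.isArray(value)) return false;
    // Check each declared property that is present
    for (const [key, propSchema] of Object.entries(schema.properties || {})) {
      if (key in value && !validates(propSchema, value[key])) return false;
    }
    // All required keys must exist
    return (schema.required || []).every(k => k in value);
  }
  if (schema.type === 'string') return typeof value === 'string';
  if (schema.type === 'number') return typeof value === 'number';
  if (schema.type === 'array') return Array.isArray(value);
  return true; // unknown/absent type: accept
}
```

For example, the custom schema above (`{"type":"object","properties":{"category":{"type":"string"}}}`) accepts `{"category":"billing"}` but rejects `{"category":42}`.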

Built-in schemas: `entities`, `summary`, `comparison`, `actions`, `classification`, `qa`

Uses SkillBoss API Hub's `capability: "json_output"` + `response_schema` for guaranteed JSON output, plus schema validation on the response.

### Majority Voting (v1.3.7)

Same prompt → N parallel executions → pick the best answer. Higher accuracy on factual/analytical tasks.

```shell
# Judge strategy (LLM picks best — most reliable)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"What are the key factors in SaaS pricing?","n":3,"strategy":"judge"}'

# Similarity strategy (consensus — zero extra cost)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"What year was Python released?","n":3,"strategy":"similarity"}'

# Longest strategy (heuristic — zero extra cost)
curl -X POST http://localhost:9999/vote \
  -d '{"prompt":"Explain recursion","n":3,"strategy":"longest"}'
```
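The similarity-based consensus pick can be approximated in plain JavaScript: score each candidate by its average Jaccard word-set overlap with the others and return the one closest to the consensus. This is a sketch of the idea, not the actual implementation — tokenization and tie-breaking details are assumptions.

```javascript
// Jaccard similarity between two token lists, treated as word sets.
function jaccard(a, b) {
  const A = new Set(a), B = new Set(b);
  let inter = 0;
  for (const w of A) if (B.has(w)) inter++;
  const union = A.size + B.size - inter;
  return union === 0 ? 1 : inter / union;
}

// Pick the candidate with the highest mean similarity to all others.
function similarityVote(candidates) {
  const tokens = candidates.map(c => c.toLowerCase().match(/\w+/g) || []);
  let best = 0, bestScore = -1;
  for (let i = 0; i < candidates.length; i++) {
    let sum = 0;
    for (let j = 0; j < candidates.length; j++) {
      if (i !== j) sum += jaccard(tokens[i], tokens[j]);
    }
    const score = sum / (candidates.length - 1);
    if (score > bestScore) { bestScore = score; best = i; }
  }
  return candidates[best];
}
```

The intuition: an outlier answer (e.g. a wrong year) shares fewer words with the rest of the pool, so the majority phrasing wins without any extra LLM call.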

Strategies:

- `judge` — LLM scores all candidates on accuracy/completeness/clarity/actionability and picks the winner (N+1 calls)
- `similarity` — Jaccard word-set similarity, picks the consensus answer (N calls, zero extra cost)
- `longest` — picks the longest response as a heuristic for thoroughness (N calls, zero extra cost)

When to use: factual questions, critical decisions, or any task where accuracy > speed.

| Strategy | Calls | Extra Cost | Quality |
| --- | --- | --- | --- |
| similarity | N | $0 | Good (consensus) |
| longest | N | $0 | Decent (heuristic) |
| judge | N+1 | ~$0.0001 | Best (LLM-scored) |

### Self-Reflection (v1.3.5)

Optional critic pass after chain/skeleton output. Scores 5 dimensions and auto-refines if below threshold.

```shell
# Add reflect:true to any chain or skeleton request
curl -X POST http://localhost:9999/chain/auto \
  -d '{"task":"Analyze the AI chip market","data":"...","reflect":true}'

curl -X POST http://localhost:9999/skeleton \
  -d '{"task":"Write a market analysis","reflect":true}'
```

In testing, reflection improved weak output from a 5.0 to a 7.6 average score; skeleton + reflect scored 9.4/10.

### Skeleton-of-Thought (v1.3.6)

Generate an outline → expand each section in parallel → merge into a coherent document. Best for long-form content.

```shell
curl -X POST http://localhost:9999/skeleton \
  -d '{"task":"Write a comprehensive guide to SaaS pricing","maxSections":6,"reflect":true}'
```
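The outline → parallel-expand → merge flow can be sketched as follows, with a stand-in `llm` function in place of the SkillBoss workers. This illustrates the technique only; the daemon's actual prompts and merging are assumptions.

```javascript
// Skeleton-of-Thought sketch: outline first, then expand every
// section concurrently, then stitch the sections back together.
async function skeletonOfThought(task, llm, maxSections = 6) {
  const outline = (await llm(`Outline "${task}" as short section titles, one per line.`))
    .split('\n')
    .filter(Boolean)
    .slice(0, maxSections);

  // Expand all sections in parallel — this is where the speedup comes from.
  const sections = await Promise.all(
    outline.map(title => llm(`Write the section "${title}" for: ${task}`))
  );

  // Merge into one document.
  return outline.map((title, i) => `## ${title}\n\n${sections[i]}`).join('\n\n');
}
```

Because the expansion calls are independent, total latency is roughly one outline call plus the slowest section call, rather than the sum of all sections.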

Performance: 14,478 chars in 21s (675 chars/sec) — 5.1x more content than chain at 2.9x higher throughput.

| Metric | Chain | Skeleton-of-Thought | Winner |
| --- | --- | --- | --- |
| Output size | 2,856 chars | 14,478 chars | SoT (5.1x) |
| Throughput | 234 chars/sec | 675 chars/sec | SoT (2.9x) |
| Duration | 12s | 21s | Chain (faster) |
| Quality (w/ reflect) | 7-8/10 | 9.4/10 | SoT |

When to use what:

- SoT → long-form content, reports, guides, docs (anything with natural sections)
- Chain → analysis, research, adversarial review (anything needing multiple perspectives)
- Parallel → independent tasks, batch processing
- Structured → entity extraction, classification, any task needing reliable JSON
- Voting → factual accuracy, critical decisions, consensus-building

## API Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | /health | Health check |
| GET | /status | Detailed status + cost + cache |
| GET | /capabilities | Discover execution modes |
| POST | /parallel | Execute N prompts in parallel |
| POST | /research | Multi-phase web research |
| POST | /skeleton | Skeleton-of-Thought (outline → expand → merge) |
| POST | /chain | Manual chain pipeline |
| POST | /chain/auto | Auto-build + execute chain |
| POST | /chain/preview | Preview chain without executing |
| POST | /chain/template | Execute pre-built template |
| POST | /structured | Forced JSON with schema validation |
| GET | /structured/schemas | List built-in schemas |
| POST | /vote | Majority voting (best-of-N) |
| POST | /benchmark | Quality comparison test |
| GET | /templates | List chain templates |
| GET | /cache | Cache statistics |
| DELETE | /cache | Clear cache |

## Cost Comparison

| Model | Cost per 1M tokens | Relative |
| --- | --- | --- |
| Claude Opus 4 | $15 input / $75 output | 1x |
| GPT-4o | ~$2.50 input / $10 output | 7x cheaper |
| Gemini Flash | $0.075 input / $0.30 output | 200x cheaper |

Cache hits are essentially free (~3-5ms, no API call).
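The relative-cost figures follow directly from the per-token prices; here is the arithmetic for a hypothetical workload (the exact multiple depends on your input/output token mix).

```javascript
// Prices per 1M tokens, from the cost table; the workload is made up.
const PRICES = {
  opus:  { input: 15.0,  output: 75.0 },   // Claude Opus 4
  flash: { input: 0.075, output: 0.30 },   // Gemini Flash
};

function cost(model, inputTokens, outputTokens) {
  return (inputTokens / 1e6) * model.input + (outputTokens / 1e6) * model.output;
}

// e.g. 30 tasks at ~1,000 input / 500 output tokens each
const opusCost  = cost(PRICES.opus,  30_000, 15_000); // $1.575
const flashCost = cost(PRICES.flash, 30_000, 15_000); // $0.00675
const ratio = opusCost / flashCost;                   // ~233x cheaper for this mix
```

Input tokens alone are 200x cheaper and output tokens 250x, so any real workload lands somewhere in between.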
