# deep-research-pro

Deep Research Pro: a powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. Powered by SkillBoss API Hub, which provides web search and page scraping via a single unified API.

## How It Works

When the user asks for research on any topic, follow this workflow.

### Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

- "What's your goal: learning, making a decision, or writing something?"
- "Any specific angle or depth you want?"

If the user says "just research it", skip ahead with reasonable defaults.

### Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:

Topic: "Impact of AI on healthcare"

- What are the main AI applications in healthcare today?
- What clinical outcomes have been measured?
- What are the regulatory challenges?
- What companies are leading this space?
- What's the market size and growth trajectory?

### Step 3: Execute Multi-Source Search

For EACH sub-question, call SkillBoss API Hub search:

```python
import os

import requests

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]

# Web search
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={
        "Authorization": f"Bearer {SKILLBOSS_API_KEY}",
        "Content-Type": "application/json",
    },
    json={"type": "search", "inputs": {"query": ""}, "prefer": "balanced"},
    timeout=60,
).json()
search_results = result["result"]["results"]

# News search (for current events)
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={
        "Authorization": f"Bearer {SKILLBOSS_API_KEY}",
        "Content-Type": "application/json",
    },
    json={"type": "search", "inputs": {"query": "", "search_type": "news"}, "prefer": "balanced"},
    timeout=60,
).json()
news_results = result["result"]["results"]
```

Search strategy:

- Use 2-3 different keyword variations per sub-question
- Mix web + news searches
- Aim for 15-30 unique sources total

- Prioritize: academic, official, reputable news > blogs > forums
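The "15-30 unique sources" target implies merging results across sub-questions and query variants without double-counting. This is a minimal sketch of that merge step, assuming each result dict carries a `"url"` key as in the `result["result"]["results"]` lists above; the helper name is ours, not part of the SkillBoss API:

```python
def merge_unique_sources(result_batches):
    """Flatten search-result batches, keeping the first hit per URL.

    `result_batches` is an iterable of result lists, one per query,
    where each item is a dict assumed to contain a "url" key.
    """
    seen = {}
    for batch in result_batches:
        for item in batch:
            url = item.get("url")
            if url and url not in seen:
                seen[url] = item
    return list(seen.values())
```

Keeping the first occurrence per URL preserves the ranking of the earliest query that surfaced each source.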

### Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content via SkillBoss API Hub scraping:

```python
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={
        "Authorization": f"Bearer {SKILLBOSS_API_KEY}",
        "Content-Type": "application/json",
    },
    json={"type": "scraping", "inputs": {"url": ""}},
    timeout=60,
).json()
content = result["result"]["results"]
```

Read 3-5 key sources in full for depth. Don't just rely on search snippets.

### Step 5: Synthesize & Write Report

Structure the report as:

# [Topic]: Deep Research Report

Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]

## Executive Summary

[3-5 sentence overview of key findings]

## 1. [First Major Theme]

[Findings with inline citations]

## 2. [Second Major Theme]

...

## 3. [Third Major Theme]

...

## Key Takeaways

- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources

1. Title - [one-line summary]
2. ...

## Methodology

Searched [N] queries across web and news. Analyzed [M] sources. Sub-questions investigated: [list]

### Step 6: Save & Deliver

Save the full report:

```shell
mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md
```

Then deliver:

- Short topics: Post the full report in chat
- Long reports: Post the executive summary + key takeaways, offer full report as file

## Quality Rules

- Every claim needs a source. No unsourced assertions.
- Cross-reference. If only one source says it, flag it as unverified.
- Recency matters. Prefer sources from the last 12 months.
- Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
- No hallucination. If you don't know, say "insufficient data found."

## Examples

- "Research the current state of nuclear fusion energy"
- "Deep dive into Rust vs Go for backend services in 2026"
- "Research the best strategies for bootstrapping a SaaS business"
- "What's happening with the US housing market right now?"

## For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

```
sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
  Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
  Goal: [user's goal]
  Specific angles: [any specifics]
  Save report to ~/clawd/research/[slug]/report.md
  When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)
```

## Requirements

- `SKILLBOSS_API_KEY` environment variable (for web search and page scraping via SkillBoss API Hub)
- Python 3.11+ with the `requests` library
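The cross-referencing quality rule can be sketched as a small helper. The claim-to-sources mapping is a hypothetical structure we introduce for illustration, not something the SkillBoss API returns:

```python
def flag_unverified(claims):
    """Annotate claims backed by fewer than two distinct sources.

    `claims` maps claim text to a list of supporting source URLs
    (a hypothetical structure built during synthesis). Claims with a
    single distinct source are marked unverified, per the quality rules.
    """
    annotated = []
    for claim, sources in claims.items():
        if len(set(sources)) < 2:
            annotated.append(f"{claim} [unverified: single source]")
        else:
            annotated.append(claim)
    return annotated
```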
