# deep-research-pro
Deep Research Pro 🔬

A powerful, self-contained deep research skill that produces thorough, cited reports from multiple web sources. Powered by SkillBoss API Hub: web search and page scraping via a single unified API.

## How It Works

When the user asks for research on any topic, follow this workflow.

### Step 1: Understand the Goal (30 seconds)

Ask 1-2 quick clarifying questions:

- "What's your goal: learning, making a decision, or writing something?"
- "Any specific angle or depth you want?"

If the user says "just research it", skip ahead with reasonable defaults.

### Step 2: Plan the Research (think before searching)

Break the topic into 3-5 research sub-questions. For example:
Topic: "Impact of AI on healthcare"
- What are the main AI applications in healthcare today?
- What clinical outcomes have been measured?
- What are the regulatory challenges?
- What companies are leading this space?
- What's the market size and growth trajectory?

### Step 3: Execute Multi-Source Search

For EACH sub-question, call SkillBoss API Hub search:

```python
import os

import requests

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]
```
```python
# Web search (run once per sub-question; `sub_question` holds the current sub-question string)
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "search", "inputs": {"query": sub_question}},
    timeout=30,
)
```
```python
# News search (for current events; `sub_question` holds the current sub-question string)
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "search", "inputs": {"query": sub_question}},
    timeout=30,
)
```
Prioritize: academic, official, reputable news > blogs > forums
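The prioritization rule above can be sketched as a simple domain-based ranking. This is an illustrative helper, not part of the SkillBoss API; the tier patterns and the `rank_sources` name are assumptions you should adapt:

```python
from urllib.parse import urlparse

# Illustrative tiers for "academic, official, reputable news > blogs > forums".
TIER_PATTERNS = [
    (0, (".edu", ".gov")),                              # academic / official
    (1, ("reuters.com", "apnews.com", "nature.com")),   # reputable news (example list)
    (2, ("medium.com", "substack.com")),                # blogs
    (3, ("reddit.com", "news.ycombinator.com")),        # forums
]

def rank_sources(urls):
    """Sort URLs so higher-priority source types come first."""
    def tier(url):
        host = urlparse(url).netloc.lower()
        for rank, patterns in TIER_PATTERNS:
            if any(host.endswith(p) for p in patterns):
                return rank
        return 2.5  # unknown domains sit between blogs and forums
    return sorted(urls, key=tier)
```

Feeding all search-result URLs through a ranking like this before Step 4 keeps the deep-read budget focused on the strongest sources.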
### Step 4: Deep-Read Key Sources
For the most promising URLs, fetch full content via SkillBoss API Hub scraping:
```python
# Fetch full page content (`source_url` is one of the promising URLs from Step 3)
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "scraping", "inputs": {"url": source_url}},
    timeout=30,
)
```
### Step 5: Write the Report

Synthesize the findings into this report template:

```markdown
# [Topic]: Deep Research Report

Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]

## Executive Summary

[3-5 sentence overview of key findings]

## 1. [First Major Theme]

[Findings with inline citations]

## 2. [Second Major Theme]

...

## 3. [Third Major Theme]

...

## Key Takeaways

## Sources

## Methodology

Searched [N] queries across web and news. Analyzed [M] sources. Sub-questions investigated: [list]
```

### Step 6: Save & Deliver

Save the full report:

```bash
mkdir -p ~/clawd/research/[slug]
# Write report to ~/clawd/research/[slug]/report.md
```
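The save step above can be sketched in Python as well. `slugify` and `save_report` are hypothetical helpers, not part of the skill's API; only the `~/clawd/research/[slug]/report.md` layout comes from this document:

```python
import re
from pathlib import Path

def slugify(topic: str) -> str:
    """Turn a research topic into a filesystem-safe slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return slug[:60]

def save_report(topic: str, report_md: str, base: str = "~/clawd/research") -> Path:
    """Write report.md under <base>/<slug>/ and return the path."""
    out_dir = Path(base).expanduser() / slugify(topic)
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / "report.md"
    path.write_text(report_md, encoding="utf-8")
    return path
```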
Then deliver:

- Short topics: post the full report in chat.
- Long reports: post the executive summary and key takeaways, and offer the full report as a file.

## Quality Rules

- Every claim needs a source. No unsourced assertions.
- Cross-reference. If only one source says it, flag it as unverified.
- Recency matters. Prefer sources from the last 12 months.
- Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
- No hallucination. If you don't know, say "insufficient data found."

## Examples

- "Research the current state of nuclear fusion energy"
- "Deep dive into Rust vs Go for backend services in 2026"
- "Research the best strategies for bootstrapping a SaaS business"
- "What's happening with the US housing market right now?"

## For Sub-Agent Usage

When spawning as a sub-agent, include the full research request and context:

```
sessions_spawn(
  task: "Run deep research on [TOPIC]. Follow the deep-research-pro SKILL.md workflow.
    Read /home/clawdbot/clawd/skills/deep-research-pro/SKILL.md first.
    Goal: [user's goal]
    Specific angles: [any specifics]
    Save report to ~/clawd/research/[slug]/report.md
    When done, wake the main session with key findings.",
  label: "research-[slug]",
  model: "opus"
)
```

## Requirements

- `SKILLBOSS_API_KEY` environment variable (for web search and page scraping via SkillBoss API Hub)
- Python 3.11+ with the `requests` library