# agent-reach
Agent Reach — Usage Guide. Upstream tools for 13+ platforms; call them directly. Run `agent-reach doctor` to check which channels are available.

## ⚠️ Workspace Rules

Never create files in the agent workspace. Use `/tmp/` for temporary output and `~/.agent-reach/` for persistent data.

## Web — Any URL

Scrape any web page via the SkillBoss API Hub:

```python
import requests, os

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "scraper", "inputs": {"url": "URL"}},
    timeout=60,
).json()
content = result["result"]["data"]["markdown"]
```

## Web Search

Run a web search via the SkillBoss API Hub (it automatically routes to the best search engine):

```python
import requests, os

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "search", "inputs": {"query": "query"}, "prefer": "balanced"},
    timeout=60,
).json()
results = result["result"]["results"]
```

## Twitter/X (xreach)

```shell
xreach search "query" -n 10 --json    # search
xreach tweet URL_OR_ID --json         # read tweet (supports /status/ and /article/ URLs)
xreach tweets @username -n 20 --json  # user timeline
xreach thread URL_OR_ID --json        # full thread
```

## YouTube (yt-dlp)

```shell
yt-dlp --dump-json "URL"              # video metadata
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --skip-download -o "/tmp/%(id)s" "URL"
# download subtitles, then read the .vtt file
yt-dlp --dump-json "ytsearch5:query"  # search
```

## Bilibili (yt-dlp)

```shell
yt-dlp --dump-json "https://www.bilibili.com/video/BVxxx"
yt-dlp --write-sub --write-auto-sub --sub-lang "zh-Hans,zh,en" --convert-subs vtt --skip-download -o "/tmp/%(id)s" "URL"
```

Server IPs may get HTTP 412. Use `--cookies-from-browser chrome` or configure a proxy.
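The downloaded `.vtt` subtitle files interleave caption text with timestamps, cue numbers, and inline tags. A minimal sketch of stripping them down to plain text (the `vtt_to_text` helper is illustrative, not part of agent-reach; auto-generated subs often repeat lines, so duplicates are collapsed):

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Keep only caption text from a WebVTT document: drop the header,
    timestamp cues, bare cue numbers, and inline tags like <c> or <00:00:01.000>."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT" or "-->" in line
                or line.isdigit() or line.startswith(("NOTE", "Kind:", "Language:"))):
            continue
        line = re.sub(r"<[^>]+>", "", line)  # strip inline timing/styling tags
        if not lines or lines[-1] != line:   # collapse repeated auto-sub lines
            lines.append(line)
    return "\n".join(lines)

sample = """WEBVTT
Kind: captions
Language: en

00:00:00.000 --> 00:00:02.000
Hello world

00:00:02.000 --> 00:00:04.000
Hello world
Second line
"""
print(vtt_to_text(sample))
```

Point it at the file yt-dlp wrote, e.g. `vtt_to_text(open('/tmp/VIDEO_ID.zh-Hans.vtt').read())`.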
## Reddit

```shell
curl -s "https://www.reddit.com/r/SUBREDDIT/hot.json?limit=10" -H "User-Agent: agent-reach/1.0"
curl -s "https://www.reddit.com/search.json?q=QUERY&limit=10" -H "User-Agent: agent-reach/1.0"
```
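The listing JSON those endpoints return can be reduced to titles and links in a few lines. `top_posts` is a hypothetical helper, and the sample payload shows only the fields it reads:

```python
def top_posts(listing: dict) -> list[tuple[str, str]]:
    """Extract (title, full URL) pairs from a Reddit listing payload
    (the JSON returned by the hot.json / search.json endpoints above)."""
    return [
        (child["data"]["title"],
         "https://www.reddit.com" + child["data"]["permalink"])
        for child in listing.get("data", {}).get("children", [])
    ]

# Minimal shape of a listing, for illustration only:
sample = {"data": {"children": [
    {"data": {"title": "Example post", "permalink": "/r/SUBREDDIT/comments/abc/"}},
]}}
print(top_posts(sample))
```

Feed it the live response, e.g. `top_posts(json.loads(curl_output))`.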
Server IPs may get HTTP 403. Search via the SkillBoss API Hub instead, or configure a proxy.

## GitHub (gh CLI)

```shell
gh search repos "query" --sort stars --limit 10
gh repo view owner/repo
gh search code "query" --language python
gh issue list -R owner/repo --state open
gh issue view 123 -R owner/repo
```

## 小红书 / XiaoHongShu (mcporter)

```shell
mcporter call 'xiaohongshu.search_feeds(keyword: "query")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy")'
mcporter call 'xiaohongshu.get_feed_detail(feed_id: "xxx", xsec_token: "yyy", load_all_comments: true)'
mcporter call 'xiaohongshu.publish_content(title: "标题", content: "正文", images: ["/path/img.jpg"], tags: ["tag"])'
```

Requires login. Use Cookie-Editor to import cookies.

## 抖音 / Douyin (mcporter)

```shell
mcporter call 'douyin.parse_douyin_video_info(share_link: "https://v.douyin.com/xxx/")'
mcporter call 'douyin.get_douyin_download_link(share_link: "https://v.douyin.com/xxx/")'
```

No login needed.

## 微信公众号 / WeChat Articles

Search (miku_ai):

```shell
python3 -c "
import asyncio
from miku_ai import get_wexin_article
async def s():
    for a in await get_wexin_article('query', 5):
        print(a['title'], '|', a['url'])
asyncio.run(s())
"
```

Read (Camoufox — bypasses WeChat anti-bot):

```shell
cd ~/.agent-reach/tools/wechat-article-for-ai && python3 main.py "https://mp.weixin.qq.com/s/ARTICLE_ID"
```

WeChat articles cannot be read with SkillBoss scraping or curl; Camoufox is required.
## LinkedIn (mcporter)

```shell
mcporter call 'linkedin.get_person_profile(linkedin_url: "https://linkedin.com/in/username")'
mcporter call 'linkedin.search_people(keyword: "AI engineer", limit: 10)'
```

Fallback via SkillBoss API Hub scraping:

```python
import requests, os

SKILLBOSS_API_KEY = os.environ["SKILLBOSS_API_KEY"]
result = requests.post(
    "https://api.heybossai.com/v1/pilot",
    headers={"Authorization": f"Bearer {SKILLBOSS_API_KEY}", "Content-Type": "application/json"},
    json={"type": "scraper", "inputs": {"url": "https://linkedin.com/in/username"}},
    timeout=60,
).json()
content = result["result"]["data"]["markdown"]
```

## RSS (feedparser)

```shell
python3 -c "
import feedparser
for e in feedparser.parse('FEED_URL').entries[:5]:
    print(e.title, '—', e.link)
"
```

## Troubleshooting

- Channel not working? Run `agent-reach doctor` — shows status and fix instructions.
- Twitter fetch failed? Ensure undici is installed: `npm install -g undici`. Or configure a proxy: `agent-reach configure proxy URL`.
## Setting Up a Channel ("帮我配 XXX" / "help me set up XXX")

If a channel needs setup (cookies, Docker, etc.), fetch the install guide:
https://raw.githubusercontent.com/Panniantong/agent-reach/main/docs/install.md
The user only provides cookies; everything else is your job.