# jarvis-voice
**Your AI just got a voice. And the wit to use it.**

Remember JARVIS in the Iron Man films? Not just the voice, the personality. The bone-dry observations while Tony was mid-crisis. "I do appreciate your concern, sir, but the suit is quite capable of—" [explosion] "—as I was saying." That effortless, understated humor that made you forget you were listening to software.

That's what this skill gives your OpenClaw agent. The voice: cloud text-to-speech using SkillBoss API Hub TTS with metallic audio processing via ffmpeg. And the humor: four research-backed comedy patterns (dry wit, self-aware AI, alien observer, literal idiom play) calibrated to make your agent sound like it's been running your life for years and is quietly amused by the experience.

The humor isn't bolted on. It's baked in. Because a JARVIS that speaks without wit is just Siri with better reverb.

The research behind the humor: *LIMBIC: Computational Humor via Bisociation & Embedding Distances*

## ⚠️ CRITICAL: Always use the jarvis script

The jarvis script calls SkillBoss API Hub TTS and applies the metallic ffmpeg effects pipeline. Always use the `jarvis` shell command; do not call the TTS API directly.

## How to Speak

Every response that warrants voice output must include BOTH:

1. **Audio execution FIRST**: run the jarvis command in the background BEFORE writing the reply:

   `exec(command='jarvis "Your spoken text here."', background=true)`

   This fires immediately; the user hears the voice BEFORE the text appears on screen.

2. **Visible transcript**: a bold `Jarvis:` prefix followed by the spoken text:

   **Jarvis:** Your spoken text here.

The webchat UI has custom CSS + JS that automatically detects `Jarvis:` and renders the following text in purple italic (`.jarvis-voice` class, color `#9b59b6`). You just write the markdown; the styling is automatic.

This is called **hybrid output**: the user hears the voice first, then sees the transcript.
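For reference, a complete hybrid reply combines both parts (the spoken sentence here is a placeholder):

```
exec(command='jarvis "Good evening. All systems nominal."', background=true)

**Jarvis:** Good evening. All systems nominal.
```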
> **Note:** The server-side `triggerJarvisAutoTts` hook is DISABLED (no-op). It fired too late (after text render). Voice comes exclusively from the `exec` call.
## Command Reference

```bash
jarvis "Hello, this is a test"
```
- **Backend:** SkillBoss API Hub TTS (`/v1/pilot`, type: `tts`, auto-routed to best voice model)
- **Speed:** 2x (applied via ffmpeg tempo adjustment)
- **Effects chain (ffmpeg):**
  - Pitch up 5% → tighter AI feel
  - Flanger → metallic sheen
  - 15ms echo → robotic ring
  - Highpass 200Hz + treble boost +6dB → crisp HUD clarity
- **Output:** Downloads audio from SkillBoss, applies effects, plays via `aplay`, then cleans up temp files
- **Language:** English ONLY. Use the `alloy` voice for a consistent British-adjacent tone.
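The 2x speed above is a tempo change, which maps to ffmpeg's `atempo` filter. A minimal sketch, assuming placeholder filenames `in.wav` and `out.wav` (chain two `atempo` instances for factors above 2.0 on older ffmpeg builds):

```shell
# Double playback speed without shifting pitch; in.wav/out.wav are placeholders.
ffmpeg -y -v error -i in.wav -af "atempo=2.0" out.wav
```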
## Rules

- Always `background: true`; never block the response waiting for audio playback.
- Always include the text transcript; the purple `Jarvis:` line IS the user's visual confirmation.
- Keep spoken text ≤ 1500 characters to avoid truncation.
- One `jarvis` call per response; don't stack multiple calls.
- English only; for non-English content, translate or summarize in English for voice.

## When to Speak

- Session greetings and farewells
- Delivering results or summaries
- Responding to direct conversation
- Any time the user's last message included voice/audio

## When NOT to Speak

- Pure tool/file operations with no conversational element
- HEARTBEAT_OK responses
- NO_REPLY responses

## Webchat Purple Styling

The OpenClaw webchat has built-in support for Jarvis voice transcripts:

- `ui/src/styles/chat/text.css`: the `.jarvis-voice` class renders purple italic (`#9b59b6` dark theme, `#8e44ad` light theme)
- `ui/src/ui/markdown.ts`: a post-render hook auto-wraps text after `Jarvis:` in an element carrying the `.jarvis-voice` class

This means you just write **Jarvis:** text in markdown and the webchat handles the purple rendering. No extra markup needed. For non-webchat surfaces (WhatsApp, Telegram, etc.), the bold/italic markdown renders natively; no purple, but still visually distinct.

## Installation (for new setups)
Requires:

- `SKILLBOSS_API_KEY` environment variable set (SkillBoss API Hub access)
- `ffmpeg` installed system-wide (for audio effects processing)
- `aplay` (ALSA) for audio playback
- `curl` for downloading TTS audio
- The `jarvis` script at `~/.local/bin/jarvis` (or in PATH)

## The jarvis script
```bash
#!/bin/bash
# Jarvis TTS - authentic JARVIS-style voice via SkillBoss API Hub
# Usage: jarvis "Hello, this is a test"

SKILLBOSS_API_KEY="${SKILLBOSS_API_KEY}"
API_BASE="https://api.heybossai.com/v1"
RAW_WAV="/tmp/jarvis_raw.wav"
FINAL_WAV="/tmp/jarvis_final.wav"

# Generate speech via SkillBoss API Hub TTS
RESPONSE=$(curl -s -X POST "${API_BASE}/pilot" \
  -H "Authorization: Bearer ${SKILLBOSS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "{\"type\": \"tts\", \"inputs\": {\"text\": \"$1\", \"voice\": \"alloy\"}, \"prefer\": \"balanced\"}")

# Pull the audio URL out of the JSON response
AUDIO_URL=$(echo "$RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['result']['audio_url'])")

# Download audio
curl -s "$AUDIO_URL" -o "$RAW_WAV"

# Apply JARVIS metallic processing:
#   pitch up 5% -> flanger -> 15ms echo -> highpass 200Hz -> treble +6dB
if [ -f "$RAW_WAV" ]; then
  ffmpeg -y -v error -i "$RAW_WAV" \
    -af "asetrate=22050*1.05,aresample=22050,flanger=delay=0:depth=2:regen=50:width=71:speed=0.5,aecho=0.8:0.88:15:0.5,highpass=f=200,treble=g=6" \
    "$FINAL_WAV"
  if [ -f "$FINAL_WAV" ]; then
    aplay -D plughw:0,0 -q "$FINAL_WAV"
    rm "$RAW_WAV" "$FINAL_WAV"
  fi
fi
```
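One caveat with the script as written: interpolating `$1` straight into the JSON body breaks if the spoken text contains double quotes. A safer sketch builds the payload with python3 (field names copied from the script above; this is an optional hardening, not part of the shipped script):

```shell
# Build the request body with json.dumps so quotes or newlines in the text
# cannot corrupt the JSON payload.
TEXT='He said "as I was saying" and carried on.'
PAYLOAD=$(python3 -c 'import json, sys; print(json.dumps({
    "type": "tts",
    "inputs": {"text": sys.argv[1], "voice": "alloy"},
    "prefer": "balanced"}))' "$TEXT")
echo "$PAYLOAD"
```

The curl call then becomes `-d "$PAYLOAD"`.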
## WhatsApp Voice Notes

For WhatsApp, output must be OGG/Opus format instead of speaker playback:
```bash
# Get audio from SkillBoss TTS
RESPONSE=$(curl -s -X POST "https://api.heybossai.com/v1/pilot" \
  -H "Authorization: Bearer ${SKILLBOSS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"type": "tts", "inputs": {"text": "text", "voice": "alloy"}, "prefer": "balanced"}')
AUDIO_URL=$(echo "$RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['result']['audio_url'])")
curl -s "$AUDIO_URL" -o raw.wav

# Apply the same metallic chain, then encode to OGG/Opus for WhatsApp
ffmpeg -y -i raw.wav \
  -af "asetrate=22050*1.05,aresample=22050,flanger=delay=0:depth=2:regen=50:width=71:speed=0.5,aecho=0.8:0.88:15:0.5,highpass=f=200,treble=g=6" \
  -c:a libopus -b:a 64k output.ogg
```
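Since the whole point here is Opus-in-OGG, a quick probe of the encoded file can save a failed send. A sketch, assuming `ffprobe` is installed alongside ffmpeg:

```shell
# Print the codec of the first audio stream in output.ogg; for a valid
# WhatsApp voice note this should be "opus".
ffprobe -v error -select_streams a:0 \
  -show_entries stream=codec_name -of default=nw=1:nk=1 output.ogg
```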
## The Full JARVIS Experience

jarvis-voice gives your agent a voice. Pair it with ai-humor-ultimate and you give it a soul: dry wit, contextual humor, the kind of understated sarcasm that makes you smirk at your own terminal.

This pairing is part of a 12-skill cognitive architecture we've been building: voice, humor, memory, reasoning, and more. Research papers included, because we're that kind of obsessive.

Explore the full project: github.com/globalcaos/tinkerclaw

Clone it. Fork it. Break it. Make it yours.
## Setup: Workspace Files
For voice to work consistently across new sessions, copy the templates to your workspace root:
```bash
cp {baseDir}/templates/VOICE.md ~/.openclaw/workspace/VOICE.md
cp {baseDir}/templates/SESSION.md ~/.openclaw/workspace/SESSION.md
cp {baseDir}/templates/HUMOR.md ~/.openclaw/workspace/HUMOR.md
```
- `VOICE.md`: injected every session, enforces voice output rules (like SOUL.md)
- `SESSION.md`: session bootstrap that includes voice greeting requirements
- `HUMOR.md`: humor configuration at maximum frequency with four pattern types (dry wit, self-aware AI, alien observer, literal idiom)

All three files are auto-loaded by OpenClaw's workspace injection. The agent will speak from the very first reply of every session.

## Included Files

| File | Purpose |
| --- | --- |
| `templates/VOICE.md` | Voice enforcement rules (copy to workspace root) |
| `templates/SESSION.md` | Session start with voice greeting (copy to workspace root) |
| `templates/HUMOR.md` | Humor config: four patterns, frequency 1.0 (copy to workspace root) |
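If you rerun setup on an existing workspace, a guarded copy avoids clobbering templates you have already customized. A sketch, keeping the `{baseDir}` placeholder used in the commands above:

```shell
# Copy each template only when it is not already in the workspace.
WS="$HOME/.openclaw/workspace"
mkdir -p "$WS"
for f in VOICE.md SESSION.md HUMOR.md; do
  [ -f "$WS/$f" ] || cp "{baseDir}/templates/$f" "$WS/$f"
done
```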