Voice agents
Useful when you already have the orchestration working and the TTS bill is the next thing making the economics ugly.
If you already use OpenClaw's OpenAI TTS provider path, the migration story should stay boring: keep the provider, swap the base URL, test the voice, and compare the cost math.
Useful when you need cheap iteration for NPCs, branching dialogue, or cinematic placeholders before final polish.
Useful when you do not want to rework your app architecture just to try a cheaper TTS backend.
OpenClaw setup
This is the whole point of the compatibility story. You do not need a bespoke SayWell client to see whether the price and voice quality are interesting.
```jsonc
{
  messages: {
    tts: {
      auto: "always",
      provider: "openai",
      summaryModel: "openai/gpt-4.1-mini",
      modelOverrides: {
        enabled: true,
      },
      providers: {
        openai: {
          apiKey: "saywell_api_key",
          baseUrl: "https://saywell.io/v1",
          model: "studio",
          voice: "aria",
        },
      },
    },
  },
}
```
The page exists to make the integration legible enough that a builder can try it, judge the economics, and decide whether early access is worth it.
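To make the compatibility claim concrete, here is a minimal sketch of the request body an OpenAI-compatible `POST {baseUrl}/audio/speech` call carries. The field names (`model`, `input`, `voice`, `response_format`) follow the OpenAI speech API; the helper name is hypothetical, and the values mirror the SayWell alpha config above.

```python
# Hypothetical helper: build the JSON body for an OpenAI-compatible
# POST {baseUrl}/audio/speech request. Field names follow the OpenAI
# speech API; values mirror the SayWell alpha config above.

def speech_payload(text: str, voice: str = "aria", fmt: str = "mp3") -> dict:
    return {
        "model": "studio",       # only model in the current alpha
        "input": text,
        "voice": voice,          # aria, sol, or vale in the alpha
        "response_format": fmt,  # mp3, wav, or opus in the alpha
    }

body = speech_payload("Welcome to SayWell.")
```

The same body works against either base URL, which is the whole point: only the endpoint and key change.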
Join the early-access list or email directly for a manual alpha key while the product surface stays intentionally small.
Use OpenClaw's built-in OpenAI speech provider instead of inventing a separate integration surface.
The only operational change is the endpoint. That keeps the migration story simple and makes cost comparisons legible.
Pick the SayWell voice you want and test one outbound reply before wiring it into a larger automation or call flow.
| What stays the same | What changes | Notes |
| --- | --- | --- |
| OpenClaw's built-in OpenAI TTS provider path and config shape | The API key and base URL, now pointing at `https://saywell.io/v1` | Alpha surface is intentionally small: `studio` model, three voices, three output formats |
If you already keep a second provider around for fallback, do not throw that away. The goal here is to validate whether SayWell deserves the primary slot in your stack, not to force a more dramatic migration than necessary.
Current alpha support: `model=studio`, voices `aria`, `sol`, and `vale`, plus `mp3`, `wav`, and `opus` output from a real live endpoint at `https://saywell.io/v1`.
This section is intentionally simple. If the budget math is not compelling enough to make you curious, there is no point pretending the integration story will save it.
| Plan | Price |
| --- | --- |
| Free tier | 100K chars / month |
| 1M characters | $5 |
| 5M characters | $25 |
| 10M characters | $50 |
Warm onboarding guide
Balanced and clear for product tours, support flows, and tutorials.
“Welcome to SayWell. Type your script once, choose a voice, and ship production-ready audio without chasing credits.”
Confident operator
Sharper cadence for voice agents, assistants, and menu systems.
“Your API key includes one hundred thousand free characters every month, with commercial use included from day one.”
Cinematic narrator
A broader performance range for trailers, games, and promo reads.
“Build the scene, stream the dialogue, and keep every line item predictable from prototype to launch.”
If any of these still feel fuzzy, the page is doing a bad job and we should tighten it before spending money on traffic.
Do I need a custom SayWell client or SDK?
No. The point of the guide is to keep the migration narrow: stay on the OpenAI TTS provider path, swap the API key and base URL, then test one voice.
Can I keep a fallback provider in my config?
Yes. If you already rely on ElevenLabs or another provider for fallback, keep that block in your OpenClaw config and use SayWell as the primary OpenAI-compatible path.
Is access open to everyone right now?
No. This guide is for early-access validation and onboarding. The waitlist lets us see which builders actually want the integration before we widen access.
What does the alpha actually support?
The live alpha supports `POST /v1/audio/speech`, the `studio` model, `aria` / `sol` / `vale` voices, and `mp3` / `wav` / `opus` output. The goal is to make OpenClaw testing real before broadening the surface area.
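Because the alpha surface is small, a request preflight can fail fast before touching the endpoint. The supported sets below are copied from the alpha description above; the helper itself is a hypothetical sketch and will need updating as the surface widens.

```python
# Current alpha surface, copied from the description above.
ALPHA_MODELS = {"studio"}
ALPHA_VOICES = {"aria", "sol", "vale"}
ALPHA_FORMATS = {"mp3", "wav", "opus"}

def check_alpha_request(model: str, voice: str, fmt: str) -> None:
    """Raise ValueError if the request falls outside the alpha surface."""
    if model not in ALPHA_MODELS:
        raise ValueError(f"unsupported model: {model!r}")
    if voice not in ALPHA_VOICES:
        raise ValueError(f"unsupported voice: {voice!r}")
    if fmt not in ALPHA_FORMATS:
        raise ValueError(f"unsupported format: {fmt!r}")

check_alpha_request("studio", "sol", "opus")  # passes silently
```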
Outreach asset
It gives us a neutral reference asset for community posts, maintainer outreach, and any future docs contribution without forcing OpenClaw to host vendor marketing on day one.
If this integration story is interesting, join the list. We are onboarding the first wave manually so we can watch real usage, tighten the compatibility surface, and decide what to widen next.