action.start
Start a live transcription session.
An object that contains the start parameters.
start Parameters
`webhook`: The webhook URI to be called. Authentication can also be set in the URI in the format `username:password@url`.

`lang` (default: `en`): The language to transcribe. See the supported Voices & Languages documentation for the full list.

`ai_summary` (default: `false`): Whether to enable automatic AI summarization. When enabled, an AI-generated summary of the conversation is sent to your webhook when the transcription session ends.

`speech_timeout` (default: `60000`): The timeout for speech recognition.
Possible values: minimum 1500, no maximum.

`vad_silence_ms` (default: `300` or `500`): Voice activity detection silence time in milliseconds. The default depends on the speech engine: 300 for Deepgram, 500 for Google.
Possible values: minimum 1, no maximum.

`vad_thresh` (default: `400`): Voice activity detection threshold.
Possible values: minimum 0, maximum 1800.

`direction` (default: `local-caller`): The direction of the call that should be transcribed.
Possible values: `remote-caller`, `local-caller`.

`speech_engine` (default: `deepgram`): The speech recognition engine to use.
Possible values: `deepgram`, `google`.

`ai_summary_prompt`: The AI prompt that instructs how to summarize the conversation when `ai_summary` is enabled. This prompt is sent to an AI model to guide how it generates the summary.
Example: "Summarize the key points and action items from this conversation."
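As a minimal sketch of the embedded-credentials format described for `webhook` above (the hostname, username, and password are placeholders, not real values):

```yaml
live_transcribe:
  action:
    start:
      # Basic-auth credentials embedded in the webhook URI (username:password@host)
      webhook: 'https://user:secret@example.com/webhook'
      lang: en
      speech_engine: google
      # vad_silence_ms is omitted, so it falls back to the
      # engine-specific default (500 for Google)
```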
Example
YAML:
live_transcribe:
action:
start:
webhook: 'https://example.com/webhook'
lang: en
live_events: true
ai_summary: true
ai_summary_prompt: Summarize this conversation
speech_timeout: 60000
vad_silence_ms: 500
vad_thresh: 400
debug_level: 0
direction:
- remote-caller
- local-caller
speech_engine: deepgram
JSON:
{
"live_transcribe": {
"action": {
"start": {
"webhook": "https://example.com/webhook",
"lang": "en",
"live_events": true,
"ai_summary": true,
"ai_summary_prompt": "Summarize this conversation",
"speech_timeout": 60000,
"vad_silence_ms": 500,
"vad_thresh": 400,
"debug_level": 0,
"direction": [
"remote-caller",
"local-caller"
],
"speech_engine": "deepgram"
}
}
}
}
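For the AI-summary fields specifically, a minimal configuration might look like the following sketch (the webhook URL is a placeholder; parameters not listed fall back to the defaults documented above):

```yaml
live_transcribe:
  action:
    start:
      webhook: 'https://example.com/webhook'
      lang: en
      # When ai_summary is enabled, the summary is sent to the webhook
      # after the transcription session ends, guided by the prompt below.
      ai_summary: true
      ai_summary_prompt: 'Summarize the key points and action items from this conversation.'
```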