Install the Ultravox client SDK for your platform:

JavaScript

```bash
npm install ultravox-client
```

Flutter

```bash
flutter pub add ultravox_client
```

Python

```bash
pip install ultravox-client
```

SDKs are also available for Kotlin (Android) and Swift (iOS) [alpha].
The Ultravox REST API is used to create calls, but you must use one of the Ultravox client SDKs to join and end calls. This page primarily uses JavaScript examples; the concepts are the same across all of the SDK implementations.
The core of the SDK is the `UltravoxSession`. The session is used to join and leave calls. The `UltravoxSession` contains the following methods:
`joinCall()`: Joins a call. Requires a `joinUrl` (string). Returns an `UltravoxSessionState`.
`leaveCall()`: Leaves the current call. Returns a promise (with no return value) that resolves when the call has successfully been left.
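For example, a minimal sketch that joins and later leaves a call, assuming `joinUrl` was returned when the call was created via the REST API:

```javascript
import { UltravoxSession } from 'ultravox-client';

const session = new UltravoxSession();

// joinUrl is the join URL returned when the call was created
// via the Ultravox REST API.
const state = session.joinCall(joinUrl);

// ...later, when the conversation is over:
await session.leaveCall();
```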
`sendText()`: Sends a text message to the agent. Requires the text message (string).
`setOutputMedium()`: Sets the agent’s output medium for future utterances. If the agent is currently speaking, this will take effect at the end of the agent’s utterance. Also see `muteSpeaker` and `unmuteSpeaker` below.
parameter | description |
---|---|
medium | How replies are communicated. Must be either 'text' or 'voice'. |
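For example, a sketch that switches the agent to text replies and sends it a message on the user’s behalf:

```javascript
// Have the agent reply with text instead of voice for the
// rest of the call (takes effect after the current utterance)...
session.setOutputMedium('text');

// ...and send the agent a text message from the end user.
session.sendText('Please summarize what we have covered so far.');
```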
`registerToolImplementation()`: Registers a client tool implementation with the given name. If the call is started with a client-implemented tool, this implementation will be invoked when the model calls the tool.
parameter | description |
---|---|
name | String. The name of the tool. Must match what is defined in selectedTools during call creation. If nameOverride is set, then it must match that name. Otherwise it must match modelToolName. |
implementation | ClientToolImplementation function that implements the tool’s logic. |
`ClientToolImplementation`

This is a function that:

Accepts `parameters` → An object containing key-value pairs for the tool’s parameters. The keys will be strings.

Returns → Either a string result, or an object with a result string and a responseType, or a Promise that resolves to one of these.

For example (a hypothetical tool implementation):
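```javascript
// A hypothetical implementation: the tool name and the
// 'orderDetails' parameter are illustrative.
const updateOrderTool = (parameters) => {
  console.log('Order details:', JSON.stringify(parameters.orderDetails));
  // ...update your app's UI here...
  return 'Updated the order details.';
};
```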
`registerToolImplementations()`: Convenience batch wrapper for `registerToolImplementation`.

parameter | description |
---|---|
implementationMap | An object where each key (a string) is the name of a tool and each value is a ClientToolImplementation function. |
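For example, a sketch registering two hypothetical tools at once (reusing `updateOrderTool` from above):

```javascript
session.registerToolImplementations({
  updateOrder: updateOrderTool,
  flashScreen: (parameters) => {
    document.body.style.backgroundColor = parameters.color;
    return 'Flashed the screen.';
  },
});
```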
`isMicMuted()`: Returns a boolean indicating whether the end user’s microphone is muted. This is scoped to the Ultravox SDK and does not detect muting done by the user outside of your application.

`isSpeakerMuted()`: Returns a boolean indicating whether the speaker (the agent’s voice output) is muted. This is scoped to the Ultravox SDK and does not detect muting done by the user outside of your application.

`muteMic()`: Mutes the end user’s microphone. This is scoped to the Ultravox SDK.

`unmuteMic()`: Unmutes the end user’s microphone. This is scoped to the Ultravox SDK.

`muteSpeaker()`: Mutes the speaker (the agent’s voice output). This is scoped to the Ultravox SDK.

`unmuteSpeaker()`: Unmutes the speaker (the agent’s voice output). This is scoped to the Ultravox SDK.
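For example, a sketch wiring the mute controls to hypothetical UI buttons (the element IDs are illustrative):

```javascript
document.getElementById('mic-button').addEventListener('click', () => {
  if (session.isMicMuted()) {
    session.unmuteMic();
  } else {
    session.muteMic();
  }
});

document.getElementById('speaker-button').addEventListener('click', () => {
  if (session.isSpeakerMuted()) {
    session.unmuteSpeaker();
  } else {
    session.muteSpeaker();
  }
});
```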
Ultravox has robust support for tools. The SDK supports client tools, which are invoked in your client code and enable you to add interactivity to your app driven by user interactions with your agent. For example, your agent could choose to invoke a tool that triggers a UI change.
Client tools are defined just like “server” tools, with three exceptions:

First, you don’t add the URL and HTTP method for client tools. Instead, you add `"client": {}` to the tool definition.
Second, your client tool must be registered in your client code. Here’s a sketch of what that registration and tool logic might look like (the tool name and its behavior are hypothetical):
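```javascript
import { UltravoxSession } from 'ultravox-client';

const session = new UltravoxSession();

// 'flashScreen' is a hypothetical tool name. It must match the client
// tool defined in selectedTools when the call was created.
session.registerToolImplementation('flashScreen', (parameters) => {
  document.body.style.backgroundColor = parameters.color;
  return 'Flashed the screen.';
});

session.joinCall(joinUrl);
```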
See SDK Methods for more information.
Third, unlike server tools (which accept parameters passed by path, header, body, etc.), client tools only allow parameters to be passed in the body of the request. That means client tools will always have their parameter location set like this (shown within an illustrative dynamic parameter):
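```json
{
  "name": "screenColor",
  "location": "PARAMETER_LOCATION_BODY",
  "schema": { "type": "string" },
  "required": true
}
```

The `location` value is the key point here; the parameter name and schema are illustrative.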
The `UltravoxSession` exposes status. Based on the `UltravoxSessionStatus` enum, status can be one of the following:
status | description |
---|---|
disconnected | Session is not connected. This is the initial state prior to joinCall. |
disconnecting | Session is in the process of disconnecting. |
connecting | Session is establishing the connection. |
idle | Session is connected but not yet active. |
listening | Listening to the end user. |
thinking | The model is processing/thinking. |
speaking | The model is speaking. |
The status can be retrieved by adding an event listener for session status changes. Building on what we did above:
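```javascript
session.addEventListener('status', (event) => {
  // See the table above for the possible values of session.status.
  console.log('Session status changed:', session.status);
});
```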
Sometimes you may want to augment the audio with text transcripts (e.g. if you want to show the end user the model’s output in real time). Transcripts can be retrieved by adding an event listener:
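```javascript
session.addEventListener('transcripts', (event) => {
  console.log('Transcripts updated:', session.transcripts);
});
```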
Transcripts are an array of transcript objects. Each transcript has the following properties:
property | type | definition |
---|---|---|
text | string | Text transcript of the speech from the end user or the agent. |
isFinal | boolean | True if the transcript represents a complete utterance. False if it is a fragment of an utterance that is still underway. |
speaker | Role | Either “user” or “agent”. Denotes who was speaking. |
medium | Medium | Either “voice” or “text”. Denotes how the message was sent. |
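For example, a sketch that renders only completed utterances (`renderTranscript` is a hypothetical UI helper):

```javascript
session.addEventListener('transcripts', () => {
  const lines = session.transcripts
    .filter((t) => t.isFinal)
    .map((t) => `${t.speaker}: ${t.text}`);
  renderTranscript(lines); // hypothetical UI helper
});
```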
The `UltravoxSession` object also provides debug messages. Debug messages must be enabled when creating a new session and are then available via an event listener, similar to status and transcripts:
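A sketch, assuming debug messages are enabled via the `experimentalMessages` option and surfaced through the `experimental_message` event:

```javascript
// Assumption: 'debug' is opted into via experimentalMessages
// when the session is created.
const session = new UltravoxSession({ experimentalMessages: new Set(['debug']) });

session.addEventListener('experimental_message', (msg) => {
  console.log('Got a debug message:', JSON.stringify(msg));
});
```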
When the agent invokes a tool, the debug message contains the function, all of its arguments, and an invocation ID.
When the tool call completes, the message contains an array of messages. Multiple tools can be invoked by the model, so this array will contain all of the calls followed by all of the results; for a single tool invocation, that means the tool call followed by its result. These messages are also available via List Call Messages.