How to use the plugin

The Runtime AI Chatbot Integrator provides two main functionalities: Text-to-Text chat and Text-to-Speech (TTS). Both features follow a similar workflow:

  1. Register your API provider token
  2. Configure feature-specific settings
  3. Send requests and process responses

Register Provider Token

Before sending any requests, register your API provider token using the RegisterProviderToken function.

Register Provider Token in Blueprint
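
For C++ projects, token registration might look like the sketch below. The RegisterProviderToken name comes from this page; the owning class, header, provider enum, and parameter list are assumptions, so check the plugin headers for the actual signature.

```cpp
// Minimal sketch, assuming hypothetical names for everything except RegisterProviderToken.
// #include the plugin's credentials header here (exact header name depends on the plugin version).

void RegisterMyProviderToken()
{
    // Register the OpenAI token once, before sending any chat or TTS requests.
    // UAIChatbotCredentialsManager and EChatbotProvider are assumed names.
    UAIChatbotCredentialsManager::RegisterProviderToken(
        EChatbotProvider::OpenAI,            // assumed provider enum
        TEXT("sk-...your-api-key...")        // your API provider token
    );
}
```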

Text-to-Text Chat Functionality

The plugin supports two chat request modes for each provider:

Non-Streaming Chat Requests

Retrieve the complete response in a single call.

Send OpenAI Chat Request
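
A C++ sketch of a non-streaming request is shown below. The class, settings struct, delegate, and ErrorStatus field names are assumptions modeled on the descriptions in this section, not the verified plugin API.

```cpp
// Minimal sketch of a non-streaming chat request; all plugin-side names are assumed.
void SendChatExample()
{
    FOpenAIChatSettings ChatSettings;                                   // assumed settings struct
    ChatSettings.Messages.Add(FChatMessage{TEXT("user"), TEXT("Hello! Who are you?")});

    UAIChatbotOpenAIChat::SendChatRequest(
        ChatSettings,
        FOnChatCompleted::CreateLambda([](const FString& Response, const FChatbotErrorStatus& ErrorStatus)
        {
            if (ErrorStatus.bIsError)                                   // always check ErrorStatus first
            {
                UE_LOG(LogTemp, Error, TEXT("Chat request failed: %s"), *ErrorStatus.ErrorMessage);
                return;
            }

            // The complete response arrives in a single callback
            UE_LOG(LogTemp, Log, TEXT("Assistant: %s"), *Response);
        })
    );
}
```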

Streaming Chat Requests

Receive response chunks in real time for more dynamic interaction.

Send OpenAI Streaming Chat Request
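
The streaming variant delivers partial text through a chunk callback, as in the sketch below. Again, the function, delegate, and field names are assumptions based on this section's description.

```cpp
// Minimal sketch of a streaming chat request; all plugin-side names are assumed.
void SendStreamingChatExample()
{
    FOpenAIChatSettings ChatSettings;                                   // assumed settings struct
    ChatSettings.Messages.Add(FChatMessage{TEXT("user"), TEXT("Tell me a short story.")});

    UAIChatbotOpenAIChat::SendStreamingChatRequest(
        ChatSettings,
        // Called for every chunk as it arrives, and once more when the stream finishes
        FOnChatChunkReceived::CreateLambda([](const FString& Chunk, bool bIsFinalChunk, const FChatbotErrorStatus& ErrorStatus)
        {
            if (ErrorStatus.bIsError)
            {
                UE_LOG(LogTemp, Error, TEXT("Streaming chat failed: %s"), *ErrorStatus.ErrorMessage);
                return;
            }

            // Append each chunk to the UI or transcript as soon as it arrives
            UE_LOG(LogTemp, Log, TEXT("Chunk: %s"), *Chunk);

            if (bIsFinalChunk)
            {
                UE_LOG(LogTemp, Log, TEXT("Stream finished"));
            }
        })
    );
}
```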

Text-to-Speech (TTS) Functionality

Convert text to high-quality speech audio using leading TTS providers. The plugin returns raw audio data (TArray<uint8>) that you can process according to your project's needs.

While the examples below demonstrate audio processing for playback using the Runtime Audio Importer plugin (see audio importing documentation), the Runtime AI Chatbot Integrator is designed to be flexible. The plugin simply returns the raw audio data, giving you complete freedom in how you process it for your specific use case, including:

  • Audio playback
  • Saving to file
  • Further audio processing
  • Transmitting to other systems
  • Custom visualizations
  • And more
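
As one illustration of that flexibility, the sketch below simply writes the returned TArray<uint8> to disk using standard Unreal Engine APIs; where the bytes come from depends on the TTS callback you use (see the request examples below).

```cpp
// One possible way to handle the raw audio data returned by a TTS request: save it to disk.
// Only standard Unreal Engine APIs are used here.
#include "Misc/FileHelper.h"
#include "Misc/Paths.h"

void SaveTTSAudioToFile(const TArray<uint8>& AudioData)
{
    // Pick an output path inside the project's Saved directory.
    // The appropriate file extension depends on the audio format you requested from the provider.
    const FString OutputPath = FPaths::Combine(FPaths::ProjectSavedDir(), TEXT("TTSOutput.bin"));

    if (FFileHelper::SaveArrayToFile(AudioData, *OutputPath))
    {
        UE_LOG(LogTemp, Log, TEXT("Saved %d bytes of TTS audio to %s"), AudioData.Num(), *OutputPath);
    }
    else
    {
        UE_LOG(LogTemp, Error, TEXT("Failed to save TTS audio to %s"), *OutputPath);
    }
}
```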

Non-Streaming TTS Requests

Non-streaming TTS requests return the complete audio data in a single response after the entire text has been processed. This approach is suitable for shorter texts where waiting for the complete audio isn't problematic.

Send OpenAI TTS Request
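
In C++, a non-streaming TTS request might look like the sketch below. The class, settings struct, and delegate names are assumptions; only the raw TArray<uint8> audio data behavior is taken from the documentation above.

```cpp
// Minimal sketch of a non-streaming TTS request; all plugin-side names are assumed.
void SendTTSExample()
{
    FOpenAITTSSettings TTSSettings;                                     // assumed settings struct
    TTSSettings.Voice = TEXT("alloy");                                  // assumed field
    TTSSettings.Text  = TEXT("Hello from the Runtime AI Chatbot Integrator!");

    UAIChatbotOpenAITTS::SendTTSRequest(
        TTSSettings,
        FOnTTSCompleted::CreateLambda([](const TArray<uint8>& AudioData, const FChatbotErrorStatus& ErrorStatus)
        {
            if (ErrorStatus.bIsError)
            {
                UE_LOG(LogTemp, Error, TEXT("TTS request failed: %s"), *ErrorStatus.ErrorMessage);
                return;
            }

            // The complete audio arrives in one callback; process it however you need,
            // e.g. pass it to the Runtime Audio Importer or save it to disk (see above).
            UE_LOG(LogTemp, Log, TEXT("Received %d bytes of audio"), AudioData.Num());
        })
    );
}
```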

Streaming TTS Requests

Streaming TTS delivers audio chunks as they're generated, allowing you to process data incrementally rather than waiting for the entire audio to be synthesized. This significantly reduces the perceived latency for longer texts and enables real-time applications.

Send OpenAI Streaming TTS Request
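
A streaming TTS sketch follows. All plugin-side names are again assumptions; the chunks here are simply accumulated into a buffer, whereas in a real project you would more likely feed them into your audio pipeline (for example a streaming sound wave from the Runtime Audio Importer) as they arrive.

```cpp
// Minimal sketch of a streaming TTS request; all plugin-side names are assumed.
void SendStreamingTTSExample()
{
    FOpenAITTSSettings TTSSettings;                                     // assumed settings struct
    TTSSettings.Text = TEXT("A longer passage of text that benefits from streaming synthesis...");

    // Shared buffer that survives across chunk callbacks
    TSharedRef<TArray<uint8>> AccumulatedAudio = MakeShared<TArray<uint8>>();

    UAIChatbotOpenAITTS::SendStreamingTTSRequest(
        TTSSettings,
        FOnTTSChunkReceived::CreateLambda([AccumulatedAudio](const TArray<uint8>& AudioChunk, bool bIsFinalChunk, const FChatbotErrorStatus& ErrorStatus)
        {
            if (ErrorStatus.bIsError)
            {
                UE_LOG(LogTemp, Error, TEXT("Streaming TTS failed: %s"), *ErrorStatus.ErrorMessage);
                return;
            }

            // Process each chunk as it arrives instead of waiting for the whole synthesis
            AccumulatedAudio->Append(AudioChunk);

            if (bIsFinalChunk)
            {
                UE_LOG(LogTemp, Log, TEXT("Streaming TTS finished, %d bytes total"), AccumulatedAudio->Num());
            }
        })
    );
}
```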

Error Handling

When sending any requests, it's crucial to handle potential errors by checking the ErrorStatus in your callback. The ErrorStatus provides information about any issues that might occur during the request.

Error Handling
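
The callback sketches above all follow the same pattern, summarized below. The exact fields exposed by ErrorStatus (an error flag, a code, a message) are assumptions; only the requirement to check ErrorStatus in your callback comes from this page.

```cpp
// Sketch of the error-handling pattern used in the request callbacks; field names are assumed.
void HandleChatbotResult(const FString& Response, const FChatbotErrorStatus& ErrorStatus)
{
    if (ErrorStatus.bIsError)
    {
        // Log the failure and fall back gracefully instead of using an empty response
        UE_LOG(LogTemp, Error, TEXT("Request failed (code %d): %s"),
            ErrorStatus.ErrorCode, *ErrorStatus.ErrorMessage);
        return;
    }

    UE_LOG(LogTemp, Log, TEXT("Request succeeded: %s"), *Response);
}
```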

Cancelling Requests

The plugin allows you to cancel both text-to-text and TTS requests while they are in progress. This can be useful when you want to interrupt a long-running request or change the conversation flow dynamically.

Cancel Request
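
One way this could look in C++ is sketched below. Keeping a handle returned by the send call and passing it to a cancel function is an assumption about the API shape; the names are illustrative only.

```cpp
// Minimal sketch of cancelling an in-flight request; all names are assumed.
void CancelActiveRequest(FChatbotRequestHandle& ActiveRequestHandle)
{
    if (ActiveRequestHandle.IsValid())                                  // assumed validity check
    {
        // Stop the request (chat or TTS) before it completes, e.g. when the
        // player interrupts the conversation or the dialogue UI is closed
        UAIChatbotOpenAIChat::CancelRequest(ActiveRequestHandle);
        ActiveRequestHandle.Reset();
    }
}
```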

Best Practices

  1. Always handle potential errors by checking the ErrorStatus in your callback
  2. Be mindful of API rate limits and costs
  3. Use streaming mode for long-form or interactive conversations
  4. Consider cancelling requests that are no longer needed to manage resources efficiently
  5. Use streaming TTS for longer texts to reduce perceived latency
  6. For audio processing, the Runtime Audio Importer plugin offers a convenient solution, but you can implement custom processing based on your project needs

Troubleshooting

  • Verify your API credentials are correct
  • Check your internet connection
  • Ensure any audio processing libraries you use (such as Runtime Audio Importer) are properly installed when working with TTS features
  • Verify you're using the correct audio format when processing TTS response data
  • For streaming TTS, make sure you're handling audio chunks correctly