How to use the plugin
Runtime AI Chatbot Integrator provides two main features: Text-to-Text chat and Text-to-Speech (TTS). Both can be used from Blueprint or C++ (the examples below show the C++ API) and follow a similar workflow:
- Register your API provider token
- Configure feature-specific settings
- Send requests and handle responses
Registering provider tokens
Before sending any requests, register your API provider token using the RegisterProviderToken function.
// Register an OpenAI provider token, as an example
UAIChatbotCredentialsManager::RegisterProviderToken(
EAIChatbotIntegratorOrgs::OpenAI,
TEXT("sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")
);
Text-to-Text chat feature
The plugin supports two chat request modes for each provider (OpenAI, DeepSeek, and Claude):
Non-streaming chat requests
Retrieves the complete response in a single call.
// Example of sending a non-streaming chat request to OpenAI
FChatbotIntegrator_OpenAISettings Settings;
Settings.Messages.Add(FChatbotIntegrator_OpenAIMessage(
EChatbotIntegrator_OpenAIRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_OpenAIMessage(
EChatbotIntegrator_OpenAIRole::USER,
TEXT("What is the capital of France?")
));
UAIChatbotIntegratorOpenAI::SendChatRequestNative(
Settings,
FOnOpenAIChatCompletionResponseNative::CreateWeakLambda(
this,
[this](const FString& Response, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Chat completion response: %s, Error: %d: %s"),
*Response, ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
// Example of sending a non-streaming chat request to DeepSeek
FChatbotIntegrator_DeepSeekSettings Settings;
Settings.Messages.Add(FChatbotIntegrator_DeepSeekMessage(
EChatbotIntegrator_DeepSeekRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_DeepSeekMessage(
EChatbotIntegrator_DeepSeekRole::USER,
TEXT("What is the capital of France?")
));
UAIChatbotIntegratorDeepSeek::SendChatRequestNative(
Settings,
FOnDeepSeekChatCompletionResponseNative::CreateWeakLambda(
this,
[this](const FString& Reasoning, const FString& Content, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Chat completion reasoning: %s, Content: %s, Error: %d: %s"),
*Reasoning, *Content, ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
// Example of sending a non-streaming chat request to Claude
FChatbotIntegrator_ClaudeSettings Settings;
Settings.Messages.Add(FChatbotIntegrator_ClaudeMessage(
EChatbotIntegrator_ClaudeRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_ClaudeMessage(
EChatbotIntegrator_ClaudeRole::USER,
TEXT("What is the capital of France?")
));
// Note: the class and delegate names below follow the same non-streaming naming
// pattern as the OpenAI and DeepSeek examples above; verify them against your plugin version.
UAIChatbotIntegratorClaude::SendChatRequestNative(
Settings,
FOnClaudeChatCompletionResponseNative::CreateWeakLambda(
this,
[this](const FString& Response, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Chat completion response: %s, Error: %d: %s"),
*Response, ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
Streaming chat requests
Receives response chunks in real time, enabling more dynamic interactions; a provider-agnostic chunk-accumulation sketch follows the provider examples below.
// Example of sending a streaming chat request to OpenAI
FChatbotIntegrator_OpenAISettings Settings;
Settings.Messages.Add(FChatbotIntegrator_OpenAIMessage(
EChatbotIntegrator_OpenAIRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_OpenAIMessage(
EChatbotIntegrator_OpenAIRole::USER,
TEXT("What is the capital of France?")
));
UAIChatbotIntegratorOpenAIStream::SendStreamingChatRequestNative(
Settings,
FOnOpenAIChatCompletionStreamNative::CreateWeakLambda(
this,
[this](const FString& Response, bool IsFinalChunk, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Streaming chat completion response: %s, IsFinalChunk: %d, Error: %d: %s"),
*Response, IsFinalChunk, ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
// Example of sending a streaming chat request to DeepSeek
FChatbotIntegrator_DeepSeekSettings Settings;
Settings.Messages.Add(FChatbotIntegrator_DeepSeekMessage(
EChatbotIntegrator_DeepSeekRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_DeepSeekMessage(
EChatbotIntegrator_DeepSeekRole::USER,
TEXT("What is the capital of France?")
));
UAIChatbotIntegratorDeepSeekStream::SendStreamingChatRequestNative(
Settings,
FOnDeepSeekChatCompletionStreamNative::CreateWeakLambda(
this,
[this](const FString& ReasoningChunk, const FString& ContentChunk,
bool IsReasoningFinalChunk, bool IsContentFinalChunk,
const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Streaming chat completion reasoning chunk: %s, Content chunk: %s, IsReasoningFinalChunk: %d, IsContentFinalChunk: %d, Error: %d: %s"),
*ReasoningChunk, *ContentChunk, IsReasoningFinalChunk, IsContentFinalChunk,
ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
// Example of sending a streaming chat request to Claude
FChatbotIntegrator_ClaudeSettings Settings;
Settings.Messages.Add(FChatbotIntegrator_ClaudeMessage(
EChatbotIntegrator_ClaudeRole::SYSTEM,
TEXT("You are a helpful assistant.")
));
Settings.Messages.Add(FChatbotIntegrator_ClaudeMessage(
EChatbotIntegrator_ClaudeRole::USER,
TEXT("What is the capital of France?")
));
UAIChatbotIntegratorClaudeStream::SendStreamingChatRequestNative(
Settings,
FOnClaudeChatCompletionStreamNative::CreateWeakLambda(
this,
[this](const FString& Response, bool IsFinalChunk, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
UE_LOG(LogTemp, Log, TEXT("Streaming chat completion response: %s, IsFinalChunk: %d, Error: %d: %s"),
*Response, IsFinalChunk, ErrorStatus.bIsError, *ErrorStatus.ErrorMessage);
}
)
);
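Regardless of provider, streamed fragments usually need to be assembled before they are displayed or stored. The following is a minimal sketch, assuming each callback invocation delivers only the newly generated fragment (as the ReasoningChunk/ContentChunk parameter names of the DeepSeek delegate suggest); the member and function names are illustrative only.

// Illustrative members for collecting streamed fragments
FString AccumulatedReasoning;
FString AccumulatedContent;

void SendAccumulatingDeepSeekRequest(const FChatbotIntegrator_DeepSeekSettings& Settings)
{
    AccumulatedReasoning.Empty();
    AccumulatedContent.Empty();
    UAIChatbotIntegratorDeepSeekStream::SendStreamingChatRequestNative(
        Settings,
        FOnDeepSeekChatCompletionStreamNative::CreateWeakLambda(
            this,
            [this](const FString& ReasoningChunk, const FString& ContentChunk,
                bool IsReasoningFinalChunk, bool IsContentFinalChunk,
                const FChatbotIntegratorErrorStatus& ErrorStatus)
            {
                if (ErrorStatus.bIsError)
                {
                    UE_LOG(LogTemp, Error, TEXT("Streaming chat failed: %s"), *ErrorStatus.ErrorMessage);
                    return;
                }
                // Append the newly received fragments
                AccumulatedReasoning += ReasoningChunk;
                AccumulatedContent += ContentChunk;
                if (IsContentFinalChunk)
                {
                    // The full reply is now available, e.g. for display in a chat widget
                    UE_LOG(LogTemp, Log, TEXT("Final content: %s"), *AccumulatedContent);
                }
            }
        )
    );
}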
Text-to-Speech (TTS) feature
Convert text into high-quality speech audio using leading TTS providers (OpenAI and ElevenLabs). The plugin returns raw audio data (TArray<uint8>) that you can process however your project requires.
The examples below show how to process the audio for playback with the Runtime Audio Importer plugin (see the audio import documentation for details), but Runtime AI Chatbot Integrator is designed to be flexible. Because it simply returns raw audio data, you are free to handle it in whatever way fits your use case: playback, saving to a file, further audio processing, sending it to another system, custom visualization, and more; a small save-to-file sketch follows this paragraph.
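As one example of handling the raw bytes without any audio plugin, the received data can be written straight to disk with Unreal's FFileHelper. This is a minimal sketch; the output file name is an arbitrary choice for illustration, and the extension should match the response format you requested.

#include "Misc/FileHelper.h"
#include "Misc/Paths.h"

// Write received TTS bytes to <ProjectSavedDir>/TTSOutput.mp3
void SaveTTSAudioToFile(const TArray<uint8>& AudioData)
{
    const FString FilePath = FPaths::Combine(FPaths::ProjectSavedDir(), TEXT("TTSOutput.mp3"));
    if (FFileHelper::SaveArrayToFile(AudioData, *FilePath))
    {
        UE_LOG(LogTemp, Log, TEXT("Saved %d bytes of TTS audio to %s"), AudioData.Num(), *FilePath);
    }
    else
    {
        UE_LOG(LogTemp, Error, TEXT("Failed to save TTS audio to %s"), *FilePath);
    }
}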
Non-streaming TTS requests
Non-streaming TTS requests return the complete audio data in a single response once the entire text has been processed. This approach suits shorter texts where waiting for the full audio is not a concern.
// Example of sending a TTS request to OpenAI
FChatbotIntegrator_OpenAITTSSettings TTSSettings;
TTSSettings.Input = TEXT("Hello, this is a test of text-to-speech functionality.");
TTSSettings.Voice = EChatbotIntegrator_OpenAITTSVoice::NOVA;
TTSSettings.Speed = 1.0f;
TTSSettings.ResponseFormat = EChatbotIntegrator_OpenAITTSFormat::MP3;
UAIChatbotIntegratorOpenAITTS::SendTTSRequestNative(
TTSSettings,
// Assumed delegate name, mirroring the plugin's per-provider naming; verify against your plugin version
FOnOpenAITTSResponseNative::CreateWeakLambda(
this,
[this](const TArray<uint8>& AudioData, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
if (!ErrorStatus.bIsError)
{
// Process the audio data using Runtime Audio Importer plugin
// Example: Import the audio data as a sound wave
UE_LOG(LogTemp, Log, TEXT("Received TTS audio data: %d bytes"), AudioData.Num());
// Audio data can be imported using Runtime Audio Importer plugin
URuntimeAudioImporterLibrary* RuntimeAudioImporter = URuntimeAudioImporterLibrary::CreateRuntimeAudioImporter();
RuntimeAudioImporter->AddToRoot();
RuntimeAudioImporter->OnResultNative.AddWeakLambda(this, [this](URuntimeAudioImporterLibrary* Importer, UImportedSoundWave* ImportedSoundWave, ERuntimeImportStatus Status)
{
if (Status == ERuntimeImportStatus::SuccessfulImport)
{
UE_LOG(LogTemp, Warning, TEXT("Successfully imported audio with sound wave %s"), *ImportedSoundWave->GetName());
// Here you can handle ImportedSoundWave playback, like "UGameplayStatics::PlaySound2D(GetWorld(), ImportedSoundWave);"
}
else
{
UE_LOG(LogTemp, Error, TEXT("Failed to import audio"));
}
Importer->RemoveFromRoot();
});
RuntimeAudioImporter->ImportAudioFromBuffer(AudioData, ERuntimeAudioFormat::Mp3);
}
else
{
UE_LOG(LogTemp, Error, TEXT("TTS request failed: %s"), *ErrorStatus.ErrorMessage);
}
}
)
);
// Example of sending a TTS request to ElevenLabs
FChatbotIntegrator_ElevenLabsTTSSettings TTSSettings;
TTSSettings.Text = TEXT("Hello, this is a test of text-to-speech functionality.");
TTSSettings.VoiceID = TEXT("your-voice-id"); // Replace with actual voice ID
TTSSettings.Model = EChatbotIntegrator_ElevenLabsTTSModel::ELEVEN_TURBO_V2;
TTSSettings.OutputFormat = EChatbotIntegrator_ElevenLabsTTSFormat::MP3_44100_128;
UAIChatbotIntegratorElevenLabsTTS::SendTTSRequestNative(
TTSSettings,
FOnElevenLabsTTSResponseNative::CreateWeakLambda(
this,
[this](const TArray<uint8>& AudioData, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
if (!ErrorStatus.bIsError)
{
// Process the audio data using Runtime Audio Importer plugin
// Example: Import the audio data as a sound wave
UE_LOG(LogTemp, Log, TEXT("Received TTS audio data: %d bytes"), AudioData.Num());
// Audio data can be imported using Runtime Audio Importer plugin
URuntimeAudioImporterLibrary* RuntimeAudioImporter = URuntimeAudioImporterLibrary::CreateRuntimeAudioImporter();
RuntimeAudioImporter->AddToRoot();
RuntimeAudioImporter->OnResultNative.AddWeakLambda(this, [this](URuntimeAudioImporterLibrary* Importer, UImportedSoundWave* ImportedSoundWave, ERuntimeImportStatus Status)
{
if (Status == ERuntimeImportStatus::SuccessfulImport)
{
UE_LOG(LogTemp, Warning, TEXT("Successfully imported audio with sound wave %s"), *ImportedSoundWave->GetName());
// Here you can handle ImportedSoundWave playback, like "UGameplayStatics::PlaySound2D(GetWorld(), ImportedSoundWave);"
}
else
{
UE_LOG(LogTemp, Error, TEXT("Failed to import audio"));
}
Importer->RemoveFromRoot();
});
RuntimeAudioImporter->ImportAudioFromBuffer(AudioData, ERuntimeAudioFormat::Mp3);
}
else
{
UE_LOG(LogTemp, Error, TEXT("TTS request failed: %s"), *ErrorStatus.ErrorMessage);
}
}
)
);
Streaming TTS requests
Streaming TTS delivers audio chunks as they are generated, letting you process the data incrementally instead of waiting for the entire audio to be synthesized. This significantly reduces perceived latency for longer texts and enables real-time applications.
// Example of sending a streaming TTS request to OpenAI
UPROPERTY()
UStreamingSoundWave* StreamingSoundWave;
UPROPERTY()
bool bIsPlaying = false;
UFUNCTION(BlueprintCallable)
void StartStreamingTTS()
{
// Create a sound wave for streaming if not already created
if (!StreamingSoundWave)
{
StreamingSoundWave = UStreamingSoundWave::CreateStreamingSoundWave();
StreamingSoundWave->OnPopulateAudioStateNative.AddWeakLambda(this, [this]()
{
if (!bIsPlaying)
{
bIsPlaying = true;
UGameplayStatics::PlaySound2D(GetWorld(), StreamingSoundWave);
}
});
}
FChatbotIntegrator_OpenAIStreamingTTSSettings TTSSettings;
TTSSettings.Text = TEXT("Streaming synthesis output begins with a steady flow of data. This data is processed in real-time to ensure consistency. As the process continues, information is streamed without interruption. The output adapts seamlessly to changing inputs. Each piece of data is instantly integrated into the stream. Real-time processing allows for immediate adjustments. This constant flow ensures that the synthesis output is dynamic. As new data comes in, the output evolves accordingly. The system is designed to maintain a continuous output stream. This uninterrupted flow is what drives the efficiency of streaming synthesis.");
TTSSettings.Voice = EChatbotIntegrator_OpenAIStreamingTTSVoice::ALLOY;
UAIChatbotIntegratorOpenAIStreamTTS::SendStreamingTTSRequestNative(TTSSettings, FOnOpenAIStreamingTTSNative::CreateWeakLambda(this, [this](const TArray<uint8>& AudioData, bool IsFinalChunk, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
if (!ErrorStatus.bIsError)
{
// Process the audio data using Runtime Audio Importer plugin
// Example: Stream audio data into a sound wave
UE_LOG(LogTemp, Log, TEXT("Received TTS audio data: %d bytes"), AudioData.Num());
StreamingSoundWave->AppendAudioDataFromRAW(AudioData, ERuntimeRAWAudioFormat::Int16, 24000, 1);
}
else
{
UE_LOG(LogTemp, Error, TEXT("TTS request failed: %s"), *ErrorStatus.ErrorMessage);
}
}));
}
// Example of sending a streaming TTS request to ElevenLabs
UPROPERTY()
UStreamingSoundWave* StreamingSoundWave;
UPROPERTY()
bool bIsPlaying = false;
UFUNCTION(BlueprintCallable)
void StartStreamingTTS()
{
// Create a sound wave for streaming if not already created
if (!StreamingSoundWave)
{
StreamingSoundWave = UStreamingSoundWave::CreateStreamingSoundWave();
StreamingSoundWave->OnPopulateAudioStateNative.AddWeakLambda(this, [this]()
{
if (!bIsPlaying)
{
bIsPlaying = true;
UGameplayStatics::PlaySound2D(GetWorld(), StreamingSoundWave);
}
});
}
FChatbotIntegrator_ElevenLabsStreamingTTSSettings TTSSettings;
TTSSettings.Text = TEXT("Streaming synthesis output begins with a steady flow of data. This data is processed in real-time to ensure consistency. As the process continues, information is streamed without interruption. The output adapts seamlessly to changing inputs. Each piece of data is instantly integrated into the stream. Real-time processing allows for immediate adjustments. This constant flow ensures that the synthesis output is dynamic. As new data comes in, the output evolves accordingly. The system is designed to maintain a continuous output stream. This uninterrupted flow is what drives the efficiency of streaming synthesis.");
TTSSettings.Model = EChatbotIntegrator_ElevenLabsTTSModel::ELEVEN_TURBO_V2_5;
TTSSettings.OutputFormat = EChatbotIntegrator_ElevenLabsTTSFormat::MP3_22050_32;
TTSSettings.VoiceID = TEXT("YOUR_VOICE_ID");
UAIChatbotIntegratorElevenLabsStreamTTS::SendStreamingTTSRequestNative(GetWorld(), TTSSettings, FOnElevenLabsStreamingTTSNative::CreateWeakLambda(this, [this](const TArray<uint8>& AudioData, bool IsFinalChunk, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
if (!ErrorStatus.bIsError)
{
// Process the audio data using Runtime Audio Importer plugin
// Example: Stream audio data into a sound wave
UE_LOG(LogTemp, Log, TEXT("Received TTS audio data: %d bytes"), AudioData.Num());
StreamingSoundWave->AppendAudioDataFromEncoded(AudioData, ERuntimeAudioFormat::Mp3);
}
else
{
UE_LOG(LogTemp, Error, TEXT("TTS request failed: %s"), *ErrorStatus.ErrorMessage);
}
}));
}
Error handling
When sending requests, it is important to handle potential errors by checking the ErrorStatus in your callback. ErrorStatus provides information about any issues that may occur during the request.
// Example of error handling in a request
UAIChatbotIntegratorOpenAI::SendChatRequestNative(
Settings,
FOnOpenAIChatCompletionResponseNative::CreateWeakLambda(
this,
[this](const FString& Response, const FChatbotIntegratorErrorStatus& ErrorStatus)
{
if (ErrorStatus.bIsError)
{
// Handle the error
UE_LOG(LogTemp, Error, TEXT("Chat request failed: %s"), *ErrorStatus.ErrorMessage);
}
else
{
// Process the successful response
UE_LOG(LogTemp, Log, TEXT("Received response: %s"), *Response);
}
}
)
);
Cancelling requests
The plugin supports cancelling both text-to-text and TTS requests while they are in progress. This is useful when you want to abort a long-running request or change the conversation flow dynamically; a small teardown sketch follows the example below.
// Example of cancelling requests
UAIChatbotIntegratorOpenAI* ChatRequest = UAIChatbotIntegratorOpenAI::SendChatRequestNative(
ChatSettings,
ChatResponseCallback
);
// Cancel the chat request at any time
ChatRequest->Cancel();
// TTS requests can be cancelled similarly
UAIChatbotIntegratorOpenAITTS* TTSRequest = UAIChatbotIntegratorOpenAITTS::SendTTSRequestNative(
TTSSettings,
TTSResponseCallback
);
// Cancel the TTS request
TTSRequest->Cancel();
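In practice it is often convenient to keep the returned request object in a UPROPERTY and cancel it when its owner is torn down. Below is a minimal sketch under that assumption; the actor class and member names are illustrative only.

// Illustrative member; UPROPERTY keeps the request object visible to the garbage collector
UPROPERTY()
UAIChatbotIntegratorOpenAI* ActiveChatRequest = nullptr;

void AMyChatActor::EndPlay(const EEndPlayReason::Type EndPlayReason)
{
    // Abort the request if it is still running when the actor goes away
    if (ActiveChatRequest)
    {
        ActiveChatRequest->Cancel();
        ActiveChatRequest = nullptr;
    }
    Super::EndPlay(EndPlayReason);
}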
Best practices
- Always handle potential errors by checking ErrorStatus in your callbacks
- Keep API rate limits and costs in mind (a simple retry sketch follows this list)
- Use streaming modes for long-form or conversational content
- Cancel requests that are no longer needed to manage resources efficiently
- Use streaming TTS for long texts to reduce perceived latency
- For audio processing, the Runtime Audio Importer plugin offers a convenient solution, but you can implement custom handling to match your project's requirements
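To smooth over transient failures such as rate limiting, one common pattern is to retry a failed request after a short delay. The sketch below is an illustration, not part of the plugin API: it reuses the OpenAI chat call shown earlier together with Unreal's timer manager, and the retry count and delay are arbitrary values you would tune (ideally with exponential backoff) for your project.

// Illustrative retry helper; assumes the failure is transient and worth retrying
void SendChatWithRetry(const FChatbotIntegrator_OpenAISettings& Settings, int32 RetriesLeft = 3, float RetryDelaySeconds = 2.0f)
{
    UAIChatbotIntegratorOpenAI::SendChatRequestNative(
        Settings,
        FOnOpenAIChatCompletionResponseNative::CreateWeakLambda(
            this,
            [this, Settings, RetriesLeft, RetryDelaySeconds](const FString& Response, const FChatbotIntegratorErrorStatus& ErrorStatus)
            {
                if (!ErrorStatus.bIsError)
                {
                    UE_LOG(LogTemp, Log, TEXT("Received response: %s"), *Response);
                    return;
                }
                if (RetriesLeft <= 0)
                {
                    UE_LOG(LogTemp, Error, TEXT("Chat request failed after retries: %s"), *ErrorStatus.ErrorMessage);
                    return;
                }
                // Schedule another attempt after a short delay
                FTimerHandle RetryHandle;
                GetWorld()->GetTimerManager().SetTimer(
                    RetryHandle,
                    FTimerDelegate::CreateWeakLambda(this, [this, Settings, RetriesLeft, RetryDelaySeconds]()
                    {
                        SendChatWithRetry(Settings, RetriesLeft - 1, RetryDelaySeconds);
                    }),
                    RetryDelaySeconds,
                    false
                );
            }
        )
    );
}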
Troubleshooting
- Verify that your API credentials are correct
- Check your internet connection
- When working with the TTS feature, make sure an audio-processing library such as Runtime Audio Importer is properly installed
- Make sure you are using the correct audio format when processing TTS response data
- For streaming TTS, verify that audio chunks are being handled correctly