How to use the plugin
This guide walks you through the process of setting up Runtime MetaHuman Lip Sync for your MetaHuman characters.
Prerequisites
Before getting started, ensure:
- The MetaHuman plugin is enabled in your project
- You have at least one MetaHuman character downloaded and available in your project
- The Runtime MetaHuman Lip Sync plugin is installed
Additional Plugins:
- If you plan to use audio capture (e.g., microphone input), install the Runtime Audio Importer plugin.
- If you plan to use text-to-speech functionality, install the Runtime Text To Speech plugin.
Setup Process
Step 1: Locate and modify the face animation Blueprint
You need to modify an Animation Blueprint that will be used for your MetaHuman character's facial animations. The default MetaHuman face Animation Blueprint is located at:
`Content/MetaHumans/Common/Face/Face_AnimBP`
You have several options for implementing the lip sync functionality:

Option 1: Edit Default Asset (Simplest Option)

Open the default `Face_AnimBP` directly and make your modifications. Any changes will affect all MetaHuman characters using this Animation Blueprint.

Note: This approach is convenient but will impact all characters using the default Animation Blueprint.

Option 2: Create Duplicate

- Duplicate `Face_AnimBP` and give it a descriptive name
- Locate your character's Blueprint class (e.g., for a character named "Bryan", it would be at `Content/MetaHumans/Bryan/BP_Bryan`)
- Open the character Blueprint and find the Face component
- Change the Anim Class property to your newly duplicated Animation Blueprint

Note: This approach allows you to customize lip sync for specific characters while leaving others unchanged.

Option 3: Use Custom Animation Blueprint

You can implement the lip sync blending in any Animation Blueprint that has access to the required facial bones:

- Create or use an existing custom Animation Blueprint
- Ensure your Animation Blueprint uses a skeleton that contains the same facial bones as the default MetaHuman `Face_Archetype_Skeleton` (the standard skeleton used for every MetaHuman character)

Note: This approach gives you maximum flexibility for integration with custom animation systems.
Important: The Runtime MetaHuman Lip Sync blending can be implemented in any Animation Blueprint asset that has access to a pose containing the facial bones present in the default MetaHuman `Face_Archetype_Skeleton`. You're not limited to the options above - these are just common implementation approaches.
Step 2: Event Graph setup
Open your Face Animation Blueprint and switch to the Event Graph. You'll need to create a Runtime Viseme Generator that will process audio data and generate visemes.

- Add the `Event Blueprint Begin Play` node if it doesn't exist already
- Add the `Create Runtime Viseme Generator` node and connect it to the Begin Play event
- Save the output as a variable (e.g. `VisemeGenerator`) for use in other parts of the graph
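If you prefer to wire this up in C++ instead of Blueprint nodes, the same logic can be sketched in a custom Anim Instance. This is only a rough equivalent: `URuntimeVisemeGenerator`, its `CreateRuntimeVisemeGenerator()` factory, and the header name are assumptions inferred from the Blueprint node names above, so verify the exact API against the plugin's source.

```cpp
// Hedged C++ sketch of the Event Graph setup above, written as a custom AnimInstance.
// URuntimeVisemeGenerator and CreateRuntimeVisemeGenerator() are assumed names
// inferred from the "Create Runtime Viseme Generator" Blueprint node.
#pragma once

#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "RuntimeVisemeGenerator.h" // assumed header name; adjust to the plugin's actual header
#include "MyFaceAnimInstance.generated.h"

UCLASS()
class UMyFaceAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Keeping the generator in a UPROPERTY prevents garbage collection and exposes it
    // to the Anim Graph, mirroring the "VisemeGenerator" variable described above.
    UPROPERTY(BlueprintReadOnly, Category = "Lip Sync")
    TObjectPtr<URuntimeVisemeGenerator> VisemeGenerator;

    virtual void NativeBeginPlay() override
    {
        Super::NativeBeginPlay();

        // Equivalent of connecting "Create Runtime Viseme Generator" to Begin Play.
        VisemeGenerator = URuntimeVisemeGenerator::CreateRuntimeVisemeGenerator();
    }
};
```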
Step 3: Set up audio input processing
You need to set up a method to process audio input. There are several ways to do this depending on your audio source.
- Microphone (Real-time)
- Microphone (Playback)
- Text-to-Speech
- Custom Audio Source
Microphone (Real-time)

This approach performs lip sync in real-time while speaking into the microphone:

- Create a Capturable Sound Wave using Runtime Audio Importer
- Before starting to capture audio, bind to the `OnPopulateAudioData` delegate
- In the bound function, call `ProcessAudioData` from your Runtime Viseme Generator
- Start capturing audio from the microphone
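For a C++-oriented sketch of the same flow, the snippet below continues the hypothetical `UMyFaceAnimInstance` from Step 2 (both functions would need to be declared on that class, the handler as a `UFUNCTION()`). `UCapturableSoundWave`, `OnPopulateAudioData`, and `StartCapture` are the Runtime Audio Importer pieces named in the steps above, but their exact C++ signatures, the delegate payload, and the sample rate / channel count shown here are assumptions to replace with whatever your project actually uses.

```cpp
// Hedged sketch: real-time microphone lip sync. Runtime Audio Importer signatures
// (delegate payload, StartCapture argument) are assumed.
#include "MyFaceAnimInstance.h"
#include "Sound/CapturableSoundWave.h" // assumed header path in Runtime Audio Importer

void UMyFaceAnimInstance::StartMicrophoneLipSync()
{
    // 1. Create a Capturable Sound Wave (Runtime Audio Importer).
    UCapturableSoundWave* CaptureWave = UCapturableSoundWave::CreateCapturableSoundWave();

    // 2. Bind to OnPopulateAudioData *before* starting capture so no audio is missed.
    CaptureWave->OnPopulateAudioData.AddDynamic(this, &UMyFaceAnimInstance::HandleCapturedAudio);

    // 4. Start capturing audio from the microphone (device index 0 assumed).
    CaptureWave->StartCapture(0);
}

void UMyFaceAnimInstance::HandleCapturedAudio(const TArray<float>& PCMData)
{
    // 3. Forward the captured float PCM samples to the Runtime Viseme Generator.
    //    48 kHz mono is only a placeholder -- pass the values your capture really uses.
    if (VisemeGenerator)
    {
        VisemeGenerator->ProcessAudioData(PCMData, /*SampleRate=*/48000, /*NumChannels=*/1);
    }
}
```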
Microphone (Playback)

This approach captures audio from a microphone, then plays it back with lip sync:

- Create a Capturable Sound Wave using Runtime Audio Importer
- Start audio capture from the microphone
- Before playing back the capturable sound wave, bind to its `OnGeneratePCMData` delegate
- In the bound function, call `ProcessAudioData` from your Runtime Viseme Generator
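Sketched in C++ under the same assumptions as the previous snippet, the playback variant only changes which delegate is bound and when. `OnGeneratePCMData` is assumed to deliver each chunk of float PCM as the wave is played back; verify its actual signature in Runtime Audio Importer.

```cpp
// Hedged sketch: capture first, then lip sync during playback.
#include "MyFaceAnimInstance.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/CapturableSoundWave.h" // assumed header path in Runtime Audio Importer

void UMyFaceAnimInstance::PlayCapturedAudioWithLipSync(UCapturableSoundWave* CaptureWave)
{
    // Steps 1-2 (creating the wave and capturing the microphone) happen elsewhere.
    // 3. Bind to OnGeneratePCMData before playback begins.
    CaptureWave->OnGeneratePCMData.AddDynamic(this, &UMyFaceAnimInstance::HandlePlaybackAudio);

    // Play the captured wave; lip sync now follows playback rather than capture.
    UGameplayStatics::SpawnSound2D(this, CaptureWave);
}

void UMyFaceAnimInstance::HandlePlaybackAudio(const TArray<float>& PCMData)
{
    // 4. Feed each playback chunk to the viseme generator (rate/channels are placeholders).
    if (VisemeGenerator)
    {
        VisemeGenerator->ProcessAudioData(PCMData, /*SampleRate=*/48000, /*NumChannels=*/1);
    }
}
```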
Text-to-Speech

This approach synthesizes speech from text and performs lip sync:

- Use Runtime Text To Speech to generate speech from text
- Use Runtime Audio Importer to import the synthesized audio
- Before playing back the imported sound wave, bind to its `OnGeneratePCMData` delegate
- In the bound function, call `ProcessAudioData` from your Runtime Viseme Generator
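The binding pattern here is identical to the playback case above; only the source of the sound wave changes. The sketch below assumes you already have a `UImportedSoundWave` produced by Runtime Audio Importer from the synthesized speech (the Runtime Text To Speech and import calls themselves are not shown, since their exact APIs should be taken from those plugins' own documentation).

```cpp
// Hedged sketch: lip sync for text-to-speech output. ImportedWave is assumed to be
// a Runtime Audio Importer UImportedSoundWave created from the synthesized audio;
// the OnGeneratePCMData signature is assumed, as in the playback case above.
#include "MyFaceAnimInstance.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/ImportedSoundWave.h" // assumed header path in Runtime Audio Importer

void UMyFaceAnimInstance::PlaySynthesizedSpeechWithLipSync(UImportedSoundWave* ImportedWave)
{
    // Bind before playback, exactly as with the captured sound wave above.
    ImportedWave->OnGeneratePCMData.AddDynamic(this, &UMyFaceAnimInstance::HandlePlaybackAudio);

    // HandlePlaybackAudio (from the playback sketch) forwards each float PCM chunk
    // to VisemeGenerator->ProcessAudioData().
    UGameplayStatics::SpawnSound2D(this, ImportedWave);
}
```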
Custom Audio Source

For a custom audio source, you need:

- Audio data in float PCM format (an array of floating-point samples)
- The sample rate and number of channels

Call `ProcessAudioData` on your Runtime Viseme Generator with these parameters whenever new audio data becomes available.
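Because this case is just a direct call into the generator, a minimal sketch is enough. The parameter order (samples, sample rate, channels) mirrors the list above, but the exact `ProcessAudioData` signature should still be confirmed against the plugin's headers.

```cpp
// Minimal sketch: feeding your own float PCM buffer to the viseme generator.
#include "MyFaceAnimInstance.h"

void UMyFaceAnimInstance::FeedCustomAudio(const TArray<float>& PCMSamples,
                                          int32 SampleRate,
                                          int32 NumChannels)
{
    if (!VisemeGenerator)
    {
        return;
    }

    // Call this repeatedly as new audio chunks arrive so the visemes stay in sync
    // with whatever is actually being played.
    VisemeGenerator->ProcessAudioData(PCMSamples, SampleRate, NumChannels);
}
```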
Step 4: Anim Graph setup
After setting up the Event Graph, switch to the Anim Graph to connect the viseme generator to the character's animation:

- Locate the pose that contains the MetaHuman face (typically from `Use cached pose 'Body Pose'`)
- Add the `Blend Runtime MetaHuman Lip Sync` node
- Connect the pose to the `Source Pose` of the `Blend Runtime MetaHuman Lip Sync` node
- Connect your `VisemeGenerator` variable (saved in Step 2) to the `Viseme Generator` pin
- Connect the output of the `Blend Runtime MetaHuman Lip Sync` node to the `Result` pin of the `Output Pose`
Note: The lip sync plugin is designed to work non-destructively with your existing animation setup. It only affects the specific facial bones needed for lip movement, leaving other facial animations intact. This means you can safely integrate it at any point in your animation chain - either before other facial animations (allowing those animations to override lip sync) or after them (letting lip sync blend on top of your existing animations). This flexibility lets you combine lip sync with eye blinking, eyebrow movements, emotional expressions, and other facial animations without conflicts.
Configuration
The `Blend Runtime MetaHuman Lip Sync` node has configuration options in its properties panel:

| Property | Default | Description |
| --- | --- | --- |
| Interpolation Speed | 25 | Controls how quickly the lip movements transition between visemes. Higher values result in faster, more abrupt transitions. |
| Reset Time | 0.2 | The duration in seconds after which the lip sync is reset. This is useful to prevent the lip sync from continuing after the audio has stopped. |