Overview
Runtime MetaHuman Lip Sync is a plugin that enables real-time, offline, and cross-platform lip sync for MetaHuman characters. It allows you to animate a character's lips in response to audio input from various sources, including:
- Microphone input via Runtime Audio Importer's capturable sound wave
- Synthesized speech from Runtime Text To Speech
- Any audio data in float PCM format (an array of floating-point samples; see the sketch below)
The plugin internally generates visemes (visual representations of phonemes) based on the audio input and performs lip sync animation using a predefined pose asset.
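If you're unsure what "float PCM" means in practice, here is a minimal sketch in plain Unreal C++ (no plugin API involved) that builds a short sine tone in that format: a flat array of samples in the -1.0 to 1.0 range, interleaved per channel, with an explicit sample rate. The function name and parameter defaults are illustrative, not part of the plugin.

```cpp
// Plain Unreal C++ (no plugin API): a 0.5-second, 440 Hz sine tone as
// mono float PCM -- the general shape of the audio data the plugin accepts.
#include "Containers/Array.h"
#include "Math/UnrealMathUtility.h"

TArray<float> MakeSineTonePCM(const float SampleRate = 44100.0f,
                              const int32 NumChannels = 1)
{
    const float Frequency = 440.0f;      // A4 test tone
    const float DurationSeconds = 0.5f;
    const int32 NumFrames = static_cast<int32>(SampleRate * DurationSeconds);

    TArray<float> PCMData;
    PCMData.Reserve(NumFrames * NumChannels);

    for (int32 Frame = 0; Frame < NumFrames; ++Frame)
    {
        // Samples stay in the [-1.0, 1.0] range expected of float PCM.
        const float Sample = FMath::Sin(2.0f * PI * Frequency * Frame / SampleRate);
        for (int32 Channel = 0; Channel < NumChannels; ++Channel)
        {
            PCMData.Add(Sample); // Multi-channel data is interleaved per frame
        }
    }
    return PCMData;
}
```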
Animation Preview
Check out this short clip to see the quality of the real-time lip sync produced by the plugin. The animation can be applied to any MetaHuman-based character, whether it's the default MetaHuman or a custom one.
Key Features
- Real-time lip sync from microphone input
- Offline audio processing support
- Cross-platform compatibility: Windows, Mac, Android, MetaQuest
How It Works
The plugin processes audio input in the following way (sketched in code after this list):
- Audio data is received in float PCM format along with its channel count and sample rate
- The plugin processes the audio to generate visemes (visual representations of phonemes)
- These visemes drive the lip sync animation using the MetaHuman's pose asset
- The animation is applied to the MetaHuman character in real-time
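The plugin's actual classes and functions aren't documented on this page, so the sketch below uses made-up names (FVisemeWeights, GenerateVisemes, ApplyToPoseAsset, ProcessAudioChunk) purely to illustrate the four steps above as code. Treat it as a diagram of the data flow, not the real API.

```cpp
// Hypothetical sketch of the pipeline above -- none of these names are the
// plugin's real API; they only illustrate the audio -> visemes -> pose flow.
#include "Containers/Array.h"
#include "Containers/Map.h"
#include "UObject/NameTypes.h"

// Per-frame viseme weights, e.g. "AA" -> 0.7 (hypothetical representation).
struct FVisemeWeights
{
    TMap<FName, float> Weights;
};

// Steps 1-2: float PCM in, viseme weights out. The real analysis happens
// inside the plugin; this stub only marks where it sits in the flow.
static FVisemeWeights GenerateVisemes(const TArray<float>& PCMData,
                                      int32 NumChannels, int32 SampleRate)
{
    FVisemeWeights Result;
    // ... viseme inference from the audio would happen here ...
    return Result;
}

// Steps 3-4: the weights drive the MetaHuman's pose asset each frame.
static void ApplyToPoseAsset(const FVisemeWeights& Visemes)
{
    // ... blended into the Face Animation Blueprint's pose asset ...
}

void ProcessAudioChunk(const TArray<float>& PCMData,
                       int32 NumChannels, int32 SampleRate)
{
    // Each incoming chunk is analyzed and applied immediately,
    // which is what makes the animation real-time.
    ApplyToPoseAsset(GenerateVisemes(PCMData, NumChannels, SampleRate));
}
```

In the actual plugin, the analysis and blending stages are handled for you; the Quick Start below shows where they plug into your character.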
Quick Start
Here's a basic setup for enabling lip sync on your MetaHuman character:
- Ensure the MetaHuman plugin is enabled and that you have a MetaHuman character in your project
- Modify your MetaHuman's Face Animation Blueprint
- Set up audio input processing (for example, in the Event Graph; see the sketch after this list)
- Connect the Blend Runtime MetaHuman Lip Sync node in the Anim Graph
- Play audio and see your character speak!
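Steps 3 and 4 are normally wired up in Blueprint (Event Graph and Anim Graph). For readers who think in C++, here is a minimal sketch of step 3 that reuses the hypothetical ProcessAudioChunk from the previous section; OnAudioCaptured is likewise a made-up callback standing in for however your audio source (for example, Runtime Audio Importer's capturable sound wave) delivers samples.

```cpp
// Hypothetical C++ view of step 3. "OnAudioCaptured" is a made-up hook:
// bind it to whatever your audio source provides (e.g. Runtime Audio
// Importer's capturable sound wave delivers captured samples to you).
#include "Containers/Array.h"

// From the pipeline sketch above (hypothetical, not the plugin's real API).
void ProcessAudioChunk(const TArray<float>& PCMData,
                       int32 NumChannels, int32 SampleRate);

// Called whenever a new chunk of microphone audio arrives.
void OnAudioCaptured(const TArray<float>& PCMData)
{
    // Mono input at 44.1 kHz is assumed here; match these values
    // to whatever your capture source actually reports.
    constexpr int32 NumChannels = 1;
    constexpr int32 SampleRate = 44100;

    ProcessAudioChunk(PCMData, NumChannels, SampleRate);
}
```

In the Blueprint workflow, this corresponds to feeding your audio source to the plugin in the Event Graph; the Blend Runtime MetaHuman Lip Sync node in the Anim Graph then takes care of step 4.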
For detailed implementation steps, see the How to use the plugin page.
Additional Resources
- Get it on Fab
- Download Demo (Windows)
- Discord support server
- Video tutorial
- Custom Development: [email protected] (tailored solutions for teams & organizations)