Overview
Please note that the plugin is temporarily unavailable on Fab. If you need to acquire the plugin directly in the meantime, please contact me at [email protected]
Runtime MetaHuman Lip Sync is a plugin that enables real-time, offline, and cross-platform lip sync for both MetaHuman and custom characters. It allows you to animate a character's lips in response to audio input from various sources, including:
- Microphone input via Runtime Audio Importer's capturable sound wave
- Synthesized speech from Runtime Text To Speech or Runtime AI Chatbot Integrator
- Any audio data in float PCM format (an array of floating-point samples)
The plugin internally generates visemes (visual representations of phonemes) from the audio input. Because it works directly with audio data rather than text, the plugin supports multilingual input, including but not limited to English, Spanish, French, German, Japanese, Chinese, Korean, Russian, Italian, Portuguese, Arabic, and Hindi. Any spoken language is supported, since the lip sync is derived from audio phonemes rather than language-specific text processing.
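The "float PCM" format mentioned above is simply an array of samples normalized to the [-1.0, 1.0] range. As a minimal sketch (not part of the plugin's API; the function name is illustrative), here is how 16-bit PCM captured from a typical audio source could be converted to float PCM:

```cpp
#include <cstdint>
#include <vector>

// Convert 16-bit PCM samples to float PCM in [-1.0, 1.0].
// Illustrative helper, not a plugin function.
std::vector<float> PcmInt16ToFloat(const std::vector<int16_t>& In)
{
    std::vector<float> Out;
    Out.reserve(In.size());
    for (int16_t Sample : In)
    {
        // Dividing by 32768 keeps the most negative value (-32768) at exactly -1.0.
        Out.push_back(static_cast<float>(Sample) / 32768.0f);
    }
    return Out;
}
```

Sources such as Runtime Audio Importer's capturable sound wave already deliver float PCM, so a conversion like this is only needed when your audio arrives in an integer format.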
Character Compatibility
Despite its name, Runtime MetaHuman Lip Sync works with a wide range of characters beyond just MetaHumans:
Popular Commercial Character Systems
- Daz Genesis 8/9 characters
- Reallusion Character Creator 3/4 (CC3/CC4) characters
- Mixamo characters
- ReadyPlayerMe avatars
Animation Standards Support
- FACS-based blendshape systems
- Apple ARKit blendshape standard
- Preston Blair phoneme sets
- 3ds Max phoneme systems
- Any character with custom morph targets for facial expressions
For detailed instructions on using the plugin with non-MetaHuman characters, see the Custom Character Setup Guide.
Animation Preview
Check out these short animations to see the quality of lip sync animation produced by the plugin across different character types and models:
Key Features
- Real-time lip sync from microphone input
- Offline audio processing support
- Support for multiple character systems and animation standards
- Flexible viseme mapping for custom characters
- Universal language support - works with any spoken language through audio analysis
You can choose the lip sync model that best fits your project's requirements for performance, character compatibility, and visual quality.
While both models support the various audio input methods, the Realistic model has limited compatibility with local TTS due to ONNX runtime conflicts. For text-to-speech functionality with the Realistic model, external TTS services such as OpenAI or ElevenLabs are recommended.
How It Works
The plugin processes audio input in the following way:
- Audio data is received as float PCM with a specified channel count and sample rate
- The plugin analyzes the audio to generate visemes (the visual counterparts of phonemes)
- These visemes drive the lip sync animation using the character's pose asset
- The animation is applied to the character in real-time
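To make step 3 concrete, here is a hedged sketch of how viseme weights could drive a character's facial curves. The viseme names, morph target names, and the function itself are hypothetical, for illustration only; the real mapping depends on your character's rig and pose asset:

```cpp
#include <map>
#include <string>

// Hypothetical sketch: map viseme weights onto morph target curve values.
// Names are illustrative, not taken from the plugin or any specific rig.
std::map<std::string, float> MapVisemesToMorphTargets(
    const std::map<std::string, float>& VisemeWeights)
{
    // Example mapping table: viseme name -> morph target curve name.
    static const std::map<std::string, std::string> VisemeToMorph = {
        {"AA", "Mouth_Open"},
        {"OO", "Lips_Pucker"},
        {"MM", "Lips_Closed"},
    };

    std::map<std::string, float> Curves;
    for (const auto& [Viseme, Weight] : VisemeWeights)
    {
        auto It = VisemeToMorph.find(Viseme);
        if (It != VisemeToMorph.end())
        {
            // Clamp to the [0, 1] range expected by morph target curves.
            const float Clamped =
                Weight < 0.f ? 0.f : (Weight > 1.f ? 1.f : Weight);
            Curves[It->second] = Clamped;
        }
    }
    return Curves;
}
```

In practice this mapping happens inside the plugin's blend node and the character's pose asset; the sketch only shows the shape of the data flowing from step 2 to step 3.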
Quick Start
Here's a basic setup for enabling lip sync on your character:
- For MetaHuman characters, follow the MetaHuman Setup Guide
- For custom characters, follow the Custom Character Setup Guide
- Set up audio input processing (such as in the Event Graph)
- Connect the Blend Runtime MetaHuman Lip Sync node in the Anim Graph
- Play audio and see your character speak!
Additional Resources
📦 Downloads & Links
🎥 Video Tutorials
Featured Demo:
Realistic Model (High-Quality) Tutorials:
- High-Quality Lip Sync with ElevenLabs & OpenAI TTS ⭐ NEW
- High-Quality Live Microphone Lip Sync ⭐ NEW
General Setup:
💬 Support
- Discord support server
- Custom Development: [email protected] (tailored solutions for teams & organizations)