Overview

Runtime MetaHuman Lip Sync Documentation

Runtime MetaHuman Lip Sync is a plugin that enables real-time, offline, and cross-platform lip sync for MetaHuman characters. It animates a character's lips in response to audio input from sources such as live microphone capture and pre-recorded audio files.

The plugin internally generates visemes (visual representations of phonemes) based on the audio input and performs lip sync animation using a predefined pose asset.

Animation Preview

Check out this short animation to see the quality of real-time lip sync produced by the plugin. The result can be applied to any MetaHuman-based character, whether the default MetaHuman or a custom one.

[Animation: Lip Sync Example]

Key Features

  • Real-time lip sync from microphone input
  • Offline audio processing support
  • Cross-platform compatibility: Windows, macOS, Android, Meta Quest

How It Works

The plugin processes audio input in the following way (a minimal code sketch of this flow follows the list):

  1. Audio data is received as float PCM samples with a specified channel count and sample rate
  2. The plugin analyzes the audio to generate visemes
  3. These visemes drive the lip sync animation using the MetaHuman's pose asset
  4. The animation is applied to the MetaHuman character in real time
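
To make the flow concrete, here is a minimal standalone C++ sketch of steps 1 and 2. The EViseme set, FVisemeWeights type, and GenerateVisemes function are hypothetical illustrations, not the plugin's actual API; the real analyzer performs phoneme recognition internally rather than the loudness heuristic used here.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// A small hypothetical viseme set; the plugin defines its own internally.
enum class EViseme : std::size_t { Sil, PP, FF, TH, AA, Count };

using FVisemeWeights = std::array<float, static_cast<std::size_t>(EViseme::Count)>;

// Steps 1-2: turn a chunk of float PCM into viseme weights.
// This stand-in derives a single loudness value (RMS) and maps it to an
// open-mouth weight; the real plugin performs phoneme analysis instead.
FVisemeWeights GenerateVisemes(const std::vector<float>& PCM,
                               int NumChannels, int SampleRate)
{
    (void)NumChannels; (void)SampleRate; // a real analyzer would use both

    float SumSquares = 0.0f;
    for (float Sample : PCM)
    {
        SumSquares += Sample * Sample;
    }
    const float RMS =
        PCM.empty() ? 0.0f : std::sqrt(SumSquares / static_cast<float>(PCM.size()));

    FVisemeWeights Weights{}; // all channels start at zero
    const std::size_t AA  = static_cast<std::size_t>(EViseme::AA);
    const std::size_t Sil = static_cast<std::size_t>(EViseme::Sil);

    // Louder audio opens the mouth; silence falls back to the rest pose.
    Weights[AA]  = std::min(RMS * 10.0f, 1.0f);
    Weights[Sil] = 1.0f - Weights[AA];
    return Weights;
}
```

In steps 3 and 4, weights like these are what drive the blend against the MetaHuman's pose asset each frame.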

Quick Start

Here's a basic setup for enabling lip sync on your MetaHuman character:

  1. Ensure the MetaHuman plugin is enabled and you have a MetaHuman character in your project
  2. Modify your MetaHuman's Face Animation Blueprint
  3. Set up audio input processing (such as in the Event Graph)
  4. Connect the Blend Runtime MetaHuman Lip Sync node in the Anim Graph (conceptually a per-frame blend of viseme weights; see the sketch after this list)
  5. Play audio and see your character speak!
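
The blend node in step 4 applies the generated viseme weights to the face every frame. Conceptually, that amounts to easing the currently applied weights toward the latest analyzed target, as in this standalone C++ sketch; TickLipSync and InterpSpeed are hypothetical names for illustration, not plugin settings.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

// Five hypothetical viseme channels, matching the sketch above.
using FVisemeWeights = std::array<float, 5>;

// Smooth the currently applied weights toward the latest analyzed target.
void TickLipSync(FVisemeWeights& Current, const FVisemeWeights& Target,
                 float DeltaSeconds, float InterpSpeed = 15.0f)
{
    const float Alpha = std::min(DeltaSeconds * InterpSpeed, 1.0f);
    for (std::size_t i = 0; i < Current.size(); ++i)
    {
        // Move a fraction of the remaining distance each frame, so the
        // mouth eases between poses instead of snapping.
        Current[i] += (Target[i] - Current[i]) * Alpha;
    }
}
```

In a real project this kind of update would run once per animation tick with the frame's delta time, consuming the targets produced by the audio analysis.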

For detailed implementation steps, see the How to use the plugin page.

Additional Resources