How to use the plugin

This guide walks you through the process of setting up Runtime MetaHuman Lip Sync for your MetaHuman characters.

Prerequisites

Before getting started, ensure:

  1. The MetaHuman plugin is enabled in your project
  2. You have at least one MetaHuman character downloaded and available in your project
  3. The Runtime MetaHuman Lip Sync plugin is installed

Additional Plugins: The audio input steps in this guide also rely on the Runtime Audio Importer plugin (used in Step 3 to capture and process audio), so install it as well if you plan to follow them.

Setup Process

Step 1: Locate and modify the face animation Blueprint

You need to modify an Animation Blueprint that will be used for your MetaHuman character's facial animations. The default MetaHuman face Animation Blueprint is located at:

Content/MetaHumans/Common/Face/Face_AnimBP

Face Animation Blueprint

You have several options for implementing the lip sync functionality. The most direct is to open the default Face_AnimBP and make your modifications there; any changes will affect all MetaHuman characters that use this Animation Blueprint.

Note: This approach is convenient but will impact all characters using the default Animation Blueprint.

Important: The Runtime MetaHuman Lip Sync blending can be implemented in any Animation Blueprint asset that has access to a pose containing the facial bones present in the default MetaHuman's Face_Archetype_Skeleton. You're not limited to the options above - these are just common implementation approaches.

Step 2: Event Graph setup

Open your Face Animation Blueprint and switch to the Event Graph. You'll need to create a Runtime Viseme Generator that will process audio data and generate visemes.

  1. Add the Event Blueprint Begin Play node if it doesn't exist already
  2. Add the Create Runtime Viseme Generator node and connect it to the Begin Play event
  3. Save the output as a variable (e.g. "VisemeGenerator") for use in other parts of the graph

Creating Runtime Viseme Generator
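
If you prefer to drive this setup from C++ instead of the Event Graph, the same idea can be sketched as below. This is a minimal sketch, not the plugin's documented C++ API: the URuntimeVisemeGenerator class, its header name, and the CreateRuntimeVisemeGenerator factory are assumptions mirroring the Blueprint node names above, so check the plugin's own headers for the exact spelling.

    // LipSyncDriver.h - minimal C++ sketch of the Event Graph setup above.
    // Assumption: class and function names mirror the Blueprint nodes and may differ in the actual plugin.
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "RuntimeVisemeGenerator.h" // assumed header name from the lip sync plugin
    #include "LipSyncDriver.generated.h"

    UCLASS()
    class ALipSyncDriver : public AActor
    {
        GENERATED_BODY()

    public:
        // Equivalent of the "VisemeGenerator" variable saved in the Event Graph.
        // Keeping it in a UPROPERTY prevents it from being garbage collected.
        UPROPERTY(BlueprintReadOnly, Category = "Lip Sync")
        TObjectPtr<URuntimeVisemeGenerator> VisemeGenerator;

    protected:
        virtual void BeginPlay() override
        {
            Super::BeginPlay();

            // Equivalent of the "Create Runtime Viseme Generator" node wired to Begin Play.
            VisemeGenerator = URuntimeVisemeGenerator::CreateRuntimeVisemeGenerator();
        }
    };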

Step 3: Set up audio input processing

You need to set up a method to process audio input. There are several ways to do this depending on your audio source.

One common approach is capturing from the microphone, which performs lip sync in real time as you speak:

  1. Create a Capturable Sound Wave using Runtime Audio Importer
  2. Before starting to capture audio, bind to the OnPopulateAudioData delegate
  3. In the bound function, call ProcessAudioData from your Runtime Viseme Generator
  4. Start capturing audio from the microphone

Lip Sync During Audio Capture
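
The same flow can also be sketched in C++, continuing the hypothetical ALipSyncDriver actor from Step 2. The capturable sound wave API (CreateCapturableSoundWave, OnPopulateAudioData, StartCapture) comes from Runtime Audio Importer, and the ProcessAudioData parameters are assumptions based on the nodes described above; verify both against the headers of your installed plugin versions.

    // LipSyncDriver.cpp - minimal sketch of microphone capture feeding the viseme generator.
    // Assumptions: the UCapturableSoundWave header path and the ProcessAudioData signature.
    #include "LipSyncDriver.h"
    #include "Sound/CapturableSoundWave.h" // assumed header path in Runtime Audio Importer

    void ALipSyncDriver::StartMicrophoneLipSync()
    {
        // 1. Create a Capturable Sound Wave (Runtime Audio Importer).
        CapturableSoundWave = UCapturableSoundWave::CreateCapturableSoundWave();

        // 2. Bind to OnPopulateAudioData before capture starts so every incoming
        //    chunk of PCM samples is forwarded to the viseme generator.
        CapturableSoundWave->OnPopulateAudioData.AddDynamic(this, &ALipSyncDriver::OnAudioPopulated);

        // 4. Start capturing audio from the default input device (device id 0).
        CapturableSoundWave->StartCapture(0);
    }

    void ALipSyncDriver::OnAudioPopulated(const TArray<float>& PopulatedAudioData)
    {
        // 3. Forward the captured samples to the Runtime Viseme Generator.
        //    The sample rate and channel count are placeholders; match them to your capture settings.
        if (VisemeGenerator)
        {
            VisemeGenerator->ProcessAudioData(PopulatedAudioData, /*SampleRate=*/44100, /*NumChannels=*/1);
        }
    }

For the dynamic delegate binding to work, OnAudioPopulated must be declared as a UFUNCTION() and CapturableSoundWave kept as a UPROPERTY() member on the actor so the sound wave is not garbage collected while capturing.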

Step 4: Anim Graph setup

After setting up the Event Graph, switch to the Anim Graph to connect the viseme generator to the character's animation:

  1. Locate the pose that contains the MetaHuman face (typically from Use cached pose 'Body Pose')
  2. Add the Blend Runtime MetaHuman Lip Sync node
  3. Connect the pose to the Source Pose of the Blend Runtime MetaHuman Lip Sync node
  4. Connect your RuntimeVisemeGenerator variable to the Viseme Generator pin
  5. Connect the output of the Blend Runtime MetaHuman Lip Sync node to the Result pin of the Output Pose

Blend Runtime MetaHuman Lip Sync

Note: The lip sync plugin is designed to work non-destructively with your existing animation setup. It only affects the specific facial bones needed for lip movement, leaving other facial animations intact. This means you can safely integrate it at any point in your animation chain - either before other facial animations (allowing those animations to override lip sync) or after them (letting lip sync blend on top of your existing animations). This flexibility lets you combine lip sync with eye blinking, eyebrow movements, emotional expressions, and other facial animations without conflicts.

Configuration

The Blend Runtime MetaHuman Lip Sync node has configuration options in its properties panel:

  - Interpolation Speed (default: 25): Controls how quickly the lip movements transition between visemes. Higher values result in faster, more abrupt transitions.
  - Reset Time (default: 0.2): The duration in seconds after which the lip sync is reset. This is useful to prevent the lip sync from continuing after the audio has stopped.