How to use the plugin

This guide walks you through the process of setting up Runtime MetaHuman Lip Sync for your MetaHuman characters.

Note: Runtime MetaHuman Lip Sync works with both MetaHuman and custom characters. The plugin supports various character types including:

  • Popular commercial characters (Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, etc.)
  • Characters with FACS-based blendshapes
  • Models using ARKit blendshape standards
  • Characters with Preston Blair phoneme sets
  • Characters using 3ds Max phoneme systems
  • Any character with custom morph targets for facial expressions

For detailed instructions on setting up custom characters, including viseme mapping references for all the above standards, see the Custom character setup guide.

Prerequisites

Before getting started, ensure:

  1. The MetaHuman plugin is enabled in your project
  2. You have at least one MetaHuman character downloaded and available in your project
  3. The Runtime MetaHuman Lip Sync plugin is installed

Additional Plugins: the audio input setup in Step 3 uses the Runtime Audio Importer plugin, so make sure it is installed as well if you plan to capture or import audio at runtime.

Setup Process

Step 1: Locate and modify the face animation Blueprint

You need to modify an Animation Blueprint that will be used for your MetaHuman character's facial animations. The default MetaHuman face Animation Blueprint is located at:

Content/MetaHumans/Common/Face/Face_AnimBP

Face Animation Blueprint

You have several options for implementing the lip sync functionality. One common approach is to open the default Face_AnimBP directly and make your modifications there.

Note: This approach is convenient, but any changes will affect every MetaHuman character that uses the default Animation Blueprint.

Important: The Runtime MetaHuman Lip Sync blending can be implemented in any Animation Blueprint asset that has access to a pose containing the facial bones present in the default MetaHuman's Face_Archetype_Skeleton. You're not limited to the options above - these are just common implementation approaches.

Step 2: Event Graph setup

Open your Face Animation Blueprint and switch to the Event Graph. You'll need to create a Runtime Viseme Generator that will process audio data and generate visemes.

  1. Add the Event Blueprint Begin Play node if it doesn't exist already
  2. Add the Create Runtime Viseme Generator node and connect it to the Begin Play event
  3. Save the output as a variable (e.g. "VisemeGenerator") for use in other parts of the graph

Creating Runtime Viseme Generator
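
If you prefer to drive this setup from C++ rather than the Event Graph, a minimal sketch of the same logic is shown below. The type and factory names used here (URuntimeVisemeGenerator, CreateRuntimeVisemeGenerator) and the class UMyFaceAnimInstance are assumptions inferred from the Blueprint node names in this guide, not confirmed API, so verify them against the plugin's headers before relying on them.

```cpp
// Minimal sketch of the Step 2 Event Graph logic in C++.
// NOTE: URuntimeVisemeGenerator and CreateRuntimeVisemeGenerator are assumed
// names derived from the "Create Runtime Viseme Generator" node; check the
// plugin's headers for the actual class and factory function.
#include "Animation/AnimInstance.h"
#include "MyFaceAnimInstance.generated.h" // generated header for this (hypothetical) class

class URuntimeVisemeGenerator; // assumed plugin class (forward declaration)

UCLASS()
class UMyFaceAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Equivalent of the "VisemeGenerator" Blueprint variable from Step 2.
    UPROPERTY(BlueprintReadOnly, Category = "Lip Sync")
    URuntimeVisemeGenerator* VisemeGenerator = nullptr; // assumed type

    virtual void NativeBeginPlay() override
    {
        Super::NativeBeginPlay();

        // Equivalent of connecting "Create Runtime Viseme Generator" to Begin Play.
        VisemeGenerator = URuntimeVisemeGenerator::CreateRuntimeVisemeGenerator(); // assumed factory
    }
};
```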

Step 3: Set up audio input processing

You need to set up a method to process audio input. There are several ways to do this depending on your audio source.

The following approach performs lip sync in real time while speaking into a microphone:

  1. Create a Capturable Sound Wave using Runtime Audio Importer
  2. Before starting to capture audio, bind to the OnPopulateAudioData delegate
  3. In the bound function, call ProcessAudioData from your Runtime Viseme Generator
  4. Start capturing audio from the microphone

Lip Sync During Audio Capture
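
For reference, here is a hedged C++ sketch of the same capture flow, building on the hypothetical anim instance class from the earlier sketch. The Runtime Audio Importer names used here (UCapturableSoundWave, CreateCapturableSoundWave, StartCapture), the native delegate variant, and the exact ProcessAudioData signature are assumptions based on the node names above; confirm them against both plugins' headers.

```cpp
// Sketch of the Step 3 microphone capture flow (assumed API names).
// #include "Sound/CapturableSoundWave.h" // Runtime Audio Importer (header path assumed)

void UMyFaceAnimInstance::StartMicrophoneLipSync()
{
    // 1. Create a Capturable Sound Wave (Runtime Audio Importer).
    //    In a real implementation, keep a UPROPERTY reference to this object
    //    so it is not garbage collected while capturing.
    UCapturableSoundWave* SoundWave = UCapturableSoundWave::CreateCapturableSoundWave();

    // 2. Before starting capture, bind to the OnPopulateAudioData delegate so
    //    every incoming chunk of PCM data reaches the viseme generator.
    //    (A native delegate variant and its parameter list are assumed here.)
    SoundWave->OnPopulateAudioDataNative.AddWeakLambda(this,
        [this](const TArray<float>& PopulatedAudioData)
        {
            // 3. Feed the captured audio into the Runtime Viseme Generator.
            if (VisemeGenerator)
            {
                VisemeGenerator->ProcessAudioData(PopulatedAudioData); // signature assumed
            }
        });

    // 4. Start capturing audio from the microphone (device index assumed).
    SoundWave->StartCapture(0);
}
```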

Step 4: Anim Graph setup

After setting up the Event Graph, switch to the Anim Graph to connect the viseme generator to the character's animation:

Lip Sync

  1. Locate the pose that contains the MetaHuman face (typically from Use cached pose 'Body Pose')
  2. Add the Blend Runtime MetaHuman Lip Sync node
  3. Connect the pose to the Source Pose of the Blend Runtime MetaHuman Lip Sync node
  4. Connect your VisemeGenerator variable (created in Step 2) to the Viseme Generator pin
  5. Connect the output of the Blend Runtime MetaHuman Lip Sync node to the Result pin of the Output Pose

Blend Runtime MetaHuman Lip Sync

When speech is detected in the audio, your character will animate its lips accordingly:

Lip Sync

Laughter Animation

You can also add laughter animations that will dynamically respond to laughter detected in the audio:

  1. Add the Blend Runtime MetaHuman Laughter node
  2. Connect your VisemeGenerator variable to the Viseme Generator pin
  3. If you're already using lip sync:
    • Connect the output from the Blend Runtime MetaHuman Lip Sync node to the Source Pose of the Blend Runtime MetaHuman Laughter node
    • Connect the output of the Blend Runtime MetaHuman Laughter node to the Result pin of the Output Pose
  4. If using only laughter without lip sync:
    • Connect your source pose directly to the Source Pose of the Blend Runtime MetaHuman Laughter node
    • Connect the output to the Result pin

Blend Runtime MetaHuman Laughter

When laughter is detected in the audio, your character will dynamically animate accordingly:

Laughter

Combining with Body Animations

To apply lip sync and laughter alongside existing body animations without overriding them:

  1. Add a Layered blend per bone node between your body animations and the final output. Make sure Use Attached Parent is true.
  2. Configure the layer setup:
    • Add 1 item to the Layer Setup array
    • Add 3 items to the Branch Filters for the layer, with the following Bone Names:
      • FACIAL_C_FacialRoot
      • FACIAL_C_Neck2Root
      • FACIAL_C_Neck1Root
  3. Make the connections:
    • Existing animations (such as BodyPose) → Base Pose input
    • Facial animation output (from lip sync and/or laughter nodes) → Blend Poses 0 input
    • Layered blend node → Final Result pose

Layered Blend Per Bone

Why this works: The branch filters isolate facial animation bones, allowing lip sync and laughter to blend exclusively with facial movements while preserving original body animations. This matches the MetaHuman facial rig structure, ensuring natural integration.
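
If it helps to see the same layer setup in code form, the sketch below expresses the branch filters from the steps above using the engine structs that back the Layered blend per bone node (FAnimNode_LayeredBoneBlend, FInputBlendPose, FBranchFilter). In practice you would normally configure this in the node's Details panel; the surrounding function is illustrative only.

```cpp
// Illustrative only: the Branch Filters from the steps above, expressed with
// the engine structs that back the "Layered blend per bone" node.
#include "AnimNodes/AnimNode_LayeredBoneBlend.h"

static void ConfigureFacialLayer(FAnimNode_LayeredBoneBlend& LayeredBlend)
{
    // One layer (Blend Poses 0) restricted to the facial bone branches.
    FInputBlendPose FacialLayer;

    for (const FName& BoneName : { FName(TEXT("FACIAL_C_FacialRoot")),
                                   FName(TEXT("FACIAL_C_Neck2Root")),
                                   FName(TEXT("FACIAL_C_Neck1Root")) })
    {
        FBranchFilter Filter;
        Filter.BoneName = BoneName;
        Filter.BlendDepth = 0; // 0 applies the layer at full weight from this bone down
        FacialLayer.BranchFilters.Add(Filter);
    }

    LayeredBlend.LayerSetup.Add(FacialLayer);
}
```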

Note: The lip sync and laughter features are designed to work non-destructively with your existing animation setup. They only affect the specific facial bones needed for mouth movement, leaving other facial animations intact. This means you can safely integrate them at any point in your animation chain - either before other facial animations (allowing those animations to override lip sync/laughter) or after them (letting lip sync/laughter blend on top of your existing animations). This flexibility lets you combine lip sync and laughter with eye blinking, eyebrow movements, emotional expressions, and other facial animations without conflicts.

Configuration

Lip Sync Configuration

The Blend Runtime MetaHuman Lip Sync node has configuration options in its properties panel:

  • Interpolation Speed (default: 25): controls how quickly the lip movements transition between visemes. Higher values result in faster, more abrupt transitions.
  • Reset Time (default: 0.2): the duration in seconds after which the lip sync is reset. This prevents the lip sync from continuing after the audio has stopped.

Laughter Configuration

The Blend Runtime MetaHuman Laughter node has its own configuration options:

  • Interpolation Speed (default: 25): controls how quickly the laughter animation transitions. Higher values result in faster, more abrupt transitions.
  • Reset Time (default: 0.2): the duration in seconds after which the laughter is reset. This prevents the laughter from continuing after the audio has stopped.
  • Max Laughter Weight (default: 0.7): scales the maximum intensity of the laughter animation (0.0 - 1.0).