
How to use the plugin with custom characters

This guide walks you through setting up Runtime MetaHuman Lip Sync for non-MetaHuman characters. The process requires familiarity with animation concepts and rigging. If you need help implementing this for your specific character, you can reach out for professional support at [email protected].

Prerequisites

Before getting started, ensure your character meets these requirements:

  • Has a valid skeleton
  • Contains morph targets (blend shapes) for facial expressions
  • Ideally has 10+ morph targets defining visemes (more visemes = better lip sync quality)

The plugin requires mapping your character's morph targets to the following standard visemes:

Sil -> Silence
PP -> Bilabial plosives (p, b, m)
FF -> Labiodental fricatives (f, v)
TH -> Dental fricatives (th)
DD -> Alveolar plosives (t, d)
KK -> Velar plosives (k, g)
CH -> Postalveolar affricates (ch, j)
SS -> Sibilants (s, z)
NN -> Nasal (n)
RR -> Approximant (r)
AA -> Open vowel (aa)
E -> Mid vowel (e)
IH -> Close front vowel (ih)
OH -> Close-mid back vowel (oh)
OU -> Close back vowel (ou)

Note: If your character has a different set of visemes (which is likely), you don't need exact matches for each viseme. Approximations are often sufficient—for example, mapping your character's SH viseme to the plugin's CH viseme would work effectively since they're closely related postalveolar sounds.
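Since every one of the 15 plugin visemes needs some mapping (exact or approximate), it can help to sanity-check a planned mapping before building assets. The sketch below is plain Python, independent of Unreal; the `SH -> CH` example character is hypothetical:

```python
# The 15 standard visemes the plugin expects (from the list above).
PLUGIN_VISEMES = [
    "Sil", "PP", "FF", "TH", "DD", "KK", "CH", "SS",
    "NN", "RR", "AA", "E", "IH", "OH", "OU",
]

def check_mapping(mapping):
    """Return the plugin visemes a character mapping leaves uncovered.

    `mapping` maps each plugin viseme to the character's closest morph
    target; approximate and repeated targets are fine.
    """
    return [v for v in PLUGIN_VISEMES if v not in mapping]

# Hypothetical character whose SH shape approximates the plugin's CH:
example = {v: v for v in PLUGIN_VISEMES}
example["CH"] = "SH"
assert check_mapping(example) == []  # every required viseme is covered
```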

Viseme mapping reference

Here are mappings between common viseme systems and the plugin's required visemes:

ARKit provides a comprehensive set of blendshapes for facial animation, including several mouth shapes. Here's how to map them to the RuntimeMetaHumanLipSync visemes:

RuntimeMetaHumanLipSync viseme -> ARKit equivalent; notes:

Sil -> mouthClose; the neutral/rest position
PP -> mouthPressLeft + mouthPressRight; for bilabial sounds, use both press shapes together
FF -> lowerLipBiteLeft + lowerLipBiteRight; these create the "f/v" lip-to-teeth contact
TH -> tongueOut; ARKit has direct tongue control
DD -> jawOpen (mild) + tongueUp; combine slight jaw opening with tongue position
KK -> mouthLeft or mouthRight (mild); subtle mouth corner pull approximates velar sounds
CH -> jawOpen (mild) + mouthFunnel (mild); combine for postalveolar sounds
SS -> mouthFrown; use a slight frown for sibilants
NN -> jawOpen (very mild) + mouthClose; almost closed mouth with slight jaw opening
RR -> mouthPucker (mild); subtle rounding for r-sounds
AA -> jawOpen + mouthOpen; combined for wide open vowel sounds
E -> jawOpen (mild) + mouthSmile; mid-open position with slight smile
IH -> mouthSmile (mild); slight spreading of lips
OH -> mouthFunnel; rounded open shape
OU -> mouthPucker; tightly rounded lips
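Conceptually, each plugin viseme becomes a weighted combination of ARKit blendshapes. The sketch below (plain Python, outside Unreal) encodes the table above; the numeric weights are illustrative assumptions, with "mild" rendered as a lower value you would tune per character:

```python
# Plugin viseme -> ARKit blendshape weights. Values are illustrative
# assumptions; "mild" entries from the table are given lower weights.
ARKIT_COMBOS = {
    "Sil": {"mouthClose": 1.0},
    "PP":  {"mouthPressLeft": 1.0, "mouthPressRight": 1.0},
    "FF":  {"lowerLipBiteLeft": 1.0, "lowerLipBiteRight": 1.0},
    "TH":  {"tongueOut": 1.0},
    "DD":  {"jawOpen": 0.3, "tongueUp": 1.0},
    "KK":  {"mouthLeft": 0.3},
    "CH":  {"jawOpen": 0.3, "mouthFunnel": 0.3},
    "SS":  {"mouthFrown": 0.3},
    "NN":  {"jawOpen": 0.15, "mouthClose": 1.0},
    "RR":  {"mouthPucker": 0.3},
    "AA":  {"jawOpen": 1.0, "mouthOpen": 1.0},
    "E":   {"jawOpen": 0.3, "mouthSmile": 1.0},
    "IH":  {"mouthSmile": 0.3},
    "OH":  {"mouthFunnel": 1.0},
    "OU":  {"mouthPucker": 1.0},
}

def blendshape_weights(viseme, amount):
    """Scale a viseme's blendshape combo by a lip-sync weight in 0..1."""
    return {shape: w * amount for shape, w in ARKIT_COMBOS[viseme].items()}
```

For example, a half-strength AA would drive both jawOpen and mouthOpen at 0.5.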

Creating a custom Pose Asset

Follow these steps to create a custom pose asset for your character that will be used with the Blend Runtime MetaHuman Lip Sync node:

1. Locate your character's Skeletal Mesh

Find the skeletal mesh that contains the morph targets (blend shapes) you want to use for lip sync animation. This might be a full-body mesh or just a face mesh, depending on your character's design.

2. Verify available Morph Targets

Ensure your skeletal mesh has appropriate morph targets that can be used as visemes for lip sync animation. Most characters with facial animation support should have some phoneme/viseme morph targets.

Example of morph targets in a character

3. Create a Reference Pose Animation

  1. Go to Create Asset -> Create Animation -> Reference Pose
  2. Enter a descriptive name for the animation sequence and save it in an appropriate location
  3. The created Animation Sequence will open automatically, showing an empty animation playing in a loop

Creating a reference pose asset
Naming the reference pose asset

  4. Click the Pause button to stop the animation playback for easier editing

Pausing animation playback

4. Edit the Animation Sequence

  1. Click on Edit in Sequencer -> Edit with FK Control Rig
  2. In the Bake to Control Rig dialog, click the Bake to Control Rig button without changing any settings

Editing with FK Control Rig
Baking to Control Rig

  3. The editor will switch to Animation Mode with the Sequencer tab open
  4. Set the View Range End Time to 0016 (which will automatically set Working Range End to 0016 as well)
  5. Drag the slider's right edge to the right end of the sequencer window

5. Prepare the Animation Curves

  1. Return to the Animation Sequence asset and locate the morph targets in the Curves list (if they're not visible, close and reopen the Animation Sequence asset)
  2. Remove any morph targets that aren't related to visemes or mouth movements you want to use for lip sync

6. Plan your viseme mapping

Create a mapping plan to match your character's visemes to the plugin's required set. For example:

Sil -> Sil
PP -> FV
FF -> FV
TH -> TH
DD -> TD
KK -> KG
CH -> CH
SS -> SZ
NN -> NL
RR -> RR
AA -> AA
E -> E
IH -> IH
OH -> O
OU -> U

Note that it's acceptable to have repeated mappings when your character's viseme set doesn't have exact matches for every required viseme.
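A mapping plan like the example above is easy to express as a plain dictionary, which also makes the repeated mappings visible. This is a standalone Python sketch of the example plan, not plugin API:

```python
from collections import Counter

# The example mapping plan above: plugin viseme -> character viseme.
MAPPING = {
    "Sil": "Sil", "PP": "FV", "FF": "FV", "TH": "TH", "DD": "TD",
    "KK": "KG", "CH": "CH", "SS": "SZ", "NN": "NL", "RR": "RR",
    "AA": "AA", "E": "E", "IH": "IH", "OH": "O", "OU": "U",
}

# Character visemes that serve more than one plugin viseme:
reused = [v for v, n in Counter(MAPPING.values()).items() if n > 1]
# Here "FV" covers both PP and FF, which the plugin tolerates.
```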

7. Animate each viseme

  1. For each viseme, animate the relevant morph target curves from 0.0 to 1.0
  2. Start each viseme animation on a different frame
  3. Configure additional curves as needed (jaw/mouth opening, tongue position, etc.) to create natural-looking viseme shapes
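The layout in step 7 (each viseme's curves keyed to 1.0 on its own frame, 0.0 everywhere else) can be sketched outside the editor like this. The frame-per-viseme assignment is an assumption about how you lay out your own sequence, not something the plugin enforces:

```python
VISEMES = ["Sil", "PP", "FF", "TH", "DD", "KK", "CH", "SS",
           "NN", "RR", "AA", "E", "IH", "OH", "OU"]

def viseme_keyframes(visemes):
    """Frame -> {curve: value}: viseme i peaks at 1.0 on frame i only."""
    frames = {}
    for frame, viseme in enumerate(visemes):
        frames[frame] = {v: (1.0 if v == viseme else 0.0) for v in visemes}
    return frames

keys = viseme_keyframes(VISEMES)
# On frame 4 only DD is fully engaged; every other curve stays at 0.0.
assert keys[4]["DD"] == 1.0 and keys[4]["PP"] == 0.0
```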

8. Create a Pose Asset

  1. Go to Create Asset -> Pose Asset -> Current Animation
  2. Enter a descriptive name for the Pose Asset and save it in an appropriate location
  3. The created Pose Asset will open automatically, showing poses like Pose_0, Pose_1, etc., each corresponding to a viseme
  4. Preview the viseme weights to ensure they work as expected

Creating a pose asset
Naming the pose asset
Pose asset with visemes

9. Finalize the Pose Asset

  1. Rename each pose to match the viseme names from the Prerequisites section
  2. Delete any unused poses
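The rename in step 9 amounts to pairing the generated pose names with your viseme list in order. A small sketch of that pairing, assuming the poses were created in the same order you animated the visemes:

```python
VISEMES = ["Sil", "PP", "FF", "TH", "DD", "KK", "CH", "SS",
           "NN", "RR", "AA", "E", "IH", "OH", "OU"]

# Pose assets generate names like Pose_0, Pose_1, ... in frame order.
generated = [f"Pose_{i}" for i in range(len(VISEMES))]
rename_map = dict(zip(generated, VISEMES))
# Pose_0 was keyed on the Sil frame, so it becomes the Sil pose.
assert rename_map["Pose_0"] == "Sil"
```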

Setting up audio handling and blending

Once your pose asset is ready, you need to set up the audio handling and blending nodes:

  1. Locate or create your character's Animation Blueprint
  2. Set up the audio handling and blending following the same steps as documented in the standard plugin setup guide
  3. In the Blend Runtime MetaHuman Lip Sync node, select your custom Pose Asset instead of the default MetaHuman pose asset

Selecting the custom pose asset

Combining with body animations

If you want to perform lip sync alongside other body animations:

  1. Follow the same steps as documented in the standard plugin guide
  2. Make sure to provide the correct bone names for your character's neck skeleton instead of using the MetaHuman bone names

Results

Here are examples of custom characters using this setup:

Example 1: Lip sync with custom character

Example 2: Lip sync with different viseme system

Example 3: Lip sync with different viseme system

The quality of lip sync largely depends on the specific character and how well its visemes are set up. The examples above demonstrate the plugin working with different types of custom characters with distinct viseme systems.