Ynnk Voice Lipsync
https://www.unrealengine.com/marketplace/en-US/product/ynnk-voice-lipsync
Create lip-sync animation from audio.
This plugin uses a voice recognition engine to generate lip-sync animation from SoundWave assets or PCM audio data. The animation is saved as curves in data assets and can be played back at runtime together with the audio. This approach makes it easy to achieve natural-looking lip-sync animation without subtitles.
Additional feature: recognize microphone input (speech-to-text) at runtime.
Unlike text-to-lipsync solutions, this is a true voice-to-lipsync plugin. You don't need subtitles to get lips animated, and the resulting animation matches the actual speech much more closely than subtitle-based solutions.
Lip-sync can be generated at runtime, but not in real time, i.e. it does not work with a microphone or other streamed audio.
Fully supported languages: English, Chinese. Also supported: Russian, Italian, German, French, Spanish, Portuguese, Polish.
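To illustrate the curve-based playback described above, here is a minimal sketch using only standard engine APIs. It assumes a lip-sync value has already been extracted into a UCurveFloat and that the character has a morph target named "JawOpen"; both names are hypothetical placeholders, not the plugin's actual API.

```cpp
// Hedged sketch: sampling a lip-sync float curve at the current audio
// playback position and pushing the value to a morph target.
// "JawOpenCurve" and the "JawOpen" morph target are hypothetical placeholders.
#include "Components/SkeletalMeshComponent.h"
#include "Curves/CurveFloat.h"

void ApplyLipsyncFrame(USkeletalMeshComponent* Mesh,
                       const UCurveFloat* JawOpenCurve,
                       float AudioPlaybackTime)
{
    if (!Mesh || !JawOpenCurve)
    {
        return;
    }

    // Sample the curve at the audio playback time (in seconds)
    // and apply the weight to the corresponding morph target.
    const float JawOpen = JawOpenCurve->GetFloatValue(AudioPlaybackTime);
    Mesh->SetMorphTarget(TEXT("JawOpen"), JawOpen);
}
```

In practice the plugin plays its own curves from data assets together with the audio; the sketch only shows the underlying idea of sampling curves at the playback position.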
Technical Details
Features:
- subtitles aren't needed;
- can generate lip-sync at runtime for loaded/TTS audio;
- can generate Anim Sequence assets with lip-sync stored in curves;
- lip-sync via animation curves (universal) or morph targets (when possible) (see the sketch after this list);
- asynchronous audio recognition and lip-sync generation;
- (beta!) create lip-sync at runtime on PC and Android using a remote server;
- additional feature: recognize microphone input (speech-to-text) at runtime (Windows only).
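As a sketch of how curves baked into a generated Anim Sequence might be consumed, an Anim Instance can read a lip-sync curve by name and expose it to the AnimGraph. The class name and the "Viseme_AA" curve name below are hypothetical assumptions, not part of the plugin; actual curve names depend on your character setup.

```cpp
// Hedged sketch: exposing a baked lip-sync curve value to the AnimGraph.
// "UMyLipsyncAnimInstance" and "Viseme_AA" are hypothetical names.
#include "Animation/AnimInstance.h"
#include "MyLipsyncAnimInstance.generated.h" // hypothetical project header

UCLASS()
class UMyLipsyncAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Read by the AnimGraph (e.g. to blend a viseme pose or drive a bone).
    UPROPERTY(BlueprintReadOnly, Category = "Lipsync")
    float VisemeAA = 0.f;

protected:
    virtual void NativeUpdateAnimation(float DeltaSeconds) override
    {
        Super::NativeUpdateAnimation(DeltaSeconds);

        // Curves stored in the playing Anim Sequence are queried by name.
        VisemeAA = GetCurveValue(TEXT("Viseme_AA"));
    }
};
```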
Code Modules:
- YnnkVoiceLipsync (Runtime)
- YnnkVoiceLipsyncUncooked (UncookedOnly)
Number of Blueprints: 0
Number of C++ Classes: 8
Network Replicated: No
Supported Development Platforms: Windows x64
Supported Target Build Platforms: Windows x64, Android