An Unreal Engine plugin that helps you apply AI and ML techniques in your Unreal Engine project.
News | Document | Download | Demo Project | M4U Remoting (Android App) | Speech Model Packages
Free Edition vs Commercial Edition
MediaPipe4U provides a suite of libraries and tools that allow you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques to your Unreal Engine projects. You can integrate these solutions into your UE project immediately and customize them to meet your needs. The suite includes motion capture, facial expression capture for your 3D avatar, text-to-speech (TTS), speech recognition (ASR), and more. All features are real-time, offline, low-latency, and easy to use.
- [new] 🌈 The free version can now package all features, including voice and facial expression capture.
- [new] 🌈 Ollama Support: Integrated with Ollama for large language model (LLM) inference, enabling support for various LLMs such as DeepSeek, LLaMA, Phi, Qwen, QwQ, and more (a minimal Ollama call sketch follows this list).
- [new] 🌈 Conversation Component `LLMSpeechChatRuntime`: Integrates LLM, TTS, and ASR, making it easy to implement chatbot functionality in Blueprints.
- [new] 🌈 New TTS Support: Added support for Kokoro and Melo.
- [new] 🌈 New ASR Support: Added support for FunASR (with hotword support), FireRedASR (an ASR model open-sourced by Xiaohongshu), and MoonShine (English).
- [new] 🌈 Transformer-based TTS Model: Added support for F5-TTS, featuring voice cloning capabilities (runs on DirectML, compatible with AMD/Nvidia GPUs).
- [new] 🌈 Voice Wake-up: Introduced lightweight voice wake-up inference, supporting custom wake words for ASR activation, as well as independent voice command wake words.
- [new] 🔥 Updated Google MediaPipe to the latest version.
- [new] 🔥 Added support for Unreal Engine 5.5.
- [new] Added a C++ interface, allowing C++ developers to implement their own pose-estimation algorithms to replace Google MediaPipe (a hypothetical interface sketch follows this list).
- [new] 🌈 Integrated NvAR Pose Tracking, enabling switching between MediaPipe and Nvidia Maxine algorithms.
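For reference, Ollama exposes a local HTTP API (by default on port 11434, with a `/api/generate` endpoint that returns a JSON payload containing a `response` field). The plugin ships its own LLM integration, so the following is only a minimal sketch of how that endpoint can be called from Unreal C++ using the engine's built-in HTTP module; the function name, model name, and prompt are placeholders, and your module's Build.cs must list "HTTP" as a dependency.

```cpp
// Minimal illustration of the Ollama HTTP API from Unreal C++.
// Not the plugin's own API; function and argument names are placeholders.
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

void SendOllamaPrompt(const FString& Model, const FString& Prompt)
{
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(TEXT("http://localhost:11434/api/generate")); // Ollama's default local port
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));

    // Naive JSON construction for illustration only; real code should build the
    // body with FJsonObject so quotes in the prompt are escaped correctly.
    // "stream": false returns the whole completion in one JSON response.
    const FString Body = FString::Printf(
        TEXT("{\"model\":\"%s\",\"prompt\":\"%s\",\"stream\":false}"), *Model, *Prompt);
    Request->SetContentAsString(Body);

    Request->OnProcessRequestComplete().BindLambda(
        [](FHttpRequestPtr, FHttpResponsePtr Response, bool bSucceeded)
        {
            if (bSucceeded && Response.IsValid())
            {
                // The returned JSON contains the generated text in its "response" field.
                UE_LOG(LogTemp, Log, TEXT("Ollama replied: %s"), *Response->GetContentAsString());
            }
        });
    Request->ProcessRequest();
}
```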
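The custom pose-estimation C++ interface mentioned above is documented on the plugin site; its real class names are not reproduced here. Purely as an illustration of the idea, a pluggable backend that replaces MediaPipe might have a shape like the sketch below. All names are invented for this example and are not the plugin's actual API.

```cpp
// Hypothetical sketch only: an invented interface showing the general shape of a
// pluggable pose-estimation backend. The real MediaPipe4U interface differs.
#include "CoreMinimal.h"

// One detected body landmark in camera space, with a confidence score.
struct FExamplePoseLandmark
{
    FVector Position = FVector::ZeroVector;
    float Visibility = 0.0f;
};

// A custom backend implements this interface and is queried for landmarks every frame.
class IExamplePoseEstimator
{
public:
    virtual ~IExamplePoseEstimator() = default;

    // Prepare the model (load weights, allocate CPU/GPU resources).
    virtual bool Initialize() = 0;

    // Estimate landmarks from a single BGRA video frame; returns false if no person is found.
    virtual bool Estimate(const TArray<FColor>& FramePixels, int32 Width, int32 Height,
                          TArray<FExamplePoseLandmark>& OutLandmarks) = 0;

    // Release any native resources.
    virtual void Shutdown() = 0;
};
```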
The update includes numerous features and requires complex testing; please be patient.
- The new Google Holistic Task API does not support GPU inference. As a result, the Android platform now relies on CPU inference, while Windows continues to use CPU inference as before.
- Starting from Unreal Engine 5.4, the built-in OpenCV plugin no longer includes precompiled libraries (DLL files). Because M4U depends on the OpenCV plugin, the editor will attempt to download the OpenCV source code and compile it on your machine the first time it launches. This process can take a significant amount of time and may make the engine appear stuck at 75% during loading. Please be patient and check the logs in the Saved directory under the project root folder to verify whether the process has completed. Users in China may need a VPN connection. Alternatively, you can follow the steps outlined in #166 to resolve this manually.
For the release notes, see below:
💚 All features are pure C++; no Python or external programs are required.
- Motion Capture
  - Motion of the body
  - Motion of the fingers
  - Movement
  - Drive 3D avatar
  - Real-time
  - RGB webcam supported
  - ControlRig supported
- Face Capture
  - Facial expressions
  - ARKit blendshape compatible (52 expressions)
  - Live Link compatible (see the sketch after this feature list)
  - Real-time
  - RGB webcam supported
- Multi-source Capture
  - RGB webcam
  - Video file
  - Image
  - Live stream (RTMP/RTSP)
  - Android device (M4U Remoting)
- LLM
  - Offline
  - CPU/GPU inference
  - Multiple models
    - LLaMA/LLaMA2
    - ChatGLM (work in progress)
- TTS
  - Offline
  - Real-time
  - Lip-sync
  - Multiple models
    - PaddleSpeech: Chinese, English
    - Bark: 13 languages (work in progress)
- ASR
  - Offline
  - Real-time
  - Multiple models
    - FunASR: Chinese
    - Whisper: 99 languages
- Animation Data Export
  - BVH export
- Pure plugins
  - No external programs required
  - All in Unreal Engine
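Because the face capture is Live Link compatible and drives ARKit-style blendshape curves, the usual workflow is a Live Link Pose node in an Animation Blueprint. Purely as an illustration of how such a curve can also be read in C++ through the standard Live Link client API, here is a hedged sketch: the subject name "M4UFace" is a placeholder (not necessarily the name the plugin publishes), "JawOpen" is a standard ARKit blendshape name, and your module's Build.cs must list "LiveLinkInterface" as a dependency.

```cpp
// Hedged sketch: read an ARKit-style blendshape curve from a Live Link subject in C++.
// The subject name is a placeholder; the usual route is a Live Link Pose node in an AnimBP.
#include "CoreMinimal.h"
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "LiveLinkTypes.h"
#include "Roles/LiveLinkBasicRole.h"

float GetBlendshapeCurve(FName SubjectName, FName CurveName)
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return 0.0f; // Live Link is not running.
    }

    ILiveLinkClient& Client = Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // Evaluate the latest frame for the subject using the basic role (named float properties).
    FLiveLinkSubjectFrameData FrameData;
    if (!Client.EvaluateFrame_AnyThread(SubjectName, ULiveLinkBasicRole::StaticClass(), FrameData))
    {
        return 0.0f;
    }

    const FLiveLinkBaseStaticData* StaticData = FrameData.StaticData.Cast<FLiveLinkBaseStaticData>();
    const FLiveLinkBaseFrameData* Frame = FrameData.FrameData.Cast<FLiveLinkBaseFrameData>();
    if (!StaticData || !Frame)
    {
        return 0.0f;
    }

    // Curve names (e.g. "JawOpen") live in the static data; values live in the frame data.
    const int32 Index = StaticData->PropertyNames.IndexOfByKey(CurveName);
    return Frame->PropertyValues.IsValidIndex(Index) ? Frame->PropertyValues[Index] : 0.0f;
}

// Example usage (placeholder subject name):
// const float JawOpen = GetBlendshapeCurve("M4UFace", "JawOpen");
```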
| Unreal Engine | China Site | Global Site | Update |
|---|---|---|---|
| UE 5.0 | 奶牛快传 (CowTransfer) | OneDrive | 2023-10-10 |
| UE 5.1 | 百度网盘 (Baidu Netdisk) | OneDrive | 2023-05-24 |
| UE 5.2 | 百度网盘 (Baidu Netdisk) | OneDrive | 2023-05-24 |
| UE 5.4 | 百度网盘 (Baidu Netdisk) | OneDrive | 2023-05-24 |
Because the plugin is precompiled and contains a large number of C++ link symbols and debug symbols, it occupies about 10 GB of disk space after decompression (most files are UE-generated binaries in the Intermediate folder).
You don't need to worry about this: it is only disk usage during development. After the project is packaged, the plugin occupies only about 300 MB (mostly GStreamer dynamic libraries and speech models).
Currently, M4U supports Windows and Android (Linux support is coming soon).
| Plugins (Modules) | Windows | Android | Linux |
|---|---|---|---|
| MediaPipe4U | ✔️ | ✔️ | Coming Soon |
| MediaPipe4ULiveLink | ✔️ | ✔️ | Coming Soon |
| GStreamer | ✔️ | ❌ | Coming Soon |
| MediaPipe4UGStreamer | ✔️ | ❌ | Coming Soon |
| MediaPipe4UBVH | ✔️ | ❌ | Coming Soon |
| MediaPipe4USpeech | ✔️ | ❌ | Coming Soon |
The license file will be published in the discussion, and the plugin package will automatically include a license file.
| Android Version | Download Link | Update |
|---|---|---|
| Android 7.0 or later | Download | 2023-04-21 |
About M4U Remoting
Note
This is a commercial-license-exclusive feature: capturing facial expressions from an Android device.
The free license only supports using this feature in the UE Editor; it cannot be packaged.
M4U Remoting Document
Please clone this repository with a Git client to get the demo project (requires Git and Git LFS):
git lfs clone https://gitlab.com/endink/mediapipe4u-demo.git
The demo project does not contain the plugin; you need to download the plugin and copy its contents into the project's Plugins folder to run it.
Video Tutorials (English)
Video Tutorials (Chinese)
If you have any questions, please check the FAQ first; the problems listed there may also be yours. If you can't find an answer in the FAQ, please post an issue. Private messages or emails may cause the question to be missed.
Since the Windows version of MediaPipe does not support GPU inference, Windows relies on the CPU for human pose estimation (see the MediaPipe official site for more details).
Evaluation
Frame Rate: 18-24 fps
CPU usage: 20% (based on the demo project)
Testing Environment
CPU: AMD 3600
RAM: 32GB
GPU: NVIDIA GTX 1660 Super
We acknowledge the contributions of the following open-source projects and frameworks, which have significantly influenced the development of M4U:
- M4U utilizes MediaPipe for motion and facial capture.
- M4U utilizes the NVIDIA Maxine AR SDK for advanced facial tracking and capture.
- M4U utilizes PaddleSpeech for text-to-speech (TTS) synthesis.
- M4U utilizes FunASR for automatic speech recognition (ASR).
- M4U utilizes whisper.cpp as an ASR solution.
- M4U utilizes Sherpa Onnx to enhance ASR capabilities.
- M4U utilizes F5-TTS-ONNX for exporting the F5-TTS model.
- M4U utilizes GStreamer to facilitate video processing and real-time streaming.
- M4U utilizes code from PowerIK for inverse kinematics (IK) and ground adaptation.
- M4U utilizes concepts from Kalidokit in the domain of motion capture.
- M4U utilizes code from wongfei to enhance GStreamer and MediaPipe interoperability.
We extend our gratitude to the developers and contributors of these projects for their valuable innovations and open-source contributions, which have greatly facilitated the development of MediaPipe4U.