WAN 2.2 Animate: Character Animation and Replacement
Table of Contents
- What is WAN 2.2 Animate?
- Overview Table
- Key Features of WAN 2.2 Animate
- 1. Two Powerful Modes
- 2. Realistic Facial Expressions
- 3. Accurate Motion Replication
- 4. Environmental Matching
- 5. Open Source and Flexible
- How to Use WAN 2.2 Animate
- 1. Installation
- 2. Preprocessing the Input
- 3. Run in Animation Mode
- 4. Run in Replacement Mode
- FAQs
- Final Thoughts
What is WAN 2.2 Animate?
WAN 2.2 Animate is a powerful tool developed by Tongyi Lab (Alibaba) that allows you to animate any character or replace characters in an existing video. It works by capturing a performer’s facial expressions and body movements from a reference video and applying them to a static character image. This results in smooth, realistic character animations.
Additionally, it can replace characters in a video while preserving their original expressions, lighting, and color tones, ensuring that the new character blends naturally into the scene. This makes WAN 2.2 Animate highly useful for creating animated videos, entertainment content, marketing clips, and more.

Overview Table
| Feature | Description |
|---|---|
| Developer | Tongyi Lab, Alibaba |
| Primary Function | Character animation and character replacement |
| Input Types | Static character image + reference video |
| Modes | Animation Mode, Replacement Mode |
| Output Quality | High-fidelity videos with natural movement and expressions |
| Lighting Adjustment | Automatic matching of original scene lighting |
| Integration | Available on Hugging Face, ModelScope, GitHub |
Key Features of WAN 2.2 Animate
1. Two Powerful Modes
- Animation Mode: Generates an animated video by applying a performer’s movements to a static character image.
- Replacement Mode: Replaces a character in an existing video with another character while preserving lighting and environmental tone.
2. Realistic Facial Expressions
Captures intricate facial details from a reference video and applies them to the animated character for natural-looking results.
3. Accurate Motion Replication
Uses a spatially aligned skeleton system to replicate body movements with precision.
4. Environmental Matching
Includes a Relighting LoRA module to automatically adjust the lighting and colors of the generated character, so it blends seamlessly with the video background.
5. Open Source and Flexible
- Open-source model weights and code are available.
- Works with multiple platforms such as Hugging Face and ModelScope.
- Can be integrated into automated workflows using Python scripts (see the sketch below).
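As a minimal illustration of that last point, the sketch below drives the generate.py entry point from Python via subprocess. The flags and paths are the Animation Mode example values used later in this guide, not additional API surface:
```python
# Minimal sketch: calling the WAN 2.2 Animate CLI from a Python workflow.
# The flags and paths below are the Animation Mode examples from this guide;
# swap in your own checkpoint directory and preprocessed inputs.
import subprocess

def run_step(args):
    """Run one pipeline step and raise if it fails."""
    subprocess.run(args, check=True)

run_step([
    "python", "generate.py",
    "--task", "animate-14B",
    "--ckpt_dir", "./Wan2.2-Animate-14B/",
    "--src_root_path", "./examples/wan_animate/animate/process_results/",
    "--refert_num", "1",
])
```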
How to Use WAN 2.2 Animate
1. Installation
Clone the repository and install its dependencies:
```bash
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
pip install -r requirements.txt
```
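The commands in the following steps assume the model weights are available locally under ./Wan2.2-Animate-14B/. As a minimal sketch, they can be fetched with the huggingface_hub client; the repo id Wan-AI/Wan2.2-Animate-14B is an assumption here, so confirm it on the project’s Hugging Face page (the weights are also hosted on ModelScope):
```python
# Minimal sketch: fetch the model weights used by the commands below.
# Assumption: the weights are published under the Hugging Face repo id
# "Wan-AI/Wan2.2-Animate-14B"; verify on the project page before running.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Wan-AI/Wan2.2-Animate-14B",  # assumed repo id
    local_dir="./Wan2.2-Animate-14B",     # matches --ckpt_dir in later steps
)
```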
2. Preprocessing the Input
Before running the animation, preprocess the reference video and character image.
For Animation Mode:
```bash
python ./wan/modules/animate/preprocess/preprocess_data.py \
    --ckpt_path ./Wan2.2-Animate-14B/process_checkpoint \
    --video_path ./examples/wan_animate/animate/video.mp4 \
    --refer_path ./examples/wan_animate/animate/image.jpeg \
    --save_path ./examples/wan_animate/animate/process_results \
    --resolution_area 1280 720 \
    --retarget_flag \
    --use_flux
```
For Replacement Mode:
```bash
python ./wan/modules/animate/preprocess/preprocess_data.py \
    --ckpt_path ./Wan2.2-Animate-14B/process_checkpoint \
    --video_path ./examples/wan_animate/replace/video.mp4 \
    --refer_path ./examples/wan_animate/replace/image.jpeg \
    --save_path ./examples/wan_animate/replace/process_results \
    --resolution_area 1280 720 \
    --iterations 3 \
    --replace_flag
```
3. Run in Animation Mode
Generate animated video output:
```bash
python generate.py --task animate-14B --ckpt_dir ./Wan2.2-Animate-14B/ \
    --src_root_path ./examples/wan_animate/animate/process_results/ \
    --refert_num 1
```
4. Run in Replacement Mode
Generate character replacement video output:
```bash
python generate.py --task animate-14B --ckpt_dir ./Wan2.2-Animate-14B/ \
    --src_root_path ./examples/wan_animate/replace/process_results/ \
    --refert_num 1 --replace_flag --use_relighting_lora
```
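To tie the steps together, here is a minimal Python sketch that chains Replacement Mode preprocessing and generation via subprocess, reusing the exact paths and flags from the commands above:
```python
# Minimal sketch: chain Replacement Mode preprocessing and generation.
# All paths and flags mirror the example commands in this guide.
import subprocess

BASE = "./examples/wan_animate/replace"

# Step 1: preprocess the source video and the reference character image.
subprocess.run([
    "python", "./wan/modules/animate/preprocess/preprocess_data.py",
    "--ckpt_path", "./Wan2.2-Animate-14B/process_checkpoint",
    "--video_path", f"{BASE}/video.mp4",
    "--refer_path", f"{BASE}/image.jpeg",
    "--save_path", f"{BASE}/process_results",
    "--resolution_area", "1280", "720",
    "--iterations", "3",
    "--replace_flag",
], check=True)

# Step 2: generate the character-replacement video, with relighting enabled.
subprocess.run([
    "python", "generate.py",
    "--task", "animate-14B",
    "--ckpt_dir", "./Wan2.2-Animate-14B/",
    "--src_root_path", f"{BASE}/process_results/",
    "--refert_num", "1",
    "--replace_flag",
    "--use_relighting_lora",
], check=True)
```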
FAQs
Q1. What is the difference between Animation Mode and Replacement Mode?
- Animation Mode adds movement to a static character image using a reference video.
- Replacement Mode swaps a character in the original video with a new one while preserving lighting and motion.
Q2. What kind of input do I need? You need:
- A reference video with the desired movement and expressions.
- A static character image that will be animated or used for replacement.
Q3. Can WAN 2.2 Animate work on consumer-grade GPUs? Yes. It is optimized to run on GPUs like the RTX 4090 for 720p video generation at 24fps.
Q4. Is it open-source? Yes, WAN 2.2 Animate is open-source with full model weights and inference code available on GitHub, Hugging Face, and ModelScope.
Q5. Does it support lighting adjustments automatically? Yes. The Relighting LoRA module ensures the generated characters match the scene’s original lighting and tone for natural integration.
Final Thoughts
WAN 2.2 Animate provides an efficient way to create realistic character animations and perform character replacement in videos. With its dual modes and environmental matching features, it is suitable for creators, marketers, and developers looking to produce high-quality animated content. Its open-source availability makes it accessible for experimentation and integration into diverse workflows.