HunyuanWorld-Voyager is a next-generation video diffusion framework developed by Tencent-Hunyuan for generating world-consistent 3D scene videos from a single input image. By leveraging user-defined camera paths, it enables immersive scene exploration and supports controllable video synthesis with high realism. The system jointly produces aligned RGB and depth video sequences, making it directly applicable to 3D reconstruction tasks. At its core, Voyager integrates a world-consistent video diffusion model with an efficient long-range world exploration engine powered by auto-regressive inference. To support training, the team built a scalable data engine that automatically curates large video datasets with camera pose estimation and metric depth prediction. As a result, Voyager delivers state-of-the-art performance on world exploration benchmarks while maintaining photometric, style, and 3D consistency.
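Because the RGB and depth sequences are aligned per frame, each frame can be back-projected into a colored point cloud with standard pinhole-camera math. The sketch below is illustrative only (it is not Voyager's actual API), and the intrinsics `fx, fy, cx, cy` are hypothetical placeholders:

```python
import numpy as np

def rgbd_to_pointcloud(rgb, depth, fx, fy, cx, cy):
    """Back-project per-pixel metric depth into camera-space 3D points.

    rgb:   (H, W, 3) color image
    depth: (H, W) metric depth map aligned to rgb
    fx, fy, cx, cy: pinhole intrinsics (assumed, for illustration)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # drop pixels with no depth estimate
    return points[valid], colors[valid]

# Toy 2x2 frame: every pixel at depth 1.0, principal point at (1, 1).
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
depth = np.ones((2, 2), dtype=np.float32)
pts, cols = rgbd_to_pointcloud(rgb, depth, fx=1.0, fy=1.0, cx=1.0, cy=1.0)
```

Accumulating such per-frame clouds in a shared world frame (using the known camera poses) is what makes the RGB-D output directly usable for 3D reconstruction.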
## Features
- Single-image to video generation with controllable camera trajectories for world exploration.
- RGB-D aligned outputs for direct 3D reconstruction and point-cloud export.
- World-consistent video diffusion ensuring global coherence across frames.
- Long-range exploration engine with world cache and smooth auto-regressive inference.
- Scalable training data engine that automatically processes large-scale videos without manual 3D annotation.
- Benchmark-leading performance on WorldScore with superior content alignment, style, and 3D consistency.
- Multi-GPU parallel inference support via xDiT for efficient large-scale video generation.
- Interactive Gradio demo allowing users to upload images, define camera paths, and generate explorable 3D videos in real time.
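A user-defined camera trajectory of the kind referenced above is typically just a sequence of camera poses. As a minimal sketch (the exact pose format Voyager expects is an assumption here; `orbit_trajectory` is a hypothetical helper), a circular orbit around the scene can be generated as camera-to-world 4x4 matrices:

```python
import numpy as np

def orbit_trajectory(n_frames, radius, height=0.0):
    """Camera-to-world 4x4 poses orbiting the origin, each looking at the center."""
    poses = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = np.array([radius * np.cos(t), height, radius * np.sin(t)])
        forward = -eye / np.linalg.norm(eye)            # view direction: toward origin
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)                   # completes the orthonormal basis
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, up, forward
        pose[:3, 3] = eye                               # camera position in world frame
        poses.append(pose)
    return np.stack(poses)

poses = orbit_trajectory(n_frames=8, radius=2.0)
```

Each pose pairs a rotation (the first three columns) with a translation (the last column); a forward dolly, pan, or any hand-drawn path can be encoded the same way.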