consistency_models is the repository for Consistency Models, a family of generative models introduced by OpenAI that generate high-quality samples by mapping noise directly to data, avoiding the lengthy iterative chains of standard diffusion models. The codebase builds on and extends existing diffusion frameworks (notably the guided-diffusion codebase), adding consistency distillation and consistency training to enable fast, often single-step, sample generation.

The repository is implemented in PyTorch and supports large-scale experiments on datasets such as ImageNet-64 and the LSUN variants. It also includes pretrained checkpoints, evaluation scripts, and the sampling and editing algorithms described in the paper. Because consistency models require only a few inference steps, they are well suited to real-time and low-latency generative applications.
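To illustrate the noise-to-data mapping, here is a minimal NumPy sketch of multistep consistency sampling as described in the paper: a single call to the consistency function produces a sample from pure noise, and optional extra steps re-noise and re-denoise at smaller noise levels. The function names, the toy stand-in for the trained network, and the specific sigma schedule are all illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def toy_consistency_fn(x, sigma):
    # Hypothetical stand-in for a trained consistency model f_theta(x, sigma).
    # The real model is a neural network; this toy version just shrinks the
    # input toward zero so the sampler's control flow can be demonstrated.
    return x / (1.0 + sigma)

def multistep_consistency_sampling(f, shape, sigmas, sigma_min=0.002, seed=0):
    """Few-step sampling sketch: start from noise at the largest sigma,
    map it to a sample with one call to f, then optionally re-noise and
    re-denoise at each smaller sigma in the schedule."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape) * sigmas[0]   # initial noise at sigma_max
    sample = f(x, sigmas[0])                     # one-step generation
    for sigma in sigmas[1:]:                     # optional refinement steps
        z = rng.standard_normal(shape)
        x = sample + np.sqrt(sigma**2 - sigma_min**2) * z  # re-noise
        sample = f(x, sigma)                               # re-denoise
    return sample

# One-step generation uses a single sigma; extra sigmas trade speed for quality.
sample = multistep_consistency_sampling(toy_consistency_fn, (4,), [80.0, 10.0, 2.0])
print(sample.shape)
```

With a single-element sigma schedule this reduces to pure one-step generation, which is the regime that makes consistency models attractive for low-latency use.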
## Features
- Direct noise → data mapping for one-step or few-step generation
- Implementation of consistency distillation and consistency training
- Support for sampling and editing algorithms (image editing, interpolation)
- Checkpoints and evaluation scripts for datasets like ImageNet and LSUN
- Modular PyTorch architecture built over earlier diffusion frameworks
- Model cards and documentation for intended use, limitations, and benchmarking
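The consistency training feature listed above can be sketched as a simple objective: predictions at two adjacent noise levels, made from the same data point and the same noise draw, are pulled together, with the lower-noise prediction coming from an exponential moving average (EMA) of the parameters. The linear "network", the squared-error metric, and all names below are illustrative assumptions; the repo itself uses U-Net models and the distance metrics from the paper.

```python
import numpy as np

def f(theta, x, sigma):
    # Toy linear "network" with a scalar parameter theta. Real consistency
    # models use a U-Net with a parameterization enforcing f(x, sigma_min) = x.
    return theta * x / (1.0 + sigma)

def consistency_training_loss(theta, theta_ema, x, z, sigma_n, sigma_n1):
    """Consistency training objective (sketch): the student at the higher
    noise level sigma_{n+1} is matched to the EMA teacher at the adjacent
    lower level sigma_n, using the SAME noise draw z for both."""
    pred_student = f(theta, x + sigma_n1 * z, sigma_n1)
    pred_teacher = f(theta_ema, x + sigma_n * z, sigma_n)  # no gradient flows here in practice
    return np.mean((pred_student - pred_teacher) ** 2)     # squared-error stand-in for metric d

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # a "data" batch
z = rng.standard_normal(8)   # shared noise draw
loss = consistency_training_loss(theta=1.0, theta_ema=0.9, x=x, z=z,
                                 sigma_n=0.5, sigma_n1=1.0)
print(loss)
```

After each gradient step on `theta`, the teacher is updated as `theta_ema = mu * theta_ema + (1 - mu) * theta` for a decay rate `mu` close to 1; consistency distillation has the same structure but replaces the teacher's input with a one-step ODE solve under a pretrained diffusion model.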