
Search Results for "stable diffusion webui" - Page 2

Showing 94 open source projects for "stable diffusion webui"

  • 1
    Stable Diffusion in Docker

    Run the Stable Diffusion releases in a Docker container

    Run the Stable Diffusion releases from Hugging Face in a GPU-accelerated Docker container, with support for txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint. By default, the pipeline uses the full model and weights, which requires a CUDA-capable GPU with 8 GB+ of VRAM. It should take a few seconds to create one image.
    Downloads: 1 This Week
    Last Update:
    See Project
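    The container wraps the Hugging Face diffusers pipelines, so the core txt2img path looks roughly like the sketch below; this shows the underlying library call, not the project's own CLI, and the checkpoint id and prompt are illustrative.

      import torch
      from diffusers import StableDiffusionPipeline

      # Load an SD 1.5 checkpoint in half precision (assumed checkpoint id).
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      )
      pipe = pipe.to("cuda")  # needs a CUDA-capable GPU with enough VRAM

      # txt2img: one prompt in, one PIL image out.
      image = pipe("a photo of an astronaut riding a horse on mars").images[0]
      image.save("astronaut.png")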
  • 2
    LatentSync

    Taming Stable Diffusion for Lip Sync

    LatentSync is an open-source framework from ByteDance that produces high-quality lip-synchronization for video by using an audio-conditioned latent diffusion model, bypassing traditional intermediate motion representations. In effect, given a source video (with masked or reference frames) and an audio track, LatentSync directly generates frames whose lip motions and expressions align with the audio, producing convincing talking-head or animated lip-sync output. The system leverages a U-Net...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 3
    Dream Textures

    Stable Diffusion built-in to Blender

    ...Inpaint to fix up images and convert existing textures into seamless ones automatically. Outpaint to increase the size of an image by extending it in any direction. Perform style transfer and create novel animations with Stable Diffusion as a post processing step. Dream Textures has been tested with CUDA and Apple Silicon GPUs. Over 4GB of VRAM is recommended.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 4
    Aidea

    Flutter-based cross-platform app integrating major AI models

    AIdea is a comprehensive Flutter-based cross-platform app integrating major AI models—OpenAI GPT, Chinese models Tongyi Qianwen and Wenxin Yiyan, plus image models like Stable Diffusion for text-to-image, image-to-image, SDXL 1.0, super-resolution, and colorization. It includes a client app, server backend, and Docker deployment scripts for hosted setups.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    PersonaLive

    Expressive Portrait Image Animation for Live Streaming

    ...PersonaLive’s architecture balances visual quality and efficiency by combining motion encoding, temporal modules, and hybrid implicit control signals to preserve identity and stable expression through long sequences.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 6
    tinygrad

    Deep learning framework

    This may not be the best deep learning framework, but it is a deep learning framework. Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.
    Downloads: 2 This Week
    Last Update:
    See Project
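    A minimal sketch of tinygrad's Tensor/autograd API, closely following the example in the project's README; the import path can differ across versions.

      from tinygrad import Tensor  # older releases: from tinygrad.tensor import Tensor

      # Build a tiny computation graph and backpropagate through it.
      x = Tensor.eye(3, requires_grad=True)
      y = Tensor([[2.0, 0, -2.0]], requires_grad=True)
      z = y.matmul(x).sum()
      z.backward()

      print(x.grad.numpy())  # dz/dx
      print(y.grad.numpy())  # dz/dy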
  • 7
    VoxCPM

    TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning

    VoxCPM is a tokenizer-free text-to-speech system that models speech in a continuous space, aiming for extremely realistic, context-aware synthesis and true-to-life zero-shot voice cloning. Instead of converting speech into discrete tokens, it uses an end-to-end diffusion-autoregressive architecture built on the MiniCPM-4 backbone, combining hierarchical language modeling, finite scalar quantization (FSQ), and local Diffusion Transformers. This design helps decouple semantic and acoustic information while preserving fine-grained prosody, leading to more stable and expressive generation than many discrete-token systems. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 8
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    InvokeAI is an implementation of Stable Diffusion, the open source text-to-image and image-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac, and Linux machines, and on GPU cards with as little as 4 GB of RAM. InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike.
    Downloads: 26 This Week
    Last Update:
    See Project
  • 9
    StabilityMatrix

    Multi-Platform Package Manager for Stable Diffusion

    StabilityMatrix is a multi-platform package manager and launcher for Stable Diffusion user interfaces. It installs, updates, and manages packages such as AUTOMATIC1111's Stable Diffusion WebUI, ComfyUI, SD.Next, Fooocus, and InvokeAI in one click, handling their Python environments and dependencies, and lets installed packages share a common model directory so checkpoints, LoRAs, and VAEs do not need to be duplicated. It also includes a model browser for importing models from sources such as CivitAI and Hugging Face, an embedded inference interface, and portable, self-contained installs on Windows, Linux, and macOS.
    Downloads: 165 This Week
    Last Update:
    See Project
  • 10
    OnnxStream

    Lightweight inference library for ONNX files, written in C++

    The challenge is to run Stable Diffusion 1.5, which includes a large transformer model with almost 1 billion parameters, on a Raspberry Pi Zero 2, a microcomputer with 512 MB of RAM, without adding more swap space and without offloading intermediate results to disk. The recommended minimum RAM/VRAM for Stable Diffusion 1.5 is typically 8 GB. Major machine learning frameworks and libraries generally focus on minimizing inference latency and/or maximizing throughput, at the cost of RAM usage. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 11
    AI-Flow

    UI application to connect multiple AI models together

    Open-source tool to seamlessly connect multiple AI models in a repeatable flow. While a live demo is available for convenience, for the best experience we recommend running the application directly on your local machine. AI Flow is an open-source, user-friendly UI application that empowers you to connect multiple AI models together, specifically leveraging the capabilities of ChatGPT. This unique tool paves the way to creating interactive networks of different AI models, fostering a...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 12
    Lama Cleaner

    Image inpainting tool powered by SOTA AI Model

    Lama Cleaner is a free, open-source, and fully self-hostable image inpainting tool powered by state-of-the-art AI models. Use it to remove any unwanted object, defect, or person from your pictures, or to erase and replace anything in them (powered by Stable Diffusion). Many AIGC creators use Lama Cleaner to clean up their work. ...
    Downloads: 21 This Week
    Last Update:
    See Project
  • 13
    IOPaint

    Image inpainting tool powered by SOTA AI Model

    IOPaint is a powerful open-source image editing tool focused on inpainting, outpainting, object removal, and general image manipulation driven by state-of-the-art AI models, delivering these capabilities through both local and hosted workflows. Designed to be fully self-hosted and flexible, IOPaint supports a variety of underlying generators and inpaint models — from LaMa erase networks to Stable Diffusion-based replace/object generation — giving users multiple ways to refine or reconstruct images by removing unwanted elements or expanding artwork beyond its original boundaries. Its feature set includes erasing people, watermarks, or defects, adding or replacing objects, applying text-aware edits, and extending images outward (outpainting) to fill contours or expand compositions.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 14
    HunyuanWorld 1.0

    Generating Immersive, Explorable, and Interactive 3D Worlds

    HunyuanWorld-1.0 is an open-source, simulation-capable 3D world generation model developed by Tencent Hunyuan that creates immersive, explorable, and interactive 3D environments from text or image inputs. It combines the strengths of video-based diversity and 3D-based geometric consistency through a novel framework using panoramic world proxies and semantically layered 3D mesh representations. This approach enables 360° immersive experiences, seamless mesh export for graphics pipelines, and...
    Downloads: 8 This Week
    Last Update:
    See Project
  • 15
    DeepSpeed MII

    MII makes low-latency and high-throughput inference possible

    ...The Deep Learning (DL) open-source community has seen tremendous growth in the last few months. Incredibly powerful text generation models such as Bloom 176B, and image generation models such as Stable Diffusion, are now available to anyone with access to a handful of GPUs, or even a single one, through platforms such as Hugging Face. While open-sourcing has democratized access to AI capabilities, their application is still restricted by two critical factors: inference latency and cost. DeepSpeed-MII is a new open-source Python library from DeepSpeed, aimed at making low-latency, low-cost inference of powerful models not only feasible but also easily accessible. ...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 16
    Flow Matching

    A PyTorch library for implementing flow matching algorithms

    flow_matching is a PyTorch library implementing flow matching algorithms in both continuous and discrete settings, enabling generative modeling via matching vector fields rather than diffusion. The underlying idea is to parameterize a flow (a time-dependent vector field) that transports samples from a simple base distribution to a target distribution, and train via matching of flows without requiring score estimation or noisy corruption—this can lead to more efficient or stable generative training. The library supports both continuous-time flows (via differential equations) and discrete-time analogues, giving flexibility in design and tradeoffs. ...
    Downloads: 1 This Week
    Last Update:
    See Project
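    To make the objective concrete, here is a plain-PyTorch sketch of conditional flow matching with a linear probability path; it illustrates the idea described above and is not the flow_matching library's own API.

      import torch
      import torch.nn as nn

      class VelocityField(nn.Module):
          """Small MLP approximating the time-dependent vector field v_theta(x, t)."""
          def __init__(self, dim=2, hidden=128):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(dim + 1, hidden), nn.SiLU(),
                  nn.Linear(hidden, hidden), nn.SiLU(),
                  nn.Linear(hidden, dim),
              )

          def forward(self, x, t):
              return self.net(torch.cat([x, t], dim=-1))

      def cfm_loss(model, x1):
          """Regress the conditional velocity x1 - x0 along the linear path
          x_t = (1 - t) * x0 + t * x1, with a standard Gaussian base."""
          x0 = torch.randn_like(x1)          # base sample
          t = torch.rand(x1.shape[0], 1)     # time in [0, 1]
          xt = (1 - t) * x0 + t * x1         # point on the probability path
          return ((model(xt, t) - (x1 - x0)) ** 2).mean()

      # One training step on toy 2-D "data".
      model = VelocityField()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      x1 = torch.randn(256, 2) * 0.5 + 2.0
      loss = cfm_loss(model, x1)
      opt.zero_grad(); loss.backward(); opt.step()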
  • 17
    Grounded-Segment-Anything

    Marrying Grounding DINO with Segment Anything & Stable Diffusion

    Grounded-Segment-Anything is a research-oriented project that combines powerful open-set object detection with pixel-level segmentation and subsequent creative workflows, effectively enabling detection, segmentation, and high-level vision tasks guided by free-form text prompts. The core idea behind the project is to pair Grounding DINO — a zero-shot object detector that can locate objects described by natural language — with Segment Anything Model (SAM), which can produce detailed masks for...
    Downloads: 0 This Week
    Last Update:
    See Project
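    The detect-then-segment pipeline can be approximated with the Hugging Face transformers ports of Grounding DINO and SAM; this is a rough sketch rather than this repository's own scripts, and the model ids, thresholds, and file name are assumptions.

      import torch
      from PIL import Image
      from transformers import (AutoModelForZeroShotObjectDetection, AutoProcessor,
                                SamModel, SamProcessor)

      image = Image.open("example.jpg")  # hypothetical input image
      text = "a cat. a dog."             # Grounding DINO expects lowercase phrases ending in "."

      # 1) Open-set detection: free-form text -> bounding boxes.
      det_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
      det_model = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")
      det_inputs = det_processor(images=image, text=text, return_tensors="pt")
      with torch.no_grad():
          det_outputs = det_model(**det_inputs)
      results = det_processor.post_process_grounded_object_detection(
          det_outputs, det_inputs.input_ids,
          box_threshold=0.4, text_threshold=0.3, target_sizes=[image.size[::-1]],
      )[0]

      # 2) Promptable segmentation: boxes -> pixel masks.
      sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
      sam_model = SamModel.from_pretrained("facebook/sam-vit-base")
      boxes = [results["boxes"].tolist()]  # one list of xyxy boxes per image
      sam_inputs = sam_processor(image, input_boxes=boxes, return_tensors="pt")
      with torch.no_grad():
          sam_outputs = sam_model(**sam_inputs)
      masks = sam_processor.image_processor.post_process_masks(
          sam_outputs.pred_masks.cpu(),
          sam_inputs["original_sizes"].cpu(),
          sam_inputs["reshaped_input_sizes"].cpu(),
      )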
  • 18
    Luna AI

    Virtual AI anchor that combines state-of-the-art technology

    Luna AI is a virtual AI streamer framework designed to power an interactive VTuber that can go live on major platforms and chat with viewers in real time. It is built around a core assistant persona called “Luna AI,” which can be driven by a wide range of large language models and platforms, including GPT-style APIs, Claude, LangChain-based backends, ChatGLM, Kimi, Ollama, and many others. The project supports multiple rendering backends for the avatar, such as Live2D, Unreal Engine (UE),...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 19
    Matcha-TTS

    A fast TTS architecture with conditional flow matching

    ...It models speech as an ODE-based generative process, and conditional flow matching lets it reach high-quality audio in only a few synthesis steps, which greatly reduces latency compared to score-matching diffusion approaches. The model is fully probabilistic, so it can generate diverse realizations of the same text while still sounding stable and intelligible. The repository provides an end-to-end TTS pipeline: a PyTorch/Lightning training stack, configuration files, pre-trained checkpoints, a command-line interface, and a Gradio app for interactive testing. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    WhisperSpeech

    An Open Source text-to-speech system built by inverting Whisper

    WhisperSpeech is an open-source text-to-speech system created by “inverting” OpenAI’s Whisper, reusing its strengths as a semantic audio model to generate speech instead of only transcribing it. The project aims to be for speech what Stable Diffusion is for images: powerful, hackable, and safe for commercial use, with code under Apache-2.0/MIT and models trained only on properly licensed data. Its architecture follows a token-based, multi-stage pipeline inspired by AudioLM and SPEAR-TTS: Whisper is used to produce semantic tokens, EnCodec compresses the waveform into acoustic tokens, and Vocos reconstructs high-fidelity audio from those tokens. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    stable-diffusion-webui-colab

    Stable diffusion webui colab

    Stable Diffusion WebUI Colab notebooks. The lite version has a stable WebUI and stable installed extensions. The stable version has ControlNet, a stable WebUI, and stable installed extensions. The nightly version has ControlNet, the latest WebUI, and daily installed extension updates. If you want to use more models, you can download them into Colab, which has about 50 GB of free space.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 22
    AnimateDiff

    Plug-n-play module turning text-to-image models into animation

    AnimateDiff is an open-source project designed to enhance text-to-image diffusion models by adding animation capabilities. It allows users to turn static images generated by popular text-to-image models into animated sequences without requiring additional model training. This plug-and-play tool is compatible with a wide range of community models and facilitates the generation of animation directly from pre-existing text-to-image models. It supports various configurations to create animations...
    Downloads: 24 This Week
    Last Update:
    See Project
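    The module is also exposed through the diffusers integration; the sketch below follows that API rather than this repository's own training or inference scripts, and the motion-adapter and base-model ids are assumptions.

      import torch
      from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
      from diffusers.utils import export_to_gif

      # Motion adapter trained for SD 1.5-family models (assumed checkpoint ids).
      adapter = MotionAdapter.from_pretrained(
          "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
      )
      pipe = AnimateDiffPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
      )
      pipe.scheduler = DDIMScheduler.from_config(
          pipe.scheduler.config, beta_schedule="linear", clip_sample=False
      )
      pipe = pipe.to("cuda")

      # Generate a short clip and export the frames as a GIF.
      frames = pipe(
          prompt="a corgi running on the beach, golden hour",
          num_frames=16,
          guidance_scale=7.5,
          num_inference_steps=25,
      ).frames[0]
      export_to_gif(frames, "corgi.gif")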
  • 23
    Prompt-to-Prompt

    Latent Diffusion and Stable Diffusion Implementation

    Prompt-to-Prompt is a research codebase that demonstrates how to edit images generated by diffusion models using only changes to the text prompt. Instead of retraining or heavy fine-tuning, it manipulates the model’s cross-attention maps so the structure of the original image is largely preserved while semantics shift according to the revised prompt. The method supports gentle edits (e.g., style, color, lighting) as well as stronger semantic substitutions, and it can localize edits to...
    Downloads: 1 This Week
    Last Update:
    See Project
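    The mechanism can be illustrated with a toy cross-attention function in which the attention maps computed for the source prompt are injected while the edited prompt supplies the values; this is a conceptual sketch, not the repository's code, which hooks the same idea into a real diffusion U-Net.

      import torch
      import torch.nn.functional as F

      def cross_attention(q, k, v, reuse_probs=None):
          """q: image queries [B, Nq, d]; k, v: text keys/values [B, Nt, d].

          If reuse_probs holds the attention maps saved from the source prompt's
          pass, they replace the freshly computed maps, so the edited prompt keeps
          the original layout while its values shift the semantics."""
          d = q.shape[-1]
          probs = F.softmax(q @ k.transpose(-1, -2) / d**0.5, dim=-1)  # [B, Nq, Nt]
          if reuse_probs is not None:
              probs = reuse_probs  # inject the source prompt's attention maps
          return probs @ v, probs

      # Toy usage: run the "source" pass, then reuse its maps for the edited prompt.
      B, Nq, Nt, d = 1, 64, 8, 32
      q = torch.randn(B, Nq, d)
      k_src, v_src = torch.randn(B, Nt, d), torch.randn(B, Nt, d)    # source prompt
      k_edit, v_edit = torch.randn(B, Nt, d), torch.randn(B, Nt, d)  # edited prompt

      _, src_probs = cross_attention(q, k_src, v_src)
      edited_out, _ = cross_attention(q, k_edit, v_edit, reuse_probs=src_probs)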
  • 24
    PromptSniffer

    View, Extract & Remove AI generation metadata with a right click

    A powerful tool for reading, extracting, and removing AI generation metadata from image files, specifically designed to handle metadata from AI image generation tools like ComfyUI, Stable Diffusion, SwarmUI, InvokeAI, and more.
    Core functionality:
    Read EXIF/metadata: extract and display comprehensive metadata from images.
    AI metadata detection: automatically identify and highlight AI generation metadata.
    Metadata removal: strip AI generation metadata while preserving image quality.
    Batch processing: handle multiple files with wildcard patterns.
    Cross-platform: works on Windows, macOS, and Linux.
    AI tool support:
    ComfyUI: detects and extracts workflow JSON data.
    Stable Diffusion: identifies prompts, parameters, and generation settings.
    SwarmUI/StableSwarmUI: handles JSON-formatted metadata.
    Midjourney, DALL-E, NovelAI: recognizes generation signatures.
    Automatic1111, InvokeAI: extracts generation parameters.
    Downloads: 9 This Week
    Last Update:
    See Project
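    For context, here is the kind of metadata such tools inspect, read with plain Pillow rather than PromptSniffer's own interface: AUTOMATIC1111-style generators store prompts and settings in a PNG text chunk named "parameters", while ComfyUI embeds its workflow JSON; the file names below are hypothetical.

      from PIL import Image

      img = Image.open("generated.png")
      metadata = dict(img.info)  # PNG text chunks end up here

      print(metadata.get("parameters", "no A1111-style parameters found"))
      print(metadata.get("workflow", "no ComfyUI workflow found"))

      # Stripping metadata: copy only the pixel data into a fresh image and save it.
      clean = Image.new(img.mode, img.size)
      clean.paste(img)
      clean.save("generated_clean.png")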
  • 25
    ConsistencyDecoder

    Consistency Distilled Diff VAE

    ConsistencyDecoder is a Python package from OpenAI that introduces an improved decoding method for variational autoencoders (VAEs) used in Stable Diffusion pipelines. Instead of relying solely on the standard GAN or VAE decoder, this approach leverages a Consistency Distilled Diff VAE, designed to produce higher-quality and more stable outputs from encoded latents. The project provides a simple API for encoding with a Stable Diffusion VAE and decoding using the new consistency model, allowing for side-by-side comparisons with traditional decoders. ...
    Downloads: 2 This Week
    Last Update:
    See Project
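    The decoder is also available through the diffusers integration as ConsistencyDecoderVAE, which can be dropped into a Stable Diffusion pipeline; the sketch below follows that route rather than this package's own ConsistencyDecoder class, and the checkpoint ids are assumptions.

      import torch
      from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

      # Swap the standard VAE decoder for the consistency-distilled one.
      vae = ConsistencyDecoderVAE.from_pretrained(
          "openai/consistency-decoder", torch_dtype=torch.float16
      )
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
      ).to("cuda")

      image = pipe("a cozy cabin in a snowy forest").images[0]
      image.save("cabin_consistency_decoder.png")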