
Showing 204 open source projects for "video ai"

  • 1
    E2B

    Secure open source cloud runtime for AI apps & AI agents

    E2B's Code Interpreter SDK allows you to add code-interpreting capabilities to your AI apps. E2B Sandbox is a secure, sandboxed cloud environment built for AI agents and AI apps. Sandboxes give agents and apps long-running, secure cloud environments in which large language models can use the same tools humans do. A minimal usage sketch follows this entry.
    Downloads: 7 This Week
    Last Update:
    See Project
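
    A minimal usage sketch for the E2B entry above, assuming the Python SDK package e2b_code_interpreter and its Sandbox.run_code call; the exact class and method names are recalled from the project's docs and may differ between versions.

        # Run untrusted, model-generated code in a remote E2B sandbox.
        from e2b_code_interpreter import Sandbox

        with Sandbox() as sandbox:                          # provisions a cloud sandbox
            execution = sandbox.run_code("sum(range(10))")  # code executes remotely, not locally
            print(execution.text)                           # text of the last result, e.g. "45"
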
  • 2
    LTX-2

    Python inference and LoRA trainer package for the LTX-2 audio–video model

    LTX-2 is an open-source package from Lightricks that provides Python inference and LoRA training tooling for the LTX-2 audio-video generation model. It is meant to give developers a ready-made starting point for running the model and fine-tuning it with LoRA adapters, rather than assembling inference and training pipelines from scratch.
    Downloads: 60 This Week
    Last Update:
    See Project
  • 3
    Vidi2

    Large Multimodal Models for Video Understanding and Editing

    Vidi is a family of large multimodal models developed for deep video understanding and editing tasks, integrating vision, audio, and language to allow sophisticated querying and manipulation of video content. It is designed to process long-form, real-world videos and answer complex queries such as “when in this clip does X happen?” or “where in the frame is object Y during that moment?”, offering temporal retrieval, spatio-temporal grounding (i.e., locating objects across time and space), and...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 4
    Generative AI for Beginners (Version 3)

    21 Lessons, Get Started Building with Generative AI

    ...The course covers everything from model selection, prompt engineering, and chat/text/image app patterns to secure development practices and UX for AI. It also walks through modern application techniques such as function calling, RAG with vector databases, working with open source models, agents, fine-tuning, and using SLMs. Each lesson includes a short video, a written guide, runnable samples for Azure OpenAI, the GitHub Marketplace Model Catalog, and the OpenAI API, plus a “Keep Learning” section for deeper study. A minimal OpenAI-API sample in that spirit follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
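
    As an illustration of the kind of runnable OpenAI-API sample the lessons describe, here is a minimal chat-completion call; the model name is only an example and an OPENAI_API_KEY environment variable is assumed.

        # Minimal chat completion against the OpenAI API (model name is illustrative).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Explain RAG in one sentence."}],
        )
        print(response.choices[0].message.content)
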
  • 5
    CogVideo

    Text- and image-to-video generation: CogVideoX (2024) and CogVideo

    CogVideo is an open-source text- and image-to-video generation project that hosts the CogVideoX family of diffusion-transformer models and end-to-end tooling. The repo includes SAT and Diffusers implementations, turnkey demos, and fine-tuning pipelines (including LoRA) designed to run across a wide range of NVIDIA GPUs, from desktop cards (e.g., RTX 3060) to data-center hardware (A100/H100). Current releases cover CogVideoX-2B, CogVideoX-5B, and the upgraded CogVideoX1.5-5B variants, plus... A Diffusers-based sketch follows this entry.
    Downloads: 23 This Week
    Last Update:
    See Project
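
    A hedged sketch of text-to-video generation with CogVideoX through the Hugging Face Diffusers integration mentioned above; the pipeline class, repo id, and parameters are recalled from that integration and should be checked against the repo's README. A CUDA GPU is assumed.

        import torch
        from diffusers import CogVideoXPipeline
        from diffusers.utils import export_to_video

        # Load the 2B checkpoint in half precision and move it to the GPU.
        pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
        pipe.to("cuda")

        # Generate a short clip from a text prompt and write it to an MP4 file.
        frames = pipe(prompt="a corgi surfing a small wave at sunset", num_inference_steps=50).frames[0]
        export_to_video(frames, "corgi.mp4", fps=8)
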
  • 6
    Paper2GUI

    Convert AI papers to GUI

    Convert AI papers into GUIs, making it easy and convenient for everyone to use cutting-edge artificial intelligence technology. Paper2GUI is an AI desktop app toolbox for ordinary people that can be used immediately without installation. It already supports 40+ AI models, covering AI painting, speech synthesis, video frame interpolation, video super-resolution, object detection, image stylization, OCR recognition, and other fields. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 7
    Memvid

    Video-based AI memory library. Store millions of text chunks in MP4

    Memvid encodes text chunks as QR codes within MP4 frames to build a portable “video memory” for AI systems. This approach uses standard video containers and offers millisecond-level semantic search across large corpora with far less storage than a vector database. It is self-contained (no database needed) and supports features like PDF indexing, chat integration, and cloud dashboards. A hedged usage sketch follows this entry.
    Downloads: 7 This Week
    Last Update:
    See Project
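
    A usage sketch for the Memvid entry above; the MemvidEncoder/MemvidRetriever names and the two-file (MP4 plus index) layout are assumptions recalled from the project's README, so verify them against the current docs.

        from memvid import MemvidEncoder, MemvidRetriever

        # Encode text chunks as QR frames inside an MP4, with a sidecar search index.
        encoder = MemvidEncoder()
        encoder.add_chunks(["First fact to remember.", "Second fact to remember."])
        encoder.build_video("memory.mp4", "memory_index.json")

        # Later: run semantic search directly against the video-backed memory.
        retriever = MemvidRetriever("memory.mp4", "memory_index.json")
        print(retriever.search("what should I remember?", top_k=2))
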
  • 8
    HunyuanVideo-Foley

    Multimodal Diffusion with Representation Alignment

    HunyuanVideo-Foley is a multimodal diffusion model from Tencent Hunyuan for high-fidelity Foley (sound effects) audio generation synchronized to video scenes. It is designed to generate audio that matches both visual content and textual semantic cues, for use in video production, film, advertising, games, etc. The model architecture aligns audio, video, and text representations to produce realistic synchronized soundtracks. Produces high-quality 48 kHz audio output suitable for professional...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 9
    Remotion

    Make videos programmatically with React

    Remotion is a cutting-edge library that lets developers create real videos programmatically using React components, transforming familiar UI paradigms into a flexible, code-driven video production workflow. Instead of traditional timeline editors, Remotion leverages HTML, CSS, and JavaScript to define video frames, animations, and transitions, which means developers can use states, props, loops, and component hierarchies to automate complex motion graphics. Because it integrates with the...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 10
    Recurrent Interface Network (RIN)

    Implementation of Recurrent Interface Network (RIN)

    Implementation of the Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in PyTorch. The author unwittingly reinvented the induced set-attention block from the Set Transformer paper. They also combine this with the self-conditioning technique from the Bit Diffusion paper, specifically for the latents. The last ingredient seems to be a new noise function based around the sigmoid, which the author claims is better than cosine... A small illustrative sigmoid-schedule sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
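
    A small, self-contained sketch of a sigmoid-based noise schedule of the kind the entry above refers to; the start/end/tau values are illustrative defaults, not necessarily the parameterization used in the repository.

        import math

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def sigmoid_gamma(t, start=-3.0, end=3.0, tau=1.0):
            # Signal level gamma(t) in [0, 1] for t in [0, 1]: a rescaled sigmoid
            # that is smooth at both ends, similar in spirit to the cosine schedule.
            v_start, v_end = sigmoid(start / tau), sigmoid(end / tau)
            return (v_end - sigmoid((t * (end - start) + start) / tau)) / (v_end - v_start)

        # gamma(0) ~ 1 (mostly signal), gamma(1) ~ 0 (mostly noise)
        print([round(sigmoid_gamma(i / 4), 3) for i in range(5)])
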
  • 11
    StoryMem

    Official code for StoryMem: Multi-shot Long Video Storytelling

    StoryMem is the official code release for StoryMem, a multi-shot long video storytelling system. Rather than generating each shot in isolation, it accumulates memory of previously generated shots so that later shots can draw on established narrative context such as characters, settings, and prior events, keeping long, multi-shot videos coherent over time...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    SlowFast

    Video understanding codebase from FAIR for reproducing video models

    SlowFast is a video understanding framework that captures both spatial semantics and temporal dynamics efficiently by processing video frames at two different temporal resolutions. The slow pathway encodes semantic context by sampling frames sparsely, while the fast pathway captures motion and fine temporal cues by operating on densely sampled frames with fewer channels. Together, these two pathways complement each other, allowing the network to model both appearance and motion without... A toy sketch of the dual-rate sampling idea follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
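
    A toy sketch of the dual-rate sampling idea described above: one sparse stream of frames for appearance and one denser stream for motion. The strides and the alpha ratio are illustrative, not the codebase's actual configuration.

        import numpy as np

        def two_pathway_clips(video, alpha=4, slow_stride=16):
            # video: array of shape (T, H, W, C); the fast pathway samples
            # alpha times more densely than the slow pathway.
            fast_stride = max(slow_stride // alpha, 1)
            slow = video[::slow_stride]   # e.g. every 16th frame (semantics)
            fast = video[::fast_stride]   # e.g. every 4th frame (motion)
            return slow, fast

        clip = np.zeros((64, 224, 224, 3), dtype=np.uint8)   # dummy 64-frame clip
        slow, fast = two_pathway_clips(clip)
        print(slow.shape[0], fast.shape[0])   # 4 slow frames, 16 fast frames
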
  • 13
    SeedVR2 Upscaler ComfyUI

    Official SeedVR2 Video Upscaler for ComfyUI

    ComfyUI-SeedVR2 Video Upscaler is an open-source integration node for the ComfyUI workflow environment that brings the advanced SeedVR2 video upscaling and restoration model directly into visual AI pipelines. This project packages the SeedVR2 architecture as a custom node for ComfyUI, letting users upscale low-resolution video or imagery inside a node-based interface without needing to write code manually.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 14
    HunyuanVideo-Avatar

    Tencent Hunyuan Multimodal diffusion transformer (MM-DiT) model

    HunyuanVideo-Avatar is a multimodal diffusion transformer (MM-DiT) model by Tencent Hunyuan for animating static avatar images into dynamic, emotion-controllable, and multi-character dialogue videos, conditioned on audio. It addresses challenges of motion realism, identity consistency, and emotional alignment. Innovations include a character image injection module, an Audio Emotion Module for transferring emotion cues, and a Face-Aware Audio Adapter to isolate audio effects on faces,...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    SeedVR

    Repo for SeedVR2 & SeedVR

    SeedVR (from the ByteDance-Seed organization) is an open-source research and implementation repository focused on cutting-edge video restoration using diffusion transformer architectures. The project includes both the original SeedVR and its successor SeedVR2 models, which are designed to restore degraded or low-quality video content by learning to reconstruct high-fidelity frames with temporal coherence. These models leverage advanced techniques such as adaptive attention mechanisms and...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 16
    ComfyUI

    The most powerful and modular diffusion model GUI, API and backend

    ComfyUI is the most powerful and modular diffusion model GUI and backend. This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. We are a team dedicated to iterating on and improving ComfyUI, supporting the ComfyUI ecosystem with tools like a node manager, node registry, CLI, automated testing, and public documentation. Open source AI models will win in the long run against closed models, and we are only at the beginning. Our core mission... A hedged sketch of queuing a workflow through the local API follows this entry.
    Downloads: 191 This Week
    Last Update:
    See Project
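
    A hedged sketch of driving a locally running ComfyUI server over its HTTP API; the default address (127.0.0.1:8188), the /prompt endpoint, and the "API format" workflow export are recalled from ComfyUI's script examples, and workflow_api.json is a hypothetical filename.

        import json
        import urllib.request

        # Workflow graph previously exported from the ComfyUI editor in API format.
        with open("workflow_api.json") as f:
            workflow = json.load(f)

        payload = json.dumps({"prompt": workflow}).encode("utf-8")
        request = urllib.request.Request(
            "http://127.0.0.1:8188/prompt",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            print(json.load(response))   # the server returns an id for the queued job
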
  • 17
    video2robot

    End-to-end pipeline converting generative videos

    video2robot is an end-to-end open-source pipeline that converts generative video or prompt-driven motion content into executable humanoid robot motion sequences, enabling researchers and developers to go from high-level action descriptions or videos to robot-ready motion data. The pipeline supports both prompt-to-video generation using models like Veo/Sora and video upload processing, followed by human pose extraction through a 3D pose model and retargeting of that motion to robot joints...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    Wan Move

    Motion-controllable Video Generation via Latent Trajectory Guidance

    Wan Move is an open-source research codebase for motion-controllable video generation that focuses on enabling fine-grained control of motion within generative video models. It is designed to guide the temporal evolution of visual content by leveraging latent trajectory guidance, allowing users to manipulate how objects move over time without modifying the underlying generative architecture. By representing motion information as dense point trajectories and integrating them into the latent...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 19
    SALMONN family

    A suite of advanced multi-modal LLMs

    SALMONN is a family of advanced multi-modal large language models (LLMs) developed by ByteDance — designed to handle and integrate multiple data modalities (e.g. text, audio, video) rather than just plain text. The repository bundles different branches targeting specialized tasks (e.g. video-SALMONN, speech-quality assessment, general multimodal tasks), suggesting that the project is modular and extensible across domains. SALMONN aims to push the frontier of multi-modal AI by allowing models to process and reason over diverse inputs, which can be useful for applications such as video understanding, speech analytics, cross-modal retrieval, and general AI capable of interpreting rich, multi-sensory data. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    Waifu2x-Extension-GUI

    Video, Image and GIF upscale/enlarge (Super-Resolution)

    ...Beta builds are less stable than the stable builds because they have not been fully tested before release. Multimedia support: processes images, GIFs, APNGs, and videos at the same time. Full image style support: multiple built-in algorithms handle 2D anime as well as your everyday photos and videos. Video frame interpolation: automatically uses AI to interpolate frames after enlarging the video.
    Downloads: 15 This Week
    Last Update:
    See Project
  • 21
    Norish

    A realtime, self-hosted recipe app for families & friends

    ...The project emphasizes simplicity and immediacy, syncing changes across multiple clients using WebSockets so that updates to recipes, groceries, or meal plans appear instantly for all users. The app allows users to import recipes from URLs and even fall back to AI parsing when needed, with multimedia support that includes video import from popular platforms when AI is configured. Norish also handles grocery list management with recurring items and calendar meal planning, giving household groups an organized view of upcoming meals. Built with modern web technologies and designed for self-hosting, the app includes support for single sign-on via OIDC providers, a mobile-friendly interface, and user permissions for editing or viewing recipes and lists.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 22
    FastRTC

    The Python library for real-time communication

    FastRTC is a Python library designed to simplify real-time communication (RTC), especially for audio and video streaming applications. It abstracts away much of the complexity that typically comes with implementing WebRTC by providing a simple interface (e.g. a Stream class) that can be mounted within a web backend (for example a FastAPI application). This makes it particularly well suited for building real-time voice (or video) interfaces for applications such as AI assistants, live chat, or collaborative audio/video tools. A minimal Stream-plus-FastAPI sketch follows this entry. ...
    Downloads: 2 This Week
    Last Update:
    See Project
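
    A minimal sketch of the Stream-plus-FastAPI pattern the description mentions; the ReplyOnPause handler and the mount() call are recalled from FastRTC's docs and may differ between versions, and the echo handler is only a placeholder.

        from fastapi import FastAPI
        from fastrtc import Stream, ReplyOnPause

        def echo(audio):
            # audio arrives as (sample_rate, numpy_array); send it straight back
            yield audio

        app = FastAPI()
        stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")
        stream.mount(app)   # exposes the WebRTC endpoints on the FastAPI app
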
  • 23
    LingBot-World

    Advancing Open-source World Models

    ...The project is fully open-access, releasing both code and models to help bridge the gap between closed and open world-model systems. LingBot-World empowers researchers and developers in areas such as content creation, gaming, robotics, and embodied AI learning.
    Downloads: 59 This Week
    Last Update:
    See Project
  • 24
    LiveAvatar

    Streaming Real-time Audio-Driven Avatar Generation

    LiveAvatar is an open-source research and implementation project that provides a unified framework for real-time, streaming, interactive avatar video generation driven by audio and other control signals. It implements techniques from state-of-the-art diffusion-based avatar modeling to support infinite-length continuous video generation with low latency, enabling interactive AI avatars that maintain continuity and realism over extended sessions. The project co-designs algorithms and system optimizations, such as block-wise autoregressive processing and fast sampling strategies, to deliver real-time frame rates (e.g., ~45 FPS on appropriate GPU clusters) while handling non-stop generation without quality degradation. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 25
    Gemini CLI

    Open source AI agent CLI tool to bring Gemini into your terminal

    Gemini CLI is an open-source AI agent that brings the capabilities of Google's Gemini 2.5 Pro large language model directly into your terminal, enabling tasks ranging from coding and debugging to content creation and research via natural-language prompts, with support for multimodal outputs like image and video generation. It also integrates with external tools and MCP servers, enabling media generation and enhanced workflow automation.
    Downloads: 20 This Week
    Last Update:
    See Project