
  • 1
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It achieves an impressive 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, showcasing its robustness in open-domain settings. Its training dataset is uncurated and web-sourced, meaning it reflects the biases and risks of large-scale internet data. The model is intended for research use and is not recommended for real-world deployment without domain-specific testing and safety evaluations.
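    As a rough sketch of typical zero-shot use, the checkpoint can be loaded through the OpenCLIP library; the image path and label set below are placeholders.

    ```python
    import torch
    from PIL import Image
    import open_clip

    # Load the ViT-bigG/14 weights trained on LAION-2B (39B samples seen, batch size 160k).
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
    )
    tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

    image = preprocess(Image.open("example.jpg")).unsqueeze(0)   # placeholder image path
    labels = ["a photo of a cat", "a photo of a dog", "a diagram"]
    text = tokenizer(labels)

    with torch.no_grad():
        img_feat = model.encode_image(image)
        txt_feat = model.encode_text(text)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        txt_feat /= txt_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

    print(dict(zip(labels, probs[0].tolist())))
    ```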
  • 2
    CSM (Conversational Speech Model)

    A Conversational Speech Generation Model

    The CSM (Conversational Speech Model) is a speech generation model developed by Sesame AI that creates RVQ audio codes from text and audio inputs. It uses a Llama backbone and a smaller audio decoder to produce audio codes for realistic speech synthesis. The model has been fine-tuned for interactive voice demos and is hosted on platforms like Hugging Face for testing. CSM offers a flexible setup and is compatible with CUDA-enabled GPUs for efficient execution.
  • 3
    ChatGLM2-6B

    An Open Bilingual Chat LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and generation, offering improved accuracy and user experience compared to earlier models.
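    A minimal chat sketch along the lines of the repository's Transformers quick start; it assumes a CUDA GPU with enough memory for the FP16 weights.

    ```python
    from transformers import AutoTokenizer, AutoModel

    # trust_remote_code pulls ChatGLM2's custom modeling code from the Hugging Face repo.
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
    model = model.eval()

    # model.chat keeps the running dialogue in `history` for multi-turn conversations.
    response, history = model.chat(tokenizer, "Hello! What can you do?", history=[])
    print(response)
    response, history = model.chat(tokenizer, "Summarize that in one sentence.", history=history)
    print(response)
    ```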
  • 4
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    The chatgpt-retrieval-plugin repository implements a semantic retrieval backend that lets ChatGPT (or GPT-powered tools) access private or organizational documents in natural language by combining vector search, embedding models, and plugin infrastructure. It can serve as a custom GPT plugin or function-calling backend so that a chat session can “look up” relevant documents based on user queries, inject those results into context, and respond more knowledgeably about a private knowledge base. The repo provides code for ingestion pipelines (embedding documents), APIs for querying, local server components, and privacy / PII detection modules. It also contains plugin manifest files (OpenAPI spec, plugin JSON) so that the retrieval backend can be registered in a plugin ecosystem. Because retrieval is often needed to make LLMs “know what’s in your docs” without leaking everything, this plugin aims to be a secure, flexible building block for retrieval-augmented generation (RAG) systems.
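    A hypothetical client call against a locally running instance of the service's /query endpoint; the host, port, token, and response field names are assumptions based on the repo's API schema.

    ```python
    import requests

    BEARER_TOKEN = "your-configured-token"    # matches the server's BEARER_TOKEN setting
    URL = "http://localhost:8000/query"       # placeholder address for a local dev server

    payload = {
        "queries": [
            {"query": "What did the Q3 planning doc say about hiring?", "top_k": 3}
        ]
    }
    resp = requests.post(URL, json=payload,
                         headers={"Authorization": f"Bearer {BEARER_TOKEN}"})
    resp.raise_for_status()

    # Each query comes back with a ranked list of document chunks plus scores and metadata.
    for query_result in resp.json()["results"]:
        for chunk in query_result["results"]:
            print(round(chunk["score"], 3), chunk["text"][:80])
    ```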
  • 5
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project open-sources the Chinese LLaMA base models and the instruction-fine-tuned Chinese Alpaca models to further promote open research on large language models in the Chinese NLP community. Building on the original LLaMA, these models expand the Chinese vocabulary and undergo secondary pre-training on Chinese data, which improves basic Chinese semantic understanding. The Chinese Alpaca models are additionally fine-tuned on Chinese instruction data, significantly improving their ability to understand and follow instructions.
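    A hedged inference sketch for one of the released Alpaca-2 chat checkpoints; the Hugging Face repo id and the Llama-2-style prompt wrapper are assumptions.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "hfl/chinese-alpaca-2-7b"   # assumed Hugging Face repo id for a chat checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Llama-2-style instruction wrapper; the project's full system prompt is omitted here.
    prompt = "[INST] 请用三句话介绍一下大熊猫。 [/INST]"   # "Introduce the giant panda in three sentences."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
    ```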
  • 6
    Claude Code Action

    Claude Code action for GitHub PRs

    Claude Code Action is a general-purpose GitHub Action that brings Anthropic’s Claude Code into pull requests and issues to answer questions, review changes, and even implement code edits. It can wake up automatically when someone mentions @claude, when a PR or issue meets certain conditions, or when a workflow step provides an explicit prompt. The action is designed to understand diffs and surrounding context, so its comments and suggestions are grounded in what actually changed rather than the whole repository. Teams can configure how and when it participates, including authentication via Anthropic’s API as well as cloud providers like Bedrock or Vertex, and control whether it posts inline comments, summary reviews, or pushes commits. It supports streaming responses and longer interactions so that reviewers can iterate naturally in the same PR thread.
  • 7
    Clay Foundation Model

    The Clay Foundation Model - An open source AI model and interface

    The Clay Foundation Model is an open-source AI model and interface designed to provide comprehensive data and insights about Earth. It aims to serve as a foundational tool for environmental monitoring, research, and decision-making by integrating various data sources and offering an accessible platform for analysis.
  • 8
    CodeGeeX2

    CodeGeeX2: A More Powerful Multilingual Code Generation Model

    CodeGeeX2 is the second-generation multilingual code generation model from ZhipuAI, built upon the ChatGLM2-6B architecture and trained on 600B code tokens. Compared to the first generation, it delivers a significant boost in programming ability across multiple languages, outperforming even larger models like StarCoder-15B in some benchmarks despite having only 6B parameters. The model excels at code generation, translation, summarization, debugging, and comment generation, and it supports over 100 programming languages. With improved inference efficiency, quantization options, and multi-query/flash attention, CodeGeeX2 achieves faster generation speeds and lightweight deployment, requiring as little as 6GB GPU memory at INT4 precision. Its backend powers the CodeGeeX IDE plugins for VS Code, JetBrains, and other editors, offering developers interactive AI assistance with features like infilling and cross-file completion.
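    A minimal completion sketch in the spirit of the repository's quick start: a leading "# language:" comment steers the target language; the checkpoint id and generation settings here are illustrative.

    ```python
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).half().cuda()
    model = model.eval()

    # The language tag comment tells CodeGeeX2 which language to generate.
    prompt = "# language: Python\n# write a bubble sort function\n"
    inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_length=256, top_k=1)
    print(tokenizer.decode(outputs[0]))
    ```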
  • 9
    CogVLM

    A state-of-the-art open visual language model

    CogVLM is an open-source visual–language model suite—and its GUI-oriented sibling CogAgent—aimed at image understanding, grounding, and multi-turn dialogue, with optional agent actions on real UI screenshots. The flagship CogVLM-17B combines ~10B visual parameters with ~7B language parameters and supports 490×490 inputs; CogAgent-18B extends this to 1120×1120 and adds plan/next-action outputs plus grounded operation coordinates for GUI tasks. The repo provides multiple ways to run models (CLI, web demo, and OpenAI-Vision–style APIs), along with quantization options that reduce VRAM needs (e.g., 4-bit). It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
  • 10
    CogView4

    CogView4, CogView3-Plus and CogView3 (ECCV 2024)

    CogView4 is the latest generation in the CogView series of vision-language foundation models, developed as a bilingual (Chinese and English) open-source system for high-quality image understanding and generation. Built on top of the GLM framework, it supports multimodal tasks including text-to-image synthesis, image captioning, and visual reasoning. Compared to previous CogView versions, CogView4 introduces architectural upgrades, improved training pipelines, and larger-scale datasets, enabling stronger alignment between textual prompts and generated visual content. It emphasizes bilingual usability, making it well-suited for cross-lingual multimodal applications. The model also supports fine-tuning and downstream customization, extending its applicability to creative content generation, human–computer interaction, and research on vision-language alignment.
  • 11
    Consistency Models

    Official repo for consistency models

    consistency_models is the repository for Consistency Models, a new family of generative models introduced by OpenAI that aim to generate high-quality samples by mapping noise directly into data — circumventing the need for lengthy diffusion chains. It builds on and extends diffusion model frameworks (e.g. based on the guided-diffusion codebase), adding techniques like consistency distillation and consistency training to enable fast, often one-step, sample generation. The repo is implemented in PyTorch and includes support for large-scale experiments on datasets like ImageNet-64 and LSUN variants. It also contains checkpointed models, evaluation scripts, and variants of sampling / editing algorithms described in the paper. Because consistency models reduce the number of inference steps, they are promising for real-time or low-latency generative systems.
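    A schematic of the multistep consistency sampling loop described in the paper, not the repo's actual API; `f` stands in for a trained consistency function.

    ```python
    import torch

    def multistep_consistency_sampling(f, sigmas, shape, sigma_min=0.002, device="cpu"):
        """Schematic multistep sampling for a trained consistency function f(x, sigma).

        f maps a noisy input at noise level sigma directly to a clean estimate, so the
        first call already yields a sample; extra steps trade compute for quality.
        """
        x = sigmas[0] * torch.randn(shape, device=device)   # start from pure noise at sigma_max
        x = f(x, sigmas[0])                                  # one-step sample
        for sigma in sigmas[1:]:
            noise = torch.randn_like(x)
            x_noisy = x + (sigma ** 2 - sigma_min ** 2) ** 0.5 * noise   # re-noise to level sigma
            x = f(x_noisy, sigma)                                        # map back toward the data
        return x

    # Dummy consistency function (clamping stands in for a real network) to show the call pattern.
    sample = multistep_consistency_sampling(
        f=lambda x, sigma: x.clamp(-1, 1),
        sigmas=[80.0, 20.0, 5.0],
        shape=(1, 3, 64, 64),
    )
    print(sample.shape)
    ```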
  • 12
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ConvNeXt V2 is an evolution of the ConvNeXt architecture that co-designs convolutional networks alongside self-supervised learning. The V2 version introduces a fully convolutional masked autoencoder (FCMAE) framework where parts of the image are masked and the network reconstructs the missing content, marrying convolutional inductive bias with powerful pretraining. A key innovation is a new Global Response Normalization (GRN) layer added to the ConvNeXt backbone, which enhances feature competition across channels. The result is a convnet that competes strongly with transformer architectures on recognition benchmarks while being efficient and hardware-friendly. The repository provides official PyTorch implementations for multiple model sizes (Atto, Femto, Pico, up through Huge), conversion from JAX weights, code for pretraining/fine-tuning, and pretrained checkpoints. It supports both self-supervised pretraining and supervised fine-tuning.
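    The GRN layer is small enough to sketch directly; the module below mirrors the paper's formulation for channels-last feature maps and is illustrative rather than the repository's exact code.

    ```python
    import torch
    import torch.nn as nn

    class GRN(nn.Module):
        """Global Response Normalization for channels-last inputs of shape (N, H, W, C)."""

        def __init__(self, dim: int):
            super().__init__()
            self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
            self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)    # global L2 response per channel
            nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)     # divisive normalization across channels
            return self.gamma * (x * nx) + self.beta + x         # scale, shift, residual

    # Example: a 56x56 feature map with 96 channels passes through with its shape unchanged.
    feat = torch.randn(2, 56, 56, 96)
    print(GRN(96)(feat).shape)   # torch.Size([2, 56, 56, 96])
    ```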
  • 13
    DeepGEMM

    Clean and efficient FP8 GEMM kernels with fine-grained scaling

    DeepGEMM is a specialized CUDA library for efficient, high-performance general matrix multiplication (GEMM) operations, with particular focus on low-precision formats such as FP8 (and experimental support for BF16). The library is designed to work cleanly and simply, avoiding overly templated or heavily abstracted code, while still delivering performance that rivals expert-tuned libraries. It supports both standard and “grouped” GEMMs, which is useful for architectures like Mixture of Experts (MoE) that require segmented matrix multiplications. One distinguishing aspect is that DeepGEMM compiles its kernels at runtime (via a lightweight Just-In-Time (JIT) module), so users don’t need to precompile CUDA kernels before installation. Despite its lean design, it includes scaling strategies (fine-grained scaling) and optimizations inspired by cutting edge systems (drawing from ideas in CUTLASS, CuTe) but in a more streamlined form.
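    To illustrate what fine-grained scaling means, here is a plain-PyTorch sketch of per-block FP8 quantization; it is conceptual only and not DeepGEMM's API.

    ```python
    import torch

    def per_block_fp8_quantize(x: torch.Tensor, block: int = 128):
        """Illustrative per-(1 x block) FP8 quantization with fine-grained scales.

        Each contiguous run of `block` values along the last dim gets its own scale,
        so an outlier in one block does not crush the precision of the others.
        """
        n, k = x.shape
        xb = x.view(n, k // block, block)
        amax = xb.abs().amax(dim=-1, keepdim=True).clamp(min=1e-4)
        scale = amax / 448.0                        # 448 is the largest normal float8_e4m3fn value
        q = (xb / scale).to(torch.float8_e4m3fn)    # quantize block by block
        return q.view(n, k), scale.squeeze(-1)      # FP8 data plus one scale per block

    a_q, a_scale = per_block_fp8_quantize(torch.randn(64, 512))
    print(a_q.dtype, a_scale.shape)   # torch.float8_e4m3fn torch.Size([64, 4])
    ```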
  • 14
    DeepSDF

    Learning Continuous Signed Distance Functions for Shape Representation

    DeepSDF is a deep learning framework for continuous 3D shape representation using Signed Distance Functions (SDFs), as presented in the CVPR 2019 paper DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation by Park et al. The framework learns a continuous implicit function that maps 3D coordinates to their corresponding signed distances from object surfaces, allowing compact, high-fidelity shape modeling. Unlike traditional discrete voxel grids or meshes, DeepSDF encodes shapes as continuous neural representations that can be smoothly interpolated and used for reconstruction, generation, and analysis. The repository provides complete tooling for preprocessing mesh datasets (e.g., ShapeNet), training DeepSDF models, reconstructing meshes from learned latent codes, and quantitatively evaluating results with metrics such as Chamfer Distance and Earth Mover’s Distance.
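    A toy decoder in the spirit of DeepSDF; the real network is a deeper MLP with a skip connection and per-shape latent codes optimized during training, so this is only a shape sketch.

    ```python
    import torch
    import torch.nn as nn

    class TinyDeepSDF(nn.Module):
        """Toy decoder in the spirit of DeepSDF: (latent code, xyz) -> signed distance."""

        def __init__(self, latent_dim: int = 256, hidden: int = 512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Tanh(),   # SDF targets are clamped/normalized during training
            )

        def forward(self, latent: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([latent, xyz], dim=-1)).squeeze(-1)

    # Query the predicted signed distance at 1024 points for a single shape code.
    model = TinyDeepSDF()
    code = torch.randn(256).expand(1024, 256)   # one per-shape latent, repeated per query point
    points = torch.rand(1024, 3) * 2 - 1        # query coordinates in [-1, 1]^3
    print(model(code, points).shape)            # torch.Size([1024])
    ```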
  • 15
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    DeepSeek-MoE (“DeepSeek MoE”) is the DeepSeek open implementation of a Mixture-of-Experts (MoE) model architecture meant to increase parameter efficiency by activating only a subset of “expert” submodules per input. The repository introduces fine-grained expert segmentation and shared expert isolation to improve specialization while controlling compute cost. For example, their MoE variant with 16.4B parameters claims comparable or better performance to standard dense models like DeepSeek 7B or LLaMA2 7B using about 40% of the total compute. The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The licensing is MIT for code, with a “Model License” applied to the models.
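    An inference sketch following the repo's Transformers quick start for the chat variant; it assumes the bfloat16 weights fit across the available GPUs.

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/deepseek-moe-16b-chat"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
    )

    # Only a subset of experts is activated per token, so compute stays well below a dense 16B model.
    messages = [{"role": "user", "content": "Explain expert routing in a mixture-of-experts model."}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    outputs = model.generate(inputs.to(model.device), max_new_tokens=128)
    print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
    ```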
  • 16
    DeepSeek Prover V2

    Advancing Formal Mathematical Reasoning via Reinforcement Learning

    DeepSeek-Prover-V2 is DeepSeek’s specialized model for formal theorem proving, particularly targeting proof in Lean 4. The repository describes how they use recursive proof decomposition by prompting DeepSeek-V3 to break complex theorems into subgoals, synthesize proof sketches, and then combine them to bootstrap training data. They then fine-tune via reinforcement learning with binary correct/incorrect feedback to integrate informal reasoning with formal proof behavior. The repo releases two model sizes (7B and 671B) and provides evaluation performance (e.g. pass rates on MiniF2F, results on ProverBench) as well as prompt / usage examples for proof generation in Lean 4. It also includes a PDF of the paper or project overview and sample formalization datasets. Because theorem proving is a cutting-edge area in LLM research, Prover-V2 is positioned as a pushing-forward effort in formal reasoning for LLMs.
  • 17
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    DeepSeek-VL is DeepSeek’s first open vision-language model and the starting point of their multimodal stack. It processes an image together with a text prompt, so it can answer questions about images, caption and classify them, and reason about visual content in context, including real-world inputs such as documents, charts, and web or app screenshots. The repository provides pointers to the released weights (1.3B and 7B, in base and chat variants), evaluation results on standard vision-language benchmarks, and inference code for running image-plus-prompt queries to produce text output. DeepSeek-VL is the predecessor of the newer DeepSeek-VL2, which moves to a Mixture-of-Experts backbone while following a similar design philosophy.
  • 18
    DeepSeek-V3.1-Terminus

    685B model with improved agents and consistency

    DeepSeek-V3.1-Terminus is an updated release in the DeepSeek-V3.1 series, maintaining the original model’s large-scale reasoning and generative capabilities while addressing several key user-reported issues. It improves language consistency, reducing mixed Chinese-English outputs and eliminating abnormal characters, enhancing reliability in multilingual scenarios. The update also refines agentic capabilities, especially for the Code Agent and Search Agent, leading to better tool integration and query handling. Benchmarks show small but notable gains, such as raising MMLU-Pro from 84.8 to 85.0, GPQA-Diamond from 80.1 to 80.7, and SWE Verified from 66.0 to 68.4, along with significant improvements in agent benchmarks like BrowseComp (30.0 → 38.5) and Terminal-bench (31.3 → 36.7). The model structure remains the same as DeepSeek-V3, ensuring compatibility with existing deployment methods, with updated inference demos provided for community use.
  • 19
    DeiT (Data-efficient Image Transformers)
    DeiT (Data-efficient Image Transformers) shows that Vision Transformers can be trained competitively on ImageNet-1k without external data by using strong training recipes and knowledge distillation. Its key idea is a specialized distillation strategy—including a learnable “distillation token”—that lets a transformer learn effectively from a CNN or transformer teacher on modest-scale datasets. The project provides compact ViT variants (Tiny/Small/Base) that achieve excellent accuracy–throughput trade-offs, making transformers practical beyond massive pretraining regimes. Training involves carefully tuned augmentations, regularization, and optimization schedules to stabilize learning and improve sample efficiency. The repo offers pretrained checkpoints, reference scripts, and ablation studies that clarify which ingredients matter most for data-efficient ViT training.
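    A short classification sketch using the torch.hub entry point the README documents; the image path is a placeholder.

    ```python
    import torch
    from PIL import Image
    from torchvision import transforms

    # DeiT publishes its pretrained variants through torch.hub (they are also mirrored in timm).
    model = torch.hub.load("facebookresearch/deit:main", "deit_base_patch16_224", pretrained=True)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)   # placeholder image

    with torch.no_grad():
        logits = model(img)
    print(logits.softmax(dim=-1).topk(5))
    ```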
  • 20
    Denoiser

    Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

    Denoiser is a real-time speech enhancement model operating directly on raw waveforms, designed to clean noisy audio while running efficiently on CPU. It uses a causal encoder-decoder architecture with skip connections, optimized with losses defined both in the time domain and frequency domain to better suppress noise while preserving speech. Unlike models that operate on spectrograms alone, this design enables lower latency and coherent waveform output. The implementation includes data augmentation techniques applied to the raw waveforms (e.g. noise mixing, reverberation) to improve model robustness and generalization to diverse noise types. The project supports both offline denoising (batch inference) and live audio processing (e.g. via loopback audio interfaces), making it practical for real-time use in calls or recording. The codebase includes training and evaluation scripts, configuration management via Hydra, and pretrained models on standard noise datasets.
  • 21
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    DiT (Diffusion Transformer) is a powerful architecture that applies transformer-based modeling directly to diffusion generative processes for high-quality image synthesis. Unlike CNN-based diffusion models, DiT represents the diffusion process in the latent space and processes image tokens through transformer blocks with learned positional encodings, offering scalability and superior sample quality. The model architecture parallels large language models but for image tokens—each block refines noisy latent representations toward cleaner outputs through iterative denoising steps. DiT achieves strong results on benchmarks like ImageNet and LSUN while being architecturally simple and highly modular. It supports variable resolution, conditioning on class or text embeddings, and integration with latent autoencoders (like those used in Stable Diffusion).
  • 22
    Dia-1.6B

    Dia-1.6B generates lifelike English dialogue and vocal expressions

    Dia-1.6B is a 1.6 billion parameter text-to-speech model by Nari Labs that generates high-fidelity dialogue directly from transcripts. Designed for realistic vocal performance, Dia supports expressive features like emotion, tone control, and non-verbal cues such as laughter, coughing, or sighs. The model accepts speaker conditioning through audio prompts, allowing limited voice cloning and speaker consistency across generations. It is optimized for English and built for real-time performance on enterprise GPUs, though CPU and quantized versions are planned. The format supports [S1]/[S2] tags to differentiate speakers and integrates easily into Python workflows. While not tuned to a specific voice, user-provided audio can guide output style. Licensed under Apache 2.0, Dia is intended for research and educational use, with explicit restrictions on misuse like identity mimicry or deceptive content.
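    A hedged usage sketch based on the project's quick start; the package import, `Dia.from_pretrained`, and the checkpoint id are assumptions that may differ from the current API.

    ```python
    import soundfile as sf
    from dia.model import Dia   # package path and class name assumed from the quick start

    model = Dia.from_pretrained("nari-labs/Dia-1.6B")   # checkpoint id assumed

    # [S1]/[S2] tags switch speakers; parenthetical cues like (laughs) drive non-verbal sounds.
    script = "[S1] Did you hear the news? [S2] No, tell me. [S1] (laughs) You will not believe it."
    audio = model.generate(script)

    sf.write("dialogue.wav", audio, 44100)   # write the generated 44.1 kHz waveform to disk
    ```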
  • 23
    DreamCraft3D

    Official implementation of DreamCraft3D

    DreamCraft3D is the official implementation of "DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior" (ICLR 2024), released under the deepseek-ai organization. It produces high-fidelity, view-consistent 3D assets from a single reference image (typically generated from a text prompt) through a coarse-to-fine pipeline: geometry sculpting guided by score distillation from 2D diffusion priors, followed by a texture boosting stage. Its key idea, bootstrapped score distillation, fine-tunes a DreamBooth-style diffusion model on multi-view renders of the evolving 3D scene and uses it as the guidance signal, so the 3D representation and its 2D prior improve each other and avoid the blurry, inconsistent textures common to earlier score-distillation methods. The repository provides the training and inference code, configuration files, and utilities for preparing reference images.
  • 24
    FLUX.1-Krea-dev

    Text-to-image model optimized for artistic quality and safe generation

    FLUX.1-Krea-dev is a 12 billion parameter rectified flow transformer for text-to-image generation, developed by Black Forest Labs in collaboration with Krea. It delivers aesthetic, high-quality outputs focused on photography and visual coherence, making it a strong competitor to closed-source models. Trained using guidance distillation, it offers efficient inference while preserving creative fidelity. The model is distributed under a non-commercial license, with conditions to prevent misuse and support ethical AI development. FLUX.1-Krea-dev is available via Diffusers and ComfyUI, and integrates with the FluxPipeline for streamlined usage. Developers can use it for personal or scientific projects, but must comply with safety filters and content restrictions. Extensive pre- and post-training mitigations were applied to minimize risks like NSFW or abusive content generation.
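    A minimal Diffusers sketch using the FluxPipeline integration mentioned above; the gated checkpoint id is assumed to be black-forest-labs/FLUX.1-Krea-dev and requires accepting the license on Hugging Face first.

    ```python
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()   # helps when the 12B transformer does not fit in VRAM

    image = pipe(
        "a golden retriever puppy on a foggy beach at sunrise, 35mm film look",
        guidance_scale=4.5,
        num_inference_steps=28,
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]
    image.save("krea-dev-sample.png")
    ```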
  • 25
    FastVLM

    This repository contains the official implementation of FastVLM

    FastVLM is an efficiency-focused vision-language modeling stack that introduces FastViTHD, a hybrid vision encoder engineered to emit fewer visual tokens and slash encoding time, especially for high-resolution images. Instead of elaborate pruning stages, the design trades off resolution and token count through input scaling, simplifying the pipeline while maintaining strong accuracy. Reported results highlight dramatic speedups in time-to-first-token and competitive quality versus contemporary open VLMs, including comparisons across small and larger variants. The repository documents model variants, showcases head-to-head numbers against known baselines, and explains how the encoder integrates with common LLM backbones. Apple’s research brief frames FastVLM as targeting real-time or latency-sensitive scenarios, where lowering visual token pressure is critical to interactive UX. In short, it’s a practical recipe to make VLMs fast without exotic token-selection heuristics.