  • 1
    AI Models

    A repository of trained models

    All models (at least currently) are supported by chaiNNer, an upscaling GUI that handles both very simple and very complex tasks by letting you "chain" nodes together; it is highly recommended for images. If you want to upscale videos with these models, use enhancr instead, since it supports TensorRT and can upscale video at incredible speeds. Its GUI is one of the best-looking applications out there and is personally my go-to option; while builds are paid, it is well worth the money and currently beats any other GUI.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    AICommand

    ChatGPT integration with Unity Editor

    AICommand is a proof-of-concept integration that lets you control the Unity Editor using natural language via ChatGPT. Instead of manually hunting through menus or writing editor scripts, you can prompt the editor to perform tasks, generate snippets, and automate actions. The project showcases an emerging workflow where LLMs augment game and tooling development by understanding intent and producing editor-side outcomes. It provides a minimal setup that connects your OpenAI API key and surfaces a command window right inside Unity. The aim is to experiment with agentic assistance inside the editor loop, turning repetitive steps into promptable actions. While positioned as experimental, it demonstrates the potential of pairing Unity’s extensibility with AI-driven command execution.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    AlphaGenome

    Programmatic access to the AlphaGenome model

    The AlphaGenome API provides access to AlphaGenome, Google DeepMind’s unifying model for deciphering the regulatory code within DNA sequences. This repository contains client-side code, examples, and documentation to help you use the AlphaGenome API. AlphaGenome offers multimodal predictions, encompassing diverse functional outputs such as gene expression, splicing patterns, chromatin features, and contact maps. The model analyzes DNA sequences of up to 1 million base pairs in length and can deliver predictions at single-base-pair resolution for most outputs. AlphaGenome achieves state-of-the-art performance across a range of genomic prediction benchmarks, including numerous diverse variant effect prediction tasks.
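
    A minimal query sketch in the spirit of the repository's examples; the client entry points (dna_client.create, predict_interval, OutputType.RNA_SEQ) and the interval coordinates are assumptions to check against the current docs:

        # Hypothetical sketch; client names follow the repo's examples but may differ.
        from alphagenome.data import genome
        from alphagenome.models import dna_client

        model = dna_client.create("YOUR_API_KEY")  # placeholder API key
        interval = genome.Interval(chromosome="chr22", start=35_677_410, end=36_725_986)
        outputs = model.predict_interval(
            interval=interval,
            requested_outputs=[dna_client.OutputType.RNA_SEQ],  # one of the functional outputs
        )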
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Apple Neural Engine (ANE) Transformers

    Reference implementation of the Transformer architecture optimized for Apple Neural Engine

    ANE Transformers is a reference PyTorch implementation of Transformer components optimized for Apple Neural Engine on devices with A14 or newer and on Macs with M1 or newer chips. It demonstrates how to structure attention and related layers to achieve substantial speedups and lower peak memory compared to baseline implementations when deployed to ANE. The repository targets practitioners who want to keep familiar PyTorch modeling while preparing models for Core ML/ANE execution paths. Documentation highlights reported improvements in throughput and memory residency, while releases track incremental fixes and packaging updates. The project sits alongside related Apple ML repos that focus on deploying attention-based models efficiently to ANE-equipped hardware. In short, it’s a practical blueprint for adapting Transformers to Apple’s dedicated ML accelerator without rewriting entire model stacks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks like summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch transformer library. It requires installing the model-specific library from GitHub to function properly. Once set up, it can be used to compute similarity scores with minimal code. BLEURT-20-D12 enables more flexible deployment in PyTorch-based workflows for evaluating language generation outputs.
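
    A minimal scoring sketch, assuming the companion bleurt-pytorch library installed from GitHub and the matching Hugging Face repo ID:

        import torch
        from bleurt_pytorch import BleurtForSequenceClassification, BleurtTokenizer

        # Repo ID assumed from the model card.
        model = BleurtForSequenceClassification.from_pretrained("lucadiliello/BLEURT-20-D12")
        tokenizer = BleurtTokenizer.from_pretrained("lucadiliello/BLEURT-20-D12")
        model.eval()

        references = ["a bird chirps by the window"]
        candidates = ["a bird is singing near the window"]
        with torch.no_grad():
            inputs = tokenizer(references, candidates, padding="longest", return_tensors="pt")
            scores = model(**inputs).logits.flatten().tolist()  # higher = more similar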
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    Bio_ClinicalBERT

    ClinicalBERT model trained on MIMIC notes for clinical NLP tasks

    Bio_ClinicalBERT is a domain-specific language model tailored for clinical natural language processing (NLP), extending BioBERT with additional training on clinical notes. It was initialized from BioBERT-Base v1.0 and further pre-trained on all clinical notes from the MIMIC-III database (~880M words), which includes ICU patient records. The training focused on improving performance in tasks like named entity recognition and natural language inference within the healthcare domain. Notes were processed using rule-based sectioning and tokenized with SciSpacy. Training was done for 150,000 steps using a batch size of 32, max sequence length of 128, and a masked language modeling objective with a 0.15 mask probability. Bio_ClinicalBERT is available through Hugging Face's Transformers library for easy integration. It supports medical AI research and applications involving electronic health record understanding, clinical decision support, and biomedical information extraction.
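
    A minimal loading sketch via the Transformers library (model ID as published on Hugging Face):

        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
        model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

        note = "Patient denies chest pain or shortness of breath."
        inputs = tokenizer(note, return_tensors="pt")
        embeddings = model(**inputs).last_hidden_state  # contextual embeddings for downstream tasks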
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    CLIP

    CLIP: predict the most relevant text snippet given an image

    CLIP (Contrastive Language-Image Pretraining) is a neural model that links images and text in a shared embedding space, allowing zero-shot image classification, similarity search, and multimodal alignment. It was trained on large sets of (image, caption) pairs using a contrastive objective: images and their matching text are pulled together in embedding space, while mismatches are pushed apart. Once trained, you can give it any text labels and ask it to pick which label best matches a given image—even without explicit training for that classification task. The repository provides code for model architecture, preprocessing transforms, evaluation pipelines, and example inference scripts. Because it generalizes to arbitrary labels via text prompts, CLIP is a powerful tool for tasks that involve interpreting images in terms of descriptive language.
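
    A zero-shot classification sketch using the repository's clip package:

        import clip
        import torch
        from PIL import Image

        model, preprocess = clip.load("ViT-B/32")
        image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # any local image
        text = clip.tokenize(["a dog", "a cat", "a car"])  # arbitrary candidate labels

        with torch.no_grad():
            logits_per_image, _ = model(image, text)
            probs = logits_per_image.softmax(dim=-1)  # highest probability = best-matching label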
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It achieves an impressive 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, showcasing its robustness in open-domain settings. Its training dataset is uncurated and web-sourced, meaning it reflects the biases and risks of large-scale internet data. The model is intended for research use and is not recommended for real-world deployment without domain-specific testing and safety evaluations.
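
    A zero-shot sketch via OpenCLIP; the pretrained tag is assumed to mirror the model name:

        import open_clip
        import torch
        from PIL import Image

        model, _, preprocess = open_clip.create_model_and_transforms(
            "ViT-bigG-14", pretrained="laion2b_s39b_b160k"  # tag assumed from the checkpoint name
        )
        tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

        image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
        text = tokenizer(["a diagram", "a dog", "a cat"])
        with torch.no_grad():
            img_f = model.encode_image(image)
            txt_f = model.encode_text(text)
            img_f = img_f / img_f.norm(dim=-1, keepdim=True)
            txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
            probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)  # zero-shot label probabilities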
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    CSM (Conversational Speech Model)

    A Conversational Speech Generation Model

    The CSM (Conversational Speech Model) is a speech generation model developed by Sesame AI that creates RVQ audio codes from text and audio inputs. It uses a Llama backbone and a smaller audio decoder to produce audio codes for realistic speech synthesis. The model has been fine-tuned for interactive voice demos and is hosted on platforms like Hugging Face for testing. CSM offers a flexible setup and is compatible with CUDA-enabled GPUs for efficient execution.
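
    A quick-start sketch following the repository's README; the generator module and load_csm_1b helper are assumed from that example:

        import torchaudio
        from generator import load_csm_1b  # module shipped in the CSM repo (assumption)

        generator = load_csm_1b(device="cuda")  # requires a CUDA-capable GPU
        audio = generator.generate(
            text="Hello from Sesame.",
            speaker=0,
            context=[],
            max_audio_length_ms=10_000,
        )
        torchaudio.save("audio.wav", audio.unsqueeze(0).cpu(), generator.sample_rate)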
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    ChatGLM2-6B

    An Open Bilingual Chat LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and generation, offering improved accuracy and user experience compared to earlier models.
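
    A minimal chat sketch following the project's README (remote code required):

        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        response, history = model.chat(tokenizer, "Hello", history=[])
        print(response)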
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    The chatgpt-retrieval-plugin repository implements a semantic retrieval backend that lets ChatGPT (or GPT-powered tools) access private or organizational documents in natural language by combining vector search, embedding models, and plugin infrastructure. It can serve as a custom GPT plugin or function-calling backend so that a chat session can “look up” relevant documents based on user queries, inject those results into context, and respond more knowledgeably about a private knowledge base. The repo provides code for ingestion pipelines (embedding documents), APIs for querying, local server components, and privacy / PII detection modules. It also contains plugin manifest files (OpenAPI spec, plugin JSON) so that the retrieval backend can be registered in a plugin ecosystem. Because retrieval is often needed to make LLMs “know what’s in your docs” without leaking everything, this plugin aims to be a secure, flexible building block for retrieval-augmented generation (RAG) systems.
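
    A hedged sketch of querying a locally running plugin server; the /query endpoint and bearer-token auth follow the repo's API, while the host, port, and response handling here are assumptions:

        import requests

        BEARER_TOKEN = "your-configured-token"  # placeholder for the server's token
        resp = requests.post(
            "http://localhost:8000/query",  # assumed local deployment
            headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
            json={"queries": [{"query": "What did the Q3 report say about revenue?", "top_k": 3}]},
        )
        for query_result in resp.json()["results"]:
            for chunk in query_result["results"]:
                print(chunk["score"], chunk["text"][:80])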
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the instruction-fine-tuned Alpaca large model to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and undergo secondary pre-training on Chinese data, further improving basic Chinese semantic understanding. The Chinese Alpaca model is additionally fine-tuned on Chinese instruction data, which significantly improves its ability to understand and execute instructions.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    Claude Code Action

    Claude Code action for GitHub PRs

    Claude Code Action is a general-purpose GitHub Action that brings Anthropic’s Claude Code into pull requests and issues to answer questions, review changes, and even implement code edits. It can wake up automatically when someone mentions @claude, when a PR or issue meets certain conditions, or when a workflow step provides an explicit prompt. The action is designed to understand diffs and surrounding context, so its comments and suggestions are grounded in what actually changed rather than the whole repository. Teams can configure how and when it participates, including authentication via Anthropic’s API as well as cloud providers like Bedrock or Vertex, and control whether it posts inline comments, summary reviews, or pushes commits. It supports streaming responses and longer interactions so that reviewers can iterate naturally in the same PR thread.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Clay Foundation Model

    The Clay Foundation Model - An open source AI model and interface

    The Clay Foundation Model is an open-source AI model and interface designed to provide comprehensive data and insights about Earth. It aims to serve as a foundational tool for environmental monitoring, research, and decision-making by integrating various data sources and offering an accessible platform for analysis.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    CodeGeeX2

    CodeGeeX2: A More Powerful Multilingual Code Generation Model

    CodeGeeX2 is the second-generation multilingual code generation model from ZhipuAI, built upon the ChatGLM2-6B architecture and trained on 600B code tokens. Compared to the first generation, it delivers a significant boost in programming ability across multiple languages, outperforming even larger models like StarCoder-15B in some benchmarks despite having only 6B parameters. The model excels at code generation, translation, summarization, debugging, and comment generation, and it supports over 100 programming languages. With improved inference efficiency, quantization options, and multi-query/flash attention, CodeGeeX2 achieves faster generation speeds and lightweight deployment, requiring as little as 6GB GPU memory at INT4 precision. Its backend powers the CodeGeeX IDE plugins for VS Code, JetBrains, and other editors, offering developers interactive AI assistance with features like infilling and cross-file completion.
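
    A minimal completion sketch in the style of the project's README; the "# language:" comment tag is the model's prompting convention:

        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/codegeex2-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        prompt = "# language: Python\n# write a bubble sort function\n"
        inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(inputs, max_length=256)
        print(tokenizer.decode(outputs[0]))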
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    CogVLM

    A state-of-the-art open visual language model

    CogVLM is an open-source visual–language model suite—and its GUI-oriented sibling CogAgent—aimed at image understanding, grounding, and multi-turn dialogue, with optional agent actions on real UI screenshots. The flagship CogVLM-17B combines ~10B visual parameters with ~7B language parameters and supports 490×490 inputs; CogAgent-18B extends this to 1120×1120 and adds plan/next-action outputs plus grounded operation coordinates for GUI tasks. The repo provides multiple ways to run models (CLI, web demo, and OpenAI-Vision–style APIs), along with quantization options that reduce VRAM needs (e.g., 4-bit). It includes checkpoints for chat, base, and grounding variants, plus recipes for model-parallel inference and LoRA fine-tuning. The documentation covers task prompts for general dialogue, visual grounding (box→caption, caption→box, caption+boxes), and GUI agent workflows that produce structured actions with bounding boxes.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    CogView4

    CogView4, CogView3-Plus, and CogView3 (ECCV 2024)

    CogView4 is the latest generation in the CogView series of vision-language foundation models, developed as a bilingual (Chinese and English) open-source system for high-quality image understanding and generation. Built on top of the GLM framework, it supports multimodal tasks including text-to-image synthesis, image captioning, and visual reasoning. Compared to previous CogView versions, CogView4 introduces architectural upgrades, improved training pipelines, and larger-scale datasets, enabling stronger alignment between textual prompts and generated visual content. It emphasizes bilingual usability, making it well-suited for cross-lingual multimodal applications. The model also supports fine-tuning and downstream customization, extending its applicability to creative content generation, human–computer interaction, and research on vision-language alignment.
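
    A hedged text-to-image sketch, assuming the Diffusers CogView4Pipeline integration; verify the entry point and model ID against the repo's current instructions:

        import torch
        from diffusers import CogView4Pipeline  # assumed integration; check the repo

        pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
        pipe.to("cuda")

        image = pipe(prompt="a watercolor painting of West Lake at dawn").images[0]
        image.save("cogview4.png")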
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Consistency Models

    Official repo for consistency models

    consistency_models is the repository for Consistency Models, a new family of generative models introduced by OpenAI that aim to generate high-quality samples by mapping noise directly into data — circumventing the need for lengthy diffusion chains. It builds on and extends diffusion model frameworks (e.g. based on the guided-diffusion codebase), adding techniques like consistency distillation and consistency training to enable fast, often one-step, sample generation. The repo is implemented in PyTorch and includes support for large-scale experiments on datasets like ImageNet-64 and LSUN variants. It also contains checkpointed models, evaluation scripts, and variants of sampling / editing algorithms described in the paper. Because consistency models reduce the number of inference steps, they are promising for real-time or low-latency generative systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ConvNeXt V2 is an evolution of the ConvNeXt architecture that co-designs convolutional networks alongside self-supervised learning. The V2 version introduces a fully convolutional masked autoencoder (FCMAE) framework where parts of the image are masked and the network reconstructs the missing content, marrying convolutional inductive bias with powerful pretraining. A key innovation is a new Global Response Normalization (GRN) layer added to the ConvNeXt backbone, which enhances feature competition across channels. The result is a convnet that competes strongly with transformer architectures on recognition benchmarks while being efficient and hardware-friendly. The repository provides official PyTorch implementations for multiple model sizes (Atto, Femto, Pico, up through Huge), conversion from JAX weights, code for pretraining/fine-tuning, and pretrained checkpoints. It supports both self-supervised pretraining and supervised fine-tuning.
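
    A minimal inference sketch, assuming the converted checkpoints published through timm; the exact model tag is an assumption:

        import timm
        import torch

        model = timm.create_model("convnextv2_atto.fcmae_ft_in1k", pretrained=True)  # tag assumed
        model.eval()

        x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image batch
        with torch.no_grad():
            logits = model(x)  # 1000-way ImageNet-1k class scores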
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    DeepGEMM

    Clean and efficient FP8 GEMM kernels with fine-grained scaling

    DeepGEMM is a specialized CUDA library for efficient, high-performance general matrix multiplication (GEMM) operations, with particular focus on low-precision formats such as FP8 (and experimental support for BF16). The library is designed to work cleanly and simply, avoiding overly templated or heavily abstracted code, while still delivering performance that rivals expert-tuned libraries. It supports both standard and “grouped” GEMMs, which is useful for architectures like Mixture of Experts (MoE) that require segmented matrix multiplications. One distinguishing aspect is that DeepGEMM compiles its kernels at runtime (via a lightweight Just-In-Time (JIT) module), so users don’t need to precompile CUDA kernels before installation. Despite its lean design, it includes scaling strategies (fine-grained scaling) and optimizations inspired by cutting edge systems (drawing from ideas in CUTLASS, CuTe) but in a more streamlined form.
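
    To illustrate what fine-grained scaling means (a conceptual PyTorch sketch, not DeepGEMM's actual API), each 1 x 128 tile gets its own FP8 scale:

        import torch

        def quantize_per_block(x: torch.Tensor, block: int = 128):
            """Scale each 1 x block tile so its max magnitude fits FP8 E4M3 range (~448)."""
            m, k = x.shape
            tiles = x.view(m, k // block, block)
            scales = tiles.abs().amax(dim=-1, keepdim=True) / 448.0
            q = (tiles / scales).to(torch.float8_e4m3fn)  # requires PyTorch 2.1+
            return q.view(m, k), scales.squeeze(-1)

        a = torch.randn(256, 1024)
        a_fp8, a_scales = quantize_per_block(a)
        # A real kernel multiplies FP8 tiles and folds the per-block scales into accumulation.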
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    DeepSDF

    Learning Continuous Signed Distance Functions for Shape Representation

    DeepSDF is a deep learning framework for continuous 3D shape representation using Signed Distance Functions (SDFs), as presented in the CVPR 2019 paper DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation by Park et al. The framework learns a continuous implicit function that maps 3D coordinates to their corresponding signed distances from object surfaces, allowing compact, high-fidelity shape modeling. Unlike traditional discrete voxel grids or meshes, DeepSDF encodes shapes as continuous neural representations that can be smoothly interpolated and used for reconstruction, generation, and analysis. The repository provides complete tooling for preprocessing mesh datasets (e.g., ShapeNet), training DeepSDF models, reconstructing meshes from learned latent codes, and quantitatively evaluating results with metrics such as Chamfer Distance and Earth Mover’s Distance.
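
    A toy sketch of the core idea, an MLP that maps a latent shape code plus a 3D coordinate to a signed distance (the official decoder is deeper and uses weight normalization):

        import torch
        import torch.nn as nn

        class SDFDecoder(nn.Module):
            def __init__(self, latent_dim: int = 256, hidden: int = 512):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1), nn.Tanh(),  # signed distance squashed to [-1, 1]
                )

            def forward(self, latent, xyz):
                return self.net(torch.cat([latent, xyz], dim=-1))

        decoder = SDFDecoder()
        latent = torch.randn(256).expand(1024, 256)  # one shape code for all query points
        xyz = torch.rand(1024, 3) * 2 - 1            # queries in [-1, 1]^3
        sdf = decoder(latent, xyz)                   # negative inside the surface, positive outside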
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    DeepSeek-MoE (“DeepSeek MoE”) is DeepSeek’s open implementation of a Mixture-of-Experts (MoE) model architecture meant to increase parameter efficiency by activating only a subset of “expert” submodules per input. The repository introduces fine-grained expert segmentation and shared expert isolation to improve specialization while controlling compute cost. For example, their MoE variant with 16.4B parameters claims performance comparable to or better than standard dense models like DeepSeek 7B or LLaMA2 7B while using about 40% of the compute. The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The licensing is MIT for code, with a separate Model License applied to the model weights.
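
    The quick start mentioned above looks roughly like this for the chat variant (remote code required):

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "deepseek-ai/deepseek-moe-16b-chat"
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(
            name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
        )

        messages = [{"role": "user", "content": "Who are you?"}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(input_ids, max_new_tokens=100)
        print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))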
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    DeepSeek Prover V2

    Advancing Formal Mathematical Reasoning via Reinforcement Learning

    DeepSeek-Prover-V2 is DeepSeek’s specialized model for formal theorem proving, particularly targeting proofs in Lean 4. The repository describes how they use recursive proof decomposition, prompting DeepSeek-V3 to break complex theorems into subgoals, synthesize proof sketches, and then combine them to bootstrap training data. They then fine-tune via reinforcement learning with binary correct/incorrect feedback to integrate informal reasoning with formal proof behavior. The repo releases two model sizes (7B and 671B) and provides evaluation results (e.g., pass rates on MiniF2F and results on ProverBench) as well as prompt and usage examples for proof generation in Lean 4. It also includes the accompanying paper and sample formalization datasets. Because theorem proving is a cutting-edge area of LLM research, Prover-V2 is positioned as a push forward in formal reasoning for LLMs.
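
    For a sense of the target formalism, a toy Lean 4 goal of the kind such a prover must close (illustrative only, not taken from MiniF2F or ProverBench):

        -- Toy example: commutativity of natural-number addition, closed by a library lemma.
        theorem add_comm' (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b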
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    DeepSeek VL

    Towards Real-World Vision-Language Understanding

    DeepSeek-VL is DeepSeek’s initial vision-language model that anchors their multimodal stack. It enables understanding and generation across visual and textual modalities—meaning it can process an image + a prompt, answer questions about images, caption, classify, or reason about visuals in context. The model is likely used internally as the visual encoder backbone for agent use cases, to ground perception in downstream tasks (e.g. answering questions about a screenshot). The repository includes model weights (or pointers to them), evaluation metrics on standard vision + language benchmarks, and configuration or architecture files. It also supports inference tools for forwarding image + prompt through the model to produce text output. DeepSeek-VL is a predecessor to their newer VL2 model, and presumably shares core design philosophy but with earlier scaling, fewer enhancements, or capability tradeoffs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    DeepSeek-V3.1-Terminus

    685B model with improved agents and consistency

    DeepSeek-V3.1-Terminus is an updated release in the DeepSeek-V3.1 series, maintaining the original model’s large-scale reasoning and generative capabilities while addressing several key user-reported issues. It improves language consistency, reducing mixed Chinese-English outputs and eliminating abnormal characters, enhancing reliability in multilingual scenarios. The update also refines agentic capabilities, especially for the Code Agent and Search Agent, leading to better tool integration and query handling. Benchmarks show small but notable gains, such as raising MMLU-Pro from 84.8 to 85.0, GPQA-Diamond from 80.1 to 80.7, and SWE Verified from 66.0 to 68.4, along with significant improvements in agent benchmarks like BrowseComp (30.0 → 38.5) and Terminal-bench (31.3 → 36.7). The model structure remains the same as DeepSeek-V3, ensuring compatibility with existing deployment methods, with updated inference demos provided for community use.
    Downloads: 0 This Week
    Last Update:
    See Project