
30 projects for "nvidia gpu mod" with 1 filter applied:

  • 1
    NVIDIA AgentIQ

    The NVIDIA AgentIQ toolkit is an open-source library

    NVIDIA AgentIQ is an open-source toolkit designed to efficiently connect, evaluate, and accelerate teams of AI agents. It provides a framework-agnostic platform that integrates seamlessly with various data sources and tools, enabling developers to build composable and reusable agentic workflows. By treating agents, tools, and workflows as simple function calls (see the function-call sketch after this list), AgentIQ facilitates rapid development and optimization of AI-driven applications, enhancing collaboration and efficiency in complex tasks.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 2
    NVIDIA Earth2Studio

    Open-source deep-learning framework

    NVIDIA Earth2Studio is an open-source Python package and framework designed to accelerate the development and deployment of AI-driven weather and climate science workflows. It provides a unified API that lets researchers, data scientists, and engineers build complex forecasting and analysis pipelines by combining modular prognostic and diagnostic AI models with a diverse range of real-world data sources such as global forecast systems, reanalysis datasets, and satellite feeds. ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 3
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    ...For more information, see NVIDIA Merlin on the NVIDIA developer website. Transform data (ETL) for preprocessing and engineering features. Accelerate your existing training pipelines in TensorFlow, PyTorch, or FastAI by leveraging optimized, custom-built data loaders. Scale large deep learning recommender models by distributing large embedding tables that exceed available GPU and CPU memory.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 4
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE (Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, Scikit-learn, XGBoost, etc.) to a federated paradigm (see the federated-averaging sketch after this list). It enables platform developers to build secure, privacy-preserving offerings for distributed multi-party collaboration.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 5
    NVIDIA Isaac GR00T

    NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for generalized humanoid robot reasoning and skills

    NVIDIA Isaac GR00T N1.5 is an open-source foundation model engineered for generalized humanoid robot reasoning and manipulation skills. It accepts multimodal inputs, such as language and images, and uses a diffusion transformer architecture built upon vision-language encoders, enabling adaptive robot behaviors across diverse environments.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 6
    Megatron

    Ongoing research training transformer models at scale

    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency/performance

    ...Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 NVIDIA H800 GPUs at a cost of approximately $5.58 million (see the cost arithmetic after this list).
    Downloads: 66 This Week
    Last Update:
    See Project
  • 8
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    FlashMLA is a high-performance decoding kernel library designed especially for Multi-Head Latent Attention (MLA) workloads, targeting NVIDIA Hopper GPU architectures. It provides optimized kernels for MLA decoding, including support for variable-length sequences, helping reduce latency and increase throughput in model inference systems using that attention style. The library supports both BF16 and FP16 data types, and includes a paged KV cache implementation with a block size of 64 to efficiently manage memory during decoding (see the paged-cache sketch after this list). ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 9
    waifu2x ncnn Vulkan

    ncnn version of the waifu2x converter; runs fast on GPUs via Vulkan

    ncnn implementation of the waifu2x converter. Runs fast on Intel, AMD, NVIDIA, and Apple Silicon GPUs with the Vulkan API. waifu2x-ncnn-vulkan uses the ncnn project as its universal neural network inference framework.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 10
    WhisperLive

    A nearly-live implementation of OpenAI's Whisper

    ...It runs as a server–client system in which the server hosts a Whisper backend and clients stream audio to be transcribed with very low delay. The project supports multiple inference backends, including Faster-Whisper, NVIDIA TensorRT, and OpenVINO, allowing you to target GPUs and different CPU architectures efficiently. It can handle microphone input, pre-recorded audio files, and network streams such as RTSP and HLS, making it flexible for live events, monitoring, or accessibility workflows. Configuration options let you control the number of clients, maximum connection time, and threading behavior so the server can be tuned for different deployment environments. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 11
    SimpleLLM

    950-line, minimal, extensible LLM inference engine built from scratch

    ...It provides the core components of an LLM runtime, such as tokenization, batching, and asynchronous execution, without the abstraction overhead of more complex engines, making it easier for developers and researchers to understand and modify. Designed to run efficiently on high-end GPUs like the NVIDIA H100, with support for models such as OpenAI/gpt-oss-120b, SimpleLLM implements continuous batching and event-driven inference loops to maximize hardware utilization and throughput (see the continuous-batching sketch after this list). Its straightforward code structure allows anyone experimenting with custom kernels, new batching strategies, or inference optimizations to trace execution from input to output with minimal cognitive overhead.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    OuteTTS

    Interface for OuteTTS models

    ...The project supports multiple backends including llama.cpp (Python bindings and server), Hugging Face Transformers, ExLlamaV2, VLLM and a JavaScript interface via Transformers.js, allowing it to run on CPUs, NVIDIA CUDA GPUs, AMD ROCm, Vulkan-capable GPUs, and Apple Metal. It also includes a notion of speaker profiles: you can create a speaker from a short audio sample, save it as JSON, and reuse it for consistent voice identity across generations and sessions. For best quality, the model is designed to work with a reference speaker clip and will inherit emotion, style, and accent from that reference.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 13
    Transformers4Rec

    Transformers4Rec is a flexible and efficient library

    Transformers4Rec is an advanced recommendation system library that leverages Transformer models for sequential and session-based recommendations. The library works as a bridge between natural language processing (NLP) and recommender systems (RecSys) by integrating with one of the most popular NLP frameworks, Hugging Face Transformers (HF). Transformers4Rec makes state-of-the-art transformer architectures available for RecSys researchers and industry practitioners. Traditional recommendation...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 14
    Parallel WaveGAN

    Unofficial Parallel WaveGAN

    ...Its main goal is to provide a real-time neural vocoder that can turn mel spectrograms into high-quality speech audio efficiently. The repository is designed to work hand-in-hand with ESPnet-TTS and NVIDIA Tacotron2-style front ends, so you can build complete TTS or singing voice synthesis pipelines. It includes a large collection of “Kaldi-style” recipes for many datasets such as LJSpeech, LibriTTS, VCTK, JSUT, CMU Arctic, and multiple singing voice corpora in Japanese, Mandarin, Korean, and more. The project provides pre-trained models, Colab demos, and example configurations, allowing researchers to quickly evaluate vocoder quality or adapt models to new datasets.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    HiFi-GAN

    Generative Adversarial Networks for Efficient and High Fidelity Speech

    ...In experiments on LJSpeech, HiFi-GAN was shown to achieve mean opinion scores close to human recordings while synthesizing 22.05 kHz audio up to ~168× faster than real time on an NVIDIA V100 GPU (see the throughput arithmetic after this list). A smaller configuration trades a bit of quality for even higher speed and can run more than 13× faster than real time on CPU, making it suitable for deployment scenarios without powerful GPUs.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    OpenSeq2Seq

    Toolkit for efficient experimentation with Speech Recognition

    ...It supports multi-GPU and multi-node data-parallel training, and integrates with Horovod to scale out across large GPU clusters. Mixed-precision support (float16) is optimized for NVIDIA Volta and Turing GPUs, allowing significant speedups and memory savings without sacrificing model quality. The project comes with configuration-driven training scripts, documentation, and examples that demonstrate how to set up pipelines for tasks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Tiramisu

    Polyhedral compiler for expressing fast and portable data algorithms

    ...The Tiramisu compiler is based on the polyhedral model, so it can express a large set of loop optimizations and data layout transformations. Currently, it targets (1) multicore x86 CPUs, (2) NVIDIA GPUs, (3) Xilinx FPGAs (Vivado HLS), and (4) distributed machines (using MPI). It is designed to enable easy integration of code generators for new architectures.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Mocha.jl

    Deep Learning framework for Julia

    Mocha.jl is a deep learning framework for Julia, inspired by the C++ Caffe framework. It offers efficient implementations of gradient descent solvers and common neural network layers, supports optional unsupervised pre-training, and allows switching to a GPU backend for accelerated performance. Development of Mocha.jl happened in the relatively early days of Julia. Now that both Julia and its ecosystem have evolved significantly, and with some exciting new tech such as writing GPU kernels...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    OpenPCTV

    OpenPCTV is a Linux distribution based on Enigma2/VDR/XBMC

    OpenPCTV is a Linux distribution based on Enigma2, VDR, and KODI.
    Downloads: 27 This Week
    Last Update:
    See Project
  • 20
    Accelerated Feature Extraction Tool

    A fast GPU accelerated feature extraction software for speech analysis

    ...It incorporates standard MFCC, PLP, and TRAPS features. The tool is specially designed to process very large audio data sets. It uses GPU acceleration if a compatible GPU is available (CUDA as well as OpenCL; NVIDIA, AMD, and Intel GPUs are supported). The CPU SSE intrinsic instruction set is used when no compatible GPU is present. The output files are stored in HTK format. The software is developed at the Department of Cybernetics at the University of West Bohemia in Pilsen.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    osgXI is a general interface of GPU effects, resource managers, and game development components. It also includes an NVIDIA Cg module and an Autodesk Maya exporter. osgXI is based on the OpenSceneGraph project.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    This Eclipse plugin helps you create, edit, and verify your GLSL (OpenGL Shading Language) and NVIDIA Cg vertex and pixel shaders directly inside the Eclipse IDE. Each shader can be edited in an editor with syntax coloring and error/warning markers.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 23
    osgNV is a C++ cross-platform library written for the latest OpenSceneGraph (OSG, www.openscenegraph.org) using CMake as the build system. It brings the power of NVIDIA Cg shaders and other NVIDIA OpenGL extensions to your OSG applications.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    GPU library for simple development of OpenGL-based GPGPU applications, offscreen rendering and shading techniques.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Java language bindings for the NVIDIA CUDA Driver API and CUDA Runtime API.
    Downloads: 0 This Week
    Last Update:
    See Project
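
The following sketches expand on entries above. For the NVIDIA AgentIQ entry: the toolkit's pitch is that agents, tools, and workflows compose like ordinary function calls. The sketch below is hypothetical Python, not the AgentIQ API; register_tool, web_search, and research_agent are invented names used only to illustrate the idea.

    # Hypothetical illustration of "tools and agents as function calls".
    # None of these names come from AgentIQ itself.
    from typing import Callable

    TOOLS: dict[str, Callable[[str], str]] = {}

    def register_tool(fn: Callable[[str], str]) -> Callable[[str], str]:
        """Register a plain function so an agent can call it by name."""
        TOOLS[fn.__name__] = fn
        return fn

    @register_tool
    def web_search(query: str) -> str:
        # Stand-in for a real data-source integration.
        return f"top result for {query!r}"

    def research_agent(task: str) -> str:
        """An 'agent' is just another function that calls tools."""
        evidence = TOOLS["web_search"](task)
        return f"answer to {task!r} based on: {evidence}"

    def workflow(tasks: list[str]) -> list[str]:
        # A workflow composes agents the same way agents compose tools.
        return [research_agent(t) for t in tasks]

    if __name__ == "__main__":
        print(workflow(["latest GPU architectures"]))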
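
For the NVIDIA FLARE entry: the core idea is adapting an existing training loop to a federated setting in which each party trains on its own data and only model updates are aggregated. The sketch below is generic federated averaging in NumPy, not FLARE's API; local_update and federated_round are invented for illustration.

    # Generic federated averaging (FedAvg) sketch -- not the NVIDIA FLARE API.
    import numpy as np

    def local_update(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
        """Stand-in for one client's local training round (one gradient-like step)."""
        gradient = local_data.mean(axis=0) - global_weights  # toy objective
        return global_weights + 0.1 * gradient

    def federated_round(global_weights: np.ndarray, clients: list) -> np.ndarray:
        # Each party trains on its own private data; only weights are shared.
        updates = [local_update(global_weights, data) for data in clients]
        # The server aggregates by averaging the clients' updates.
        return np.mean(updates, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clients = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]  # 3 data silos
        weights = np.zeros(4)
        for _ in range(50):
            weights = federated_round(weights, clients)
        print(weights)  # drifts toward the average of the clients' means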
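
For the DeepSeek-V3 entry, the quoted figures can be sanity-checked: 55 days on 2,048 H800 GPUs is roughly 2.7 million GPU-hours, which lands in the same ballpark as the stated ~$5.58 million if one assumes a rental price of about $2 per GPU-hour (the price is an assumption, not from the entry).

    # Back-of-the-envelope check of the DeepSeek-V3 training-cost figure.
    days = 55
    gpus = 2_048
    gpu_hours = days * 24 * gpus          # ~2.70 million GPU-hours
    assumed_price_per_gpu_hour = 2.00     # USD, assumed rental rate (not from the entry)
    estimated_cost = gpu_hours * assumed_price_per_gpu_hour
    print(f"{gpu_hours:,} GPU-hours -> ~${estimated_cost/1e6:.2f}M (entry states ~$5.58M)")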
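
For the FlashMLA entry: a paged KV cache with a block size of 64 stores each sequence's keys/values in fixed-size blocks referenced through a block table, so variable-length sequences do not need contiguous memory. The bookkeeping sketch below is generic and hypothetical; it is not FlashMLA's kernel interface.

    # Generic paged-KV-cache bookkeeping with block size 64 (hypothetical, not FlashMLA's API).
    import math

    BLOCK_SIZE = 64

    class PagedKVCache:
        def __init__(self, num_blocks: int):
            self.free_blocks = list(range(num_blocks))      # pool of physical blocks
            self.block_tables: dict[int, list[int]] = {}    # seq_id -> physical block ids

        def append_tokens(self, seq_id: int, current_len: int, new_tokens: int) -> list[int]:
            """Allocate just enough new blocks to hold `new_tokens` more KV entries."""
            needed = math.ceil((current_len + new_tokens) / BLOCK_SIZE)
            table = self.block_tables.setdefault(seq_id, [])
            while len(table) < needed:
                table.append(self.free_blocks.pop())  # grab any free physical block
            return table  # decode kernels index KV memory through this table

    if __name__ == "__main__":
        cache = PagedKVCache(num_blocks=16)
        print(cache.append_tokens(seq_id=0, current_len=0, new_tokens=100))   # 2 blocks
        print(cache.append_tokens(seq_id=0, current_len=100, new_tokens=30))  # grows to 3 blocks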
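
For the SimpleLLM entry: continuous batching means the engine does not wait for a whole batch to finish; after every decoding step, finished sequences leave the batch and waiting requests are admitted. The loop below is a generic, hypothetical sketch of that scheduling idea, not SimpleLLM's code.

    # Generic continuous-batching loop (hypothetical sketch, not SimpleLLM's implementation).
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        prompt: str
        max_new_tokens: int
        generated: list = field(default_factory=list)

    def decode_step(batch: list) -> None:
        """Stand-in for one forward pass that emits one token per active request."""
        for req in batch:
            req.generated.append("<tok>")

    def serve(requests: list, max_batch_size: int = 4) -> None:
        waiting, running = deque(requests), []
        while waiting or running:
            # Admit new requests as soon as slots free up (no waiting for the whole batch).
            while waiting and len(running) < max_batch_size:
                running.append(waiting.popleft())
            decode_step(running)
            # Retire finished sequences immediately; the rest keep decoding.
            running = [r for r in running if len(r.generated) < r.max_new_tokens]

    if __name__ == "__main__":
        reqs = [Request(f"prompt {i}", max_new_tokens=3 + i) for i in range(6)]
        serve(reqs)
        print([len(r.generated) for r in reqs])  # [3, 4, 5, 6, 7, 8]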
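
For the HiFi-GAN entry, the speed claims translate directly into sample throughput: 22.05 kHz audio at ~168× real time is roughly 3.7 million samples per second on the V100, and the 13× CPU figure is about 0.29 million samples per second.

    # Converting HiFi-GAN's real-time factors into raw sample throughput.
    sample_rate_hz = 22_050

    for label, real_time_factor in [("V100 GPU", 168), ("CPU (small config)", 13)]:
        samples_per_second = sample_rate_hz * real_time_factor
        seconds_of_audio_per_second = real_time_factor  # by definition of the real-time factor
        print(f"{label}: ~{samples_per_second/1e6:.2f}M samples/s "
              f"({seconds_of_audio_per_second} s of audio per wall-clock second)")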