
Showing 253 open source projects for "nvidia gpu mod"

  • 1
    NVIDIA AgentIQ

    The NVIDIA AgentIQ toolkit is an open-source library

    NVIDIA AgentIQ is an open-source toolkit designed to efficiently connect, evaluate, and accelerate teams of AI agents. It provides a framework-agnostic platform that integrates seamlessly with various data sources and tools, enabling developers to build composable and reusable agentic workflows. By treating agents, tools, and workflows as simple function calls, AgentIQ facilitates rapid development and optimization of AI-driven applications, enhancing collaboration and efficiency in complex tasks.
    Downloads: 10 This Week
  • 2
    NVIDIA Earth2Studio

    Open-source deep-learning framework

    NVIDIA Earth2Studio is an open-source Python package and framework designed to accelerate the development and deployment of AI-driven weather and climate science workflows. It provides a unified API that lets researchers, data scientists, and engineers build complex forecasting and analysis pipelines by combining modular prognostic and diagnostic AI models with a diverse range of real-world data sources such as global forecast systems, reanalysis datasets, and satellite feeds. ...
    Downloads: 3 This Week
  • 3
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. A short ASR usage sketch follows this entry.
    Downloads: 3 This Week
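
    A minimal, hedged sketch of the NeMo ASR collection in use; the checkpoint name "stt_en_conformer_ctc_small" and the audio file path are illustrative assumptions, not details from the entry above:

        # Assumes NeMo's ASR collection is installed (e.g., pip install "nemo_toolkit[asr]").
        import nemo.collections.asr as nemo_asr

        # Download a pretrained CTC model (the checkpoint name is a placeholder).
        asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

        # Transcribe a list of local audio files and print the first result.
        transcripts = asr_model.transcribe(["sample.wav"])
        print(transcripts[0])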
  • 4
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    ...For more information, see NVIDIA Merlin on the NVIDIA developer website. Transform data (ETL) for preprocessing and engineering features. Accelerate your existing training pipelines in TensorFlow, PyTorch, or FastAI by leveraging optimized, custom-built data loaders. Scale large deep learning recommender models by distributing large embedding tables that exceed available GPU and CPU memory. A short preprocessing sketch follows this entry.
    Downloads: 2 This Week
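
    A hedged sketch of Merlin-style preprocessing with NVTabular; the toy DataFrame and column names are illustrative assumptions, and the exact API surface can vary across Merlin releases:

        import pandas as pd
        import nvtabular as nvt

        # Tiny toy interaction table; real workflows read Parquet/CSV datasets instead.
        df = pd.DataFrame({"user_id": [1, 2, 1], "item_id": [10, 11, 10], "click": [0, 1, 1]})

        # Encode categorical columns into contiguous integer IDs for embedding tables.
        cat_features = ["user_id", "item_id"] >> nvt.ops.Categorify()
        workflow = nvt.Workflow(cat_features)

        dataset = nvt.Dataset(df)
        workflow.fit(dataset)
        transformed = workflow.transform(dataset).to_ddf().compute()
        print(transformed.head())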
  • 5
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE (Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build a secure, privacy-preserving offering for distributed, multi-party collaboration.
    Downloads: 2 This Week
  • 6
    NVIDIA Isaac Sim

    NVIDIA Isaac Sim is an open-source application on NVIDIA Omniverse

    NVIDIA Isaac Sim is a high-fidelity robotics simulation platform built on NVIDIA Omniverse to develop, test, and validate AI-driven robots in physically accurate virtual environments. It supports a wide array of robotics formats (URDF, MJCF, CAD), includes GPU-accelerated physics, and features immersive RTX rendering and multisensory simulation.
    Downloads: 14 This Week
  • 7
    NVIDIA GPU Exporter

    Nvidia GPU exporter for Prometheus using the nvidia-smi binary

    Nvidia GPU exporter for Prometheus that uses the nvidia-smi binary to gather metrics. There are many Nvidia GPU exporters out there; however, they have problems such as being unmaintained, not providing pre-built binaries, depending on Linux and/or Docker, or targeting enterprise setups (DCGM). An illustrative sketch of the nvidia-smi-to-Prometheus approach follows this entry.
    Downloads: 7 This Week
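
    An illustrative Python sketch of the general nvidia-smi-to-Prometheus approach (the project itself is a Go exporter; the metric names and port below are placeholders):

        import subprocess
        import time

        from prometheus_client import Gauge, start_http_server

        GPU_UTIL = Gauge("nvidia_gpu_utilization_percent", "GPU utilization", ["index", "name"])
        GPU_MEM = Gauge("nvidia_gpu_memory_used_mib", "GPU memory used (MiB)", ["index", "name"])

        QUERY = [
            "nvidia-smi",
            "--query-gpu=index,name,utilization.gpu,memory.used",
            "--format=csv,noheader,nounits",
        ]

        def scrape() -> None:
            # Each CSV line looks like: "0, NVIDIA GeForce RTX 3090, 17, 1024"
            for line in subprocess.check_output(QUERY, text=True).strip().splitlines():
                index, name, util, mem = [field.strip() for field in line.split(", ")]
                GPU_UTIL.labels(index=index, name=name).set(float(util))
                GPU_MEM.labels(index=index, name=name).set(float(mem))

        if __name__ == "__main__":
            start_http_server(9835)
            while True:
                scrape()
                time.sleep(15)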
  • 8
    NVIDIA NeMo Framework

    Scalable generative AI framework built for researchers and developers

    NVIDIA NeMo is a scalable, cloud-native generative AI framework aimed at researchers and PyTorch developers working on large language models, multimodal models, and speech AI (ASR and TTS), with growing support for computer vision. It provides collections of domain-specific modules and reference implementations that make it easier to pre-train, fine-tune, and deploy very large models on multi-GPU and multi-node infrastructure.
    Downloads: 3 This Week
  • 9
    NVIDIA Isaac Lab

    Unified framework for robot learning built on NVIDIA Isaac Sim

    Isaac Lab is an open-source modular robotics learning framework built atop Isaac Sim. It simplifies research workflows across reinforcement learning, imitation learning, and motion planning by offering robust, GPU-accelerated simulation with realistic sensor and physics fidelity, making it well suited for sim-to-real robot training. It is compatible with and optimized for recent Isaac Sim releases (e.g., 5.0 and 4.5). GPU-accelerated, high-fidelity physics and sensor simulation suitable for complex learning...
    Downloads: 3 This Week
  • 10
    NVIDIA Isaac GR00T

    NVIDIA Isaac GR00T N1.5 is the world's first open foundation model

    NVIDIA Isaac GR00T N1.5 is an open-source foundation model engineered for generalized humanoid robot reasoning and manipulation skills. It accepts multimodal inputs, such as language and images, and uses a diffusion transformer architecture built upon vision-language encoders, enabling adaptive robot behaviors across diverse environments.
    Downloads: 1 This Week
  • 11
    NVIDIA GPU Operator

    NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes

    ...These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Runtime, automatic node labeling, DCGM-based monitoring, and others.
    Downloads: 1 This Week
  • 12
    Zenith

    Sort of like top or htop but with zoom-able charts, CPU, GPU

    ...Install the "musl-tools" package on Debian/Ubuntu derivatives, "musl-gcc" on Fedora, and the equivalent on other distributions from their standard repos. Building with NVIDIA support in a virtual machine requires some extra setup, since the VM software typically cannot expose the NVIDIA GPU directly. Unlike the runtime zenith script, the Makefile has been set up to detect only the presence of the required NVIDIA libraries, so it is possible to build with NVIDIA support even without an NVIDIA GPU.
    Downloads: 32 This Week
  • 13
    Sunshine

    Self-hosted game stream host for Moonlight

    Sunshine is an open-source self‑hosted cloud gaming server that implements NVIDIA’s GameStream protocol. Compatible with Moonlight clients across platforms, it supports low‑latency streaming via software or hardware encoding (AMD/Intel/NVIDIA) and offers a browser‑based control UI for pairing.
    Downloads: 976 This Week
  • 14
    AimAhead

    The fastest AI powered Aimbot

    AimAhead is an AI-powered aim assist tool designed for high-speed target acquisition. It captures the screen, processes the image through a selected AI model to detect enemies, and then aims toward them. Optimized for NVIDIA graphics cards, AimAhead converts ONNX models to TensorRT engine files for enhanced performance, achieving between 100 and 200 cycles per second depending on the model used.
    Downloads: 383 This Week
  • 15
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. A short FP8 usage sketch follows this entry.
    Downloads: 1 This Week
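
    A hedged sketch based on Transformer Engine's documented quickstart pattern; it assumes an FP8-capable NVIDIA GPU (e.g., Hopper) and the transformer_engine package installed alongside PyTorch:

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common import recipe

        # Drop-in replacement for torch.nn.Linear with FP8 support.
        layer = te.Linear(768, 3072, bias=True).cuda()
        x = torch.randn(2048, 768, device="cuda")

        # Delayed-scaling FP8 recipe; the margin/format values here are just common choices.
        fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
            y = layer(x)

        y.sum().backward()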
  • 16
    Nvitop

    An interactive NVIDIA-GPU process viewer and beyond

    nvitop is an interactive NVIDIA device and process monitoring tool. It has a colorful and informative interface that continuously updates the status of the devices and processes. As a resource monitor, it includes many features and options, such as tree-view, environment variable viewing, process filtering, process metrics monitoring, etc. Beyond that, the package also ships a CUDA device selection tool, nvisel, for deep learning researchers. A short sketch of the Python API follows this entry.
    Downloads: 4 This Week
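
    A short, hedged sketch of nvitop's Python API for programmatic monitoring (the interactive viewer itself is started with the nvitop command); method names follow the documented Device API:

        from nvitop import Device

        for device in Device.all():
            print(
                f"GPU {device.index} {device.name()}: "
                f"{device.gpu_utilization()}% util, "
                f"{device.memory_used_human()} / {device.memory_total_human()}"
            )
            # processes() maps PID -> GpuProcess for everything running on this device.
            for pid, process in device.processes().items():
                print(f"  pid {pid}: {process.command()}")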
  • 17
    HyDE Linux

    Aesthetic, dynamic and minimal dots for Arch hyprland

    ...While installing HyDE alongside another DE/WM should work, it is a heavily customized setup, so it will conflict with your GTK/Qt theming, Shell, SDDM, GRUB, etc., and you do so at your own risk. The install script auto-detects an NVIDIA card and installs the nvidia-dkms drivers for your kernel.
    Downloads: 6 This Week
  • 18
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. A short compile sketch follows this entry.
    Downloads: 8 This Week
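
    A hedged sketch of the ahead-of-time compile step described above; the module, shapes, and precision settings are placeholders, and the exact frontend used (TorchScript vs. Dynamo) depends on the installed Torch-TensorRT version:

        import torch
        import torch_tensorrt

        class TinyNet(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

            def forward(self, x):
                return torch.relu(self.conv(x))

        model = TinyNet().eval().cuda()

        # Explicit compile step: convert the module into a TensorRT-backed module.
        trt_model = torch_tensorrt.compile(
            model,
            inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
            enabled_precisions={torch.float32},  # add torch.half to allow FP16 kernels
        )

        out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
        print(out.shape)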
  • 19
    GameMode

    Optimise Linux system performance on demand

    GameMode is a daemon/lib combo for Linux that allows games to request a set of optimizations be temporarily applied to the host OS and/or a game process. GameMode was designed primarily as a stop-gap solution to problems with the Intel and AMD CPU power save or on-demand governors but is now host to a range of optimization features and configurations.
    Downloads: 8 This Week
  • 20
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference by leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. With NVIDIA Ampere Architecture GPUs, TensorRT also leverages sparse tensor cores to provide an additional performance boost. A short engine-building sketch follows this entry.
    Downloads: 23 This Week
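
    A hedged sketch of building a serialized engine from an ONNX file with the TensorRT Python API (TensorRT 8.x-style calls; "model.onnx" and the 1 GiB workspace limit are placeholders):

        import tensorrt as trt

        logger = trt.Logger(trt.Logger.WARNING)
        builder = trt.Builder(logger)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
        )
        parser = trt.OnnxParser(network, logger)

        with open("model.onnx", "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("failed to parse ONNX model")

        config = builder.create_builder_config()
        config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

        # Build and persist a serialized engine for later deserialization at inference time.
        engine_bytes = builder.build_serialized_network(network, config)
        with open("model.engine", "wb") as f:
            f.write(engine_bytes)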
  • 21
    Isaac ROS Visual SLAM

    Visual SLAM/odometry package based on NVIDIA-accelerated cuVSLAM

    Discover a faster, easier way to build advanced AI robotics applications with the NVIDIA Isaac™ ROS collection of accelerated computing packages and AI models, bringing NVIDIA acceleration to ROS developers everywhere. Isaac ROS Visual SLAM provides a high-performance, best-in-class ROS 2 package for VSLAM (visual simultaneous localization and mapping). This package uses one or more stereo cameras and optionally an IMU to estimate odometry as an input to navigation.
    Downloads: 3 This Week
  • 22
    TensorRT Node for ComfyUI

    Enables the best performance on NVIDIA RTX Graphics Cards

    ...This is particularly attractive for power users who run many generations or who host ComfyUI on dedicated hardware and want to squeeze out every bit of GPU performance. In short, it’s about taking ComfyUI from “it runs” to “it runs fast” on NVIDIA GPUs.
    Downloads: 1 This Week
  • 23
    Megatron

    Ongoing research training transformer models at scale

    ...Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
    Downloads: 2 This Week
  • 24
    BioNeMo

    BioNeMo Framework: For building and adapting AI models

    BioNeMo is an AI-powered framework developed by NVIDIA for protein and molecular generation using deep learning models. It provides researchers and developers with tools to design, analyze, and optimize biological molecules, aiding in drug discovery and synthetic biology applications.
    Downloads: 1 This Week
  • 25
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ...ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. It supports a variety of frameworks, operating systems, and hardware platforms, with built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training. A short inference sketch follows this entry.
    Downloads: 63 This Week
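
    A short, hedged sketch of basic ONNX Runtime inference; "model.onnx" and the input shape are placeholders, and CUDAExecutionProvider only takes effect when the GPU package and drivers are present:

        import numpy as np
        import onnxruntime as ort

        # Falls back to CPU if the CUDA provider is unavailable.
        session = ort.InferenceSession(
            "model.onnx",
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )

        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)

        outputs = session.run(None, {input_name: x})
        print(outputs[0].shape)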