Showing 598 open source projects for "train"

  • 1
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code.
    Downloads: 5 This Week
    See Project
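
    For illustration, a minimal training entry point following the toolkit's container conventions might look like the sketch below; the SM_* environment variables are set by SageMaker inside the container, and the file name train.py and the artifact written at the end are placeholders.

```python
# train.py - minimal sketch of a script run under the sagemaker-training toolkit.
import json
import os

def main():
    # Hyperparameters passed to the training job arrive as a JSON dict in SM_HPS.
    hps = json.loads(os.environ.get("SM_HPS", "{}"))
    # Input channels and the model output directory are mounted container paths.
    train_dir = os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training")
    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    # ... read training data from train_dir and fit a model using hps ...
    with open(os.path.join(model_dir, "model.txt"), "w") as f:
        f.write("placeholder artifact")

if __name__ == "__main__":
    main()
```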
  • 2
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    OpenVINO™ Training Extensions provide a convenient environment for training deep learning models and converting them with the OpenVINO™ toolkit for optimized inference. When ote_cli is installed in the virtual environment, you can use the ote command-line interface to perform various actions for templates related to the chosen task type, such as training, evaluating, and exporting. ote train trains a model (a particular model template) on a dataset and saves the results in two files. ote optimize optimizes a pre-trained model using NNCF or POT, depending on the model format. ...
    Downloads: 3 This Week
    See Project
  • 3
    Phenaki - Pytorch

    Implementation of Phenaki Video, which uses Mask GIT

    ...It will also combine another technique involving a token critic for potentially even better generations. A new paper suggests that instead of relying on the predicted probabilities of each token as a measure of confidence, one can train an extra critic to decide what to iteratively mask during sampling. This repository will also endeavor to let researchers train on text-to-image and then on text-to-video. Similarly, for unconditional training, a researcher should be able to first train on images and then fine-tune on video.
    Downloads: 6 This Week
    See Project
  • 4
    Ludwig

    A codeless platform to train and test deep learning models

    Ludwig is a toolbox built on top of TensorFlow that allows you to train and test deep learning models without writing code. All you need to provide is a CSV file containing your data, a list of columns to use as inputs, and a list of columns to use as outputs; Ludwig will do the rest. Simple commands can be used to train models both locally and in a distributed way, and to use them to predict on new data.
    Downloads: 0 This Week
    See Project
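
    As a rough sketch of Ludwig's programmatic API (the column names review and sentiment and the CSV file names are hypothetical; only the declarative config structure is Ludwig's):

```python
# Minimal sketch of Ludwig's Python API; column and file names are hypothetical.
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
}

model = LudwigModel(config)
train_stats, _, output_dir = model.train(dataset="data.csv")   # train locally
predictions, _ = model.predict(dataset="new_data.csv")         # batch predict
```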
  • 5
    Habitat-Lab

    A modular high-level library to train embodied AI agents

    Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as to develop agents that can interact with humans in performing these tasks. It allows users to train agents in a wide variety of single- and multi-agent tasks (e.g., navigation, rearrangement, instruction following, question answering, human following), as well as to define novel tasks. ...
    Downloads: 8 This Week
    See Project
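
    A rough sketch of stepping a Habitat environment with a random agent is below; the config path follows habitat-lab's bundled benchmarks and may differ between releases.

```python
# Sketch: drive a PointNav environment with random actions (illustrative only).
import habitat

# Bundled benchmark config; the exact path is version-dependent.
config = habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    action = env.action_space.sample()   # replace with a trained policy
    observations = env.step(action)
env.close()
```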
  • 6
    Porcupine

    On-device wake word detection powered by deep learning

    ...Linux (x86_64), macOS (x86_64, arm64), and Windows (x86_64). Scalable. It can detect multiple always-listening voice commands with no added runtime footprint. Self-service. Developers can train custom wake word models using Picovoice Console. Porcupine is the right product if you need to detect one or a few static (always-listening) voice commands. If you want to create voice experiences similar to Alexa or Google, see the Picovoice platform.
    Downloads: 5 This Week
    See Project
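
    A minimal detection loop with the pvporcupine Python binding might look like this; the access key placeholder comes from Picovoice Console, and the audio-capture plumbing is omitted.

```python
# Sketch of wake word detection with pvporcupine (audio capture omitted).
import pvporcupine

porcupine = pvporcupine.create(
    access_key="YOUR_ACCESS_KEY",          # placeholder; issued by Picovoice Console
    keywords=["porcupine", "bumblebee"],   # built-in keyword models
)

def on_frame(pcm):
    # pcm: porcupine.frame_length samples of 16-bit, 16 kHz mono audio.
    keyword_index = porcupine.process(pcm)
    if keyword_index >= 0:
        print("detected wake word", keyword_index)
```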
  • 7
    GluonTS

    Probabilistic time series modeling in Python

    GluonTS is a Python package for probabilistic time series modeling, focusing on deep learning-based models. GluonTS requires Python 3.6 or newer, and the easiest way to install it is via pip. As an example, we can train a DeepAR model and make predictions on the simple "airpassengers" dataset (a minimal sketch follows below). The dataset consists of a single time series containing monthly international passenger counts between 1949 and 1960, a total of 144 values (12 years * 12 months). We split the dataset into train and test parts by removing the last three years (36 months) from the train data. ...
    Downloads: 0 This Week
    See Project
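
    A minimal sketch of that DeepAR example, assuming the PyTorch-based estimator; argument names can vary between GluonTS releases.

```python
# Sketch: train DeepAR on "airpassengers" and forecast the held-out months.
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.torch import DeepAREstimator

dataset = get_dataset("airpassengers")   # 144 monthly values, 1949-1960

estimator = DeepAREstimator(
    freq="M",
    prediction_length=36,                # the last three years are held out
    trainer_kwargs={"max_epochs": 5},
)
predictor = estimator.train(dataset.train)
forecasts = list(predictor.predict(dataset.test))
```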
  • 8
    Hivemind

    Decentralized deep learning in PyTorch. Built to train models

    ...Fault-tolerant backpropagation: forward and backward passes succeed even if some nodes are unresponsive or take too long to respond. Decentralized parameter averaging: iteratively aggregate updates from multiple workers without the need to synchronize across the entire network. Train neural networks of arbitrary size: parts of their layers are distributed across the participants with the Decentralized Mixture-of-Experts. If you have successfully trained a model or created a downstream repository with the help of our library, feel free to submit a pull request that adds your project to the list.
    Downloads: 2 This Week
    See Project
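
    A sketch of joining a collaborative run, loosely following hivemind's quickstart; the run_id and batch sizes are illustrative, and real peers would pass initial_peers when creating the DHT.

```python
# Sketch: wrap a local optimizer so updates are averaged across peers.
import torch
import hivemind

model = torch.nn.Linear(16, 2)
base_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

dht = hivemind.DHT(start=True)    # real peers pass initial_peers=[...] here

opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",            # peers sharing a run_id train together
    optimizer=base_opt,
    batch_size_per_step=32,       # samples each peer processes per step
    target_batch_size=4096,       # global batch between averaging rounds
)
```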
  • 9
    SageMaker Python SDK

    Training and deploying machine learning models on Amazon SageMaker

    SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker. With the SDK, you can train and deploy models using popular deep learning frameworks such as Apache MXNet and TensorFlow. You can also train and deploy models with Amazon algorithms, scalable implementations of core machine learning algorithms optimized for SageMaker and GPU training. If you have your own algorithms built into SageMaker-compatible Docker containers, you can train and host models using these as well.
    Downloads: 0 This Week
    See Project
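
    A sketch of launching and deploying a framework training job; the IAM role ARN, S3 path, and train.py entry point are placeholders.

```python
# Sketch: run a PyTorch training job on SageMaker, then deploy an endpoint.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                                # placeholder script
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.13",
    py_version="py39",
)
estimator.fit({"training": "s3://my-bucket/train"})        # placeholder S3 path

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
```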
  • 10
    LLM Datasets

    Curated list of datasets and tools for post-training

    ...Quality is a recurring theme: examples and utilities help filter low-value samples, enforce length limits, and split train/validation consistently so results are comparable. Licensing and provenance are surfaced to encourage compliant usage and to guide dataset selection in commercial settings. For practitioners, the repo is a practical “starting pantry” that accelerates experimentation and helps keep data wrangling from dominating the project timeline.
    Downloads: 2 This Week
    See Project
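
    To make the "quality" theme concrete, here is a generic sketch of the kind of hygiene such tooling performs: deduplication, length limits, and a seeded train/validation split. The function and field names are hypothetical, not the repo's API.

```python
# Illustrative post-training data hygiene: dedup, length cap, seeded split.
import random

def clean(samples, max_chars=8192):
    seen, kept = set(), []
    for sample in samples:
        text = sample["text"].strip()
        if text and len(text) <= max_chars and text not in seen:
            seen.add(text)          # drop empty, over-long, duplicate samples
            kept.append(sample)
    return kept

def train_val_split(samples, val_frac=0.05, seed=42):
    rng = random.Random(seed)       # a fixed seed keeps results comparable
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_frac))
    return shuffled[:cut], shuffled[cut:]
```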
  • 11
    Determined

    Determined, a deep learning training platform

    ...Determined’s cluster scheduling offers first-class support for deep learning and seamless spot instance support. Check out examples of how you can use Determined to train popular deep learning models at scale.
    Downloads: 0 This Week
    See Project
  • 12
    Tokenizers

    Fast State-of-the-Art Tokenizers optimized for Research and Production

    ...Tokenizers provides an implementation of today’s most used tokenizers, with a focus on performance and versatility. These tokenizers are also used in Transformers. Train new vocabularies and tokenize, using today’s most used tokenizers. Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server’s CPU. Easy to use, but also extremely versatile. Designed for both research and production. Full alignment tracking. ...
    Downloads: 6 This Week
    See Project
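
    Training a fresh BPE vocabulary takes a few lines, as in the library's quicktour; corpus.txt is a placeholder file.

```python
# Train a BPE tokenizer from raw text, then encode a sentence.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)   # placeholder corpus

encoding = tokenizer.encode("Hello, world!")
print(encoding.tokens)   # tokens, with alignment back to the input text
```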
  • 13
    Autodistill

    Images to inference with no labeling

    Autodistill uses large, slower foundation models to train small, faster supervised models. Using Autodistill, you can go from unlabeled images to inference on a custom model running at the edge, with no human intervention in between. You can use Autodistill on your own hardware, or use the Roboflow-hosted version of Autodistill to label images in the cloud.
    Downloads: 0 This Week
    See Project
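
    A sketch of the distillation flow, assuming the autodistill-grounded-sam and autodistill-yolov8 plugin packages; the ontology mapping and paths are illustrative.

```python
# Sketch: label images with a foundation model, then train a small model.
from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM
from autodistill_yolov8 import YOLOv8

# Map natural-language prompts to the class names the dataset should use.
base_model = GroundedSAM(ontology=CaptionOntology({"milk bottle": "bottle"}))
base_model.label(input_folder="./images", output_folder="./dataset")

# Distill the auto-labeled dataset into a fast supervised model for the edge.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=200)
```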
  • 14

    LightGBM

    Gradient boosting framework based on decision tree algorithms

    LightGBM or Light Gradient Boosting Machine is a high-performance, open source gradient boosting framework based on decision tree algorithms. Compared to other boosting frameworks, LightGBM offers several advantages in terms of speed, efficiency and accuracy. Parallel experiments have shown that LightGBM can attain linear speed-up through multiple machines for training in specific settings, all while consuming less memory. LightGBM supports parallel and GPU learning, and can handle...
    Downloads: 10 This Week
    See Project
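
    A minimal training sketch on synthetic data:

```python
# Train a small binary classifier with LightGBM on toy data.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # toy binary target

train_set = lgb.Dataset(X, label=y)
params = {"objective": "binary", "num_leaves": 31, "learning_rate": 0.05}

booster = lgb.train(params, train_set, num_boost_round=50)
print(booster.predict(X[:5]))                # predicted class-1 probabilities
```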
  • 15
    MiniMind

    Train a 26M-parameter GPT from scratch in just 2h

    minimind is a framework that enables users to train a 26-million-parameter GPT (Generative Pre-trained Transformer) model from scratch in approximately two hours. It provides a streamlined process for data preparation, model training, and evaluation, making it accessible for individuals and organizations to develop their own language models without extensive computational resources.
    Downloads: 0 This Week
    See Project
  • 16
    GPT-SoVITS

    One minute of voice data can be enough to train a good TTS model

    GPT‑SoVITS is a state-of-the-art voice conversion and TTS system that enables zero‑shot and few‑shot synthesis from a short vocal sample (e.g., 5 seconds). It supports cross‑lingual speech synthesis across English, Chinese, Japanese, Korean, Cantonese, and more. It is powered by a VITS architecture enhanced for few‑sample adaptation and real‑time usability.
    Downloads: 53 This Week
    See Project
  • 17
    SuperDuperDB

    Integrate, train and manage any AI models and APIs with your database

    ...SuperDuperDB enables vector search in your existing database. Integrate and combine models from scikit-learn, PyTorch, and Hugging Face with AI APIs such as OpenAI to build even the most complex AI applications and workflows. Train models on the data in your datastore simply by querying, without additional ingestion or pre-processing.
    Downloads: 8 This Week
    See Project
  • 18
    nanoGPT

    The simplest, fastest repository for training/finetuning models

    ...It distills the GPT architecture into a few hundred lines of Python code, making it far easier to understand than large, production-scale implementations. The repo is organized with a training pipeline (dataset preprocessing, model definition, optimizer, training loop) and inference script so you can train a small GPT on text datasets like Shakespeare or custom corpora. It emphasizes readability and clarity: the training loop is cleanly written, and the code avoids heavy abstractions, letting students follow the architecture step by step. While simple, it can still train non-trivial models on modern GPUs and generate coherent text. ...
    Downloads: 2 This Week
    See Project
  • 19
    SurvivalManual

    Libre Survival Manual for Android with offline in mind

    ...But it doesn't have to be used only in emergency situations; it can also be useful on outdoor trips, walks, and camps, and for learning about nature and yourself. This is not only fun: you can also train skills (making fire, building shelter, ...) that you might need in a catastrophe. Some things work best with practice in a relaxed environment, when you also have time for experiments. Refugees are also welcome to use this application to prepare for and guide them on their dangerous journeys. Although I hope that we as humans will come to our senses, stop wars, and end climate injustice, so that people do not have to flee and be afraid.
    Downloads: 3 This Week
    See Project
  • 20
    lightning AI

    The most intuitive, flexible way for researchers to build models

    ...Download the code and type 'lightning run app'. Feel free to ssh into any machine and run from there as well. In research, we often have multiple separate scripts to train models, fine-tune them, collect results, and more.
    Downloads: 5 This Week
    See Project
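
    A minimal LightningModule, assuming the lightning 2.x import style; the DataLoader is left to the caller.

```python
# Sketch: the training logic lives in the module, the loop in the Trainer.
import torch
import lightning as L

class LitClassifier(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.net(x.flatten(1)), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

trainer = L.Trainer(max_epochs=1)
# trainer.fit(LitClassifier(), train_dataloaders=...)   # supply a DataLoader
```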
  • 21
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    Implementation of Imagen, Google's text-to-image neural network that beats DALL-E 2, in PyTorch. It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E 2. It consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient U-Net design. It appears neither CLIP nor a prior...
    Downloads: 7 This Week
    See Project
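
    A rough single-stage training sketch, loosely following the repository README; argument names may differ by version, and the random tensors stand in for real images and T5 text embeddings.

```python
# Sketch: one training step of a single-unet Imagen (illustrative shapes).
import torch
from imagen_pytorch import Unet, Imagen

unet = Unet(
    dim=32,
    cond_dim=128,
    dim_mults=(1, 2, 4),
    num_resnet_blocks=1,
    layer_attns=(False, True, True),
    layer_cross_attns=(False, True, True),
)

imagen = Imagen(
    unets=(unet,),
    image_sizes=(64,),
    timesteps=100,
    cond_drop_prob=0.1,   # enables classifier-free guidance at sampling time
)

images = torch.randn(4, 3, 64, 64)      # stand-in for a real image batch
text_embeds = torch.randn(4, 8, 768)    # stand-in for T5 embeddings

loss = imagen(images, text_embeds=text_embeds, unet_number=1)
loss.backward()
```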
  • 22

    PaddleOCR

    Awesome multilingual OCR toolkits based on PaddlePaddle

    PaddleOCR offers exceptional, multilingual, and practical Optical Character Recognition (OCR) tools that help users train better models and put them into practice. Built on PaddlePaddle, PaddleOCR is an ultra-lightweight OCR system with multilingual recognition, digit recognition, vertical text recognition, and long text recognition. It features the PP-OCR series of high-quality pre-trained models, including ultra-lightweight ppocr_mobile series models, general ppocr_server series models, and ultra-lightweight compressed ppocr_mobile_slim series models. ...
    Downloads: 70 This Week
    See Project
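
    Running the pre-trained pipeline takes a few lines; model weights download on first use, and receipt.png is a placeholder path.

```python
# Detect and recognize text in an image with pre-trained PP-OCR models.
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="en")   # downloads models on first run
result = ocr.ocr("receipt.png", cls=True)        # placeholder image path

for box, (text, confidence) in result[0]:
    print(text, confidence)
```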
  • 23
    DeepSpeed

    Deep learning optimization library making distributed training easy

    ...DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter training so that many deep learning scientists can explore bigger and better models. DeepSpeed's sparse attention powers input sequences an order of magnitude longer and achieves up to 6x faster execution compared with dense transformers.
    Downloads: 1 This Week
    See Project
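
    A sketch of wrapping a model with DeepSpeed's engine; the ZeRO stage and batch size are illustrative, not tuned recommendations.

```python
# Sketch: hand a model and a JSON-style config to deepspeed.initialize.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)

ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},   # partition optimizer state and grads
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# In the training loop, engine.backward(loss) and engine.step()
# replace the usual loss.backward() / optimizer.step() calls.
```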
  • 24
    Transformers

    State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX

    Transformers provides APIs and tools to easily download and train state-of-the-art pre-trained models. Using pre-trained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, including text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation in over 100 languages.
    Downloads: 4 This Week
    See Project
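
    Downloading and running a pre-trained model is a one-liner with the pipeline API; the task string selects a default checkpoint when none is given.

```python
# Run a pre-trained sentiment model via the pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Pre-trained models cut training costs dramatically."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```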