
Showing 24 open source projects for "yolov5s-fp16.tflite"

  • 1
    rwkv.cpp

    INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model

    Besides the usual FP32, it supports FP16 and quantized INT4, INT5, and INT8 inference. The project is focused on CPU, but cuBLAS is also supported. RWKV is a novel large language model architecture, with the largest model in the family having 14B parameters. In contrast to Transformers with O(n^2) attention, RWKV requires only the state from the previous step to calculate logits.
    Downloads: 1 This Week
  • 2
    LiteRT

    LiteRT is the new name for TensorFlow Lite (TFLite)

    LiteRT (formerly TensorFlow Lite) is Google AI Edge's inference runtime for running lightweight ML models on edge devices with low latency. It focuses on delivering predictable and consistent performance for models used in time-critical applications like robotics, AR/VR, and IoT, and it is designed to be hardware-agnostic, with minimal dependencies and tight control over execution scheduling.
    Downloads: 2 This Week
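
    A minimal sketch of running a .tflite model with the interpreter API that LiteRT inherits from TensorFlow Lite; the model file name echoes the search query above and is purely illustrative.

      import numpy as np
      import tensorflow as tf

      # Load the model and allocate its input/output tensors.
      interpreter = tf.lite.Interpreter(model_path="yolov5s-fp16.tflite")
      interpreter.allocate_tensors()

      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      # Feed a dummy tensor of the expected shape and run one inference step.
      interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
      interpreter.invoke()
      print(interpreter.get_tensor(out["index"]).shape)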
  • 3
    FlashMLA

    FlashMLA: Efficient Multi-head Latent Attention Kernels

    ...It provides optimized kernels for MLA decoding, including support for variable-length sequences, helping reduce latency and increase throughput in inference systems that use this attention style. The library supports both BF16 and FP16 data types and includes a paged KV cache implementation with a block size of 64 to manage memory efficiently during decoding. In compute-bound settings it can reach up to ~660 TFLOPS on H800 SXM5 hardware, while in memory-bound configurations it can push memory throughput to ~3000 GB/s. The team updates it regularly with performance improvements; for example, a 2025 update claims 5% to 15% gains on compute-bound workloads while maintaining API compatibility.
    Downloads: 1 This Week
  • 4
    Orpheus TTS

    Towards Human-Sounding Speech

    ...Inference is provided through a Python package that uses vLLM under the hood for high-throughput, low-latency generation, including streaming examples that show how to generate audio chunks in real time. The maintainers provide Colab notebooks, a standardized prompting format, and one-click deployment via Baseten for production-grade, FP8/FP16 optimized inference with ~200 ms streaming latency.
    Downloads: 2 This Week
  • 5
    LLaMA-Factory

    Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

    LLaMA-Factory is a unified fine-tuning and training framework for LLaMA and 100+ other LLMs and VLMs. It enables researchers and developers to train and customize models efficiently using advanced optimization techniques.
    Downloads: 7 This Week
  • 6
    Flower

    Flower: A Friendly Federated Learning Framework

    ...Different machine learning frameworks have different strengths. Flower can be used with any machine learning framework, for example, PyTorch, TensorFlow, Hugging Face Transformers, PyTorch Lightning, scikit-learn, JAX, TFLite, MONAI, fastai, MLX, XGBoost, Pandas for federated analytics, or even raw NumPy for users who enjoy computing gradients by hand.
    Downloads: 13 This Week
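
    A minimal client-side sketch under Flower's NumPyClient API; the weights and the fit/evaluate logic here are placeholders, and the server address assumes a Flower server already running locally.

      import numpy as np
      import flwr as fl

      class TinyClient(fl.client.NumPyClient):
          def get_parameters(self, config):
              return [np.zeros(3)]                # current model weights (placeholder)

          def fit(self, parameters, config):
              return parameters, 1, {}            # updated weights, num examples, metrics

          def evaluate(self, parameters, config):
              return 0.0, 1, {"accuracy": 1.0}    # loss, num examples, metrics

      fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=TinyClient())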
  • 7
    Compute Library

    The Compute Library is a set of computer vision and machine learning functions

    The Compute Library is a set of computer vision and machine learning functions optimized for both Arm CPUs and GPUs using SIMD technologies. The library provides superior performance to other open-source alternatives and immediate support for new Arm® technologies, e.g. SVE2.
    Downloads: 8 This Week
  • 8
    tf2onnx

    Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX

    tf2onnx converts TensorFlow (tf-1.x or tf-2.x), Keras, TensorFlow.js, and TFLite models to ONNX via the command line or the Python API. Note: TensorFlow.js support was only recently added; while we tested it with many tfjs models from TFHub, it should be considered experimental. TensorFlow has many more ops than ONNX, and occasionally mapping a model to ONNX creates issues. tf2onnx uses the ONNX version installed on your system, and installs the latest ONNX version if none is found.
    Downloads: 2 This Week
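
    A minimal sketch of converting a TFLite model from Python; the file names are illustrative, and the from_tflite helper is assumed to be available as in recent tf2onnx releases. The CLI equivalent is python -m tf2onnx.convert --tflite yolov5s-fp16.tflite --output yolov5s.onnx --opset 13.

      import tf2onnx

      # Convert a TFLite model to ONNX and write the result to disk.
      model_proto, _ = tf2onnx.convert.from_tflite(
          "yolov5s-fp16.tflite",    # illustrative input file
          opset=13,
          output_path="yolov5s.onnx",
      )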
  • 9
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    ...After compilation, using the optimized graph should feel no different from running a TorchScript module. You also have access to TensorRT’s suite of configurations at compile time, so you can specify the operating precision (FP32/FP16/INT8) and other settings for your module.
    Downloads: 8 This Week
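
    A minimal sketch of compiling a module at FP16; the model and input shape are illustrative, and a CUDA GPU with TensorRT installed is assumed.

      import torch
      import torch_tensorrt
      import torchvision

      model = torchvision.models.resnet18().eval().cuda()

      # Compile with FP16 enabled; swap in torch.float or torch.int8 for other precisions.
      trt_model = torch_tensorrt.compile(
          model,
          inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.half)],
          enabled_precisions={torch.half},
      )
      out = trt_model(torch.randn(1, 3, 224, 224, device="cuda").half())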
  • 10
    CogVideo

    text and image to video generation: CogVideoX (2024) and CogVideo

    ...The repo includes SAT and Diffusers implementations, turnkey demos, and fine-tuning pipelines (including LoRA) designed to run across a wide range of NVIDIA GPUs, from desktop cards (e.g., RTX 3060) to data-center hardware (A100/H100). Current releases cover CogVideoX-2B, CogVideoX-5B, and the upgraded CogVideoX1.5-5B variants, plus image-to-video (I2V) models, with options for BF16/FP16/FP32 and INT8 quantized inference via TorchAO for memory-constrained setups. The codebase emphasizes practical deployment: prompt-optimization utilities (LLM-assisted long-prompt expansion), Colab notebooks, a Gradio web app, and multiple performance knobs (tiling/slicing, CPU offload, torch.compile, multi-GPU, and FA3 backends via partner projects).
    Downloads: 23 This Week
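
    A minimal sketch of the Diffusers path mentioned above; the model id, dtype, and settings are illustrative, and a recent GPU is assumed.

      import torch
      from diffusers import CogVideoXPipeline
      from diffusers.utils import export_to_video

      pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
      pipe.enable_model_cpu_offload()    # one of the memory-saving knobs

      frames = pipe(prompt="a cat surfing a small wave", num_frames=49).frames[0]
      export_to_video(frames, "output.mp4", fps=8)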
  • 11
    ChatGLM2-6B

    ChatGLM2-6B: An Open Bilingual Chat LLM

    ChatGLM2-6B is the second-generation Chinese-English conversational LLM from ZhipuAI/Tsinghua. It upgrades the base model with GLM’s hybrid pretraining objective, 1.4 TB of bilingual data, and preference alignment, delivering big gains on MMLU, CEval, GSM8K, and BBH. The context window extends up to 32K (FlashAttention), and Multi-Query Attention improves speed and memory use. The repo includes Python APIs, CLI & web demos, OpenAI-style/FastAPI servers, and quantized checkpoints for lightweight local...
    Downloads: 3 This Week
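
    A minimal sketch of the documented Python chat API; it assumes transformers is installed, a CUDA GPU is available, and FP16 weights are loaded via .half().

      from transformers import AutoModel, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
      model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda().eval()

      # One round of chat; pass the returned history back in for multi-turn dialogue.
      response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
      print(response)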
  • 12
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers,...
    Downloads: 23 This Week
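
    A minimal sketch of building an FP16 engine from an ONNX file with the TensorRT Python API; the file names are illustrative and a TensorRT 8.x-era API is assumed.

      import tensorrt as trt

      logger = trt.Logger(trt.Logger.WARNING)
      builder = trt.Builder(logger)
      network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
      parser = trt.OnnxParser(network, logger)

      with open("yolov5s.onnx", "rb") as f:      # illustrative input model
          assert parser.parse(f.read())

      config = builder.create_builder_config()
      config.set_flag(trt.BuilderFlag.FP16)       # request reduced-precision kernels

      engine = builder.build_serialized_network(network, config)
      with open("yolov5s_fp16.engine", "wb") as f:
          f.write(engine)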
  • 13
    CTranslate2

    Fast inference engine for Transformer models

    ...The execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks, thanks to many advanced optimizations: layer fusion, padding removal, batch reordering, in-place operations, a caching mechanism, etc. Model serialization and computation support weights with reduced precision: 16-bit floating points (FP16), 16-bit integers (INT16), and 8-bit integers (INT8). The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
    Downloads: 2 This Week
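
    A minimal sketch of INT8 inference; the model directory is illustrative and must first be produced by one of CTranslate2's converters (e.g. ct2-transformers-converter), and translate_batch expects pre-tokenized input.

      import ctranslate2

      translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu", compute_type="int8")

      # The tokens here are illustrative SentencePiece pieces.
      results = translator.translate_batch([["▁Hello", "▁world", "!"]])
      print(results[0].hypotheses[0])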
  • 14
    MNN

    MNN is a blazing fast, lightweight deep learning framework

    MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for on-device inference and training. At present, MNN has been integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short video capture, search recommendation, product search by image, interactive marketing, equity...
    Downloads: 19 This Week
  • 15
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++...
    Downloads: 1 This Week
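
    A minimal sketch of TE's PyTorch building blocks; an NVIDIA GPU is assumed, and FP8 execution additionally requires Hopper-class hardware.

      import torch
      import transformer_engine.pytorch as te

      layer = te.Linear(768, 768, bias=True).cuda()
      x = torch.randn(32, 768, device="cuda")

      # Run supported ops in FP8 within the autocast region.
      with te.fp8_autocast(enabled=True):
          y = layer(x)
      print(y.shape)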
  • 16
    CUTLASS

    CUDA Templates for Linear Algebra Subroutines

    ...To support a wide variety of applications, CUTLASS provides extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for half-precision floating point (FP16), BFloat16 (BF16), Tensor Float 32 (TF32), etc.
    Downloads: 1 This Week
  • 17
    Arcan

    Powerful development framework for creating virtually anything

    ...For the main engine there has been quite some refactoring to reduce input latency, better accommodate variable-refresh-rate displays, prepare for asymmetric uncooperative multi-GPU and GPU handover, and support explicit synchronization and runtime transitions back and forth between low (16-bit), standard (32-bit), and high-definition (10-bit + FP16/FP32) rendering.
    Downloads: 0 This Week
  • 18
    UniSIMD-assembler

    SIMD macro assembler unified for ARM, MIPS, PPC and x86

    UniSIMD assembler is a high-level C/C++ macro assembler framework unified across the ARM, MIPS, POWER, and x86 architectures. It establishes a subset of both BASE and SIMD instruction sets with a clearly defined common API, so that application logic can be written and maintained in one place without code duplication. The assembler itself isn't a separate tool, but rather a collection of C/C++ header files, which applications include directly in order to use it. At present, Intel...
    Downloads: 7 This Week
  • 19
    TF2DeepFloorplan

    TF2 Deep FloorPlan Recognition using a Multi-task Network

    TF2 Deep FloorPlan Recognition using a Multi-task Network with Room-boundary-Guided Attention. It enables TensorBoard, quantization, Flask, TFLite, Docker, GitHub Actions, and Google Colab. This repo contains a basic procedure to train and deploy the DNN model suggested by the paper 'Deep Floor Plan Recognition using a Multi-task Network with Room-boundary-Guided Attention'. It rewrites the original code from zlzeng/DeepFloorplan for newer versions of TensorFlow and Python.
    Downloads: 5 This Week
  • 20
    NanoDet-Plus

    Lightweight anchor-free object detection model

    A super fast, high-accuracy, lightweight anchor-free object detection model that runs in real time on mobile devices. NanoDet is an FCOS-style, one-stage, anchor-free object detection model that uses Generalized Focal Loss as its classification and regression loss. In NanoDet-Plus, we propose a novel label assignment strategy with a simple assign guidance module (AGM) and a dynamic soft label assigner (DSLA) to solve the optimal label assignment problem in lightweight model training. We also introduce a...
    Downloads: 19 This Week
  • 21
    Tensorflow Transformers

    State of the art faster Transformer with Tensorflow 2.0

    ...Images, for tasks like image classification, object detection, and segmentation. Audio, for tasks like speech recognition and audio classification. Faster autoregressive decoding, TFLite support, and simple TFRecord creation. Auto-batching of tf.data.Dataset or tf.ragged tensors. Everything is a dictionary (inputs and outputs). Multiple mask modes, like causal, user-defined, and prefix. tensorflow-text tokenizer support. Supports GPU, TPU, and multi-GPU training with wandb, multiple callbacks, and automatic TensorBoard.
    Downloads: 0 This Week
  • 22
    Catalyst

    Accelerated deep learning R&D

    Catalyst is a PyTorch framework for accelerated Deep Learning research and development. It allows you to write compact but full-featured Deep Learning pipelines with just a few lines of code. With Catalyst you get a full set of features including a training loop with metrics, model checkpointing and more, all without the boilerplate. Catalyst is focused on reproducibility, rapid experimentation, and codebase reuse so you can break the cycle of writing another regular train loop and make...
    Downloads: 0 This Week
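
    A minimal training-loop sketch with Catalyst's SupervisedRunner; the model and data are placeholders, and the exact runner API may vary between Catalyst versions.

      import torch
      from torch.utils.data import DataLoader, TensorDataset
      from catalyst import dl

      model = torch.nn.Linear(10, 2)
      data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))

      # One call replaces the hand-written train loop, with metrics and checkpointing built in.
      runner = dl.SupervisedRunner()
      runner.train(
          model=model,
          criterion=torch.nn.CrossEntropyLoss(),
          optimizer=torch.optim.Adam(model.parameters()),
          loaders={"train": DataLoader(data, batch_size=16)},
          num_epochs=1,
      )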
  • 23
    TensorRT Pro

    C++ library based on tensorrt integration

    A high-level interface for C++/Python. It simplifies the implementation of custom plugins, and serialization/deserialization are encapsulated for easier use. It also simplifies compiling FP32, FP16, and INT8 engines, facilitating deployment with C++/Python on servers or embedded devices. Ready-to-use models with examples include RetinaFace, Scrfd, YoloV5, YoloX, Arcface, AlphaPose, CenterNet, and DeepSORT (C++).
    Downloads: 1 This Week
  • 24
    TTS

    Deep learning for text to speech

    ...TTS comes with pre-trained models and tools for measuring dataset quality, and it is already used in 20+ languages for products and research projects. Released models in PyTorch, TensorFlow, and TFLite. Tools to curate Text2Speech datasets under dataset_analysis. Demo server for model testing. Notebooks for extensive model benchmarking. Modular (but not too much) code base enabling easy testing of new ideas. Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech). Speaker Encoder to compute speaker embeddings efficiently. ...
    Downloads: 1 This Week
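
    A minimal sketch of the Python API; the model name is one of the released models, and availability varies by TTS version.

      from TTS.api import TTS

      # Load a released English model and synthesize a short clip to disk.
      tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
      tts.tts_to_file(text="Hello from a text to speech sketch.", file_path="hello.wav")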