LiteRT is an experimental, real-time inference runtime built by Google AI Edge to run lightweight ML models on edge devices with ultra-low latency. It focuses on delivering predictable and consistent performance for models used in time-critical applications like robotics, AR/VR, and IoT. LiteRT is designed to be hardware-agnostic, with minimal dependencies and tight control over execution scheduling.
Features
- Real-time inference execution for edge ML models
- Ultra-low latency and jitter optimization
- Works with small, performance-critical models
- Hardware-agnostic and lightweight runtime
- Deterministic execution with predictable scheduling
- Designed for robotics, AR/VR, and embedded use cases
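For a sense of how a model might be executed with the runtime, the sketch below loads a model and runs a single inference through a Python interpreter API. The ai_edge_litert import path, the "model.tflite" file name, and the zero-filled input are illustrative assumptions and are not taken from the listing above.

```python
# Minimal sketch: one inference with a LiteRT-style Python interpreter.
# The import path assumes the ai-edge-litert pip package; the model path
# and the dummy input are placeholders.
import numpy as np

try:
    from ai_edge_litert.interpreter import Interpreter
except ImportError:
    # Older environments expose the same interpreter API via TensorFlow Lite.
    from tensorflow.lite.python.interpreter import Interpreter

# Load the model and allocate tensors once, outside the time-critical loop.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fill the input tensor with data matching the model's expected shape/dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run inference and read back the result.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

Creating the interpreter and allocating tensors ahead of time, then reusing them for every inference, is what keeps per-call latency and jitter low in a real-time loop.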
Categories
Artificial Intelligence
License
Apache License 2.0