AutoGPTQ is a Python package implementing GPTQ, a post-training quantization algorithm for generative pre-trained transformers, that optimizes large language models (LLMs) for faster inference by reducing their memory and computational footprint while largely preserving accuracy.
Features
- Efficient quantization for large language models
- Reduces memory usage without major performance loss
- Supports various precision levels (e.g., 4-bit, 8-bit)
- Compatible with Hugging Face Transformers
- Accelerates inference on GPUs and CPUs
- Helps deploy LLMs on resource-constrained hardware
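The core idea behind the features above is low-bit, per-group weight quantization: each group of weights shares one scale factor, and individual weights are stored as small integers. The sketch below is a toy, pure-Python illustration of grouped 4-bit round-to-nearest quantization, not AutoGPTQ's actual implementation (the real GPTQ algorithm additionally uses second-order information to minimize layer output error); the function names are ours.

```python
def quantize_4bit(weights, group_size=4):
    """Toy grouped 4-bit quantization (round-to-nearest, symmetric).

    Splits `weights` into groups of `group_size`, gives each group one
    float scale, and stores each weight as an integer code in [-7, 7].
    Returns (codes, scales).
    """
    codes, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        max_abs = max(abs(w) for w in group) or 1.0
        scale = max_abs / 7.0  # map the largest magnitude to the int4 edge
        scales.append(scale)
        codes.append([max(-7, min(7, round(w / scale))) for w in group])
    return codes, scales


def dequantize_4bit(codes, scales):
    """Reconstruct approximate float weights from codes and scales."""
    return [q * s for group, s in zip(codes, scales) for q in group]


# Example: one group of four weights survives a quantize/dequantize
# round trip with error bounded by roughly half a quantization step.
original = [1.0, -0.5, 0.25, 0.7]
codes, scales = quantize_4bit(original)
recovered = dequantize_4bit(codes, scales)
```

In practice the 4-bit codes would be packed two per byte, which is where the memory savings come from; per-group scales keep the rounding error small even when weight magnitudes vary across a layer.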
License
MIT License