LocalAI is a self-hosted, community-driven, free and open-source alternative to OpenAI: a drop-in replacement REST API compatible with the OpenAI API specifications for local inferencing. It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware, with no GPU required, and supports multiple model families compatible with the ggml format. It runs ggml, GPTQ, onnx, and TensorFlow compatible models, including llama, gpt4all, rwkv, whisper, vicuna, koala, gpt4all-j, cerebras, falcon, dolly, starcoder, and many others.
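
As a minimal sketch of the drop-in compatibility (assuming a LocalAI instance listening on localhost:8080 and a model named ggml-gpt4all-j, both of which are illustrative placeholders), a small Go program can call the OpenAI-style chat completions endpoint:

    // Minimal sketch: query a local LocalAI instance through its
    // OpenAI-compatible chat completions endpoint.
    // Assumptions: the server listens on localhost:8080 and a model
    // named "ggml-gpt4all-j" has been configured (adjust both to taste).
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Request body follows the OpenAI chat completions schema.
        body, _ := json.Marshal(map[string]any{
            "model": "ggml-gpt4all-j",
            "messages": []map[string]string{
                {"role": "user", "content": "How are you?"},
            },
            "temperature": 0.7,
        })

        resp, err := http.Post(
            "http://localhost:8080/v1/chat/completions",
            "application/json",
            bytes.NewReader(body),
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Print the raw JSON response from the local model.
        out, _ := io.ReadAll(resp.Body)
        fmt.Println(string(out))
    }

Because the request and response follow the OpenAI schema, existing OpenAI client libraries can typically be pointed at the local base URL instead of api.openai.com.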

Features

  • Local, OpenAI drop-in alternative REST API
  • NO GPU required
  • Supports multiple models
  • Once loaded the first time, it keeps models in memory for faster inference
  • Doesn’t shell out, but uses C++ bindings for faster inference and better performance
  • You own your data

License

MIT License

LocalAI Web Site

Additional Project Details

Programming Language: Go
Related Categories: Go Large Language Models (LLM), Go LLM Inference Tool
Registered: 2023-08-21