| Name | Modified | Size | Downloads / Week |
|---|---|---|---|
| LocalAI-v3.7.0-checksums.txt | 2025-10-31 | 473 Bytes | |
| LocalAI-v3.7.0-source.tar.gz | 2025-10-31 | 9.4 MB | |
| local-ai-v3.7.0-darwin-amd64 | 2025-10-31 | 77.2 MB | |
| local-ai-v3.7.0-darwin-arm64 | 2025-10-31 | 75.0 MB | |
| local-ai-v3.7.0-linux-amd64 | 2025-10-31 | 75.7 MB | |
| local-ai-v3.7.0-linux-arm64 | 2025-10-31 | 73.0 MB | |
| local-ai-launcher-linux.tar.xz | 2025-10-31 | 16.2 MB | |
| LocalAI.dmg | 2025-10-31 | 12.0 MB | |
| README.md | 2025-10-31 | 36.1 kB | |
| v3.7.0 source code.tar.gz | 2025-10-31 | 9.5 MB | |
| v3.7.0 source code.zip | 2025-10-31 | 9.8 MB | |
| Totals: 11 Items | | 357.9 MB | 18 |
LocalAI 3.7.0
Welcome to LocalAI 3.7.0 :wave:
This release introduces Agentic MCP support with full WebUI integration, a brand-new neutts TTS backend, fuzzy model search, long-form TTS chunking for chatterbox, and a complete WebUI overhaul.
We've also fixed critical bugs, improved stability, and enhanced compatibility with OpenAI's APIs.
TL;DR – What's New in LocalAI 3.7.0
| Feature | Summary |
|---|---|
| 🤖 Agentic MCP Support (WebUI-enabled) | Build AI agents that use real tools (web search, code exec). Fully OpenAI-compatible and integrated into the WebUI. |
| 🎙️ neutts TTS Backend (Neuphonic-powered) | Generate natural, high-quality speech with low-latency audio – ideal for voice assistants. |
| 🖼️ WebUI enhancements | Faster, cleaner UI with real-time updates and full YAML model control. |
| 💬 Long-Text TTS Chunking (Chatterbox) | Generate natural-sounding long-form audio by intelligently splitting text and preserving context. |
| 🧩 Advanced Agent Controls | Fine-tune agent behavior with new options for retries, reasoning, and re-evaluation. |
| 📸 New Video Creation Endpoint | We now support the OpenAI-compatible /v1/videos endpoint for text-to-video generation. |
| :snake: Enhanced Whisper compatibility | Whisper.cpp is now built for various CPU variants (AVX, AVX2, etc.) to prevent illegal instruction crashes. |
| 🔍 Fuzzy Gallery Search | Find models in the gallery even with typos (e.g., gema finds gemma). |
| 📦 Easier Model & Backend Management | Import, edit, and delete models directly via clean YAML in the WebUI. |
| ▶️ Realtime Example | Check out the new realtime voice assistant example (multilingual). |
| ⚠️ Security, Stability & API Compliance | Fixed critical crashes, deadlocks, session events, OpenAI compliance, and JSON schema panics. |
| :brain: Qwen 3 VL | Support for Qwen 3 VL with llama.cpp/gguf models |
🔥 What's New in Detail
🤖 Agentic MCP Support – Build Intelligent, Tool-Using AI Agents
We're proud to announce full Agentic MCP support, a feature for building AI agents that can reason, plan, and execute actions using external tools like web search, code execution, and data retrieval. You can use the standard chat/completions endpoint, but it is powered by an agent in the background.
Full documentation is available here
✅ Now in WebUI: A dedicated toggle appears in the chat interface when a model supports MCP. Just click to enable agent mode.
✨ Key Features:
- New Endpoint: `POST /mcp/v1/chat/completions` (OpenAI-compatible).
- Flexible Tool Configuration:
  ```yaml
  mcp:
    stdio: |
      {
        "mcpServers": {
          "searxng": {
            "command": "docker",
            "args": ["run", "-i", "--rm", "ghcr.io/mudler/mcps/duckduckgo:master"]
          }
        }
      }
  ```
- Advanced Agent Control via the `agent` config:
  ```yaml
  agent:
    max_attempts: 3
    max_iterations: 5
    enable_reasoning: true
    enable_re_evaluation: true
  ```
  - `max_attempts`: Retry failed tool calls up to N times.
  - `max_iterations`: Limit how many times the agent can loop through reasoning.
  - `enable_reasoning`: Allow step-by-step thought processes (e.g., chain-of-thought).
  - `enable_re_evaluation`: Re-analyze decisions when tool results are ambiguous.
You can find some plug-and-play MCPs here: https://github.com/mudler/MCPs. Under the hood, MCP functionality is powered by https://github.com/mudler/cogito.
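For example, a request could look like this (assuming a model named `my-agent-model` configured with the `mcp:` and `agent:` settings shown above, served on the default port):

```bash
# "my-agent-model" is a placeholder; it must have mcp/agent settings in its config.
# The request body is a regular OpenAI-style chat completion.
curl http://localhost:8080/mcp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-agent-model",
    "messages": [
      {"role": "user", "content": "Search the web for the latest LocalAI release and summarize it."}
    ]
  }'
```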
🖼️ WebUI enhancements
The WebUI received a major overhaul:
- The chat view now shows an MCP toggle for models that have `mcp` settings enabled in the model config file.
- The model editor has been simplified to show and edit the model's YAML settings directly.
- More reactive: HTMX was dropped in favor of Alpine.js and vanilla JavaScript.
- Various fixes, including model deletion.
🎙️ Introducing neutts TTS Backend – Natural Speech, Low Latency
Say hello to neutts, a new, lightweight TTS backend powered by Neuphonic, delivering high-quality, natural-sounding speech with minimal overhead.
🎙️ Setup Example
```yaml
name: neutts-english
backend: neutts
parameters:
  model: neuphonic/neutts-air
tts:
  audio_path: "./output.wav"
  streaming: true
options:
  # text transcription of the provided audio file
  - ref_text: "So I'm live on radio..."
known_usecases:
  - tts
```
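Once the model is loaded, speech can be generated through the OpenAI-compatible speech route – for example (the route, output filename, and input text below are only assumptions to illustrate the flow):

```bash
# Assumes the neutts-english model defined above and a LocalAI instance on the default port.
curl http://localhost:8080/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
    "model": "neutts-english",
    "input": "Hello from LocalAI and the neutts backend!"
  }' \
  --output speech.wav
```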
:snake: Whisper.cpp enhancements
whisper.cpp CPU variants are now available for:
- avx
- avx2
- avx512
- fallback (no optimized instructions available)
These variants are optimized for specific instruction sets and reduce crashes on older or non-AVX CPUs.
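For example, transcription still goes through the usual OpenAI-compatible endpoint regardless of which CPU variant is picked (the model name `whisper-1` and the audio file below are placeholders):

```bash
# Placeholder model name and file; any installed whisper.cpp model works here.
curl http://localhost:8080/v1/audio/transcriptions \
  -F file="@./sample.wav" \
  -F model="whisper-1"
```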
🔍 Smarter Gallery Search: Fuzzy & Case-Insensitive Matching
Searching for gemma now finds gemma-3, gemma2, etc. – even with typos like gemaa or gema.
🧩 Improved Tool & Schema Handling – No More Crashes
We've fixed multiple edge cases that caused crashes or silent failures in tool usage.
✅ Fixes:
- Nullable JSON Schemas: `"type": ["string", "null"]` now works without panics.
- Empty Parameters: Tools with missing or empty `parameters` are now handled gracefully.
- Strict Mode Enforcement: When `strict_mode: true`, the model must pick a tool – no more skipping.
- Multi-Type Arrays: Safe handling of `["string", "null"]` in function definitions.
Interaction with Grammar Triggers: `strict_mode` and grammar rules work together – if a tool is required and the function definition is invalid, the server returns a clear JSON error instead of crashing.
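For example, a tool definition with a nullable, multi-type parameter like the one below (model and function names are placeholders) is now handled without panics:

```bash
# Placeholder model and tool; note the multi-type ["string", "null"] parameter.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-model",
    "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "unit": {"type": ["string", "null"]}
          },
          "required": ["city"]
        }
      }
    }]
  }'
```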
📸 New Video Creation Endpoint: OpenAI-Compatible
LocalAI now supports OpenAI's /v1/videos endpoint for generating videos from text prompts.
Usage Example:
```bash
curl http://localhost:8080/v1/videos \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-..." \
  -d '{
    "model": "sora",
    "prompt": "A cat walking through a forest at sunset",
    "size": "1024x576"
  }'
```
:brain: Qwen 3 VL in llama.cpp
Support has been added for Qwen 3 VL in llama.cpp, and we have updated llama.cpp to the latest version. As a reminder, Qwen 3 VL and multimodal models are also compatible with our vLLM and MLX backends. Qwen 3 VL models are already available in the model gallery:
- qwen3-vl-30b-a3b-instruct
- qwen3-vl-30b-a3b-thinking
- qwen3-vl-4b-instruct
- qwen3-vl-32b-instruct
- qwen3-vl-4b-thinking
- qwen3-vl-2b-thinking
- qwen3-vl-2b-instruct
Note: upgrading the llama.cpp backend is necessary if you already have a LocalAI installation.
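For example, once qwen3-vl-4b-instruct is installed from the gallery, images can be passed using the usual OpenAI-style content parts (the image URL below is a placeholder):

```bash
# Assumes qwen3-vl-4b-instruct is installed from the gallery; the image URL is a placeholder.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-vl-4b-instruct",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}}
      ]
    }]
  }'
```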
(CI) Gallery Updater Agent: Auto-Detect & Suggest New Models
We've added an autonomous CI agent that scans Hugging Face daily for new models and opens PRs to update the gallery.
✨ How It Works:
- Scans HF for new, trending models
- Extracts base model, quantization, and metadata.
- Uses cogito (our agentic framework) to assign the model to the correct family and to obtain the model information.
- Opens a PR with:
  - Suggested `name`, `family`, and `usecases`
  - Link to the HF model
  - YAML snippet for import
🔧 Critical Bug Fixes & Stability Improvements
| Issue | Fix | Impact |
|---|---|---|
| WebUI Crash on Model Load | Fixed `can't evaluate field Name in type string` error | Models now render even without config files |
| Deadlock in Model Load/Idle Checks | Guarded against race conditions during model loading | Improved performance under load |
| Realtime API Compliance | Added `session.created` event; removed redundant `conversation.created` | Works with VoxInput, OpenAI clients, and more |
| MCP Response Formatting | Output wrapped in `message` field | Matches OpenAI spec – better client compatibility |
| JSON Error Responses | Now return clean JSON instead of HTML | Scripts and libraries no longer break on auth failures |
| Session Registration | Fixed initial MCP calls failing due to cache issues | Reliable first-time use |
| kokoro TTS | Returns full audio, not partial | Better for long-form TTS |
The Complete Local Stack for Privacy-First AI
| Project | Description |
|---|---|
| LocalAI | The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required. |
| LocalAGI | A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI. |
| LocalRecall | A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI. |
❤️ Thank You!
A huge THANK YOU to our growing community! With over 35,000 stars, LocalAI is a true FOSS movement – built by people, for people, with no corporate backing.
If you love privacy-first AI and open source, please:
- ⭐ Star the repo
- 💬 Contribute code, docs, or feedback
- 📣 Share with others
Your support keeps this stack alive and evolving!
Full Changelog
Click to expand full changelog
## What's Changed ### Bug fixes :bug: * fix(chatterbox): chunk long text by @mudler in https://github.com/mudler/LocalAI/pull/6407 * fix(grammars): handle empty parameters on object types by @mudler in https://github.com/mudler/LocalAI/pull/6409 * fix(mcp): register sessions by @mudler in https://github.com/mudler/LocalAI/pull/6429 * fix(llama.cpp): correctly set grammar triggers by @mudler in https://github.com/mudler/LocalAI/pull/6432 * fix(mcp): make responses compliant to OpenAI APIs by @mudler in https://github.com/mudler/LocalAI/pull/6436 * fix(ui): models without config don't have a .Name field by @mudler in https://github.com/mudler/LocalAI/pull/6438 * fix(realtime): Add transcription session created event, match OpenAI behavior by @richiejp in https://github.com/mudler/LocalAI/pull/6445 * fix: guard from potential deadlock with requests in flight by @mudler in https://github.com/mudler/LocalAI/pull/6484 * fix: handle multi-type arrays in JSON schema to prevent panic by @robert-cronin in https://github.com/mudler/LocalAI/pull/6495 * fix: properly terminate llama.cpp kv_overrides array with empty key + updated doc by @blob42 in https://github.com/mudler/LocalAI/pull/6672 * fix: llama dockerfile make package by @blob42 in https://github.com/mudler/LocalAI/pull/6694 * feat: return complete audio for kokoro by @lukasdotcom in https://github.com/mudler/LocalAI/pull/6842 ### Exciting New Features π * feat: Add Agentic MCP support with a new chat/completion endpoint by @mudler in https://github.com/mudler/LocalAI/pull/6381 * fix: add strict mode check for no action function by @mudler in https://github.com/mudler/LocalAI/pull/6294 * feat: add agent options to model config by @mudler in https://github.com/mudler/LocalAI/pull/6383 * feat(ui): add button to enable Agentic MCP by @mudler in https://github.com/mudler/LocalAI/pull/6400 * feat(api): support both /v1 and not on openai routes by @mudler in https://github.com/mudler/LocalAI/pull/6403 * feat(ui): display in index when a model supports MCP by @mudler in https://github.com/mudler/LocalAI/pull/6406 * feat(neutts): add backend by @mudler in https://github.com/mudler/LocalAI/pull/6404 * feat(ui): use Alpine.js and drop HTMX by @mudler in https://github.com/mudler/LocalAI/pull/6418 * chore: change color palette such as is closer to the logo by @mudler in https://github.com/mudler/LocalAI/pull/6423 * chore(ui): simplify editing and importing models via YAML by @mudler in https://github.com/mudler/LocalAI/pull/6424 * chore(api): return json errors by @mudler in https://github.com/mudler/LocalAI/pull/6428 * chore(ui): display models and backends in tables by @mudler in https://github.com/mudler/LocalAI/pull/6430 * feat(ci): add gallery updater agent by @mudler in https://github.com/mudler/LocalAI/pull/6467 * feat(gallery): add fuzzy search by @mudler in https://github.com/mudler/LocalAI/pull/6481 * chore(gallery search): fuzzy with case insentivie by @mudler in https://github.com/mudler/LocalAI/pull/6490 * feat(ui): add system backend metadata and deletion in index by @mudler in https://github.com/mudler/LocalAI/pull/6546 * feat(api): OpenAI video create enpoint integration by @gmaOCR in https://github.com/mudler/LocalAI/pull/6777 * feat: add CPU variants for whisper.cpp by @mudler in https://github.com/mudler/LocalAI/pull/6855 * feat: do also text match by @mudler in https://github.com/mudler/LocalAI/pull/6891 ### π§ Models * chore(model gallery): add lemon07r_vellummini-0.1-qwen3-14b by @mudler in 
https://github.com/mudler/LocalAI/pull/6386 * chore(model gallery): add liquidai_lfm2-350m-extract by @mudler in https://github.com/mudler/LocalAI/pull/6387 * chore(model gallery): add liquidai_lfm2-1.2b-extract by @mudler in https://github.com/mudler/LocalAI/pull/6388 * chore(model gallery): add liquidai_lfm2-1.2b-rag by @mudler in https://github.com/mudler/LocalAI/pull/6389 * chore(model gallery): add liquidai_lfm2-1.2b-tool by @mudler in https://github.com/mudler/LocalAI/pull/6390 * chore(model gallery): add liquidai_lfm2-350m-math by @mudler in https://github.com/mudler/LocalAI/pull/6391 * chore(model gallery): add liquidai_lfm2-8b-a1b by @mudler in https://github.com/mudler/LocalAI/pull/6414 * chore(model gallery): add gliese-4b-oss-0410-i1 by @mudler in https://github.com/mudler/LocalAI/pull/6415 * chore(model gallery): add qwen3-deckard-large-almost-human-6b-i1 by @mudler in https://github.com/mudler/LocalAI/pull/6416 * chore(model gallery): add ai21labs_ai21-jamba-reasoning-3b by @mudler in https://github.com/mudler/LocalAI/pull/6417 * chore(ui): skip duplicated entries in search list by @mudler in https://github.com/mudler/LocalAI/pull/6425 * chore(model gallery): add yanolja_yanoljanext-rosetta-12b-2510 by @mudler in https://github.com/mudler/LocalAI/pull/6442 * chore(model gallery): add agentflow_agentflow-planner-7b by @mudler in https://github.com/mudler/LocalAI/pull/6443 * chore(model gallery): add gustavecortal_beck by @mudler in https://github.com/mudler/LocalAI/pull/6444 * chore(model gallery): add qwen3-4b-ra-sft by @mudler in https://github.com/mudler/LocalAI/pull/6458 * chore(model gallery): add demyagent-4b-i1 by @mudler in https://github.com/mudler/LocalAI/pull/6459 * chore(model gallery): add boomerang-qwen3-2.3b by @mudler in https://github.com/mudler/LocalAI/pull/6460 * chore(model gallery): add boomerang-qwen3-4.9b by @mudler in https://github.com/mudler/LocalAI/pull/6461 * gallery: :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6478 * gallery: :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6480 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6501 * chore(model gallery): add mira-v1.7-27b-i1 by @mudler in https://github.com/mudler/LocalAI/pull/6503 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6504 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6507 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6512 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6515 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6516 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6519 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6522 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6524 * chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6534 * 
chore(model gallery): :robot: add new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6536 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6557 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6566 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6581 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6597 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6636 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6640 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6646 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6658 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6664 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6691 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6697 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6706 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6721 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6767 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6776 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6784 * chore(model gallery): add allenai_olmocr-2-7b-1025 by @mudler in https://github.com/mudler/LocalAI/pull/6797 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6799 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6854 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6862 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6863 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6864 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6879 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6884 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6908 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6910 * chore(model gallery): :robot: add 1 new models via gallery agent 
by @localai-bot in https://github.com/mudler/LocalAI/pull/6911 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6919 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6921 * chore(model gallery): :robot: add 1 new models via gallery agent by @localai-bot in https://github.com/mudler/LocalAI/pull/6940 * chore(model gallery): add qwen3-vl-30b-a3b-instruct by @mudler in https://github.com/mudler/LocalAI/pull/6960 * chore(model gallery): add huihui-qwen3-vl-30b-a3b-instruct-abliterated by @mudler in https://github.com/mudler/LocalAI/pull/6961 * chore(model gallery): add qwen3-vl-30b-a3b-thinking by @mudler in https://github.com/mudler/LocalAI/pull/6962 * chore(model gallery): add qwen3-vl-4b-instruct by @mudler in https://github.com/mudler/LocalAI/pull/6963 * chore(model gallery): add qwen3-vl-32b-instruct by @mudler in https://github.com/mudler/LocalAI/pull/6964 * chore(model gallery): add qwen3-vl-4b-thinking by @mudler in https://github.com/mudler/LocalAI/pull/6965 * chore(model gallery): add qwen3-vl-2b-thinking by @mudler in https://github.com/mudler/LocalAI/pull/6966 * chore(model gallery): add qwen3-vl-2b-instruct by @mudler in https://github.com/mudler/LocalAI/pull/6967 ### π Documentation and examples * chore(docs): add MCP example by @mudler in https://github.com/mudler/LocalAI/pull/6405 * chore(docs): enhancements and clarifications by @mudler in https://github.com/mudler/LocalAI/pull/6433 ### π Dependencies * chore(deps): bump actions/stale from 10.0.0 to 10.1.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6392 * chore(deps): bump github.com/rs/zerolog from 1.33.0 to 1.34.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6274 * chore(deps): bump github.com/nikolalohinski/gonja/v2 from 2.3.2 to 2.4.1 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6394 * chore(deps): bump github.com/docker/go-connections from 0.5.0 to 0.6.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6393 * chore: update cogito and simplify MCP logics by @mudler in https://github.com/mudler/LocalAI/pull/6413 * chore(deps): bump github.com/docker/docker from 28.3.3+incompatible to 28.5.0+incompatible by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6399 * chore(deps): bump github.com/multiformats/go-multiaddr from 0.16.0 to 0.16.1 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6277 * chore(deps): bump github.com/quic-go/quic-go from 0.54.0 to 0.54.1 in the go_modules group across 1 directory by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6431 * chore(deps): bump github/codeql-action from 3 to 4 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6451 * chore(deps): bump github.com/containerd/containerd from 1.7.27 to 1.7.28 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6448 * chore(deps): bump github.com/schollz/progressbar/v3 from 3.14.4 to 3.18.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6446 * chore(deps): bump dario.cat/mergo from 1.0.1 to 1.0.2 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6447 * chore(deps): bump github.com/ebitengine/purego from 0.8.4 to 0.9.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6450 * chore(deps): bump google.golang.org/grpc from 1.67.1 to 1.76.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6449 * feat(mcp): add planning and reevaluation 
by @mudler in https://github.com/mudler/LocalAI/pull/6541 * chore(deps): bump github.com/prometheus/client_golang from 1.23.0 to 1.23.2 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6600 * chore(deps): bump github.com/tmc/langchaingo from 0.1.13 to 0.1.14 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6604 * chore(deps): bump securego/gosec from 2.22.9 to 2.22.10 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6599 * chore(deps): bump github.com/gpustack/gguf-parser-go from 0.17.0 to 0.22.1 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6602 * chore(deps): bump github.com/onsi/ginkgo/v2 from 2.25.3 to 2.26.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6601 * chore(deps): bump github.com/gofrs/flock from 0.12.1 to 0.13.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6598 * chore(deps): bump cogito by @mudler in https://github.com/mudler/LocalAI/pull/6785 * chore(deps): bump github.com/gofiber/contrib/fiberzerolog from 1.0.2 to 1.0.3 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6816 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/coqui by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6822 * chore(deps): bump mxschmitt/action-tmate from 3.22 to 3.23 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6831 * chore(deps): bump github.com/gofiber/swagger from 1.0.0 to 1.1.1 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6825 * chore(deps): bump github.com/alecthomas/kong from 0.9.0 to 1.12.1 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6829 * chore(deps): bump actions/upload-artifact from 4 to 5 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6824 * chore(deps): bump github.com/klauspost/cpuid/v2 from 2.2.10 to 2.3.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6821 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/rerankers by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6819 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/common/template by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6830 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/bark by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6826 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/vllm by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6827 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/exllama2 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6836 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/transformers by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6828 * chore(deps): bump grpcio from 1.75.1 to 1.76.0 in /backend/python/diffusers by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6839 * chore(deps): bump actions/download-artifact from 5 to 6 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6837 * chore(deps): bump github.com/gofiber/template/html/v2 from 2.1.2 to 2.1.3 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6832 * chore(deps): bump fyne.io/fyne/v2 from 2.6.3 to 2.7.0 by @dependabot[bot] in https://github.com/mudler/LocalAI/pull/6840 ### Other Changes * docs: :arrow_up: update docs version mudler/LocalAI by @localai-bot in https://github.com/mudler/LocalAI/pull/6378 * chore: :arrow_up: Update ggml-org/llama.cpp to `128d522c04286e019666bd6ee4d18e3fbf8772e2` by @localai-bot 
in https://github.com/mudler/LocalAI/pull/6379 * chore: :arrow_up: Update ggml-org/llama.cpp to `86df2c9ae4f2f1ee63d2558a9dc797b98524639b` by @localai-bot in https://github.com/mudler/LocalAI/pull/6382 * feat(swagger): update swagger by @localai-bot in https://github.com/mudler/LocalAI/pull/6384 * chore: :arrow_up: Update ggml-org/llama.cpp to `ca71fb9b368e3db96e028f80c4c9df6b6b370edd` by @localai-bot in https://github.com/mudler/LocalAI/pull/6385 * chore: :arrow_up: Update ggml-org/llama.cpp to `3df2244df40c67dfd6ad548b40ccc507a066af2b` by @localai-bot in https://github.com/mudler/LocalAI/pull/6401 * chore: :arrow_up: Update ggml-org/whisper.cpp to `c8223a8548ad64435266e551385fc51aca9ee8ab` by @localai-bot in https://github.com/mudler/LocalAI/pull/6402 * chore: :arrow_up: Update ggml-org/llama.cpp to `aeaf8a36f06b5810f5ae4bbefe26edb33925cf5e` by @localai-bot in https://github.com/mudler/LocalAI/pull/6408 * chore: :arrow_up: Update ggml-org/llama.cpp to `9d0882840e6c3fb62965d03af0e22880ea90e012` by @localai-bot in https://github.com/mudler/LocalAI/pull/6410 * chore: :arrow_up: Update ggml-org/whisper.cpp to `8877dfc11a9322ce1990958494cf2e41c54657eb` by @localai-bot in https://github.com/mudler/LocalAI/pull/6411 * chore: :arrow_up: Update ggml-org/whisper.cpp to `98930fded1c06e601a38903607af262f04893880` by @localai-bot in https://github.com/mudler/LocalAI/pull/6420 * chore(deps): bump llama.cpp to '1deee0f8d494981c32597dca8b5f8696d399b0f2' by @mudler in https://github.com/mudler/LocalAI/pull/6421 * chore: :arrow_up: Update ggml-org/whisper.cpp to `85871a946971955c635f56bca24ea2a37fed6324` by @localai-bot in https://github.com/mudler/LocalAI/pull/6435 * chore: :arrow_up: Update ggml-org/llama.cpp to `e60f01d941bc5b7fae62dd57fee4cec76ec0ea6e` by @localai-bot in https://github.com/mudler/LocalAI/pull/6434 * chore: :arrow_up: Update ggml-org/llama.cpp to `11f0af5504252e453d57406a935480c909e3ff37` by @localai-bot in https://github.com/mudler/LocalAI/pull/6437 * chore: :arrow_up: Update ggml-org/whisper.cpp to `a91dd3be72f70dd1b3cb6e252f35fa17b93f596c` by @localai-bot in https://github.com/mudler/LocalAI/pull/6439 * chore: :arrow_up: Update ggml-org/llama.cpp to `a31cf36ad946a13b3a646bf0dadf2a481e89f944` by @localai-bot in https://github.com/mudler/LocalAI/pull/6440 * chore: :arrow_up: Update ggml-org/llama.cpp to `e60f241eacec42d3bd7c9edd37d236ebf35132a8` by @localai-bot in https://github.com/mudler/LocalAI/pull/6452 * chore: :arrow_up: Update ggml-org/llama.cpp to `fa882fd2b1bcb663de23af06fdc391489d05b007` by @localai-bot in https://github.com/mudler/LocalAI/pull/6454 * chore: :arrow_up: Update ggml-org/whisper.cpp to `4979e04f5dcaccb36057e059bbaed8a2f5288315` by @localai-bot in https://github.com/mudler/LocalAI/pull/6462 * chore: :arrow_up: Update ggml-org/llama.cpp to `466c1911ab736f0b7366127edee99f8ee5687417` by @localai-bot in https://github.com/mudler/LocalAI/pull/6463 * chore: :arrow_up: Update ggml-org/llama.cpp to `1bb4f43380944e94c9a86e305789ba103f5e62bd` by @localai-bot in https://github.com/mudler/LocalAI/pull/6488 * chore: :arrow_up: Update ggml-org/llama.cpp to `66b0dbcb2d462e7b70ba5a69ee8c3899ac2efb1c` by @localai-bot in https://github.com/mudler/LocalAI/pull/6520 * chore: :arrow_up: Update ggml-org/llama.cpp to `ee09828cb057460b369576410601a3a09279e23c` by @localai-bot in https://github.com/mudler/LocalAI/pull/6550 * chore: :arrow_up: Update ggml-org/llama.cpp to `cec5edbcaec69bbf6d5851cabce4ac148be41701` by @localai-bot in https://github.com/mudler/LocalAI/pull/6576 * chore: 
:arrow_up: Update ggml-org/llama.cpp to `84bf3c677857279037adf67cdcfd89eaa4ca9281` by @localai-bot in https://github.com/mudler/LocalAI/pull/6621 * chore: :arrow_up: Update ggml-org/whisper.cpp to `23c19308d8a5786c65effa4570204a881660ff31` by @localai-bot in https://github.com/mudler/LocalAI/pull/6622 * Revert "chore(deps): bump securego/gosec from 2.22.9 to 2.22.10" by @mudler in https://github.com/mudler/LocalAI/pull/6638 * chore: :arrow_up: Update ggml-org/llama.cpp to `03792ad93609fc67e41041c6347d9aa14e5e0d74` by @localai-bot in https://github.com/mudler/LocalAI/pull/6651 * chore: :arrow_up: Update ggml-org/llama.cpp to `a2e0088d9242bd9e57f8b852b05a6e47843b5a45` by @localai-bot in https://github.com/mudler/LocalAI/pull/6676 * chore: :arrow_up: Update ggml-org/whisper.cpp to `322c2adb753a9506f0becee134a7f75e2a6b5687` by @localai-bot in https://github.com/mudler/LocalAI/pull/6677 * chore: :arrow_up: Update ggml-org/llama.cpp to `0bf47a1dbba4d36f2aff4e8c34b06210ba34e688` by @localai-bot in https://github.com/mudler/LocalAI/pull/6703 * chore: :arrow_up: Update ggml-org/llama.cpp to `55945d2ef51b93821d4b6f4a9b994393344a90db` by @localai-bot in https://github.com/mudler/LocalAI/pull/6729 * chore: :arrow_up: Update ggml-org/llama.cpp to `5d195f17bc60eacc15cfb929f9403cf29ccdf419` by @localai-bot in https://github.com/mudler/LocalAI/pull/6757 * chore: :arrow_up: Update ggml-org/llama.cpp to `bbac6a26b2bd7f7c1f0831cb1e7b52734c66673b` by @localai-bot in https://github.com/mudler/LocalAI/pull/6783 * feat(swagger): update swagger by @localai-bot in https://github.com/mudler/LocalAI/pull/6834 * chore: :arrow_up: Update ggml-org/whisper.cpp to `f16c12f3f55f5bd3d6ac8cf2f31ab90a42c884d5` by @localai-bot in https://github.com/mudler/LocalAI/pull/6835 * chore: :arrow_up: Update ggml-org/llama.cpp to `5a4ff43e7dd049e35942bc3d12361dab2f155544` by @localai-bot in https://github.com/mudler/LocalAI/pull/6841 * chore: :arrow_up: Update ggml-org/whisper.cpp to `c62adfbd1ecdaea9e295c72d672992514a2d887c` by @localai-bot in https://github.com/mudler/LocalAI/pull/6868 * chore: :arrow_up: Update ggml-org/llama.cpp to `851553ea6b24cb39fd5fd188b437d777cb411de8` by @localai-bot in https://github.com/mudler/LocalAI/pull/6869 * chore: :arrow_up: Update ggml-org/llama.cpp to `3464bdac37027c5e9661621fc75ffcef3c19c6ef` by @localai-bot in https://github.com/mudler/LocalAI/pull/6896 * chore: :arrow_up: Update ggml-org/llama.cpp to `16724b5b6836a2d4b8936a5824d2ff27c52b4517` by @localai-bot in https://github.com/mudler/LocalAI/pull/6925 * chore: :arrow_up: Update ggml-org/llama.cpp to `4146d6a1a6228711a487a1e3e9ddd120f8d027d7` by @localai-bot in https://github.com/mudler/LocalAI/pull/6945New Contributors
- @robert-cronin made their first contribution in https://github.com/mudler/LocalAI/pull/6495
- @gmaOCR made their first contribution in https://github.com/mudler/LocalAI/pull/6777
- @lukasdotcom made their first contribution in https://github.com/mudler/LocalAI/pull/6842
Full Changelog: https://github.com/mudler/LocalAI/compare/v3.6.0...v3.7.0