
Changelog 👾

Apr 9, 2026

LM Studio 0.4.10

Build 1

  • Improve Gemma 4 tool call reliability
  • Add OAuth support for MCP servers
Apr 2, 2026

LM Studio 0.4.9

Build 1

  • Improve Gemma 4 tool call reliability
  • Add support for Anthropic-compatible v1/messages output_config.effort (low, medium, high, max)
  • Fixed a bug where deleting a chat folder would sometimes freeze the UI
  • Fixed a bug where markdown Link popovers would appear at the top of the window
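The new `output_config.effort` field takes one of `low`, `medium`, `high`, or `max`. A minimal request-body sketch, assuming an Anthropic-style `/v1/messages` payload shape; the model name, token limit, and message content are illustrative placeholders — only the `output_config.effort` field comes from this entry:

```python
import json

# Anthropic-compatible /v1/messages request body using the new
# output_config.effort field (accepted values: "low", "medium", "high", "max").
# "some-local-model" and max_tokens are placeholders, not real identifiers.
body = {
    "model": "some-local-model",
    "max_tokens": 1024,
    "output_config": {"effort": "high"},
    "messages": [
        {"role": "user", "content": "Summarize this changelog entry."}
    ],
}

print(json.dumps(body, indent=2))
```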
Mar 26, 2026

LM Studio 0.4.8

Build 1

  • Add support for reasoning_effort and reasoning_tokens in OpenAI-compatible v1/chat/completions
  • Added a reasoning field to the /api/v1/models API response, indicating each model's supported reasoning capabilities and REST configuration options
  • Fixed a bug where Insert in chat input would sometimes not work after toggling assistant and user mode
  • Fixed a bug where surrounding spaces in tool call parameters would be stripped for models that use XML/XML-like tool call formats
  • [CUDA] Fixed issue where some VRAM would not be deallocated under certain conditions
  • Fixes a bug where setting reasoning to low when using Nemotron 3 Super via the /api/v1/chat or OpenAI-compatible /v1/responses API would error out
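The `reasoning_effort` and `reasoning_tokens` parameters above can be sketched as request bodies. This assumes both sit at the top level of the OpenAI-compatible payload — only the parameter names themselves come from this entry; the model name and values are placeholders:

```python
import json

# OpenAI-compatible /v1/chat/completions request body using the new
# reasoning_effort parameter (placement at the top level is an assumption).
body = {
    "model": "some-local-model",
    "messages": [{"role": "user", "content": "Explain continuous batching."}],
    "reasoning_effort": "medium",
}

# reasoning_tokens is the alternative, budget-style knob from the same
# entry; a client would normally set one or the other, not both.
alt_body = {k: v for k, v in body.items() if k != "reasoning_effort"}
alt_body["reasoning_tokens"] = 2048

print(json.dumps(body))
print(json.dumps(alt_body))
```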
Mar 18, 2026

LM Studio 0.4.7

Build 4

  • Fixed a bug when trying to edit an empty chat message

Build 3

  • Fixed unintended parsing of tool calls within reasoning blocks
  • Fixed a bug where parallel tool calls would fail for some models (for example, GLM)
  • Fixed a bug in OpenAI-compatible /v1/responses which sometimes caused "Output items missing; timeline invariant violated"
  • Fixed $...$ parsing so plain currency/text (for example $10, $1.23, $37 trillion) is no longer incorrectly rendered as math
  • Fixed single-dollar boundary handling to reduce false positives when $ appears in normal prose
  • Fixed \[ \] and \( \) handling so bracket/paren math parses correctly, and empty forms stay visible as literal text
  • Fixed responsive handling of the models table on narrow screens and added resizable column handles
  • Fixed a bug where Anthropic-compatible /v1/messages API would error when properties were not provided for a tool input schema
  • Expose Models Directory selector in Settings
  • Fixed tool calling parsing bugs for Qwen 3.5 and GLM models
    • Tool call parameters with string type were sometimes incorrectly parsed as object/number/boolean
  • Added tool call grammar for gpt-oss models using llama.cpp engine, significantly increasing tool call success rate for these models (requires llama.cpp engines updated to v2.7.1 or later)

Build 2

  • Global chat search now takes into account chat titles
  • Add notification UI when LM Link versions are incompatible between devices
  • Fixed a bug that created a duplicate onboarding popover on the LM Link page
  • Make XML-like tool call parsing (e.g., Nemotron 3) more reliable for boolean values
  • Fixed a bug where clicking the Attach File button in chat input would lock the text input UI
  • Fixed a bug where tags were showing as text in markdown tables
  • Fixed a responsive UI overlap bug on server page stacked content
  • Fixed a bug where an unnamed chat title would appear as the chat id in the chat sidebar search results
  • Fixed a bug where on certain devices, the app would crash if an image is fed to a vision model
  • Fixed a bug where model load guardrails and resource usage estimates were inaccurate for some models
  • Anthropic-compatible /v1/messages API now surfaces errors when the model generates an invalid tool call, enabling Claude Code to recover gracefully

Build 1

  • New default: "separate reasoning_content and content in API responses" is now ON by default in order to improve compatibility with /v1/chat/completions clients
    • If your use case requires this setting to be off (previous default), you can disable it in the Developer Settings
  • Fixed app header nav button hotkeys
  • Add parallel parameter to /api/v1/load endpoint
  • Add presence_penalty sampling parameter
  • Fix hover effect visual bug on Model Picker model options in chat input
  • Fixed responsive UI styling on the LM Link page
  • [Linux] Fix regression caused by some app files having a space in their name.
  • Fix OpenAI-compatible /v1/responses endpoint erroring on none and xhigh reasoning effort
  • Fixed a bug where /v1/responses responses included logProbs for MLX models even if message.output_text.logprobs was omitted
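The new `parallel` load parameter and `presence_penalty` sampling parameter can be sketched as request bodies. This is a sketch under assumptions: only the parameter names come from these entries, while the payload shapes, model identifier, and values are placeholders:

```python
import json

# Hypothetical /api/v1/load request body with the new `parallel`
# parameter; "some-local-model" and the value 4 are placeholders.
load_body = {"model": "some-local-model", "parallel": 4}

# Chat request using the new presence_penalty sampling parameter,
# which discourages tokens that have already appeared in the output.
chat_body = {
    "model": "some-local-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "presence_penalty": 0.5,
}

print(json.dumps(load_body))
print(json.dumps(chat_body))
```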
Feb 27, 2026

LM Studio 0.4.6

Build 1

  • ✨🎉 Introducing LM Link
    • Connect to remote instances of LM Studio, load your models, and use them as if they were local.
    • End-to-end encrypted. Launching in partnership with Tailscale.
  • Fixed a bug where auto update would sometimes fail because the app did not exit before the updater ran.
    • This fix only takes effect when updating to the next version; you might still encounter the issue while updating to 0.4.6+1 itself.
  • Fixed Qwen3.5 RAG jinja rendering bug: "No user query found in messages"
  • Updated go version to 0.25.7 for LM Link.
  • [DGX Spark] Enable Direct I/O to improve model load latency
    • Requires llama.cpp engine 2.5.1 or greater.
Feb 25, 2026

LM Studio 0.4.5

Build 2

  • Fixed a bug where the LM Link connector was not included in the in-app updater

Build 1

  • ✨🎉 Introducing LM Link
    • Connect to remote instances of LM Studio, load your models, and use them as if they were local.
    • End-to-end encrypted. Launching in partnership with Tailscale.
  • Improved tool calling support for the Qwen 3.5 model family
  • Fixed a bug where loading a model would sometimes fail with "Attempt to pull a snapshot of system resources failed. Error: 'Utility process is not defined'".
  • Fixed a bug where autoscrolling new message behavior was not respected when clicking the Generate button
  • Hides the Generate button when editing a message to avoid accidental clicks
Feb 20, 2026

LM Studio 0.4.4

Build 1

  • Fixed a bug where thinking tags were not being emitted correctly through v1/chat/completions OpenAI-compatible REST API
  • Fixed a bug where RAG would sometimes get stuck in "Deciding how to handle the document(s)..."
  • Fixed a bug where some previously detected GGUF models were not properly identified; they are now correctly recognized
Feb 19, 2026

LM Studio 0.4.3

Build 2

  • Fixed bug causing TypeError: Cannot read properties of undefined (reading 'backendInfo')

Build 1

  • Fix error when using Claude Code with LM Studio: "Invalid discriminator value. Expected 'enabled' | 'disabled'"
  • Failed downloads will now automatically retry with exponential backoff. This only applies to new downloads created after this update, so existing downloads will not be affected
  • Fix certain GGUFs being detected incorrectly, resulting in ?? arches and limited context lengths. Impacted unsloth/Qwen3-Coder-Next
  • Fix Step 3.5 reasoning parsing
  • Fix a cropped border visual bug on image attachments
  • Fixed a bug where the dropdown for selecting a model file sometimes cannot be opened
  • Fixed streaming REST endpoints sending headers after errors in /v1/responses and /v1/chat
Feb 6, 2026

LM Studio 0.4.2

Build 2

  • Introducing Parallel Requests with MLX! 🎉
    • Added support for continuous batching in mlx-engine 1.0.0
    • At the moment this capability is text-only, with VLM support in the works
  • Fixed a scroll bug on plugin lists
  • Fixed a responsive layout bug when role and insert buttons were displayed in chat input
  • Introduced new styling for message file attachments

Build 1

  • Fixed a bug where Qwen3-Coder-Next could error due to an unsupported jinja template safe filter
  • Fixed a bug where deleting a conversation with attachments would sometimes cause file-related app operations to fail until restart
  • Fixed a bug causing Cannot read properties of null (reading 'visionAdapter')
  • Fixed a bug where selected options in some interactive lms commands were not colored correctly
  • Fixed a bug where large pastes in lms chat would not preserve newline formatting