
First look: Run LLMs locally with LM Studio

News | February 11, 2026 | Artifice Prime

Dedicated desktop applications for agentic AI make it easier for relatively non-technical users to work with large language models. Instead of writing Python programs and wrangling models manually, users open an IDE-like interface and have logged, inspectable interactions with one or more LLMs.

Amazon and Google have released such products with a focus on AI-assisted code development—Kiro and Antigravity, respectively. Both products offer the option to run models locally or in cloud-hosted versions.

LM Studio by Element Labs provides a local-first experience for running, serving, and working with LLMs. It’s designed for more general conversational use rather than code-development tasks, and while its feature set is still minimal, it’s functional enough to try out.

Set up your models

When you first run LM Studio, you'll want to set up one or more models. A sidebar button opens a curated search panel, where you can search for models by name or author, and even filter by whether a model fits within the available memory on your current device. Each model listing describes its parameter size, general task type, and whether it's trained for tool use. For this review, I downloaded three different models.

Downloads and model management are all tracked inside the application, so you don't have to manually wrangle model files like you would with ComfyUI.

LM Studio model selection interface

The model selection interface for LM Studio. The model list is curated by LM Studio’s creators, but the user can manually install models outside this interface by placing them in the app’s model directory.


Conversing with an LLM

To have a conversation with an LLM, you choose which one to load into memory from the selector at the top of the window. You can also fine-tune the controls for running the model: whether to attempt to load the entire model into memory, how many CPU threads to devote to serving predictions, how many layers of the model to offload to the GPU, and so on. The defaults are generally fine, though.

Conversations with a model are all tracked in separate tabs, including any details about the model's thinking or tool integrations (more on these below). You also get a running count of tokens used and available for the current conversation, so you can gauge how much of the context window the conversation consumes as it unfolds. If you want to work with local files ("Analyze this document for clarity"), you can simply drag and drop them into the conversation. You can also grant the model access to the local file system by way of an integration, although for now I'd do that only with great care, and on a system that holds no mission-critical information.

Sample conversation in LM Studio

An example of a conversation with a model in LM Studio. Chats can be exported in a variety of formats, and contain expandable sections that detail the model’s internal thinking. The sidebar at right shows various available integrations, all currently disabled.


Integrations

LM Studio lets you add MCP server applications to extend agent functionality. Only one integration is included by default: a JavaScript code sandbox that lets the model run JavaScript or TypeScript code using Deno. It would have been useful to have at least one more integration for web search out of the box, though I was able to add a Brave Search integration with minimal work.

The big downside with integrations in LM Studio is that they are wholly manual. There is currently no automated mechanism for adding integrations, and no directory of integrations to browse. You need to manually edit an mcp.json file to describe the integrations you want and then supply the code yourself, as shown below. It works, but it's clunky, and it makes this part of LM Studio feel primitive. If anything needs immediate fixing, it's this.
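For a sense of what that hand-editing looks like, here is a minimal sketch of an mcp.json entry for a Brave Search integration. It assumes the mcpServers notation LM Studio shares with other MCP hosts and uses the community @modelcontextprotocol/server-brave-search package via npx; the API key is a placeholder you would supply yourself.

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "YOUR-BRAVE-API-KEY"
      }
    }
  }
}
```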

Despite these limits, the way MCP servers are integrated is well thought out. You can enable, disable, add, or modify integrations without having to close and restart the whole program. You can also approve an integration's actions for an individual conversation or for the entire program, so you don't have to grant the agent access over and over. (I'm paranoid, so I didn't enable this.)

Using APIs to facilitate agentic behavior

LM Studio can also work as a model-serving system, either through the desktop app or as a headless service. Either way, you get a REST API that lets you load models, chat with them, and receive results either all at once or as a progressive stream. A recently added Anthropic-compatible endpoint lets you use Claude Code with LM Studio. This means it's possible to use self-hosted models as part of a workflow with a code-centric product like Kiro or Antigravity.
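As a rough sketch of what that looks like in practice, the snippet below talks to LM Studio's OpenAI-compatible endpoints, assuming the server is running on its default port (1234) and a model is already loaded. The model id is a placeholder; substitute whatever GET /v1/models reports on your machine.

```python
import requests

BASE = "http://localhost:1234/v1"  # LM Studio's default local server address

# Ask the server which models it currently has available
models = requests.get(f"{BASE}/models", timeout=30).json()
print([m["id"] for m in models["data"]])

# Send a chat request; the model id here is a placeholder
resp = requests.post(
    f"{BASE}/chat/completions",
    json={
        "model": "your-local-model",  # hypothetical id; use one listed above
        "messages": [{"role": "user", "content": "In one sentence, what is an MCP server?"}],
        "stream": False,  # set True to receive a progressive token stream instead
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```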

Another powerful feature is tool use through an API endpoint. A user can write a script that interacts with the LM Studio API and also supplies its own tool. This allows for complex interactions between the model and the tool—a way to build agentic behaviors from scratch.
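Here is a minimal sketch of that loop, using the OpenAI-style tool-calling convention the chat endpoint accepts. The get_local_time tool is hypothetical, invented here to show the round trip, and the model id is again a placeholder; a tool-capable model must be loaded for the server to emit tool calls at all.

```python
from datetime import datetime

import requests

BASE = "http://localhost:1234/v1"
MODEL = "your-local-model"  # placeholder; use a tool-capable model you've loaded

# Advertise a tool the model may call; the script itself executes it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_local_time",  # hypothetical tool, for illustration only
        "description": "Return the current local time as an ISO-8601 string.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

messages = [{"role": "user", "content": "What time is it right now?"}]
reply = requests.post(
    f"{BASE}/chat/completions",
    json={"model": MODEL, "messages": messages, "tools": tools},
    timeout=120,
).json()["choices"][0]["message"]

calls = reply.get("tool_calls") or []
if calls:
    messages.append(reply)  # keep the assistant turn that requested the tool
    for call in calls:
        if call["function"]["name"] == "get_local_time":
            # Run the tool locally and feed the result back to the model
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": datetime.now().isoformat(),
            })
    final = requests.post(
        f"{BASE}/chat/completions",
        json={"model": MODEL, "messages": messages},
        timeout=120,
    ).json()
    print(final["choices"][0]["message"]["content"])
else:
    print(reply.get("content"))
```

Because the tool runs in your own script rather than inside LM Studio, you control exactly what it can touch, which makes this a reasonable pattern for building constrained agentic behaviors.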

LM Studio server settings

The internal server settings for LM Studio. The program can be configured to serve models across a variety of industry-standard APIs, and the UI exposes various tweaks for performance and security.


Conclusion

LM Studio's clean design and convenience features are a good start, but several key features are still missing, and future releases will need to fill the gaps.

Tool integration still requires cobbling things together manually, and there is no mechanism for browsing and downloading from a curated tools directory. The included roster of tools is also extremely thin—as an example, there isn’t an included tool for web browsing and fetching.

Another significant issue is that LM Studio isn't open source, even though some of its components, such as its command-line tooling, are. The licensing for LM Studio allows for free use, but there's no guarantee that will always be the case. Nonetheless, even in this early incarnation, LM Studio is useful for those who have the hardware and the knowledge to run models locally.

Original Link:https://www.infoworld.com/article/4127250/first-look-run-llms-locally-with-lm-studio.html
Originally Posted: Wed, 11 Feb 2026 09:00:00 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They are interested in artificial intelligence, its use as a tool to further humankind, and its impact on society.
