
Switching from LMStudio to Ollama + OpenWebUI

·375 words·2 mins

I enjoy using local LLMs for the convenience and privacy, plus I love building my own tooling around them. My first intro was LMStudio on my MacBook, which was great for getting started, but I slowly moved towards Ollama because of better tool use and easier integration with my Python code. LMStudio hit its limits pretty quickly when I wanted to do more than just chat with models. The GUI was nice for beginners, but once you want to automate things or build custom workflows, you run into walls. To be fair, the program has been cautiously adding features, and that is a good thing. There is also the non-commercial license, which means I cannot use it for my job. On top of that, LMStudio itself became slower with every update. So I started looking at alternatives, and Ollama opened up a whole new world with its proper API endpoints and command-line interface. I could finally script model management, switch between models programmatically, and integrate everything into my development workflow without fighting against the tool. It is also far easier to set up, and I can use it cross-platform.
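To give a flavour of that, here is a minimal sketch of the kind of scripting Ollama makes possible, assuming a default install listening on localhost:11434 and some model (here `llama3` as a placeholder) already pulled:

```python
import requests

OLLAMA = "http://localhost:11434"  # default Ollama API endpoint

# List the models Ollama has pulled locally.
models = requests.get(f"{OLLAMA}/api/tags").json()["models"]
print([m["name"] for m in models])

# Chat with a model; switching models is just a matter of changing one string.
reply = requests.post(
    f"{OLLAMA}/api/chat",
    json={
        "model": "llama3",  # any model you have pulled locally
        "messages": [{"role": "user", "content": "Give me three test cases for a URL parser."}],
        "stream": False,
    },
).json()
print(reply["message"]["content"])
```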

The real game changer came when I discovered OpenWebUI running on top of Ollama. Suddenly I had the best of both worlds: the flexibility and power of Ollama’s backend with a polished web interface that actually worked the way I wanted it to. Setting up the Docker container was straightforward, and connecting it to my local Ollama instance gave me features that LMStudio could never match: chat history that actually persists properly, model switching that doesn’t require restarting anything, and customization options that let me tweak the interface for my specific use cases. Sure, LMStudio had a cleaner initial user experience, but once you outgrow the basic chat interface, Ollama with OpenWebUI becomes the obvious choice for anyone serious about local LLM workflows.
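As a rough sketch of that setup, this is more or less how I would launch the container from Python, assuming Docker is installed and Ollama is running on the host with its default port; the image tag, port mapping, and OLLAMA_BASE_URL value may need adjusting for your environment:

```python
import subprocess

# Start OpenWebUI in Docker and point it at the local Ollama instance.
# Ports, volume name, and image tag are my defaults and may differ for you.
subprocess.run(
    [
        "docker", "run", "-d",
        "--name", "open-webui",
        "-p", "3000:8080",                                   # UI at http://localhost:3000
        "--add-host=host.docker.internal:host-gateway",      # let the container reach the host
        "-e", "OLLAMA_BASE_URL=http://host.docker.internal:11434",  # local Ollama API
        "-v", "open-webui:/app/backend/data",                # persist chats and settings
        "ghcr.io/open-webui/open-webui:main",
    ],
    check=True,
)
```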

Update May 2025: The same mindset applies, except I’m now moving towards llama.cpp, trying to run it on the iGPU of a NUC. Ollama is still easier to use than raw llama.cpp, but once you figure llama.cpp out, with OpenWebUI on top, that combination works even better than my current Ollama setup. I am migrating away from Ollama.
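For context on how llama.cpp slots in under OpenWebUI, here is a minimal sketch, assuming llama-server is running locally with a GGUF model (started with something like `llama-server -m model.gguf --port 8080`); it exposes an OpenAI-compatible endpoint that OpenWebUI, or my own Python tooling, can talk to in place of Ollama:

```python
import requests

# llama.cpp's llama-server exposes an OpenAI-compatible API,
# so the client side barely changes when moving off Ollama.
LLAMA_CPP = "http://localhost:8080"  # wherever llama-server is listening

reply = requests.post(
    f"{LLAMA_CPP}/v1/chat/completions",
    json={
        # llama-server serves the single model it was started with,
        # so the name here is mostly informational.
        "model": "local-gguf",
        "messages": [{"role": "user", "content": "Hello from the NUC"}],
    },
).json()
print(reply["choices"][0]["message"]["content"])
```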