I was one of the first people to jump on the ChatGPT bandwagon. Having an all-knowing research assistant at the tap of a button has obvious appeal, and for a long time, I didn't care much about the ramifications of using AI. Fast forward to today, and it's a whole different world. There's no getting around the fact that you are feeding an immense amount of deeply personal information, from journal entries to sensitive work emails, into a black box owned by trillion-dollar corporations. What could go wrong?
There's no going back from AI at this point, but there are use cases where it genuinely works as a productivity multiplier. That's why I've been going down the rabbit hole of researching local AI. If you're not familiar with the concept, it's fairly simple: you can run a large language model, the brains behind a tool like ChatGPT, right on your own computer or even a phone. It won't be as capable or all-knowing as ChatGPT, but depending on your use case, it might still be effective enough. Better still, no data ever leaves your device, and there's no monthly subscription fee to consider.

If you're worried that pulling this off requires an engineering degree, think again. In 2025, running a local LLM is shockingly easy, with tools like LM Studio and Ollama making it as simple as installing an app. After spending the last few months running my own local AI, I can safely say I'm never going back to being purely cloud-dependent. Here's why.
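To make "as simple as installing an app" concrete, here's a minimal sketch of what talking to a local model can look like. It assumes Ollama is installed and serving its default local API on port 11434, and that a model has already been downloaded (llama3.2 is just my example; substitute whatever you've fetched with `ollama pull`):

```python
import requests

# Ollama exposes a local HTTP API (default: http://localhost:11434).
# Everything below stays on your machine; no data leaves the device.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3.2") -> str:
    """Send a single prompt to a locally running model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local generation can be slow on modest hardware
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, what is a large language model?"))
```

Nothing in that script reaches beyond localhost, which is the whole point: the prompt and the reply never leave your machine.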