In my latest Medium article, “No Cloud in Sight: Augmented Intelligence for Privacy-Centric Workflows,” I describe my journey of integrating generative AI into my daily work while keeping privacy top of mind. As much as I value the power of tools like ChatGPT, I’ve opted for a more secure and efficient approach: running AI models locally on my own machine, with Ollama as the backbone. This setup lets me enjoy the benefits of AI without exposing sensitive data.

I walk through how I’ve configured my MacBook to run local LLMs, including models like Gemma 2 and Llama 3. I also explain how I’ve integrated them into my workflows with tools like Enchanted and PrivateGPT, making the AI feel like a natural extension of my daily tasks. By automating and customizing these processes with Fabric, I’ve optimized for both privacy and efficiency, ensuring my data stays local while still taking full advantage of AI.
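To give a flavor of what this looks like in practice, here is a minimal sketch of querying a locally running Ollama server from Python. The model tag and prompt are illustrative; it assumes `ollama serve` is running on Ollama’s default port 11434 and the model has already been pulled (e.g. `ollama pull llama3`):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Assemble the request body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything here talks to localhost only, which is the whole point: no prompt or document ever leaves the machine, and swapping in another pulled model (say, a Gemma 2 tag) is just a different `model` argument.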

This post is a guide for anyone looking to bring AI into their workflows without relying on cloud services, an approach that keeps you in control of your data while enhancing productivity.

Read it in full on Medium.