diff --git a/app/priv/blog/engineering/2026/04-24-smaller-open-llms-now-work-for-open-agents.md b/app/priv/blog/engineering/2026/04-24-smaller-open-llms-now-work-for-open-agents.md
index f181920..09e4caf 100644
--- a/app/priv/blog/engineering/2026/04-24-smaller-open-llms-now-work-for-open-agents.md
+++ b/app/priv/blog/engineering/2026/04-24-smaller-open-llms-now-work-for-open-agents.md
@@ -7,7 +7,7 @@
 }
 ---
 
-(No AI was involved in this attempt at ordering my thoughts and making my work legible to others, hopefull you).
+(No AI was involved in this attempt at ordering my thoughts and making my work legible to others, hopefully you).
 
 I am replacing most, if not all, of my Claude Code workflows with [pi.dev](https://pi.dev), an open source coding agent, and local LLMs running on my laptop. If you don't have the hardware, smaller models are also cheap(er) to run on hosted services like [Open Router](https://openrouter.ai). As prices of frontier models continue to rise and subscription plans are watered down, the capability and speed of open weight and open source models continue to increase. The last month saw a step change, with a couple of releases from last week (Qwen 3.6 and several inference servers implementing performance improvements - more speed, less memory) marking a clear jump in user experience.