diff --git a/app/priv/blog/engineering/2026/04-24-how-to-get-started-with-the-pi-coding-agent-on-a-vps.md b/app/priv/blog/engineering/2026/04-24-how-to-get-started-with-the-pi-coding-agent-on-a-vps.md
index 68e76de..5f19d6a 100644
--- a/app/priv/blog/engineering/2026/04-24-how-to-get-started-with-the-pi-coding-agent-on-a-vps.md
+++ b/app/priv/blog/engineering/2026/04-24-how-to-get-started-with-the-pi-coding-agent-on-a-vps.md
@@ -65,37 +65,15 @@ One thing you can do in Pi without a model, is use `!` to run a shell command. I
 ``` sh
 !cat [..]/lib/node_modules/@mariozechner/pi-coding-agent/docs/providers.md
-```
-[..]
-
-## Custom Providers
-
-**Via models.json:** Add Ollama, LM Studio, vLLM, or any provider that speaks a supported API (OpenAI Completions, OpenAI Responses, Anthropic Messages, Google Generative AI). See [models.md](models.md).
-
-## Resolution Order
-
-When resolving credentials for a provider:
-
-1. CLI `--api-key` flag
-2. `auth.json` entry (API key or OAuth token)
-3. Environment variable
-4. Custom provider keys from `models.json`
-
-
-... 179 more lines (ctrl+o to expand)
-```
-I prefer the environment variable, so I don't have it in a text file, but the latter is more fire and forget. Up to you.` If you press ctrl+o you can probably find exactly what you need for your provider.
-Because Pi can hot-reload it's configuration files, and everything can go in one file, I will start there.
-
-Now we can edit `$HOME/pi/agent/models.json` to set our provider endpoint and first model.
+Now we can edit `$HOME/pi/agent/models.json` to set our provider endpoint and API key. You can specify models there as well, but that is not necessary to get started. `/model` in the UI lets you search the available models. I have already set up Pi on another machine, so I asked it.
 Next section co-written with Qwen3.6:27b :
-Tell Pi where your LLM is hosted, and what model you want to use
-====
+Tell Pi where your LLM is hosted (by Qwen)
+----

 Pi has built-in OpenRouter support. You just need to configure it in ~/.pi/agent/models.json.
@@ -125,31 +103,30 @@ Pi has built-in OpenRouter support. You just need to configure it in ~/.pi/agent
 1. Open Pi and run /model to see available OpenRouter models
 2. Pick one with /model openrouter/anthropic/claude-sonnet-4 (or whatever model you want)

-Per-Model Routing (Optional)
-
-You can control which upstream provider OpenRouter routes to using modelOverrides:
-
-```json
-{
-  "providers": {
-    "openrouter": {
-      "apiKey": "OPENROUTER_API_KEY",
-      "modelOverrides": {
-        "anthropic/claude-sonnet-4": {
-          "compat": {
-            "openRouterRouting": {
-              "only": ["anthropic"]
-            }
-          }
-        }
-      }
-    }
-  }
-}
-```
 See the full compat.openRouterRouting options in the models docs for order, ignore, max_price, preferred_min_throughput, etc.
+First prompt
+----
+
+Finally, we can issue a prompt. I learned (from Claude Code) that asking the model to write a haiku is a quick way to check that everything works. Sometimes I put my frustrations in the prompt. Not this time.
+
+> Model: qwen3.5:35b-a3b
+
+Me: write a haiku
+
+(reasoning tokens:)
+> The user is asking me to write a haiku. A haiku is a form of Japanese poetry with three lines following a 5-7-5 syllable pattern. Let me write one about nature, which is the traditional subject.
+
+> Soft rain falls gently,
+> Droplets kiss the green leaves now,
+> Spring's breath fills the air.
+
+I hope this works for you. Enjoy!
+
 Afterword
 ----
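
For reference: the minimal `~/.pi/agent/models.json` that the added text describes (provider plus API key, no models) might look like the sketch below. It is derived by stripping the `modelOverrides` section from the `providers` example shown in the removed hunk; it is not a confirmed schema, and whether `apiKey` takes a literal key or the name of an environment variable is not settled here.

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "OPENROUTER_API_KEY"
    }
  }
}
```

With only this in place, model selection would happen interactively via `/model`, as the post suggests.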