%{ title: "How to get started with the Pi coding agent (on a VPS)", author: "Willem", tags: ~w(agentic-engineering getting-started how-to), description: "", published: false }
A demo repository showing how to set up the Pi coding agent on a VPS (virtual private server) with a hosted LLM service. This works with OpenRouter, and should work with anything that speaks the OpenAI (ChatGPT) API, including local models, Anthropic, OpenAI, etc.
The idea of using a VPS (a virtual machine in the cloud) is that it gives you a sandbox to run an agent in. If the agent deletes your home folder, you can just recreate it. There are other ways to sandbox agents, but I found this one by far the easiest and most comforting.
## Steps in this recipe
- Install Pi
- Put your API key in an environment variable, so Pi can access it
- Tell Pi where your LLM is hosted, and what model you want to use
- Start Pi and enjoy
Or that is what I thought. It is simpler than that.
- Install Pi
- Follow the guidance and complete the installation for your model and provider in small steps.
- /reload in Pi and enjoy (*)
(*) After fixing syntax errors in `~/.pi/agent/models.json`, where all of your configuration can live, unless you decide to separate it out.
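If you want to catch those syntax errors before Pi does, any JSON checker will do. A quick sketch, assuming `jq` is installed on your VPS (it is in most distributions' package managers):

```bash
# Validate the config: jq prints the parsed JSON on success,
# or the line and column of the first syntax error on failure.
jq . ~/.pi/agent/models.json
```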
I thought it was still useful to show my workings. The Pi UI is a lot more responsive than Claude Code, and it guides you on your way, but I did not notice that at first. I hope this helps. Have fun!
## Install Pi
Pi assumes you have Node.js installed. If you don't, the Node.js site has instructions; it is usually also in the package manager of your VPS's Linux distribution.
Once you have Node.js, run the following in the terminal:

```bash
npm install -g @mariozechner/pi-coding-agent
```
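To check the install landed (nothing Pi-specific here, just standard npm global install behavior):

```bash
# The global install should have put a `pi` executable on your PATH.
which pi
npm list -g @mariozechner/pi-coding-agent
```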
Now you can start `pi`, and it will point you to the rest of the documentation. This is what it showed me:
```
Warning: No models available. Use /login to log into a provider via OAuth or API key. See:
[somewhere on your disk]/lib/node_modules/@mariozechner/pi-coding-agent/docs/providers.md
[somewhere on your disk]/node/24.0.1/lib/node_modules/@mariozechner/pi-coding-agent/docs/models.md
```
This is one of the surprising things I like best about Pi: the documentation (of the exact version you are using) is on your machine, and Pi goes out of its way to point you and your model to it, so you can figure out how to use and extend it in a conversation.
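If you want the real path behind that placeholder, npm can tell you where global packages live (standard npm, nothing Pi-specific):

```bash
# Print the global node_modules directory and list Pi's bundled docs.
ls "$(npm root -g)/@mariozechner/pi-coding-agent/docs/"
```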
We can't have a conversation just yet, because we have no provider and no model.
So we need to tell Pi two things:
- what is the 'provider' (the party or server hosting your model(s))
- what models are available there
For the second point you need a bit more detail than I would like. Hence this post. I will take OpenRouter as the provider and pick the cheapest model I can find there; we just want to fire off a prompt and see if Pi, provider, and model can work together.
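For the model hunt itself, here is a hedged sketch using OpenRouter's public model list endpoint and a bit of `jq`. The endpoint and the `pricing.prompt` field match OpenRouter's API as I know it, but verify against their docs before relying on it:

```bash
# Fetch the public model catalogue and show the ten cheapest by prompt price.
# pricing.prompt is a USD-per-token string, hence the tonumber.
curl -s https://openrouter.ai/api/v1/models \
  | jq -r '.data | sort_by(.pricing.prompt | tonumber) | .[:10][] | "\(.pricing.prompt)\t\(.id)"'
```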
One thing you can do in Pi without a model is use `!` to run a shell command. I'm going to `cat` the providers doc to see how I can set up a provider.
```
!cat [..]/lib/node_modules/@mariozechner/pi-coding-agent/docs/providers.md

[..]

Custom Providers

Via models.json: Add Ollama, LM Studio, vLLM, or any provider that speaks a
supported API (OpenAI Completions, OpenAI Responses, Anthropic Messages,
Google Generative AI). See models.md.

Resolution Order

When resolving credentials for a provider:

1. CLI --api-key flag
2. auth.json entry (API key or OAuth token)
3. Environment variable
4. Custom provider keys from models.json

... 179 more lines (ctrl+o to expand)
```
I prefer the environment variable, so the key isn't sitting in a text file, but putting it in the config is more fire and forget. Up to you. If you press ctrl+o you can probably find exactly what you need for your provider.
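For the environment variable route, a minimal sketch (the variable name matches the config below; the key value is obviously a placeholder):

```bash
# Add to ~/.bashrc (or your shell's profile) so the key survives new sessions.
export OPENROUTER_API_KEY="sk-or-..."
```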
Because Pi can hot-reload its configuration files, and everything can go in one file, I will start there.
Now we can edit `~/.pi/agent/models.json` to set our provider endpoint and first model.
I had already set up Pi on another machine, so I asked it. The next section was co-written with Qwen3.6:27b:
## Tell Pi where your LLM is hosted, and what model you want to use
Pi has built-in OpenRouter support. You just need to configure it in `~/.pi/agent/models.json`.
### Quick Setup

Create or edit `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_API_KEY",
      "api": "openai-completions"
    }
  }
}
```
That's it, no `models` array needed. Pi loads all built-in OpenRouter models automatically. Your API key can be:
- An environment variable name (e.g., "OPENROUTER_API_KEY")
- A literal key (e.g., "sk-or-...")
- A shell command (e.g., "!op read 'op://vault/item/credential'")
### After Configuring

- Open Pi and run `/model` to see the available OpenRouter models
- Pick one with `/model openrouter/anthropic/claude-sonnet-4` (or whatever model you want)
### Per-Model Routing (Optional)

You can control which upstream provider OpenRouter routes to using `modelOverrides`:
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "OPENROUTER_API_KEY",
      "modelOverrides": {
        "anthropic/claude-sonnet-4": {
          "compat": {
            "openRouterRouting": {
              "only": ["anthropic"]
            }
          }
        }
      }
    }
  }
}
```
See the full `compat.openRouterRouting` options in the models docs for `order`, `ignore`, `max_price`, `preferred_min_throughput`, etc.
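As a hedged illustration of those options: the `order` and `max_price` keys come from the list above, but the values, and the shape of `max_price`, are assumptions modeled on OpenRouter's provider-routing API, so check the bundled models.md before copying:

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "OPENROUTER_API_KEY",
      "modelOverrides": {
        "anthropic/claude-sonnet-4": {
          "compat": {
            "openRouterRouting": {
              "order": ["anthropic", "google-vertex"],
              "max_price": { "prompt": 3, "completion": 15 }
            }
          }
        }
      }
    }
  }
}
```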
## Afterword
Chris Parsons asked me, amongst other things, what it took me to get going with Pi, as we were discussing his How I use AI to Code. As I am getting questions about my setup and how I use it, this seemed like a good place to start. It would be nice to have some more people around me using open source agents with open-weight and open source models.