# Qwen 3 Agent Architectural Spike
This project demonstrates how to use the `qwen-agent` library to interact with a Qwen 3 language model, potentially leveraging MLX for local execution and incorporating tools like web search.
## Description
The core example (`agentic_search.py`) sets up an AI assistant powered by a specified Qwen 3 model (e.g., `qwen3:0.6B` running locally via Ollama). It showcases how to:

- Configure the connection to the language model (local or API-based).
- Define and integrate tools (like `code_interpreter` and MCP-based tools such as DuckDuckGo search).
- Run the agent with a user query and stream the responses.

An example prompt is included; it was difficult to get the model with agents to also output the hyperlinks.
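The tool wiring described above can be sketched as follows. This is an illustration, not the contents of `agentic_search.py`; in particular, the `uvx duckduckgo-mcp-server` command is an assumption (any MCP-speaking search server would do):

```python
# Sketch of a qwen-agent tool list: a built-in tool name plus an MCP server
# spec. The DuckDuckGo server command below is an assumption, not taken from
# agentic_search.py.
tools = [
    'code_interpreter',          # tool bundled with qwen-agent[code-interpreter]
    {
        'mcpServers': {          # MCP-based tools are declared as a mapping
            'ddg-search': {
                'command': 'uvx',
                'args': ['duckduckgo-mcp-server'],
            },
        },
    },
]
```

A list like this is passed as `function_list` when constructing `qwen_agent.agents.Assistant`.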
The smaller `try6b.py` is meant to test that Qwen models work locally. 0.6B is a very small model, so it is easy to download, and it was actually a lot of fun to use. Even when working with tools via `agentic_search.py`, it worked, up to a point.
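A minimal local smoke test in the spirit of `try6b.py` might look like this (the file's actual contents aren't shown here, and the MLX model repository name is an assumption):

```python
# Hypothetical smoke test: load a small Qwen 3 model with mlx-lm and generate
# a short completion. Requires Apple Silicon with mlx and mlx-lm installed.
MODEL = 'mlx-community/Qwen3-0.6B-4bit'  # assumed repository name

def smoke_test(prompt: str = 'Say hello in one sentence.') -> str:
    from mlx_lm import load, generate    # imported lazily; needs mlx-lm

    model, tokenizer = load(MODEL)       # downloads the weights on first run
    return generate(model, tokenizer, prompt=prompt, max_tokens=64)
```

Calling `smoke_test()` returns the generated text; print its return value to see the completion.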
## Prerequisites
- Python 3.13 or higher.
- The `uv` package manager installed.
- A running Qwen 3 model endpoint (like one served by Ollama at `http://localhost:11434`) or appropriate API keys if using a hosted service.
- Optional: `mise` for managing development tools (like Node.js), though its specific use in this Python project isn't detailed.
## Installation
1. Clone the repository:

   ```shell
   git clone <your-repository-url>
   cd qwen3
   ```
2. Set up a virtual environment (recommended):

   ```shell
   # Using Python's built-in venv
   python -m venv .venv
   source .venv/bin/activate   # On Windows use .venv\Scripts\activate

   # Or using uv to create the environment
   uv venv .venv
   source .venv/bin/activate   # On Windows use .venv\Scripts\activate
   ```
3. Install dependencies using `uv`. The project dependencies are listed in `pyproject.toml`. Install the project and its dependencies:

   ```shell
   uv pip install .
   ```

   Alternatively, install the dependencies directly (quoted so the shell doesn't treat `>=` as redirection):

   ```shell
   uv pip install "mcp>=1.6.0" "mlx>=0.25.1" "mlx-lm>=0.24.0" \
     "python-dateutil>=2.9.0.post0" "python-dotenv>=1.1.0" \
     "qwen-agent[code-interpreter]>=0.0.20"
   ```
## Usage
1. Configure the LLM: modify `agentic_search.py` to point to your Qwen 3 model endpoint, or provide the necessary API keys in the `llm_cfg` dictionary. The example is currently set up for a local Ollama endpoint.
2. Run the agent script:

   ```shell
   python agentic_search.py
   ```

   This will execute the predefined query in the script, run the agent, print progress dots (`.`) for each response chunk, and finally output the full structured response and the extracted content.
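Under the assumptions above (local Ollama endpoint, `qwen-agent` installed), the script's core loop might be sketched like this; the function name and query are illustrative, not taken from `agentic_search.py`:

```python
# Hypothetical sketch of the agent loop: configure the LLM, run the agent,
# print a dot per streamed chunk, and return the final answer text.
llm_cfg = {
    'model': 'qwen3:0.6B',                        # model tag served by Ollama
    'model_server': 'http://localhost:11434/v1',  # Ollama's OpenAI-compatible API
    'api_key': 'EMPTY',                           # Ollama needs no real key
}

def run_query(query: str) -> str:
    from qwen_agent.agents import Assistant       # lazy: needs qwen-agent installed

    bot = Assistant(llm=llm_cfg, function_list=['code_interpreter'])
    messages = [{'role': 'user', 'content': query}]
    responses = []
    for responses in bot.run(messages=messages):  # yields a growing response list
        print('.', end='', flush=True)            # one progress dot per chunk
    print()
    return responses[-1]['content']               # final assistant message text
```

`run_query(...)` blocks until the agent finishes; the last element of the streamed list holds the assistant's answer.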
## Dependencies
Key Python libraries used:

- `qwen-agent`: for creating and managing the AI agent.
- `mlx` / `mlx-lm`: likely used for efficient model inference, especially on Apple Silicon.
- `mcp`: for integrating external tools via the Model Context Protocol.
- `python-dotenv`: for managing environment variables (e.g., API keys).
- `rich`: for terminal formatting and progress indicators.
See `pyproject.toml` for the full list of dependencies.
## Contributing
This is an architectural spike; I welcome your feedback through qwan.eu/contact.
## License
Apache 2.0