Update docs to Plano (#639)

This commit is contained in:
Salman Paracha 2025-12-23 17:14:50 -08:00 committed by GitHub
parent 15fbb6c3af
commit e224cba3e3
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
139 changed files with 4407 additions and 24735 deletions

README.md

@ -1,75 +1,51 @@
<div align="center">
<img src="docs/source/_static/img/arch-logo.png" alt="Arch Logo" width="75%" height=auto>
<img src="docs/source/_static/img/PlanoTagline.svg" alt="Plano Logo" width="75%" height=auto>
</div>
<div align="center">
_Arch is a models-native (edge and service) proxy server for agents._<br><br>
Arch handles the *pesky plumbing work* in building AI agents — like applying guardrails, routing prompts to the right agent, generating hyper-rich information traces for RL, and unifying access to any LLM. It's a language- and framework-friendly infrastructure layer designed to help you build and ship agentic apps faster.
_Plano is a models-native proxy server and data plane for agents._<br><br>
Plano pulls out the rote plumbing work and decouples you from brittle framework abstractions, centralizing what shouldn't be bespoke in every codebase - like agent routing and orchestration, rich agentic signals and traces for continuous improvement, guardrail filters for safety and moderation, and smart LLM routing APIs for UX and DX agility. Use any language or AI framework, and deliver agents faster to production.
[Quickstart](#Quickstart) •
[Demos](#Demos) •
[Route LLMs](#use-arch-as-a-llm-router) •
[Build agentic apps with Arch](#Build-Agentic-Apps-with-Arch) •
[Documentation](https://docs.archgw.com) •
[Route LLMs](#use-plano-as-a-llm-router) •
[Build Agentic Apps with Plano](#Build-Agentic-Apps-with-Plano) •
[Documentation](https://docs.planoai.dev) •
[Contact](#Contact)
[![pre-commit](https://github.com/katanemo/arch/actions/workflows/pre-commit.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/pre-commit.yml)
[![rust tests (prompt and llm gateway)](https://github.com/katanemo/arch/actions/workflows/rust_tests.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/rust_tests.yml)
[![e2e tests](https://github.com/katanemo/arch/actions/workflows/e2e_tests.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/e2e_tests.yml)
[![Build and Deploy Documentation](https://github.com/katanemo/arch/actions/workflows/static.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/static.yml)
[![pre-commit](https://github.com/katanemo/plano/actions/workflows/pre-commit.yml/badge.svg)](https://github.com/katanemo/plano/actions/workflows/pre-commit.yml)
[![rust tests (prompt and llm gateway)](https://github.com/katanemo/plano/actions/workflows/rust_tests.yml/badge.svg)](https://github.com/katanemo/plano/actions/workflows/rust_tests.yml)
[![e2e tests](https://github.com/katanemo/plano/actions/workflows/e2e_tests.yml/badge.svg)](https://github.com/katanemo/plano/actions/workflows/e2e_tests.yml)
[![Build and Deploy Documentation](https://github.com/katanemo/plano/actions/workflows/static.yml/badge.svg)](https://github.com/katanemo/plano/actions/workflows/static.yml)
</div>
# About The Latest Release:
[0.3.20] [Preference-aware multi LLM routing for Claude Code 2.0](demos/use_cases/claude_code_router/README.md) <br><img src="docs/source/_static/img/claude_code_router.png" alt="high-level network architecture for ArchGW" width="50%">
# Overview
<a href="https://www.producthunt.com/posts/arch-3?embed=true&utm_source=badge-top-post-badge&utm_medium=badge&utm_souce=badge-arch&#0045;3" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/top-post-badge.svg?post_id=565761&theme=dark&period=daily&t=1742359429995" alt="Arch - Build&#0032;fast&#0044;&#0032;hyper&#0045;personalized&#0032;agents&#0032;with&#0032;intelligent&#0032;infra | Product Hunt" style="width: 188px; height: 41px;" width="188" height="41" /></a>
Building agentic demos is easy. Shipping agentic applications safely, reliably, and repeatably to production is hard. After the thrill of a quick hack, you end up building the “hidden middleware” to reach production: routing logic to reach the right agent, guardrail hooks for safety and moderation, evaluation and observability glue for continuous learning, and model/provider quirks scattered across frameworks and application code.
AI demos are easy to hack. But once you move past a prototype, you're stuck building and maintaining low-level plumbing code that slows down real innovation. For example:
Plano solves this by moving core delivery concerns into a unified, out-of-process dataplane.
- **Routing & orchestration.** Put routing in code and you've got two choices: maintain it yourself or live with a framework's baked-in logic. Either way, keeping routing consistent means pushing code changes across all your agents, slowing iteration and turning every policy tweak into a refactor instead of a config flip.
- **Model integration churn.** Frameworks wire LLM integrations directly into code abstractions, making it hard to add or swap models without touching application code — meaning you'll have to do a codewide search/replace every time you want to experiment with a new model or version.
- **Observability & governance.** Logging, tracing, and guardrails are baked in as tightly coupled features, so bringing in best-of-breed solutions is painful and often requires digging through the guts of a framework.
- **Prompt engineering overhead**. Input validation, clarifying vague user input, and coercing outputs into the right schema all pile up, turning what should be design work into low-level plumbing work.
- **Brittle upgrades**. Every change (new model, new guardrail, new trace format) means patching and redeploying application servers. Contrast that with bouncing a central proxy—one upgrade, instantly consistent everywhere.
- **🚦 Orchestration:** Low-latency orchestration between agents; add new agents without modifying app code.
- **🔗 Model Agility:** Route [by model name, alias (semantic names) or automatically via preferences](#use-plano-as-a-llm-router).
- **🕵 Agentic Signals&trade;:** Zero-code capture of [behavior signals](#observability) plus OTEL traces/metrics across every agent.
- **🛡️ Moderation & Memory Hooks:** Build jailbreak protection, add moderation policies and memory consistently via [Filter Chains](https://docs.planoai.dev/concepts/filter_chain.html).
With Arch, you can move faster by focusing on higher-level objectives in a language and framework agnostic way. **Arch** was built by the contributors of [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:
Plano pulls rote plumbing out of your framework so you can stay focused on what matters most: the core product logic of your agentic applications. Plano is backed by [industry-leading LLM research](https://planoai.dev/research) and built on [Envoy](https://envoyproxy.io) by its core contributors, who built critical infrastructure at scale for modern workloads.
>Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems to improve speed and accuracy for common agentic scenarios, all outside core application logic.
**High-Level Network Sequence Diagram**:
![high-level network architecture for Plano](docs/source/_static/img/plano_network_diagram_high_level.png)
**Core Features**:
- `🚦 Route to Agents`: Engineered with purpose-built [LLMs](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) for fast (<100ms) agent routing and hand-off
- `🔗 Route to LLMs`: Unify access to LLMs with support for [three routing strategies](#use-arch-as-a-llm-router).
- `⛨ Guardrails`: Centrally configure and prevent harmful outcomes and ensure safe user interactions
- `⚡ Tools Use`: For common agentic scenarios let Arch instantly clarify and convert prompts to tools/API calls
- `🕵 Observability`: W3C compatible request tracing and LLM metrics that instantly plugin with popular tools
- `🧱 Built on Envoy`: Arch runs alongside app servers as a containerized process, and builds on top of [Envoy's](https://envoyproxy.io) proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.
**High-Level Sequence Diagram**:
![high-level network architecture for ArchGW](docs/source/_static/img/arch_network_diagram_high_level.png)
**Jump to our [docs](https://docs.archgw.com)** to learn how you can use Arch to improve the speed, security and personalization of your GenAI apps.
**Jump to our [docs](https://docs.planoai.dev)** to learn how you can use Plano to improve the speed, safety and observability of your agentic applications.
> [!IMPORTANT]
> Today, the function calling LLM (Arch-Function) designed for the agentic and RAG scenarios is hosted free of charge in the US-central region. To offer consistent latencies and throughput, and to manage our expenses, we will enable access to the hosted version via developers keys soon, and give you the option to run that LLM locally. For more details see this issue [#258](https://github.com/katanemo/archgw/issues/258)
> Plano and the Arch family of LLMs (like Plano-Orchestrator-4B, Arch-Router, etc) are hosted free of charge in the US-central region to give you a great first-run developer experience of Plano. To scale and run in production, you can either run these LLMs locally or contact us on [Discord](https://discord.gg/pGZf2gcwEc) for API keys.
## Contact
To get in touch with us, please join our [discord server](https://discord.gg/pGZf2gcwEc). We will be monitoring that actively and offering support there.
## Demos
* [Sample App: Weather Forecast Agent](demos/samples_python/weather_forecast/README.md) - A sample agentic weather forecasting app that highlights core function calling capabilities of Arch.
* [Sample App: Network Operator Agent](demos/samples_python/network_switch_operator_agent/README.md) - A simple network device switch operator agent that can retrieve device statistics and reboot them.
* [Use Case: Connecting to SaaS APIs](demos/use_cases/spotify_bearer_auth) - Connect 3rd party SaaS APIs to your agentic chat experience.
To get in touch with us, please join our [discord server](https://discord.gg/pGZf2gcwEc). We actively monitor that and offer support there.
## Quickstart
Follow this quickstart guide to use Arch as a router for local or hosted LLMs, including dynamic routing. Later in this section we will see how you can use Arch to build highly capable agentic applications, and to provide e2e observability.
Follow this quickstart guide to use Plano as a router for local or hosted LLMs, including dynamic routing. Later in this section we will see how you can use Plano to build highly capable agentic applications, and to provide e2e observability.
### Prerequisites
@ -79,101 +55,22 @@ Before you begin, ensure you have the following:
2. [Docker compose](https://docs.docker.com/compose/install/) (v2.29)
3. [Python](https://www.python.org/downloads/) (v3.13)
Arch's CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:
Plano's CLI allows you to manage and interact with the Plano gateway efficiently. To install the CLI, simply run the following command:
> [!TIP]
> We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that archgw and its dependencies do not interfere with other packages on your system.
> We recommend that developers create a new Python virtual environment to isolate dependencies before installing Plano. This ensures that plano and its dependencies do not interfere with other packages on your system.
```console
$ python3.13 -m venv venv
$ source venv/bin/activate # On Windows, use: venv\Scripts\activate
$ pip install archgw==0.3.22
$ pip install plano==0.4.0
```
### Use Arch as a LLM Router
Arch supports three powerful routing strategies for LLMs: model-based routing, alias-based routing, and preference-based routing. Each strategy offers different levels of abstraction and control for managing your LLM infrastructure.
### Use Plano as a LLM Router
Plano supports multiple powerful routing strategies for LLMs. [Model-based routing](https://docs.planoai.dev/guides/llm_router.html#model-based-routing) gives you direct control over specific models and supports 11+ LLM providers including OpenAI, Anthropic, DeepSeek, Mistral, Groq, and more. [Alias-based routing](https://docs.planoai.dev/guides/llm_router.html#alias-based-routing) lets you create semantic model names that decouple your application code from specific providers, making it easy to experiment with different models or handle provider changes without refactoring. For full configuration examples and code walkthroughs, see our [routing guides](https://docs.planoai.dev/guides/llm_router.html).
#### Model-based Routing
Model-based routing allows you to configure specific models with static routing. This is ideal when you need direct control over which models handle specific requests. Arch supports 11+ LLM providers including OpenAI, Anthropic, DeepSeek, Mistral, Groq, and more.
```yaml
version: v0.1.0
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
default: true
- model: anthropic/claude-3-5-sonnet-20241022
access_key: $ANTHROPIC_API_KEY
```
You can then route to specific models using any OpenAI-compatible client:
```python
from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:12000/v1", api_key="test")
# Route to specific model
response = client.chat.completions.create(
model="anthropic/claude-3-5-sonnet-20241022",
messages=[{"role": "user", "content": "Explain quantum computing"}]
)
```
#### Alias-based Routing
Alias-based routing lets you create semantic model names that map to underlying providers. This approach decouples your application code from specific model names, making it easy to experiment with different models or handle provider changes.
```yaml
version: v0.1.0
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
- model: anthropic/claude-3-5-sonnet-20241022
access_key: $ANTHROPIC_API_KEY
model_aliases:
# Model aliases - friendly names that map to actual model names
fast-model:
target: gpt-4o-mini
reasoning-model:
target: gpt-4o
creative-model:
target: claude-3-5-sonnet-20241022
```
Use semantic aliases in your application code:
```python
# Your code uses semantic names instead of provider-specific ones
response = client.chat.completions.create(
model="reasoning-model", # Routes to best available reasoning model
messages=[{"role": "user", "content": "Solve this complex problem..."}]
)
```
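Conceptually, alias-based routing is a name-indirection layer in front of your providers. As a toy illustration (not Plano's actual implementation), alias resolution amounts to a lookup that passes unknown names through unchanged, so direct model routing keeps working alongside aliases:

```python
# Toy sketch of alias resolution (illustrative only, not Plano's code).
# Semantic names map to concrete model IDs; names with no alias entry
# pass through untouched so model-based routing still works.
ALIASES = {
    "fast-model": "gpt-4o-mini",
    "reasoning-model": "gpt-4o",
    "creative-model": "claude-3-5-sonnet-20241022",
}

def resolve_model(requested: str) -> str:
    """Map a semantic alias to its target model, or pass the name through."""
    return ALIASES.get(requested, requested)

print(resolve_model("reasoning-model"))  # gpt-4o
print(resolve_model("openai/gpt-4o"))    # openai/gpt-4o (pass-through)
```

Because the mapping lives in the gateway config rather than application code, swapping the target of `reasoning-model` is a config change, not a refactor.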
#### Preference-aligned Routing
Preference-aligned routing provides intelligent, dynamic model selection based on natural language descriptions of tasks and preferences. Instead of hardcoded routing logic, you describe what each model is good at using plain English.
#### Policy-based Routing
Policy-based routing provides deterministic constructs for automatic routing: intelligent, dynamic model selection based on natural language descriptions of tasks and preferences. Instead of hardcoded routing logic, you describe what each model is good at using plain English.
```yaml
version: v0.1.0
@ -203,23 +100,90 @@ llm_providers:
description: analyzing existing code for bugs, improvements, and optimization
```
Arch uses a lightweight 1.5B autoregressive model to intelligently map user prompts to these preferences, automatically selecting the best model for each request. This approach adapts to intent drift, supports multi-turn conversations, and avoids brittle embedding-based classifiers or manual if/else chains. No retraining required when adding models or updating policies — routing is governed entirely by human-readable rules.
Plano uses a lightweight 1.5B autoregressive model to intelligently map user prompts to these preferences, automatically selecting the best model for each request. This approach adapts to intent drift, supports multi-turn conversations, and avoids brittle embedding-based classifiers or manual if/else chains. No retraining required when adding models or updating policies — routing is governed entirely by human-readable rules.
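To make the idea concrete, here is a deliberately naive sketch of mapping a prompt to policy descriptions. Plano uses a trained 1.5B autoregressive model for this, not keyword overlap, and the model names and descriptions below are hypothetical — this only illustrates the shape of the problem:

```python
import re

# Hypothetical policies: model -> plain-English description of what it is good at.
# Plano's real router is a trained LLM; word overlap here is just an illustration.
POLICIES = {
    "openai/gpt-4o": "generating new code snippets and functions",
    "anthropic/claude-3-5-sonnet-20241022": "analyzing existing code for bugs, improvements, and optimization",
}

def route(prompt: str) -> str:
    """Pick the policy whose description shares the most words with the prompt."""
    words = set(re.findall(r"[a-z0-9]+", prompt.lower()))
    def score(desc: str) -> int:
        return len(words & set(re.findall(r"[a-z0-9]+", desc.lower())))
    return max(POLICIES, key=lambda model: score(POLICIES[model]))

print(route("find bugs in this existing code"))
# anthropic/claude-3-5-sonnet-20241022
```

A learned router handles paraphrase, intent drift, and multi-turn context, which is exactly where keyword matching like this breaks down.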
**Learn More**: Check our [documentation](https://docs.archgw.com/concepts/llm_providers/llm_providers.html) for comprehensive provider setup guides and routing strategies. You can learn more about the design, benchmarks, and methodology behind preference-based routing in our paper:
**Learn More**: Check our [documentation](https://docs.planoai.dev/concepts/llm_providers/llm_providers.html) for comprehensive provider setup guides and routing strategies. You can learn more about the design, benchmarks, and methodology behind preference-based routing in our paper:
<div align="left">
<a href="https://arxiv.org/abs/2506.16655" target="_blank">
<img src="docs/source/_static/img/arch_router_paper_preview.png" alt="Arch Router Paper Preview">
<img src="docs/source/_static/img/arch_router_paper_preview.png" alt="Plano Router Paper Preview">
</a>
</div>
### Build Agentic Apps with Arch
### Build Agentic Apps with Plano
In the following quickstart we will show you how easy it is to build an AI agent with the Arch gateway. We will build a currency exchange agent using the following simple steps. For this demo we will use `https://api.frankfurter.dev/` to fetch the latest prices for currencies and assume USD as the base currency.
Plano helps you build agentic applications in two complementary ways:
#### Step 1. Create arch config file
- **Orchestrate agents**: Let Plano decide which agent or LLM should handle each request and in what sequence.
- **Call deterministic backends**: Use prompt targets to turn natural-language prompts into structured, validated API calls.
Create `arch_config.yaml` file with following content,
You focus on product logic (agents and APIs) while Plano handles routing, parameter extraction, and wiring. The full examples used here are available in the [`plano-quickstart` repository](https://github.com/plano-ai/plano-quickstart).
#### Build agents with Plano orchestration
Agents are where your business logic lives (the "inner loop"). Plano takes care of the "outer loop"—routing, sequencing, and managing calls across agents and LLMs. In this quick example, we show a simplified **Travel Assistant** that routes between a `flight_agent` and a `hotel_agent`.
##### Step 1. Minimal orchestration config
Create a `plano_config.yaml` that wires Plano-Orchestrator to your agents:
```yaml
version: v0.1.0
agents:
- id: flight_agent
url: http://host.docker.internal:10520 # your flights service
- id: hotel_agent
url: http://host.docker.internal:10530 # your hotels service
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
listeners:
- type: agent
name: travel_assistant
port: 8001
router: plano_orchestrator_v1
agents:
- id: flight_agent
description: Search for flights and provide flight status.
- id: hotel_agent
description: Find hotels and check availability.
tracing:
random_sampling: 100
```
##### Step 2. Start your agents and Plano
Run your `flight_agent` and `hotel_agent` services (see the [Orchestration guide](https://docs.planoai.dev/guides/orchestration.html) for a full Travel Booking example), then start Plano with the config above:
```console
$ plano up plano_config.yaml
```
Plano will start the orchestrator and expose an agent listener on port `8001`.
##### Step 3. Send a prompt and let Plano route
Send a request to Plano using the OpenAI-compatible chat completions API—the orchestrator will analyze the prompt and route it to the right agent based on intent:
```bash
$ curl --header 'Content-Type: application/json' \
--data '{"messages": [{"role": "user","content": "Find me flights from SFO to JFK tomorrow"}], "model": "openai/gpt-4o"}' \
http://localhost:8001/v1/chat/completions
```
You can then ask a follow-up like "Also book me a hotel near JFK" and Plano-Orchestrator will route to `hotel_agent`—your agents stay focused on business logic while Plano handles routing.
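The same call can be made from any OpenAI-compatible client. A minimal Python sketch that builds the identical request body (the port comes from the listener config above; actually sending it assumes the gateway from Step 2 is running):

```python
import json

# Agent listener endpoint from plano_config.yaml (port 8001).
PLANO_URL = "http://localhost:8001/v1/chat/completions"

def chat_request(content: str, model: str = "openai/gpt-4o") -> dict:
    """Build the OpenAI-compatible body that the curl example sends."""
    return {"model": model, "messages": [{"role": "user", "content": content}]}

payload = chat_request("Find me flights from SFO to JFK tomorrow")
print(json.dumps(payload))

# POST with any HTTP client once the gateway is up, e.g.:
#   requests.post(PLANO_URL, json=payload,
#                 headers={"Content-Type": "application/json"})
```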
#### Deterministic API calls with prompt targets
Next, we'll show Plano's deterministic API calling using a single prompt target. We'll build a currency exchange backend powered by `https://api.frankfurter.dev/`, assuming USD as the base currency.
##### Step 1. Create plano config file
Create `plano_config.yaml` with the following content:
```yaml
version: v0.1.0
@ -238,12 +202,6 @@ llm_providers:
system_prompt: |
You are a helpful assistant.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.
prompt_targets:
- name: currency_exchange
description: Get currency exchange rate from USD to other currencies
@ -271,24 +229,23 @@ endpoints:
protocol: https
```
#### Step 2. Start arch gateway with currency conversion config
##### Step 2. Start Plano with currency conversion config
```sh
$ archgw up arch_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.3.22
$ plano up plano_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting plano cli version: 0.4.0
2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
2024-12-05 16:56:28,485 - cli.main - INFO - Starting arch model server and arch gateway
2024-12-05 16:56:28,485 - cli.main - INFO - Starting plano model server and plano gateway
2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!
```
Once the gateway is up you can start interacting with at port 10000 using openai chat completion API.
Once the gateway is up you can start interacting with it at port `10000` using the OpenAI chat completion API.
Some of the sample queries you can ask could be `what is currency rate for gbp?` or `show me list of currencies for conversion`.
Some sample queries you can ask include: `what is currency rate for gbp?` or `show me list of currencies for conversion`.
#### Step 3. Interacting with gateway using curl command
##### Step 3. Interact with the gateway using curl
Here is a sample curl command you can use to interact,
Here is a sample curl command you can use to interact:
```bash
$ curl --header 'Content-Type: application/json' \
@ -296,10 +253,9 @@ $ curl --header 'Content-Type: application/json' \
http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"As of the date provided in your context, December 5, 2024, the exchange rate for GBP (British Pound) from USD (United States Dollar) is 0.78558. This means that 1 USD is equivalent to 0.78558 GBP."
```
And to get list of supported currencies,
And to get the list of supported currencies:
```bash
$ curl --header 'Content-Type: application/json' \
@ -307,42 +263,15 @@ $ curl --header 'Content-Type: application/json' \
http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish Złoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."
```
## [Observability](https://docs.archgw.com/guides/observability/observability.html)
Arch is designed to support best-in class observability by supporting open standards. Please read our [docs](https://docs.archgw.com/guides/observability/observability.html) on observability for more details on tracing, metrics, and logs. The screenshot below is from our integration with Signoz (among others)
## [Observability](https://docs.planoai.dev/guides/observability/observability.html)
Plano is designed to support best-in-class observability by supporting open standards. Please read our [docs](https://docs.planoai.dev/guides/observability/observability.html) on observability for more details on tracing, metrics, and logs. The screenshot below is from our integration with Signoz (among others).
![alt text](docs/source/_static/img/tracing.png)
## Debugging
When debugging issues or errors, application logs and access logs provide key information and give you more context on what's going on with the system. The Arch gateway runs at info log level; the following is typical output you could see in an interaction between a developer and the gateway:
```
$ archgw up --service archgw --foreground
...
[2025-03-26 18:32:01.350][26][info] prompt_gateway: on_http_request_body: sending request to model server
[2025-03-26 18:32:01.851][26][info] prompt_gateway: on_http_call_response: model server response received
[2025-03-26 18:32:01.852][26][info] prompt_gateway: on_http_call_response: dispatching api call to developer endpoint: weather_forecast_service, path: /weather, method: POST
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: developer api call response received: status code: 200
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: sending request to upstream llm
[2025-03-26 18:32:01.883][26][info] llm_gateway: on_http_request_body: provider: gpt-4o-mini, model requested: None, model selected: gpt-4o-mini
[2025-03-26 18:32:02.818][26][info] llm_gateway: on_http_response_body: time to first token: 1468ms
[2025-03-26 18:32:04.532][26][info] llm_gateway: on_http_response_body: request latency: 3183ms
...
```
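These info-level lines are plain text, so latency figures can be scraped with a simple regex. A small sketch against the sample output above (it assumes the log format stays as shown):

```python
import re

# Sample lines copied from the gateway output above.
SAMPLE = """\
[2025-03-26 18:32:02.818][26][info] llm_gateway: on_http_response_body: time to first token: 1468ms
[2025-03-26 18:32:04.532][26][info] llm_gateway: on_http_response_body: request latency: 3183ms
"""

def extract_latencies(log_text: str) -> dict:
    """Collect latency metrics (milliseconds) from gateway log lines."""
    return {name: int(ms) for name, ms in
            re.findall(r"(time to first token|request latency): (\d+)ms", log_text)}

print(extract_latencies(SAMPLE))
# {'time to first token': 1468, 'request latency': 3183}
```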
The log level can be changed to debug to get more details. To enable debug logs, edit [supervisord.conf](arch/supervisord.conf) and change the log level `--component-log-level wasm:info` to `--component-log-level wasm:debug`. After that, rebuild the docker image and restart the arch gateway using the following commands:
```
# make sure you are at the root of the repo
$ archgw build
# go to your service that has the arch_config.yaml file and issue the following command:
$ archgw up --service archgw --foreground
```
## Contribution
We would love feedback on our [Roadmap](https://github.com/orgs/katanemo/projects/1) and we welcome contributions to **Arch**!
We would love feedback on our [Roadmap](https://github.com/orgs/katanemo/projects/1) and we welcome contributions to **Plano**!
Whether you're fixing bugs, adding new features, improving documentation, or creating tutorials, your help is much appreciated.
Please visit our [Contribution Guide](CONTRIBUTING.md) for more details.


@ -20,7 +20,7 @@ export default function RootLayout({
<body className="antialiased">
{/* Google tag (gtag.js) */}
<Script
src="https://www.googletagmanager.com/gtag/js?id=G-6J5LQH3Q9G"
src="https://www.googletagmanager.com/gtag/js?id=G-ML7B1X9HY2"
strategy="afterInteractive"
/>
<Script id="google-analytics" strategy="afterInteractive">
@ -28,7 +28,7 @@ export default function RootLayout({
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-6J5LQH3Q9G');
gtag('config', 'G-ML7B1X9HY2');
`}
</Script>
<ConditionalLayout>{children}</ConditionalLayout>


@ -36,7 +36,7 @@ properties:
type: string
enum:
- mcp
- rest
- http
transport:
type: string
enum:


@ -61,6 +61,7 @@ pub async fn agent_chat(
body,
}) = &err
{
warn!(
"Client error from agent '{}' (HTTP {}): {}",
agent, status, body
@ -77,7 +78,7 @@ pub async fn agent_chat(
let json_string = error_json.to_string();
let mut response = Response::new(ResponseHandler::create_full_body(json_string));
*response.status_mut() = hyper::StatusCode::from_u16(*status)
.unwrap_or(hyper::StatusCode::INTERNAL_SERVER_ERROR);
.unwrap_or(hyper::StatusCode::BAD_REQUEST);
response.headers_mut().insert(
hyper::header::CONTENT_TYPE,
"application/json".parse().unwrap(),


@ -4,7 +4,7 @@ use std::collections::HashMap;
pub const JSON_RPC_VERSION: &str = "2.0";
pub const TOOL_CALL_METHOD : &str = "tools/call";
pub const MCP_INITIALIZE: &str = "initialize";
pub const MCP_INITIALIZE_NOTIFICATION: &str = "initialize/notification";
pub const MCP_INITIALIZE_NOTIFICATION: &str = "notifications/initialized";
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]


@ -132,7 +132,7 @@ impl PipelineProcessor {
}
/// Record a span for MCP protocol interactions
fn record_mcp_span(
fn record_agent_filter_span(
&self,
collector: &std::sync::Arc<common::traces::TraceCollector>,
operation: &str,
@ -243,7 +243,7 @@ impl PipelineProcessor {
.await?;
} else {
chat_history_updated = self
.execute_rest_filter(
.execute_http_filter(
&chat_history_updated,
agent,
request_headers,
@ -489,7 +489,7 @@ impl PipelineProcessor {
attrs.insert("mcp.session_id", mcp_session_id.clone());
attrs.insert("http.status_code", http_status.as_u16().to_string());
self.record_mcp_span(
self.record_agent_filter_span(
collector,
"tool_call",
&agent.id,
@ -551,7 +551,7 @@ impl PipelineProcessor {
return Err(PipelineError::ClientError {
agent: agent.id.clone(),
status: http_status.as_u16(),
status: hyper::StatusCode::BAD_REQUEST.as_u16(),
body: error_message,
});
}
@ -690,8 +690,8 @@ impl PipelineProcessor {
session_id
}
/// Execute a REST-based filter agent
async fn execute_rest_filter(
/// Execute a HTTP-based filter agent
async fn execute_http_filter(
&mut self,
messages: &[Message],
agent: &Agent,
@ -702,11 +702,11 @@ impl PipelineProcessor {
) -> Result<Vec<Message>, PipelineError> {
let tool_name = agent.tool.as_deref().unwrap_or(&agent.id);
// Generate span ID for this REST call (child of filter span)
let rest_span_id = generate_random_span_id();
// Generate span ID for this HTTP call (child of filter span)
let http_span_id = generate_random_span_id();
// Build headers
let trace_parent = format!("00-{}-{}-01", trace_id, rest_span_id);
let trace_parent = format!("00-{}-{}-01", trace_id, http_span_id);
let mut agent_headers = request_headers.clone();
agent_headers.remove(hyper::header::CONTENT_LENGTH);
@ -742,7 +742,7 @@ impl PipelineProcessor {
let start_instant = Instant::now();
debug!(
"Sending REST request to agent {} at URL: {}",
"Sending HTTP request to agent {} at URL: {}",
agent.id, agent.url
);
@ -761,16 +761,16 @@ impl PipelineProcessor {
let end_time = SystemTime::now();
let elapsed = start_instant.elapsed();
// Record REST call span
// Record HTTP call span
if let Some(collector) = trace_collector {
let mut attrs = HashMap::new();
attrs.insert("rest.tool_name", tool_name.to_string());
attrs.insert("rest.url", agent.url.clone());
attrs.insert("http.tool_name", tool_name.to_string());
attrs.insert("http.url", agent.url.clone());
attrs.insert("http.status_code", http_status.as_u16().to_string());
self.record_mcp_span(
self.record_agent_filter_span(
collector,
"rest_call",
"http_call",
&agent.id,
start_time,
end_time,
@ -778,7 +778,7 @@ impl PipelineProcessor {
Some(attrs),
trace_id.clone(),
filter_span_id.clone(),
Some(rest_span_id),
Some(http_span_id),
);
}
@ -801,7 +801,7 @@ impl PipelineProcessor {
}
info!(
"Response from REST agent {}: {}",
"Response from HTTP agent {}: {}",
agent.id,
String::from_utf8_lossy(&response_bytes)
);
@ -1061,10 +1061,10 @@ mod tests {
.await;
match result {
Err(PipelineError::ClientError { status, body, .. }) => {
assert_eq!(status, 200);
assert_eq!(body, "bad tool call");
}
Err(PipelineError::ClientError { status, body, .. }) => {
assert_eq!(status, 400);
assert_eq!(body, "bad tool call");
}
_ => panic!("Expected client error when isError flag is set"),
}
}


@ -476,7 +476,7 @@ mod test {
use pretty_assertions::assert_eq;
use std::fs;
use crate::{api::open_ai::ToolType, configuration::GuardType};
use crate::api::open_ai::ToolType;
#[test]
fn test_deserialize_configuration() {
@ -486,54 +486,17 @@ mod test {
.expect("reference config file not found");
let config: super::Configuration = serde_yaml::from_str(&ref_config).unwrap();
assert_eq!(config.version, "v0.1");
assert_eq!(config.version, "v0.3.0");
let prompt_guards = config.prompt_guards.as_ref().unwrap();
let input_guards = &prompt_guards.input_guards;
let jailbreak_guard = input_guards.get(&GuardType::Jailbreak).unwrap();
assert_eq!(
jailbreak_guard
.on_exception
.as_ref()
.unwrap()
.forward_to_error_target,
None
);
assert_eq!(
jailbreak_guard.on_exception.as_ref().unwrap().error_handler,
None
);
if let Some(prompt_targets) = &config.prompt_targets {
assert!(!prompt_targets.is_empty(), "prompt_targets should not be empty if present");
}
let prompt_targets = &config.prompt_targets;
assert_eq!(prompt_targets.as_ref().unwrap().len(), 2);
let prompt_target = prompt_targets
.as_ref()
.unwrap()
.iter()
.find(|p| p.name == "reboot_network_device")
.unwrap();
assert_eq!(prompt_target.name, "reboot_network_device");
assert_eq!(prompt_target.default, None);
let prompt_target = prompt_targets
.as_ref()
.unwrap()
.iter()
.find(|p| p.name == "information_extraction")
.unwrap();
assert_eq!(prompt_target.name, "information_extraction");
assert_eq!(prompt_target.default, Some(true));
assert_eq!(
prompt_target.endpoint.as_ref().unwrap().name,
"app_server".to_string()
);
assert_eq!(
prompt_target.endpoint.as_ref().unwrap().path,
Some("/agent/summary".to_string())
);
let tracing = config.tracing.as_ref().unwrap();
assert_eq!(tracing.sampling_rate.unwrap(), 0.1);
if let Some(tracing) = config.tracing.as_ref() {
if let Some(sampling_rate) = tracing.sampling_rate {
assert_eq!(sampling_rate, 0.1);
}
}
let mode = config.mode.as_ref().unwrap_or(&super::GatewayMode::Prompt);
assert_eq!(*mode, super::GatewayMode::Prompt);
@ -546,68 +509,21 @@ mod test {
)
.expect("reference config file not found");
let config: super::Configuration = serde_yaml::from_str(&ref_config).unwrap();
let prompt_targets = &config.prompt_targets;
let prompt_target = prompt_targets
.as_ref()
.unwrap()
.iter()
.find(|p| p.name == "reboot_network_device")
.unwrap();
let chat_completion_tool: super::ChatCompletionTool = prompt_target.into();
assert_eq!(chat_completion_tool.tool_type, ToolType::Function);
assert_eq!(chat_completion_tool.function.name, "reboot_network_device");
assert_eq!(
chat_completion_tool.function.description,
"Reboot a specific network device"
);
assert_eq!(chat_completion_tool.function.parameters.properties.len(), 2);
assert_eq!(
chat_completion_tool
.function
.parameters
.properties
.contains_key("device_id"),
true
);
assert_eq!(
chat_completion_tool
.function
.parameters
.properties
.get("device_id")
.unwrap()
.parameter_type,
crate::api::open_ai::ParameterType::String
);
assert_eq!(
chat_completion_tool
.function
.parameters
.properties
.get("device_id")
.unwrap()
.description,
"Identifier of the network device to reboot.".to_string()
);
assert_eq!(
chat_completion_tool
.function
.parameters
.properties
.get("device_id")
.unwrap()
.required,
Some(true)
);
assert_eq!(
chat_completion_tool
.function
.parameters
.properties
.get("confirmation")
.unwrap()
.parameter_type,
crate::api::open_ai::ParameterType::Bool
);
if let Some(prompt_targets) = &config.prompt_targets {
if let Some(prompt_target) = prompt_targets.iter().find(|p| p.name == "reboot_network_device") {
let chat_completion_tool: super::ChatCompletionTool = prompt_target.into();
assert_eq!(chat_completion_tool.tool_type, ToolType::Function);
assert_eq!(chat_completion_tool.function.name, "reboot_network_device");
assert_eq!(chat_completion_tool.function.description, "Reboot a specific network device");
assert_eq!(chat_completion_tool.function.parameters.properties.len(), 2);
assert!(chat_completion_tool.function.parameters.properties.contains_key("device_id"));
let device_id_param = chat_completion_tool.function.parameters.properties.get("device_id").unwrap();
assert_eq!(device_id_param.parameter_type, crate::api::open_ai::ParameterType::String);
assert_eq!(device_id_param.description, "Identifier of the network device to reboot.".to_string());
assert_eq!(device_id_param.required, Some(true));
let confirmation_param = chat_completion_tool.function.parameters.properties.get("confirmation").unwrap();
assert_eq!(confirmation_param.parameter_type, crate::api::open_ai::ParameterType::Bool);
}
}
}
}


@ -26,12 +26,6 @@ endpoints:
system_prompt: |
  You are a helpful assistant. Only respond to queries related to currency exchange. If there are any other questions, I can't help you.
prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.
prompt_targets:
  - name: currency_exchange
    description: Get currency exchange rate from USD to other currencies


@ -1,16 +0,0 @@
FROM python:3.12 AS base
FROM base AS builder
WORKDIR /src
COPY requirements.txt /src/
RUN pip install --prefix=/runtime --force-reinstall -r requirements.txt
FROM python:3.12-slim AS output
COPY --from=builder /runtime /usr/local
WORKDIR /app
COPY . /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--log-level", "info"]


@ -1,29 +0,0 @@
# HR Agent Demo
This demo showcases how **Arch** can be used to build an HR agent that manages workforce-related inquiries, workforce planning, and communication via Slack. It intelligently routes incoming prompts to the correct targets, providing concise, useful responses tailored for HR and workforce decision-making.
## Available Functions:
- **HR Q/A**: Handles general Q&A related to insurance policies.
- **Endpoint**: `/agent/hr_qa`
- **Workforce Data Retrieval**: Retrieves data related to workforce metrics like headcount, satisfaction, and staffing.
- **Endpoint**: `/agent/workforce`
- Parameters:
- `staffing_type` (str, required): Type of staffing (e.g., `contract`, `fte`, `agency`).
- `region` (str, required): Region for which the data is requested (e.g., `asia`, `europe`, `americas`).
- `data_snapshot_days_ago` (int, optional): Days-ago snapshot for data retrieval (e.g., `0`, `30`).
- **Initiate Policy**: Sends messages to a Slack channel
- **Endpoint**: `/agent/slack_message`
- Parameters:
- `slack_message` (str, required): The message content to be sent
# Starting the demo
1. Make sure the [prerequisites](https://github.com/katanemo/arch/?tab=readme-ov-file#prerequisites) are installed correctly
2. Start Arch
```sh
sh run_demo.sh
```
3. Navigate to http://localhost:18080/agent/chat
4. Ask: "Can you give me workforce data for asia?"


@ -1,63 +0,0 @@
version: v0.1.0

listeners:
  ingress_traffic:
    address: 0.0.0.0
    port: 10000
    message_format: openai
    timeout: 30s

# Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
llm_providers:
  - access_key: $OPENAI_API_KEY
    model: openai/gpt-4o-mini
    default: true

# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
    # value could be ip address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: host.docker.internal:18083
    # max time to wait for a connection to be established
    connect_timeout: 0.005s

# default system prompt used by all prompt targets
system_prompt: |
  You are a Workforce assistant that helps on workforce planning and HR decision makers with reporting and workforce planning. Use following rules when responding,
  - when you get data in json format, offer some summary but don't be too verbose
  - be concise, to the point and do not over analyze the data

prompt_targets:
  - name: workforce
    description: Get workforce data like headcount and satisfaction levels by region and staffing type
    endpoint:
      name: app_server
      path: /agent/workforce
      http_method: POST
    parameters:
      - name: staffing_type
        type: str
        description: specific category or nature of employment used by an organization like fte, contract and agency
        required: true
        enum: [fte, contract, agency]
      - name: region
        type: str
        required: true
        description: Geographical region for which you want workforce data like asia, europe, americas.
      - name: data_snapshot_days_ago
        type: int
        required: false
        description: the snapshot day for which you want workforce data.
  - name: slack_message
    endpoint:
      name: app_server
      path: /agent/slack_message
      http_method: POST
    description: sends a slack message on a channel
    parameters:
      - name: slack_message
        type: string
        required: true
        description: the message that should be sent to a slack channel
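For context, the gateway surfaces each prompt target to the LLM as an OpenAI-style tool definition. The sketch below is an illustrative approximation of that mapping for the `workforce` target (the real translation lives in the gateway's Rust code; this is not its actual implementation):

```python
# Hypothetical, simplified mapping from a prompt_target entry to an
# OpenAI-style tool schema; fields mirror the YAML above.
prompt_target = {
    "name": "workforce",
    "description": "Get workforce data like headcount and satisfaction levels by region and staffing type",
    "parameters": [
        {"name": "staffing_type", "type": "str", "required": True,
         "enum": ["fte", "contract", "agency"]},
        {"name": "region", "type": "str", "required": True},
        {"name": "data_snapshot_days_ago", "type": "int", "required": False},
    ],
}

TYPE_MAP = {"str": "string", "int": "integer", "bool": "boolean"}

def to_openai_tool(target: dict) -> dict:
    properties, required = {}, []
    for p in target["parameters"]:
        prop = {"type": TYPE_MAP[p["type"]]}
        if "enum" in p:
            prop["enum"] = p["enum"]
        properties[p["name"]] = prop
        if p.get("required"):
            required.append(p["name"])
    return {
        "type": "function",
        "function": {
            "name": target["name"],
            "description": target["description"],
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

tool = to_openai_tool(prompt_target)
assert tool["function"]["parameters"]["required"] == ["staffing_type", "region"]
```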


@ -1,29 +0,0 @@
services:
  api_server:
    build:
      context: .
    environment:
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN:-None}
      - OPENAI_API_KEY=${OPENAI_API_KEY:?error}
      - CHAT_COMPLETION_ENDPOINT=http://host.docker.internal:10000/v1
    volumes:
      - ./arch_config.yaml:/app/arch_config.yaml
    ports:
      - "18083:80"
    healthcheck:
      test: ["CMD", "curl", "http://localhost:80/healthz"]
      interval: 5s
      retries: 20

  chatbot_ui:
    build:
      context: ../../shared/chatbot_ui
      dockerfile: Dockerfile
    ports:
      - "18080:8080"
    environment:
      - CHAT_COMPLETION_ENDPOINT=http://host.docker.internal:10000/v1
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./arch_config.yaml:/app/arch_config.yaml


@ -1,94 +0,0 @@
import os
import json
import logging
from enum import Enum
from typing import List, Optional, Tuple

import pandas as pd
import gradio as gr
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
from openai import OpenAI

app = FastAPI()

with open("workforce_data.json") as file:
    workforce_data = json.load(file)
    workforce_data_df = pd.json_normalize(
        workforce_data,
        record_path=["regions"],
        meta=["data_snapshot_days_ago", "satisfaction"],
    )


# Define the request model
class WorkforceRequest(BaseModel):
    region: str
    staffing_type: str
    data_snapshot_days_ago: Optional[int] = None


class SlackRequest(BaseModel):
    slack_message: str


class WorkforceResponse(BaseModel):
    region: str
    staffing_type: str
    headcount: int
    satisfaction: float


@app.post("/agent/slack_message")
def send_slack_message(request: SlackRequest):
    """Endpoint that sends a slack message."""
    slack_message = request.slack_message
    # Load the bot token from an environment variable or replace it directly
    slack_token = os.getenv("SLACK_BOT_TOKEN")  # e.g. 'xoxb-your-token'
    if slack_token is None:
        print(f"Message for slack: {slack_message}")
    else:
        client = WebClient(token=slack_token)
        channel = "hr_agent_demo"
        try:
            # Send the message
            response = client.chat_postMessage(channel=channel, text=slack_message)
            return f"Message sent to {channel}: {response['message']['text']}"
        except SlackApiError as e:
            print(f"Error sending message: {e.response['error']}")
            raise HTTPException(
                status_code=500, detail=f"Slack error: {e.response['error']}"
            )


# Post method for workforce summary
@app.post("/agent/workforce")
def get_workforce(request: WorkforceRequest):
    """Endpoint to retrieve workforce data by region and staffing type at a given point in time."""
    region = request.region.lower()
    staffing_type = request.staffing_type.lower()
    # this param is not required; default to the latest snapshot
    data_snapshot_days_ago = request.data_snapshot_days_ago or 0
    snapshot = workforce_data_df[
        (workforce_data_df["region"] == region)
        & (workforce_data_df["data_snapshot_days_ago"] == data_snapshot_days_ago)
    ]
    response = {
        "region": region,
        "staffing_type": f"Staffing agency: {staffing_type}",
        "headcount": f"Headcount: {int(snapshot[staffing_type].values[0])}",
        "satisfaction": f"Satisfaction: {float(snapshot['satisfaction'].values[0])}",
    }
    return response


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=80)
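To see the `json_normalize` lookup from `get_workforce` in isolation, here is a self-contained sketch with a couple of inline rows (values are made up, mirroring the shape of `workforce_data.json`):

```python
import pandas as pd

# Inline sample mirroring workforce_data.json (values are illustrative)
workforce_data = [
    {
        "data_snapshot_days_ago": 0,
        "regions": [
            {"region": "asia", "contract": 100, "fte": 150, "agency": 2000},
            {"region": "europe", "contract": 80, "fte": 120, "agency": 2500},
        ],
        "satisfaction": 3.5,
    }
]

# One row per region; snapshot-level metadata is repeated onto each row
df = pd.json_normalize(
    workforce_data,
    record_path=["regions"],
    meta=["data_snapshot_days_ago", "satisfaction"],
)

row = df[(df["region"] == "asia") & (df["data_snapshot_days_ago"] == 0)]
headcount = int(row["fte"].values[0])  # 150
satisfaction = float(row["satisfaction"].values[0])  # 3.5
```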


@ -1,14 +0,0 @@
fastapi
uvicorn
slack-sdk
typing
pandas
gradio==5.3.0
huggingface_hub<1.0.0
async_timeout==4.0.3
loguru==0.7.2
asyncio==3.4.3
httpx==0.27.0
python-dotenv==1.0.1
pydantic==2.8.2
openai==1.51.0


@ -1,47 +0,0 @@
#!/bin/bash
set -e

# Function to start the demo
start_demo() {
  # Step 1: Check if .env file exists
  if [ -f ".env" ]; then
    echo ".env file already exists. Skipping creation."
  else
    # Step 2: Create `.env` file and set OpenAI key
    if [ -z "$OPENAI_API_KEY" ]; then
      echo "Error: OPENAI_API_KEY environment variable is not set for the demo."
      exit 1
    fi
    echo "Creating .env file..."
    echo "OPENAI_API_KEY=$OPENAI_API_KEY" > .env
    echo ".env file created with OPENAI_API_KEY."
  fi

  # Step 3: Start Arch
  echo "Starting Arch with arch_config.yaml..."
  archgw up arch_config.yaml

  # Step 4: Start HR Agent
  echo "Starting HR Agent using Docker Compose..."
  docker compose up -d # Run in detached mode
}

# Function to stop the demo
stop_demo() {
  # Step 1: Stop Docker Compose services
  echo "Stopping HR Agent using Docker Compose..."
  docker compose down

  # Step 2: Stop Arch
  echo "Stopping Arch..."
  archgw down
}

# Main script logic
if [ "$1" == "down" ]; then
  stop_demo
else
  # Default action is to bring the demo up
  start_demo
fi


@ -1,14 +0,0 @@
test_cases:
  - id: get workforce data
    input:
      messages:
        - role: user
          content: what is workforce data for asia for fte employees
    expected_tools:
      - type: function
        function:
          name: workforce
          arguments:
            staffing_type: fte
            region: asia
    expected_output_contains: asia
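A harness for test cases like the one above might compare the model's emitted tool call against `expected_tools`. A minimal illustrative check (the structure and subset-matching rule here are assumptions, not the actual test runner):

```python
# Expected tool call taken from the test case above
expected = {
    "name": "workforce",
    "arguments": {"staffing_type": "fte", "region": "asia"},
}

# Hypothetical model output; may carry extra arguments
model_tool_call = {
    "name": "workforce",
    "arguments": {"staffing_type": "fte", "region": "asia", "data_snapshot_days_ago": 0},
}

def matches(expected: dict, actual: dict) -> bool:
    """Expected arguments must be a subset of the actual call's arguments."""
    if expected["name"] != actual["name"]:
        return False
    return all(
        actual["arguments"].get(k) == v for k, v in expected["arguments"].items()
    )

assert matches(expected, model_tool_call)
```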


@ -1,29 +0,0 @@
[
{
"data_snapshot_days_ago": 0,
"regions": [
{ "region": "asia", "contract": 100, "fte": 150, "agency": 2000 },
{ "region": "europe", "contract": 80, "fte": 120, "agency": 2500 },
{ "region": "americas", "contract": 90, "fte": 200, "agency": 3100 }
],
"satisfaction": 3.5
},
{
"data_snapshot_days_ago": 30,
"regions": [
{ "region": "asia", "contract": 110, "fte": 155, "agency": 1000 },
{ "region": "europe", "contract": 85, "fte": 130, "agency": 1600 },
{ "region": "americas", "contract": 95, "fte": 210, "agency": 3100 }
],
"satisfaction": 4.0
},
{
"data_snapshot_days_ago": 60,
"regions": [
{ "region": "asia", "contract": 115, "fte": 160, "agency": 500 },
{ "region": "europe", "contract": 90, "fte": 140, "agency": 700 },
{ "region": "americas", "contract": 100, "fte": 220, "agency": 1200 }
],
"satisfaction": 4.7
}
]


@ -1,19 +0,0 @@
FROM python:3.12 AS base

FROM base AS builder
WORKDIR /src
COPY requirements.txt /src/
RUN pip install --prefix=/runtime --force-reinstall -r requirements.txt
COPY . /src

FROM python:3.12-slim AS output
COPY --from=builder /runtime /usr/local
COPY . /app
WORKDIR /app

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--log-level", "info"]


@ -1,42 +0,0 @@
# Network Agent Demo
This demo illustrates how **Arch** can be used to perform function calling with network-related tasks. In this demo, you act as a **network assistant** that provides factual information, without offering advice on manufacturers or purchasing decisions.
The assistant can perform several key operations, including rebooting devices, answering general networking questions, and retrieving device statistics. By default, the system prompt ensures that the assistant's responses are factual and neutral.
## Available Functions:
- **Reboot Devices**: Allows rebooting specific devices or device groups, with an optional time range for scheduling the reboot.
- Parameters:
- `device_ids` (required): A list of device IDs to reboot.
- `time_range` (optional): Specifies the time range in days, defaulting to 7 days if not provided.
- **Network Q/A**: Handles general Q&A related to networking. This function is the default target for general networking queries.
- **Device Summary**: Retrieves statistics for specific devices within a given time range.
- Parameters:
- `device_ids` (required): A list of device IDs for which statistics will be retrieved.
- `time_range` (optional): Specifies the time range in days for gathering statistics, with a default of 7 days.
# Starting the demo
1. Make sure the [prerequisites](https://github.com/katanemo/arch/?tab=readme-ov-file#prerequisites) are installed correctly
2. Start Arch
```sh
sh run_demo.sh
```
3. Navigate to http://localhost:18080/agent/chat
4. Ask: "What can you do for me?"
# Observability
The Arch gateway publishes a stats endpoint at http://localhost:19901/stats. In this demo we use Prometheus to pull stats from Arch and Grafana to visualize them in a dashboard. To see the Grafana dashboard, follow the instructions below:

1. Start Grafana and Prometheus using the following command:

```sh
docker compose --profile monitoring up
```

2. Navigate to http://localhost:3000/ to open the Grafana UI (use admin/grafana as credentials)
3. From the Grafana left nav, click on Dashboards and select "Intelligent Gateway Overview" to view Arch gateway stats

Here is a sample interaction:
![alt text](image.png)


@ -1,61 +0,0 @@
version: v0.1.0

listeners:
  ingress_traffic:
    address: 0.0.0.0
    port: 10000
    message_format: openai
    timeout: 30s

# Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
llm_providers:
  - access_key: $OPENAI_API_KEY
    model: openai/gpt-4o
    default: true

# default system prompt used by all prompt targets
system_prompt: |
  You are a network assistant that helps operators with a better understanding of network traffic flow and perform actions on networking operations. No advice on manufacturers or purchasing decisions.

prompt_targets:
  - name: device_summary
    description: Retrieve network statistics for specific devices within a time range
    endpoint:
      name: app_server
      path: /agent/device_summary
      http_method: POST
    parameters:
      - name: device_id
        type: str
        description: A device identifier to retrieve statistics for.
        required: true # device_ids are required to get device statistics
      - name: days
        type: int
        description: The number of days for which to gather device statistics.
        default: 7
  - name: reboot_device
    description: Reboot a device
    endpoint:
      name: app_server
      path: /agent/device_reboot
      http_method: POST
    parameters:
      - name: device_id
        type: str
        description: the device identifier
        required: true
    system_prompt: You will get a status JSON object. Simply summarize it

# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
    # value could be ip address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: host.docker.internal:18083
    # max time to wait for a connection to be established
    connect_timeout: 0.005s

tracing:
  random_sampling: 100
  trace_arch_internal: true


@ -1,28 +0,0 @@
services:
  api_server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "18083:80"

  chatbot_ui:
    build:
      context: ../../shared/chatbot_ui
      dockerfile: Dockerfile
    ports:
      - "18080:8080"
    environment:
      - CHAT_COMPLETION_ENDPOINT=http://host.docker.internal:10000/v1
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./arch_config.yaml:/app/arch_config.yaml

  jaeger:
    build:
      context: ../../shared/jaeger
    ports:
      - "16686:16686"
      - "4317:4317"
      - "4318:4318"


@ -1,92 +0,0 @@
import os
from typing import List, Optional

from fastapi import FastAPI, HTTPException
from openai import OpenAI
from pydantic import BaseModel, Field

app = FastAPI()

DEMO_DESCRIPTION = """This demo illustrates how **Arch** can be used to perform function calling
with network-related tasks. In this demo, you act as a **network assistant** that provides factual
information, without offering advice on manufacturers or purchasing decisions."""


# Define the request model
class DeviceSummaryRequest(BaseModel):
    device_id: str
    time_range: Optional[int] = Field(
        default=7, description="Time range in days, defaults to 7"
    )


# Define the response model
class DeviceStatistics(BaseModel):
    device_id: str
    time_range: str
    data: str


class DeviceSummaryResponse(BaseModel):
    statistics: List[DeviceStatistics]


# Request model for device reboot
class DeviceRebootRequest(BaseModel):
    device_id: str


# Response model for the device reboot
class CoverageResponse(BaseModel):
    status: str
    summary: dict


@app.post("/agent/device_reboot", response_model=CoverageResponse)
def reboot_network_device(request_data: DeviceRebootRequest):
    """Endpoint to reboot a network device based on its device ID."""
    # Access data from the Pydantic model
    device_id = request_data.device_id

    # Validate 'device_id'
    # (This is already validated by Pydantic, but additional logic can be added if needed)
    if not device_id:
        raise HTTPException(status_code=400, detail="'device_id' parameter is required")

    # Placeholder for actual device reboot logic
    statistics = [{"data": f"Device {device_id} has been successfully rebooted."}]

    # Return the response with a summary
    return CoverageResponse(status="success", summary={"device_id": device_id})


# Post method for device summary
@app.post("/agent/device_summary", response_model=DeviceSummaryResponse)
def get_device_summary(request: DeviceSummaryRequest):
    """Endpoint to retrieve device statistics based on a device ID and an optional time range."""
    device_id = request.device_id
    time_range = request.time_range

    # Simulate retrieving statistics for the given device ID and time range
    minutes = 4
    stats = {
        "device_id": device_id,
        "time_range": f"Last {time_range} days",
        "data": f"Device {device_id} over the last {time_range} days experienced {minutes} minutes of downtime.",
    }
    return DeviceSummaryResponse(statistics=[DeviceStatistics(**stats)])
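As a note on the request models above, Pydantic's `Field(default=7)` makes `time_range` optional in the request body. A minimal sketch of that behavior (Pydantic v2, matching this demo's pinned `pydantic==2.8.2`):

```python
from typing import Optional
from pydantic import BaseModel, Field

class DeviceSummaryRequest(BaseModel):
    device_id: str
    time_range: Optional[int] = Field(
        default=7, description="Time range in days, defaults to 7"
    )

# Omitted field falls back to the declared default
req = DeviceSummaryRequest(device_id="dev-42")
assert req.time_range == 7

# Explicit value (e.g. from a parsed JSON body) overrides the default
req = DeviceSummaryRequest.model_validate({"device_id": "dev-42", "time_range": 30})
assert req.time_range == 30
```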


@ -1,14 +0,0 @@
fastapi
uvicorn
pydantic
typing
pandas
gradio==5.3.0
huggingface_hub<1.0.0
async_timeout==4.0.3
loguru==0.7.2
asyncio==3.4.3
httpx==0.27.0
python-dotenv==1.0.1
pydantic==2.8.2
openai==1.51.0


@ -1,47 +0,0 @@
#!/bin/bash
set -e

# Function to start the demo
start_demo() {
  # Step 1: Check if .env file exists
  if [ -f ".env" ]; then
    echo ".env file already exists. Skipping creation."
  else
    # Step 2: Create `.env` file and set OpenAI key
    if [ -z "$OPENAI_API_KEY" ]; then
      echo "Error: OPENAI_API_KEY environment variable is not set for the demo."
      exit 1
    fi
    echo "Creating .env file..."
    echo "OPENAI_API_KEY=$OPENAI_API_KEY" > .env
    echo ".env file created with OPENAI_API_KEY."
  fi

  # Step 3: Start Arch
  echo "Starting Arch with arch_config.yaml..."
  archgw up arch_config.yaml

  # Step 4: Start developer services
  echo "Starting Network Agent using Docker Compose..."
  docker compose up -d # Run in detached mode
}

# Function to stop the demo
stop_demo() {
  # Step 1: Stop Docker Compose services
  echo "Stopping Network Agent using Docker Compose..."
  docker compose down

  # Step 2: Stop Arch
  echo "Stopping Arch..."
  archgw down
}

# Main script logic
if [ "$1" == "down" ]; then
  stop_demo
else
  # Default action is to bring the demo up
  start_demo
fi


@ -19,12 +19,6 @@ endpoints:
system_prompt: |
  You are a helpful assistant.
prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.
prompt_targets:
  - name: stock_quote
    description: get current stock exchange rate for a given symbol


@ -39,12 +39,6 @@ llm_providers:
system_prompt: |
  You are a helpful assistant.
prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance for weather forecasting.
prompt_targets:
  - name: get_current_weather
    description: Get current weather at a location.


@ -1,116 +0,0 @@
# 🗝️ RouteGPT (Beta)
**RouteGPT** is a dynamic model selector Chrome extension for ChatGPT. It intercepts your prompts, detects the user's intent, and automatically routes requests to the most appropriate model — based on preferences you define. Powered by the lightweight [Arch-Router](https://huggingface.co/katanemo/Arch-Router-1.5B.gguf), it makes multi-model usage seamless.
Think of it this way: changing models manually is like shifting gears on your bike every few pedals. RouteGPT automates that for you — so you can focus on the ride, not the mechanics.
---
## 📁 Project Name
Folder: `chatgpt-preference-model-selector`
---
## 🚀 Features
* 🧠 Preference-based routing (e.g., "code generation" → GPT-4, "travel help" → Gemini)
* 🤖 Local inference using [Ollama](https://ollama.com)
* 📙 Chrome extension interface for setting route preferences
* ⚡ Runs with [Arch-Router-1.5B.gguf](https://huggingface.co/katanemo/Arch-Router-1.5B.gguf)
---
## 📦 Installation
### 1. Clone and install dependencies
```
git clone https://github.com/katanemo/archgw/
cd archgw/demos/use_cases/chatgpt-preference-model-selector
```
### 2. Build the extension
```
npm install
npm run build
```
This will create a `build/` directory that contains the unpacked Chrome extension.
---
## 🧠 Set Up Arch-Router in Ollama
Ensure [Ollama](https://ollama.com/download) is installed and running.
Then pull the Arch-Router model:
```
ollama pull hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M
```
### 🌐 Allow Chrome to Access Ollama
Start Ollama with appropriate network settings:
```
OLLAMA_ORIGINS=* ollama serve
```
This sets CORS headers so the extension running in Chrome can make requests to the local Ollama server.
---
## 📩 Load the Extension into Chrome
1. Open `chrome://extensions`
2. Enable **Developer mode** (top-right toggle)
3. Click **"Load unpacked"**
4. Select the `build` folder inside `chatgpt-preference-model-selector`
Once loaded, RouteGPT will begin intercepting and routing your ChatGPT messages based on the preferences you define.
---
## ⚙️ Configure Routing Preferences
1. In ChatGPT, click the model dropdown.
2. A RouteGPT modal will appear.
3. Define your routing logic using natural language (e.g., `brainstorm startup ideas → gpt-4`, `summarize news articles → claude`).
4. Save your preferences. Routing begins immediately.
---
## 💸 Profit
RouteGPT helps you:
* Use expensive models only when needed
* Automatically shift to cheaper, faster, or more capable models based on task type
* Streamline multi-model workflows without extra clicks
---
## 🧪 Troubleshooting
* Make sure Ollama is reachable at `http://localhost:11434`
* If routing doesnt seem to trigger, check DevTools console logs for `[ModelSelector]`
* Reload the extension and refresh the ChatGPT tab after updating preferences
---
## 🧱 Built With
* 🧠 [Arch-Router (1.5B)](https://huggingface.co/katanemo/Arch-Router-1.5B.gguf)
* 📙 Chrome Extensions API
* 🛠️ Ollama
* ⚛️ React + TypeScript
---
## 📜 License
Apache 2.0 © Katanemo Labs, Inc.

File diff suppressed because it is too large


@ -1,33 +0,0 @@
{
"name": "preference-selector-extension",
"version": "0.1.0",
"private": true,
"homepage": ".",
"dependencies": {
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"scripts": {
"start": "react-scripts start",
"build": "node src/build.js",
"test": "react-scripts test"
},
"devDependencies": {
"autoprefixer": "^10.4.19",
"postcss": "^8.4.38",
"react-scripts": "5.0.1",
"tailwindcss": "^3.4.4"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}


@ -1,6 +0,0 @@
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}


@ -1,18 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#000000" />
<meta name="description" content="Web site created using create-react-app" />
<!-- ✅ External JS to configure Tailwind and set dark mode -->
<script src="%PUBLIC_URL%/init-theme.js"></script>
<title>RouteGPT</title>
</head>
<body class="bg-gray-100 text-gray-800 dark:bg-gray-900 dark:text-gray-100">
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
</body>
</html>


@ -1,9 +0,0 @@
// Apply dark mode based on system preference
if (
localStorage.theme === 'dark' ||
(!('theme' in localStorage) && window.matchMedia('(prefers-color-scheme: dark)').matches)
) {
document.documentElement.classList.add('dark');
} else {
document.documentElement.classList.remove('dark');
}


@ -1,36 +0,0 @@
{
"manifest_version": 3,
"name": "RouteGPT",
"version": "0.1.2",
"description": "RouteGPT: Smart Model Routing for ChatGPT.",
"permissions": [
"storage"
],
"host_permissions": [
"https://chatgpt.com/*",
"http://localhost:12000/*"
],
"content_security_policy": {
"extension_pages": "script-src 'self'; object-src 'self'; connect-src 'self' http://localhost:12000"
},
"web_accessible_resources": [
{
"resources": ["index.html", "logo.png"],
"matches": ["https://chatgpt.com/*"]
},
{
"resources": ["pageFetchOverride.js"],
"matches": ["https://chatgpt.com/*"]
}
],
"action": {
"default_popup": "index.html"
},
"content_scripts": [
{
"matches": ["https://chatgpt.com/*"],
"js": ["static/js/content.js"],
"run_at": "document_start"
}
]
}


@ -1,28 +0,0 @@
import React from 'react';
import PreferenceBasedModelSelector from './components/PreferenceBasedModelSelector';
export default function App() {
return (
<div className="bg-gray-100 dark:bg-gray-900 min-h-screen flex items-center justify-center p-4">
<div className="w-full max-w-6xl">
<div className="text-center mb-8">
<div className="flex justify-center items-center gap-3 -ml-12">
<img src="/logo.png" alt="RouteGPT Logo" className="w-10 h-10" />
<h1 className="text-3xl font-bold text-gray-800 dark:text-gray-100">RouteGPT</h1>
</div>
<p className="text-gray-600 dark:text-gray-300 mt-2">
Dynamically route to GPT models based on usage preferences.
</p>
<a
target="_blank"
href="https://github.com/katanemo/archgw"
className="text-blue-500 dark:text-blue-400 hover:underline"
>
powered by Arch Router
</a>
</div>
<PreferenceBasedModelSelector />
</div>
</div>
);
}


@ -1,64 +0,0 @@
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');
console.log('Starting the custom build process for the Chrome Extension...');
const reactAppDir = path.join(__dirname, '..');
const contentScriptSource = path.join(reactAppDir, 'src', 'scripts', 'content.js');
const pageOverrideSource = path.join(reactAppDir, 'src', 'scripts', 'pageFetchOverride.js');
const buildDir = path.join(reactAppDir, 'build');
const contentScriptDest = path.join(buildDir, 'static', 'js');
// 1⃣ Run React build
try {
console.log('Running react-scripts build...');
execSync('react-scripts build', { stdio: 'inherit' });
console.log('React build completed successfully.');
} catch (err) {
console.error('React build failed:', err);
process.exit(1);
}
// 2⃣ Copy content.js
try {
if (!fs.existsSync(contentScriptDest)) {
throw new Error(`Missing directory: ${contentScriptDest}`);
}
fs.copyFileSync(contentScriptSource, path.join(contentScriptDest, 'content.js'));
console.log(`Copied content.js → ${contentScriptDest}`);
} catch (err) {
console.error('Failed to copy content.js:', err);
process.exit(1);
}
// 3. Copy pageFetchOverride.js
try {
if (!fs.existsSync(buildDir)) {
throw new Error(`Missing build directory: ${buildDir}`);
}
fs.copyFileSync(pageOverrideSource, path.join(buildDir, 'pageFetchOverride.js'));
console.log(`Copied pageFetchOverride.js → ${buildDir}`);
} catch (err) {
console.error('Failed to copy pageFetchOverride.js:', err);
process.exit(1);
}
// 4. Copy logo.png from src/assets to build root
try {
const logoSource = path.join(reactAppDir, 'src', 'assets', 'logo.png');
const logoDest = path.join(buildDir, 'logo.png');
if (!fs.existsSync(logoSource)) {
throw new Error(`Missing logo.png at ${logoSource}`);
}
fs.copyFileSync(logoSource, logoDest);
console.log(`Copied logo.png → ${logoDest}`);
} catch (err) {
console.error('Failed to copy logo.png:', err);
process.exit(1);
}
console.log('Extension build process finished successfully!');

View file

@ -1,327 +0,0 @@
/*global chrome*/
import React, { useState, useEffect } from 'react';
// --- Hardcoded list of ChatGPT models ---
const MODEL_LIST = [
'gpt-4o',
'gpt-4.1',
'gpt-4.1-mini',
'gpt-4.5-preview',
'o3',
'o4-mini',
'o4-mini-high'
];
// --- Mocked lucide-react icons as SVG components ---
const Trash2 = ({ className }) => (
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" className={className}>
<path d="M3 6h18" />
<path d="M19 6v14a2 2 0 0 1-2 2H7a2 2 0 0 1-2-2V6m3 0V4a2 2 0 0 1 2-2h4a2 2 0 0 1 2 2v2" />
<line x1="10" y1="11" x2="10" y2="17" />
<line x1="14" y1="11" x2="14" y2="17" />
</svg>
);
const PlusCircle = ({ className }) => (
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" className={className}>
<circle cx="12" cy="12" r="10" />
<line x1="12" y1="8" x2="12" y2="16" />
<line x1="8" y1="12" x2="16" y2="12" />
</svg>
);
// --- Mocked UI Components ---
const Card = ({ children, className = '' }) => (
<div className={`bg-white dark:bg-gray-800 border border-gray-200 dark:border-gray-700 rounded-lg shadow-sm ${className}`}>
{children}
</div>
);
const CardContent = ({ children, className = '' }) => (
<div className={`p-4 text-gray-800 dark:text-gray-100 ${className}`}>
{children}
</div>
);
const Input = (props) => (
<input
{...props}
className={`w-full h-9 px-3 text-sm
text-gray-800 dark:text-white
bg-white dark:bg-gray-700
border border-gray-300 dark:border-gray-600
rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500
${props.className || ''}`}
/>
);
const Button = ({ children, variant = 'default', size = 'default', className = '', ...props }) => {
const baseClasses = `
inline-flex items-center justify-center
rounded-md text-sm font-medium
transition-colors
focus:outline-none focus:ring-2 focus:ring-offset-2
`;
const variantClasses = {
default: `
bg-gray-900 text-white
hover:bg-gray-800
focus:ring-gray-900
`,
outline: `
border border-gray-300 dark:border-gray-600
bg-transparent
text-gray-800 dark:text-white
hover:bg-gray-100 dark:hover:bg-gray-700
focus:ring-blue-500
focus:ring-offset-2
dark:focus:ring-offset-gray-900
`,
ghost: `
text-gray-800 dark:text-gray-200
hover:bg-gray-100 dark:hover:bg-gray-700
focus:ring-gray-400
`
};
const sizeClasses = {
default: 'h-9 px-3',
icon: 'h-9 w-9'
};
return (
<button
{...props}
className={`
${baseClasses}
${variantClasses[variant]}
${sizeClasses[size]}
${className}
`}
>
{children}
</button>
);
};
const Switch = ({ checked, onCheckedChange, id }) => (
<div className="flex items-center gap-2">
<button
type="button"
role="switch"
aria-checked={checked}
onClick={() => onCheckedChange(!checked)}
id={id}
className={`
relative inline-flex items-center justify-start
h-6 w-11 rounded-full
transition-colors duration-200 ease-in-out
focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-offset-2
border-2 border-transparent
overflow-hidden
${checked ? 'bg-blue-600' : 'bg-gray-300 dark:bg-gray-600'}
`}
>
<span
aria-hidden="true"
className={`
pointer-events-none
inline-block h-5 w-5 transform rounded-full bg-white
shadow-md ring-0 transition-transform duration-200 ease-in-out
${checked ? 'translate-x-[20px]' : 'translate-x-0'}
`}
/>
</button>
<span className="inline-block w-8 text-sm text-gray-700 dark:text-gray-300 text-center select-none">
{checked ? 'On' : 'Off'}
</span>
</div>
);
const Label = (props) => (
<label {...props} className={`text-sm font-medium leading-none text-gray-700 ${props.className || ''}`} />
);
export default function PreferenceBasedModelSelector() {
const [routingEnabled, setRoutingEnabled] = useState(false);
const [preferences, setPreferences] = useState([
{ id: 1, usage: 'generate code snippets', model: 'gpt-4o' }
]);
const [defaultModel, setDefaultModel] = useState('gpt-4o');
const [modelOptions] = useState(MODEL_LIST); // static list, no dynamic fetch
// Load saved settings
useEffect(() => {
chrome.storage.sync.get(['routingEnabled', 'preferences', 'defaultModel'], (result) => {
if (result.routingEnabled !== undefined) setRoutingEnabled(result.routingEnabled);
if (result.preferences) {
// add ids if they were missing
const withIds = result.preferences.map((p, i) => ({
id: p.id ?? i + 1,
...p,
}));
setPreferences(withIds);
}
if (result.defaultModel) setDefaultModel(result.defaultModel);
});
}, []);
const updatePreference = (id, key, value) => {
setPreferences((prev) => prev.map((p) => (p.id === id ? { ...p, [key]: value } : p)));
};
const addPreference = () => {
const newId = preferences.reduce((max, p) => Math.max(max, p.id ?? 0), 0) + 1;
setPreferences((prev) => [
...prev,
{ id: newId, usage: '', model: defaultModel }
]);
};
const removePreference = (id) => {
if (preferences.length > 1) {
setPreferences((prev) => prev.filter((p) => p.id !== id));
}
};
// Save settings: generate name slug and store tuples
const handleSave = () => {
const slugCounts = {};
const tuples = [];
preferences
.filter(p => p.usage?.trim())
.forEach(p => {
const baseSlug = p.usage
.split(/\s+/)
.slice(0, 3)
.join('-')
.toLowerCase()
.replace(/[^\w-]/g, '');
const count = slugCounts[baseSlug] || 0;
slugCounts[baseSlug] = count + 1;
const dedupedSlug = count === 0 ? baseSlug : `${baseSlug}-${count}`;
tuples.push({
name: dedupedSlug,
usage: p.usage.trim(),
model: p.model?.trim?.() || ''
});
});
chrome.storage.sync.set({ routingEnabled, preferences: tuples, defaultModel }, () => {
if (chrome.runtime.lastError) {
console.error('[PBMS] Storage error:', chrome.runtime.lastError);
} else {
console.log('[PBMS] Saved tuples:', tuples);
}
});
// Send message to background script to apply the default model
window.parent.postMessage({ action: 'applyModelSelection', model: defaultModel }, "*");
// Close the modal after saving
window.parent.postMessage({ action: 'CLOSE_PBMS_MODAL' }, '*');
};
const handleCancel = () => {
window.parent.postMessage({ action: 'CLOSE_PBMS_MODAL' }, '*');
};
return (
<div className="w-full max-w-[600px] h-[65vh] flex flex-col bg-gray-50 dark:bg-gray-800 p-4 mx-auto">
{/* Scrollable preferences only */}
<div className="space-y-4 overflow-y-auto flex-1 pr-1 max-h-[60vh]">
<Card className="w-full">
<CardContent>
<div className="flex items-center justify-between">
<Label>Enable preference-based routing</Label>
<Switch checked={routingEnabled} onCheckedChange={setRoutingEnabled} />
</div>
{routingEnabled && (
<div className="pt-4 mt-4 space-y-3 border-t border-gray-200 dark:border-gray-700">
{preferences.map((pref) => (
<div key={pref.id} className="grid grid-cols-[3fr_1.5fr_auto] gap-4 items-center">
<Input
placeholder="(e.g. generating fictional stories or poems)"
value={pref.usage}
onChange={(e) => updatePreference(pref.id, 'usage', e.target.value)}
/>
<select
value={pref.model}
onChange={(e) => updatePreference(pref.id, 'model', e.target.value)}
className="h-9 w-full px-3 text-sm
bg-white dark:bg-gray-700
text-gray-800 dark:text-white
border border-gray-300 dark:border-gray-600
rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
>
<option disabled value="">
Select Model
</option>
{modelOptions.map((m) => (
<option key={m} value={m}>
{m}
</option>
))}
</select>
<Button
variant="ghost"
size="icon"
onClick={() => removePreference(pref.id)}
disabled={preferences.length <= 1}
>
<Trash2 className="h-4 w-4 text-gray-500 hover:text-red-600" />
</Button>
</div>
))}
<Button
variant="outline"
onClick={addPreference}
className="flex items-center gap-2 text-sm mt-2"
>
<PlusCircle className="h-4 w-4" /> Add Preference
</Button>
</div>
)}
</CardContent>
</Card>
</div>
{/* Default model selector (static) */}
<Card className="w-full mt-4">
<CardContent>
<Label>Default Model</Label>
<select
value={defaultModel}
onChange={(e) => setDefaultModel(e.target.value)}
className="h-9 w-full mt-2 px-3 text-sm
bg-white dark:bg-gray-700
text-gray-800 dark:text-white
border border-gray-300 dark:border-gray-600
rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500"
>
{modelOptions.map((m) => (
<option key={m} value={m}>
{m}
</option>
))}
</select>
</CardContent>
</Card>
{/* Save/Cancel footer (static) */}
<div className="flex justify-end gap-2 pt-4 border-t border-gray-200 dark:border-gray-700 bg-gray-50 dark:bg-gray-800 mt-4">
<Button variant="ghost" onClick={handleCancel}>
Cancel
</Button>
<Button onClick={handleSave}>Save and Apply</Button>
</div>
</div>
);
}

View file

@ -1,12 +0,0 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}

View file

@ -1,11 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);

View file

@ -1,407 +0,0 @@
(() => {
const TAG = '[ModelSelector]';
// Content script to intercept fetch requests and modify them based on user preferences
async function streamToPort(response, port) {
const reader = response.body?.getReader();
if (!reader) {
port.postMessage({ done: true });
return;
}
while (true) {
const { done, value } = await reader.read();
if (done) {
port.postMessage({ done: true });
break;
}
port.postMessage({ chunk: value.buffer }, [value.buffer]);
}
}
// Extract messages from the DOM, falling back to requestMessages if DOM is empty
function getMessagesFromDom(requestMessages = null) {
const bubbles = [...document.querySelectorAll('[data-message-author-role]')];
const domMessages = bubbles
.map(b => {
const role = b.getAttribute('data-message-author-role');
const content =
role === 'assistant'
? (b.querySelector('.markdown')?.innerText ?? b.innerText ?? '').trim()
: (b.innerText ?? '').trim();
return content ? { role, content } : null;
})
.filter(Boolean);
// Fallback: If DOM is empty but we have requestMessages, use those
if (domMessages.length === 0 && requestMessages?.length > 0) {
return requestMessages
.map(msg => {
const role = msg.author?.role;
const parts = msg.content?.parts ?? [];
const textPart = parts.find(p => typeof p === 'string');
return role && textPart ? { role, content: textPart.trim() } : null;
})
.filter(Boolean);
}
return domMessages;
}
// Insert a route label for the last user message in the chat
function insertRouteLabelForLastUserMessage(routeName) {
chrome.storage.sync.get(['preferences'], ({ preferences }) => {
// Find the most recent user bubble
const bubbles = [...document.querySelectorAll('[data-message-author-role="user"]')];
const lastBubble = bubbles[bubbles.length - 1];
if (!lastBubble) return;
// Skip if we've already added a label
if (lastBubble.querySelector('.arch-route-label')) {
console.log('[RouteLabel] Label already exists, skipping');
return;
}
// Default label text
let labelText = 'RouteGPT: preference = default';
// Try to override with preference-based usage if we have a routeName
if (routeName && Array.isArray(preferences)) {
const match = preferences.find(p => p.name === routeName);
if (match && match.usage) {
labelText = `RouteGPT: preference = ${match.usage}`;
} else {
console.log('[RouteLabel] No usage found for route (falling back to default):', routeName);
}
}
// Build and attach the label
const label = document.createElement('span');
label.textContent = labelText;
label.className = 'arch-route-label';
label.style.fontWeight = '350';
label.style.fontSize = '0.85rem';
label.style.marginTop = '2px';
label.style.fontStyle = 'italic';
label.style.alignSelf = 'end';
label.style.marginRight = '5px';
lastBubble.appendChild(label);
console.log('[RouteLabel] Inserted label:', labelText);
});
}
// Prepare the system prompt for the proxy request
function prepareProxyRequest(messages, routes, maxTokenLength = 2048) {
const SYSTEM_PROMPT_TEMPLATE = `
You are a helpful assistant designed to find the best-suited route.
You are provided with route descriptions within <routes></routes> XML tags:
<routes>
{routes}
</routes>
<conversation>
{conversation}
</conversation>
Your task is to decide which route best suits the user's intent in the conversation within the <conversation></conversation> XML tags. Follow these instructions:
1. If the user's latest intent is irrelevant or already fulfilled, respond with the other route {"route": "other"}.
2. You must analyze the route descriptions and find the best matching route for the user's latest intent.
3. Respond with only the name of the route that best matches the user's request, using the exact name from <routes></routes>.
Based on your analysis, provide your response in the following JSON format if you decide to match a route:
{"route": "route_name"}
`;
const TOKEN_DIVISOR = 4;
const filteredMessages = messages.filter(
m => m.role !== 'system' && m.role !== 'tool' && m.content?.trim()
);
let tokenCount = SYSTEM_PROMPT_TEMPLATE.length / TOKEN_DIVISOR;
const selected = [];
for (let i = filteredMessages.length - 1; i >= 0; i--) {
const msg = filteredMessages[i];
tokenCount += msg.content.length / TOKEN_DIVISOR;
if (tokenCount > maxTokenLength) {
if (msg.role === 'user') selected.push(msg);
break;
}
selected.push(msg);
}
if (selected.length === 0 && filteredMessages.length > 0) {
selected.push(filteredMessages[filteredMessages.length - 1]);
}
const selectedOrdered = selected.reverse();
const systemPrompt = SYSTEM_PROMPT_TEMPLATE
.replace('{routes}', JSON.stringify(routes, null, 2))
.replace('{conversation}', JSON.stringify(selectedOrdered, null, 2));
return systemPrompt;
}
function getRoutesFromStorage() {
return new Promise(resolve => {
chrome.storage.sync.get(['preferences'], ({ preferences }) => {
if (!preferences || !Array.isArray(preferences)) {
console.warn('[ModelSelector] No preferences found in storage');
return resolve([]);
}
const routes = preferences.map(p => ({
name: p.name,
description: p.usage
}));
resolve(routes);
});
});
}
function getModelIdForRoute(routeName) {
return new Promise(resolve => {
chrome.storage.sync.get(['preferences'], ({ preferences }) => {
const match = (preferences || []).find(p => p.name === routeName);
if (match) resolve(match.model);
else resolve(null);
});
});
}
(function injectPageFetchOverride() {
const injectorTag = '[ModelSelector][Injector]';
const s = document.createElement('script');
s.src = chrome.runtime.getURL('pageFetchOverride.js');
s.onload = () => {
console.log(`${injectorTag} loaded pageFetchOverride.js`);
s.remove();
};
(document.head || document.documentElement).appendChild(s);
})();
window.addEventListener('message', ev => {
if (ev.source !== window || ev.data?.type !== 'ARCHGW_FETCH') return;
const { url, init } = ev.data;
const port = ev.ports[0];
(async () => {
try {
console.log(`${TAG} Intercepted fetch from page:`, url);
let originalBody = {};
try {
originalBody = JSON.parse(init.body);
} catch {
console.warn(`${TAG} Could not parse original fetch body`);
}
const { routingEnabled, preferences, defaultModel } = await new Promise(resolve => {
chrome.storage.sync.get(['routingEnabled', 'preferences', 'defaultModel'], resolve);
});
if (!routingEnabled) {
console.log(`${TAG} Routing disabled — applying default model if present`);
const modifiedBody = { ...originalBody };
if (defaultModel) {
modifiedBody.model = defaultModel;
console.log(`${TAG} Routing disabled — overriding with default model: ${defaultModel}`);
} else {
console.log(`${TAG} Routing disabled — no default model found`);
}
await streamToPort(await fetch(url, {
method: init.method,
headers: init.headers,
credentials: init.credentials,
body: JSON.stringify(modifiedBody)
}), port);
return;
}
const scrapedMessages = getMessagesFromDom(originalBody.messages);
const routes = (preferences || []).map(p => ({
name: p.name,
description: p.usage
}));
const prompt = prepareProxyRequest(scrapedMessages, routes);
let selectedRoute = null;
try {
const res = await fetch('http://localhost:11434/api/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: 'hf.co/katanemo/Arch-Router-1.5B.gguf:Q4_K_M',
prompt: prompt,
temperature: 0.01,
top_p: 0.95,
top_k: 20,
stream: false
})
});
if (res.ok) {
const data = await res.json();
console.log(`${TAG} Ollama router response:`, data.response);
try {
let parsed = data.response;
if (typeof data.response === 'string') {
try {
parsed = JSON.parse(data.response);
} catch (jsonErr) {
const safe = data.response.replace(/'/g, '"');
parsed = JSON.parse(safe);
}
}
selectedRoute = parsed.route || null;
if (!selectedRoute) console.warn(`${TAG} Route missing in parsed response`);
} catch (e) {
console.warn(`${TAG} Failed to parse or extract route from response`, e);
}
} else {
console.warn(`${TAG} Ollama router failed:`, res.status);
}
} catch (err) {
console.error(`${TAG} Ollama request error`, err);
}
let targetModel = null;
if (selectedRoute) {
targetModel = await getModelIdForRoute(selectedRoute);
if (!targetModel) {
const { defaultModel } = await new Promise(resolve =>
chrome.storage.sync.get(['defaultModel'], resolve)
);
targetModel = defaultModel || null;
if (targetModel) {
console.log(`${TAG} Falling back to default model: ${targetModel}`);
}
} else {
console.log(`${TAG} Resolved model for route "${selectedRoute}" →`, targetModel);
}
}
insertRouteLabelForLastUserMessage(selectedRoute);
const modifiedBody = { ...originalBody };
if (targetModel) {
modifiedBody.model = targetModel;
console.log(`${TAG} Overriding request with model: ${targetModel}`);
} else {
console.log(`${TAG} No route/model override applied`);
}
await streamToPort(await fetch(url, {
method: init.method,
headers: init.headers,
credentials: init.credentials,
body: JSON.stringify(modifiedBody)
}), port);
} catch (err) {
console.error(`${TAG} Proxy fetch error`, err);
port.postMessage({ done: true });
}
})();
});
let desiredModel = null;
function patchDom() {
if (!desiredModel) return;
const btn = document.querySelector('[data-testid="model-switcher-dropdown-button"]');
if (!btn) return;
const span = btn.querySelector('div > span');
const wantLabel = `Model selector, current model is ${desiredModel}`;
if (span && span.textContent !== desiredModel) {
span.textContent = desiredModel;
}
if (btn.getAttribute('aria-label') !== wantLabel) {
btn.setAttribute('aria-label', wantLabel);
}
}
// Observe DOM mutations and reactively patch
const observer = new MutationObserver(patchDom);
observer.observe(document.body || document.documentElement, {
subtree: true,
childList: true,
characterData: true,
attributes: true
});
// Set initial model from storage (optional default)
chrome.storage.sync.get(['defaultModel'], ({ defaultModel }) => {
if (defaultModel) {
desiredModel = defaultModel;
patchDom();
}
});
// ✅ Only listen for messages from iframe via window.postMessage
window.addEventListener('message', (event) => {
const data = event.data;
if (
typeof data === 'object' &&
data?.action === 'applyModelSelection' &&
typeof data.model === 'string'
) {
desiredModel = data.model;
patchDom();
}
});
function showModal() {
if (document.getElementById('pbms-overlay')) return;
const overlay = document.createElement('div');
overlay.id = 'pbms-overlay';
Object.assign(overlay.style, {
position: 'fixed', top: 0, left: 0,
width: '100vw', height: '100vh',
background: 'rgba(0,0,0,0.4)',
display: 'flex', alignItems: 'center', justifyContent: 'center',
zIndex: 2147483647
});
const iframe = document.createElement('iframe');
iframe.src = chrome.runtime.getURL('index.html');
Object.assign(iframe.style, {
width: '500px', height: '600px',
border: 0, borderRadius: '8px',
boxShadow: '0 4px 16px rgba(0,0,0,0.2)',
background: 'white', zIndex: 2147483648
});
overlay.addEventListener('click', e => e.target === overlay && overlay.remove());
overlay.appendChild(iframe);
document.body.appendChild(overlay);
}
function interceptDropdown(ev) {
const btn = ev.target.closest('button[data-testid="model-switcher-dropdown-button"]');
if (!btn) return;
ev.preventDefault();
ev.stopPropagation();
showModal();
}
document.addEventListener('pointerdown', interceptDropdown, true);
document.addEventListener('mousedown', interceptDropdown, true);
window.addEventListener('message', ev => {
if (ev.data?.action === 'CLOSE_PBMS_MODAL') {
document.getElementById('pbms-overlay')?.remove();
}
});
console.log(`${TAG} content script initialized`);
})();

View file

@ -1,61 +0,0 @@
(function() {
const TAG = '[ModelSelector][Page]';
console.log(`${TAG} installing fetch override`);
const origFetch = window.fetch;
window.fetch = async function(input, init = {}) {
const urlString = typeof input === 'string' ? input : input.url;
const urlObj = new URL(urlString, window.location.origin);
const pathname = urlObj.pathname;
console.log(`${TAG} fetch →`, pathname);
const method = (init.method || 'GET').toUpperCase();
if (method === 'OPTIONS') {
console.log(`${TAG} OPTIONS request → bypassing completely`);
return origFetch(input, init);
}
// Only intercept conversation fetches
if (pathname === '/backend-api/conversation' || pathname === '/backend-api/f/conversation') {
console.log(`${TAG} matched → proxy via content script`);
const { port1, port2 } = new MessageChannel();
// ✅ Remove non-cloneable properties like 'signal'
const safeInit = { ...init };
delete safeInit.signal;
// Forward the fetch details to the content script
window.postMessage({
type: 'ARCHGW_FETCH',
url: urlString,
init: safeInit
}, '*', [port2]);
// Return a stream response that the content script will fulfill
return new Response(new ReadableStream({
start(controller) {
port1.onmessage = ({ data }) => {
if (data.done) {
controller.close();
port1.close();
} else {
controller.enqueue(new Uint8Array(data.chunk));
}
};
},
cancel() {
port1.close();
}
}), {
headers: { 'Content-Type': 'text/event-stream' }
});
}
// Otherwise, pass through to the original fetch
return origFetch(input, init);
};
console.log(`${TAG} fetch override installed`);
})();

View file

@ -1,12 +0,0 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
darkMode: 'class', // ✅ Add this line
content: [
"./src/**/*.{js,jsx,ts,tsx}",
"./public/index.html",
],
theme: {
extend: {},
},
plugins: [],
}

View file

@ -0,0 +1,26 @@
FROM python:3.13-slim
WORKDIR /app
# Install bash and uv
RUN apt-get update && apt-get install -y bash && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir uv
# Copy dependency files
COPY pyproject.toml README.md ./
# Copy source code
COPY src/ ./src/
COPY start_agents.sh ./
# Install dependencies using uv
RUN uv pip install --system --no-cache click fastmcp pydantic fastapi uvicorn openai
# Make start script executable
RUN chmod +x start_agents.sh
# Expose ports for all agents
EXPOSE 10500 10501 10502 10505
# Run the start script with bash
CMD ["bash", "./start_agents.sh"]

View file

@ -4,14 +4,21 @@ A multi-agent RAG system demonstrating archgw's agent filter chain with MCP prot
## Architecture
This demo consists of three components:
1. **Query Rewriter** (MCP filter) - Rewrites user queries for better retrieval
2. **Context Builder** (MCP filter) - Retrieves relevant context from knowledge base
3. **RAG Agent** (REST) - Generates final responses based on augmented context
This demo consists of four components:
1. **Input Guards** (MCP filter) - Validates queries are within TechCorp's domain
2. **Query Rewriter** (MCP filter) - Rewrites user queries for better retrieval
3. **Context Builder** (MCP filter) - Retrieves relevant context from knowledge base
4. **RAG Agent** (REST) - Generates final responses based on augmented context
## Components
### Query Rewriter Filter (MCP)
### Input Guards Filter (MCP)
- **Port**: 10500
- **Tool**: `input_guards`
- Validates queries are within TechCorp's domain
- Rejects queries about other companies or unrelated topics
### Query Rewriter Filter (MCP)
- **Port**: 10501
- **Tool**: `query_rewriter`
- Improves queries using LLM before retrieval
@ -34,6 +41,7 @@ This demo consists of three components:
```
This starts:
- Input Guards MCP server on port 10500
- Query Rewriter MCP server on port 10501
- Context Builder MCP server on port 10502
- RAG Agent REST server on port 10505
@ -59,29 +67,37 @@ The `arch_config.yaml` defines how agents are connected:
```yaml
filters:
- id: input_guards
url: http://host.docker.internal:10500
# type: mcp (default)
# tool: input_guards (default - same as filter id)
- id: query_rewriter
url: mcp://host.docker.internal:10500
tool: rewrite_query_with_archgw # MCP tool name
url: http://host.docker.internal:10501
# type: mcp (default)
- id: context_builder
url: mcp://host.docker.internal:10501
tool: chat_completions
url: http://host.docker.internal:10502
```
How It Works
## How It Works
1. User sends request to archgw listener on port 8001
2. Request passes through MCP filter chain:
- **Input Guards** validates the query is within TechCorp's domain
- **Query Rewriter** rewrites the query for better retrieval
- **Context Builder** augments query with relevant knowledge base passages
3. Augmented request is forwarded to **RAG Agent** REST endpoint
4. RAG Agent generates final response using LLM
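The traversal above can be sketched as plain sequential composition: each filter receives the message list and returns a (possibly modified) message list before the request reaches the RAG agent. This is a conceptual sketch only, not archgw's internals; the filter bodies and message shape are illustrative stubs.

```python
# Conceptual sketch of the MCP filter chain. Each filter maps
# messages -> messages; the chain runs in the order declared in
# arch_config.yaml. All logic below is a stand-in, not archgw code.

def input_guards(messages):
    # Reject queries outside TechCorp's domain (stubbed check).
    if "techcorp" not in messages[-1]["content"].lower():
        raise ValueError("query outside TechCorp's domain")
    return messages

def query_rewriter(messages):
    # Rewrite the latest user query for better retrieval (stubbed).
    messages[-1]["content"] = messages[-1]["content"].strip() + " (rewritten)"
    return messages

def context_builder(messages):
    # Prepend retrieved knowledge-base passages as a system message (stubbed).
    return [{"role": "system", "content": "KB: TechCorp SLA is 99.9%"}] + messages

FILTER_CHAIN = [input_guards, query_rewriter, context_builder]

def apply_filter_chain(messages):
    # Run filters in declared order; the result is what the RAG agent sees.
    for f in FILTER_CHAIN:
        messages = f(messages)
    return messages

result = apply_filter_chain(
    [{"role": "user", "content": "What is TechCorp's SLA?"}]
)
```

After the chain runs, the augmented list (injected context first, rewritten query last) is what gets forwarded to the RAG Agent REST endpoint.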
## Configuration
## Additional Configuration
See `arch_config.yaml` for the complete filter chain setup. The MCP filters use default settings:
- `type: mcp` (default)
- `transport: streamable-http` (default)
- Tool name defaults to filter ID `sample_queries.md` for example queries to test the RAG system.
- Tool name defaults to filter ID
See `sample_queries.md` for example queries to test the RAG system.
Example request:
```bash

View file

@ -5,12 +5,16 @@ agents:
url: http://host.docker.internal:10505
filters:
- id: query_rewriter
- id: input_guards
url: http://host.docker.internal:10500
type: rest
# type: rest or mcp, mcp is default
# transport: streamable-http # default is streamable-http
# tool: query_rewriter # default name is the filter id
# type: mcp (default)
# transport: streamable-http (default)
# tool: input_guards (default - same as filter id)
- id: query_rewriter
url: http://host.docker.internal:10501
# type: mcp (default)
# transport: streamable-http (default)
# tool: query_rewriter (default - same as filter id)
- id: context_builder
url: http://host.docker.internal:10502
@ -36,6 +40,7 @@ listeners:
- id: rag_agent
description: virtual assistant for retrieval augmented generation tasks
filter_chain:
- input_guards
- query_rewriter
- context_builder
tracing:

View file

@ -1,4 +1,29 @@
services:
rag-agents:
build:
context: .
dockerfile: Dockerfile
ports:
- "10500:10500"
- "10501:10501"
- "10502:10502"
- "10505:10505"
environment:
- LLM_GATEWAY_ENDPOINT=${LLM_GATEWAY_ENDPOINT:-http://host.docker.internal:12000/v1}
- OPENAI_API_KEY=${OPENAI_API_KEY:?OPENAI_API_KEY environment variable is required but not set}
archgw:
build:
context: ../../../
dockerfile: arch/Dockerfile
ports:
- "12000:12000"
- "8001:8001"
environment:
- ARCH_CONFIG_PATH=/config/arch_config.yaml
- OPENAI_API_KEY=${OPENAI_API_KEY:?OPENAI_API_KEY environment variable is required but not set}
volumes:
- ./arch_config.yaml:/app/arch_config.yaml
- /etc/ssl/cert.pem:/etc/ssl/cert.pem
jaeger:
build:
context: ../../shared/jaeger

View file

@ -37,6 +37,7 @@ def main(host, port, agent, transport, agent_name, rest_server, rest_port):
# Map friendly names to agent modules
agent_map = {
"input_guards": ("rag_agent.input_guards", "Input Guards Agent"),
"query_rewriter": ("rag_agent.query_rewriter", "Query Rewriter Agent"),
"context_builder": ("rag_agent.context_builder", "Context Builder Agent"),
"response_generator": (
@ -75,10 +76,12 @@ def main(host, port, agent, transport, agent_name, rest_server, rest_port):
print(f"Remove --rest-server flag to start {agent} as an MCP server.")
return
else:
# Only query_rewriter and context_builder support MCP
if agent not in ["query_rewriter", "context_builder"]:
# Only input_guards, query_rewriter and context_builder support MCP
if agent not in ["input_guards", "query_rewriter", "context_builder"]:
print(f"Error: Agent '{agent}' does not support MCP mode.")
print(f"MCP is only supported for: query_rewriter, context_builder")
print(
f"MCP is only supported for: input_guards, query_rewriter, context_builder"
)
print(f"Use --rest-server flag to start {agent} as a REST server.")
return

View file

@ -0,0 +1,153 @@
import asyncio
import json
import time
from typing import List, Optional, Dict, Any
import uuid
from fastapi import FastAPI, Depends, Request
from fastmcp.exceptions import ToolError
from openai import AsyncOpenAI
import os
import logging
from .api import ChatCompletionRequest, ChatCompletionResponse, ChatMessage
from . import mcp
from fastmcp.server.dependencies import get_http_headers
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - [INPUT_GUARDS] - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
# Configuration for archgw LLM gateway
LLM_GATEWAY_ENDPOINT = os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1")
GUARD_MODEL = "gpt-4o-mini"
# Initialize OpenAI client for archgw
archgw_client = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT,
api_key="EMPTY", # archgw doesn't require a real API key
)
app = FastAPI()
async def validate_query_scope(
messages: List[ChatMessage], traceparent_header: str
) -> Dict[str, Any]:
"""Validate that the user query is within TechCorp's domain.
Returns a dict with:
- is_valid: bool indicating if query is within scope
- reason: str explaining why query is out of scope (if applicable)
"""
system_prompt = """You are an input validation guard for TechCorp's customer support system.
Your job is to determine if a user's query is related to TechCorp and its services/products.
TechCorp is a technology company that provides:
- Cloud services and infrastructure
- SaaS products
- Technical support
- Service level agreements (SLAs)
- Uptime guarantees
- Enterprise solutions
ALLOW queries about:
- TechCorp's services, products, or offerings
- TechCorp's pricing, SLAs, uptime, or policies
- Technical support for TechCorp products
- General questions about TechCorp as a company
REJECT queries about:
- Other companies or their products
- General knowledge questions unrelated to TechCorp
- Personal advice or topics outside TechCorp's domain
- Anything that doesn't relate to TechCorp's business
Respond in JSON format:
{
"is_valid": true/false,
"reason": "brief explanation if invalid"
}"""
# Get the last user message for validation
last_user_message = None
for msg in reversed(messages):
if msg.role == "user":
last_user_message = msg.content
break
if not last_user_message:
return {"is_valid": True, "reason": ""}
# Prepare messages for the guard
guard_messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": f"Query to validate: {last_user_message}"},
]
try:
# Call archgw using OpenAI client
extra_headers = {"x-envoy-max-retries": "3"}
if traceparent_header:
extra_headers["traceparent"] = traceparent_header
logger.info(f"Validating query scope: '{last_user_message}'")
response = await archgw_client.chat.completions.create(
model=GUARD_MODEL,
messages=guard_messages,
temperature=0.1,
max_tokens=150,
extra_headers=extra_headers,
)
result_text = response.choices[0].message.content.strip()
# Parse JSON response
try:
result = json.loads(result_text)
logger.info(f"Validation result: {result}")
return result
except json.JSONDecodeError:
logger.error(f"Failed to parse validation response: {result_text}")
# Default to allowing if parsing fails
return {"is_valid": True, "reason": ""}
except Exception as e:
logger.error(f"Error validating query: {e}")
# Default to allowing if validation fails
return {"is_valid": True, "reason": ""}
@mcp.tool
async def input_guards(messages: List[ChatMessage]) -> List[ChatMessage]:
"""Input guard that validates queries are within TechCorp's domain.
If the query is out of scope, replaces the user message with a rejection notice.
"""
logger.info(f"Received request with {len(messages)} messages")
# Get traceparent header from HTTP request using FastMCP's dependency function
headers = get_http_headers()
traceparent_header = headers.get("traceparent")
if traceparent_header:
logger.info(f"Received traceparent header: {traceparent_header}")
else:
logger.info("No traceparent header found")
# Validate the query scope
validation_result = await validate_query_scope(messages, traceparent_header)
if not validation_result.get("is_valid", True):
reason = validation_result.get("reason", "Query is outside TechCorp's domain")
logger.warning(f"Query rejected: {reason}")
        # Raise a ToolError to reject the out-of-scope request
error_message = f"I apologize, but I can only assist with questions related to TechCorp and its services. Your query appears to be outside this scope. {reason}\n\nPlease ask me about TechCorp's products, services, pricing, SLAs, or technical support."
raise ToolError(error_message)
logger.info("Query validation passed - forwarding to next filter")
return messages
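
The guard above expects the model's verdict as a bare JSON object and simply fails open when parsing fails. Models often wrap JSON in Markdown fences, so a slightly more tolerant parser can help. The sketch below is a hypothetical helper (`parse_guard_verdict` does not appear in the demo code), assuming the same `{"is_valid": ..., "reason": ...}` contract:

```python
import json

def parse_guard_verdict(text: str) -> dict:
    """Parse the guard model's JSON verdict, tolerating ```json fences.

    Hypothetical helper: the demo inlines this logic and fails open
    (allows the query) whenever the response cannot be parsed.
    """
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence.
        cleaned = cleaned.split("\n", 1)[-1].rsplit("```", 1)[0]
    try:
        result = json.loads(cleaned)
    except json.JSONDecodeError:
        # Fail open, mirroring the demo's default-to-allow behavior.
        return {"is_valid": True, "reason": ""}
    return {
        "is_valid": bool(result.get("is_valid", True)),
        "reason": str(result.get("reason", "")),
    }
```

Swapping this in for the inline `json.loads` call would keep the fail-open behavior while also accepting fenced responses.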


@@ -21,17 +21,11 @@ cleanup() {
trap cleanup EXIT
# log "Starting input guards filter on port 10500..."
# uv run python -m rag_agent --host 0.0.0.0 --port 10500 --agent input_guards &
# WAIT_FOR_PIDS+=($!)
log "Starting query_rewriter agent on port 10500/http..."
uv run python -m rag_agent --rest-server --host 0.0.0.0 --rest-port 10500 --agent query_rewriter &
log "Starting input_guards agent on port 10500/mcp..."
uv run python -m rag_agent --host 0.0.0.0 --port 10500 --agent input_guards &
WAIT_FOR_PIDS+=($!)
log "Starting query_parser agent on port 10501/mcp..."
log "Starting query_rewriter agent on port 10501/mcp..."
uv run python -m rag_agent --host 0.0.0.0 --port 10501 --agent query_rewriter &
WAIT_FOR_PIDS+=($!)


@@ -17,12 +17,6 @@ llm_providers:
system_prompt: |
You are a helpful assistant.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.
prompt_targets:
- name: currency_exchange
description: Get currency exchange rate from USD to other currencies


@@ -0,0 +1,21 @@
FROM python:3.11-slim
WORKDIR /app
# Install uv for faster dependency management
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
# Copy dependency files
COPY pyproject.toml README.md ./
# Install dependencies (without lock file to resolve fresh)
RUN uv sync --no-dev
# Copy application code
COPY src/ ./src/
# Set environment variables
ENV PYTHONUNBUFFERED=1
# Default command (will be overridden in docker-compose)
CMD ["uv", "run", "python", "src/travel_agents/weather_agent.py"]


@@ -1,82 +1,78 @@
# Travel Booking Agent Demo
A production-ready multi-agent travel booking system demonstrating Plano's intelligent agent routing. This demo showcases three specialized agents working together to help users plan trips with weather information, flight searches, and currency exchange rates.
A production-ready multi-agent travel booking system demonstrating Plano's intelligent agent routing. This demo showcases two specialized agents working together to help users plan trips with weather information and flight searches.
## Overview
This demo consists of three intelligent agents that work together seamlessly:
This demo consists of two intelligent agents that work together seamlessly:
- **Weather Agent** - Real-time weather conditions and forecasts for any city worldwide
- **Weather Agent** - Real-time weather conditions and multi-day forecasts for any city worldwide
- **Flight Agent** - Live flight information between airports with real-time tracking
- **Currency Agent** - Real-time currency exchange rates and conversions
All agents use Plano's agent router to intelligently route user requests to the appropriate specialized agent based on conversation context and user intent.
All agents use Plano's agent router to intelligently route user requests to the appropriate specialized agent based on conversation context and user intent. Both agents run as Docker containers for easy deployment.
## Features
- **Intelligent Routing**: Plano automatically routes requests to the right agent
- **Conversation Context**: Agents understand follow-up questions and references
- **Real-Time Data**: Live weather, flight, and currency data from public APIs
- **Real-Time Data**: Live weather and flight data from public APIs
- **Multi-Day Forecasts**: Weather agent supports up to 16-day forecasts
- **LLM-Powered**: Uses GPT-4o-mini for extraction and GPT-4o for responses
- **Streaming Responses**: Real-time streaming for better user experience
## Prerequisites
- Python 3.10 or higher
- [UV package manager](https://github.com/astral-sh/uv) (recommended) or pip
- OpenAI API key
- Docker and Docker Compose
- [Plano CLI](https://docs.planoai.dev) installed
- OpenAI API key
## Quick Start
### 1. Install Dependencies
```bash
# Using UV (recommended)
uv sync
# Or using pip
pip install -e .
```
### 2. Set Environment Variables
### 1. Set Environment Variables
Create a `.env` file or export environment variables:
```bash
export OPENAI_API_KEY="your-openai-api-key"
export AEROAPI_KEY="your-flightaware-api-key" # Optional, demo key included
```
### 3. Start All Agents
### 2. Start All Agents with Docker
```bash
chmod +x start_agents.sh
./start_agents.sh
```
Or directly:
```bash
docker compose up --build
```
This starts:
- Weather Agent on port 10510
- Flight Agent on port 10520
- Currency Agent on port 10530
- Open WebUI on port 8080
### 4. Start Plano Orchestrator
### 3. Start Plano Orchestrator
In a new terminal:
```bash
cd /path/to/travel_booking
cd /path/to/travel_agents
plano up arch_config.yaml
```
The gateway will start on port 8001 and route requests to the appropriate agents.
### 5. Test the System
### 4. Test the System
Send requests to Plano Orchestrator:
**Option 1**: Use Open WebUI at http://localhost:8080
**Option 2**: Send requests directly to Plano Orchestrator:
```bash
curl -X POST http://localhost:8001/v1/chat/completions \
curl http://localhost:8001/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
@@ -100,20 +96,11 @@ User: What flights go from London to Seattle?
Assistant: [Flight Agent shows available flights with schedules and status]
```
### Currency Exchange
```
User: What's the exchange rate for Turkish Lira to USD?
Assistant: [Currency Agent provides current exchange rate]
```
### Multi-Agent Conversation
```
User: What's the weather in Istanbul?
Assistant: [Weather information]
User: What's their exchange rate?
Assistant: [Currency rate for Turkey]
User: Do they fly out from Seattle?
Assistant: [Flight information from Istanbul to Seattle]
```
@@ -142,75 +129,79 @@ The orchestrator can select multiple agents simultaneously for queries containing
- **API**: FlightAware AeroAPI
- **Capabilities**: Real-time flight status, schedules, delays, gates, terminals, live tracking
### Currency Agent
- **Port**: 10530
- **API**: Frankfurter (free, no API key)
- **Capabilities**: Exchange rates, currency conversions, historical rates
## Architecture
```
User Request → Plano Gateway (port 8001)
Agent Router (LLM-based)
┌───────────┼───────────┐
↓ ↓ ↓
Weather Flight Currency
Agent Agent Agent
(10510) (10520) (10530)
User Request
Plano (8001)
[Orchestrator]
|
┌────┴────┐
↓ ↓
Weather Flight
Agent Agent
(10510) (10520)
[Docker] [Docker]
```
Each agent:
1. Extracts intent using GPT-4o-mini
1. Extracts intent using GPT-4o-mini (with OpenTelemetry tracing)
2. Fetches real-time data from APIs
3. Generates response using GPT-4o
4. Streams response back to user
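
The four steps above can be sketched as a small async pipeline. This is an illustrative skeleton with assumed names (`agent_pipeline`, `extract`, `fetch`, `respond`), not code from the demo; the real agents wire these stages to the OpenAI client and external APIs:

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable

async def agent_pipeline(
    query: str,
    extract: Callable[[str], Awaitable[dict]],           # 1. small-model entity extraction
    fetch: Callable[[dict], Awaitable[dict]],            # 2. real-time API lookup
    respond: Callable[[str, dict], AsyncIterator[str]],  # 3-4. large model, streamed
) -> AsyncIterator[str]:
    entities = await extract(query)           # e.g. {"city": "Istanbul"}
    data = await fetch(entities)              # live weather/flight payload
    async for token in respond(query, data):  # stream tokens back to the caller
        yield token
```

Because the stages are injected, the flow can be exercised with stubs before pointing `extract` and `respond` at real models.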
## Configuration
### plano_config.yaml
Defines the three agents, their descriptions, and routing configuration. The agent router uses these descriptions to intelligently route requests.
### Environment Variables
- `OPENAI_API_KEY` - Required for LLM operations
- `AEROAPI_KEY` - Optional, FlightAware API key (demo key included)
- `LLM_GATEWAY_ENDPOINT` - Plano LLM gateway URL (default: http://localhost:12000/v1)
Both agents run as Docker containers and communicate with Plano via `host.docker.internal`.
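
The endpoint fallback both agents rely on can be sketched as follows; `LLM_GATEWAY_ENDPOINT` is the variable set in `docker-compose.yaml`, and the localhost default mirrors the agent code:

```python
import os

# docker-compose sets LLM_GATEWAY_ENDPOINT to http://host.docker.internal:12000/v1
# so containers can reach Plano on the host; outside Docker the agents fall back
# to the localhost default.
LLM_GATEWAY_ENDPOINT = os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1")
```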
## Project Structure
```
travel_booking/
travel_agents/
├── arch_config.yaml # Plano configuration
├── start_agents.sh # Start all agents script
├── docker-compose.yaml # Docker services orchestration
├── Dockerfile # Multi-agent container image
├── start_agents.sh # Quick start script
├── pyproject.toml # Python dependencies
└── src/
└── travel_agents/
├── __init__.py # CLI entry point
├── api.py # Shared API models
├── weather_agent.py # Weather forecast agent
├── flight_agent.py # Flight information agent
└── currency_agent.py # Currency exchange agent
├── weather_agent.py # Weather forecast agent (multi-day support)
└── flight_agent.py # Flight information agent
```
## Configuration Files
### arch_config.yaml
Defines the two agents, their descriptions, and routing configuration. The agent router uses these descriptions to intelligently route requests.
### docker-compose.yaml
Orchestrates the deployment of:
- Weather Agent (builds from Dockerfile)
- Flight Agent (builds from Dockerfile)
- Open WebUI (for testing)
- Jaeger (for distributed tracing)
## Troubleshooting
**Agents won't start**
- Ensure Python 3.10+ is installed
- Check that UV is installed: `pip install uv`
- Verify ports 10510, 10520, 10530 are available
**Docker containers won't start**
- Verify Docker and Docker Compose are installed
- Check that ports 10510, 10520, 8080 are available
- Review container logs: `docker compose logs weather-agent` or `docker compose logs flight-agent`
**Plano won't start**
- Verify Plano is installed: `plano --version`
- Check that `OPENAI_API_KEY` is set
- Ensure you're in the travel_booking directory
- Ensure you're in the travel_agents directory
- Check arch_config.yaml is valid
**No response from agents**
- Verify all agents are running (check start_agents.sh output)
- Verify all containers are running: `docker compose ps`
- Check that Plano is running on port 8001
- Review agent logs for errors
- Review agent logs: `docker compose logs -f`
- Verify `host.docker.internal` resolves correctly (should point to host machine)
## API Endpoints


@@ -1,34 +0,0 @@
version: v0.3.0
agents:
- id: weather_agent
url: http://host.docker.internal:10510
- id: flight_agent
url: http://host.docker.internal:10520
- id: currency_agent
url: http://host.docker.internal:10530
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
system_prompt: |
You are a professional travel planner assistant. Your role is to provide accurate, clear, and helpful information about weather and flights based on the structured data provided to you.\n\nCRITICAL INSTRUCTIONS:\n\n1. DATA STRUCTURE:\n \n WEATHER DATA:\n - You will receive weather data as JSON in a system message\n - The data contains a \"location\" field (string) and a \"forecast\" array\n - Each forecast entry has: date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, condition, sunrise, sunset\n - Some fields may be null/None - handle these gracefully\n \n FLIGHT DATA:\n - You will receive flight information in a system message\n - Flight data includes: airline, flight number, departure time, arrival time, origin airport, destination airport, aircraft type, status, gate, terminal\n - Information may include both scheduled and estimated times\n - Some fields may be unavailable - handle these gracefully\n\n2. WEATHER HANDLING:\n - For single-day queries: Use temperature_c/temperature_f (current/primary temperature)\n - For multi-day forecasts: Use temperature_max_c and temperature_min_c when available\n - Always provide temperatures in both Celsius and Fahrenheit when available\n - If temperature is null, say \"temperature data unavailable\" rather than making up numbers\n - Use exact condition descriptions provided (e.g., \"Clear sky\", \"Rainy\", \"Partly Cloudy\")\n - Add helpful context when appropriate (e.g., \"perfect for outdoor activities\" for clear skies)\n\n3. 
FLIGHT HANDLING:\n - Present flight information clearly with airline name and flight number\n - Include departure and arrival times with time zones when provided\n - Mention origin and destination airports with their codes\n - Include gate and terminal information when available\n - Note aircraft type if relevant to the query\n - Highlight any status updates (delays, early arrivals, etc.)\n - For multiple flights, list them in chronological order by departure time\n - If specific details are missing, acknowledge this rather than inventing information\n\n4. MULTI-PART QUERIES:\n - Users may ask about both weather and flights in one message\n - Answer ALL parts of the query that you have data for\n - Organize your response logically - typically weather first, then flights, or vice versa based on the query\n - Provide complete information for each topic without mentioning other agents\n - If you receive data for only one topic but the user asked about multiple, answer what you can with the provided data\n\n5. ERROR HANDLING:\n - If weather forecast contains an \"error\" field, acknowledge the issue politely\n - If temperature or condition is null/None, mention that specific data is unavailable\n - If flight details are incomplete, state which information is unavailable\n - Never invent or guess weather or flight data - only use what's provided\n - If location couldn't be determined, acknowledge this but still provide available data\n\n6. 
RESPONSE FORMAT:\n \n For Weather:\n - Single-day queries: Provide current conditions, temperature, and condition\n - Multi-day forecasts: List each day with date, day name, high/low temps, and condition\n - Include sunrise/sunset times when available and relevant\n \n For Flights:\n - List flights with clear numbering or bullet points\n - Include key details: airline, flight number, departure/arrival times, airports\n - Add gate, terminal, and status information when available\n - For multiple flights, organize chronologically\n \n General:\n - Use natural, conversational language\n - Be concise but complete\n - Format dates and times clearly\n - Use bullet points or numbered lists for clarity\n\n7. LOCATION HANDLING:\n - Always mention location names from the data\n - For flights, clearly state origin and destination cities/airports\n - If locations differ from what the user asked, acknowledge this politely\n\n8. RESPONSE STYLE:\n - Be friendly and professional\n - Use natural language, not technical jargon\n - Provide information in a logical, easy-to-read format\n - When answering multi-part queries, create a cohesive response that addresses all aspects\n\nRemember: Only use the data provided. Never fabricate weather or flight information. If data is missing, clearly state what's unavailable. Answer all parts of the user's query that you have data for.
listeners:
- type: agent
name: travel_booking_service
port: 8001
router: plano_orchestrator_v1
agents:
- id: weather_agent
description: Get real-time weather conditions and multi-day forecasts for any city worldwide using Open-Meteo API (free, no API key needed). Provides current temperature, multi-day forecasts, weather conditions, sunrise/sunset times, and detailed weather information. Understands conversation context to resolve location references from previous messages. Handles weather-related questions including "What's the weather in [city]?", "What's the forecast for [city]?", "How's the weather in [city]?". When queries include both weather and other travel questions (e.g., flights, currency), this agent answers ONLY the weather part.
- id: flight_agent
description: Get live flight information between airports using FlightAware AeroAPI. Shows real-time flight status, scheduled/estimated/actual departure and arrival times, gate and terminal information, delays, aircraft type, and flight status. Automatically resolves city names to airport codes (IATA/ICAO). Understands conversation context to infer origin/destination from follow-up questions. Handles flight-related questions including "What flights go from [city] to [city]?", "Do flights go to [city]?", "Are there direct flights from [city]?". When queries include both flight and other travel questions (e.g., weather, currency), this agent answers ONLY the flight part.
- id: currency_agent
description: Get real-time currency exchange rates and perform currency conversions using Frankfurter API (free, no API key needed). Provides latest exchange rates, currency conversions with amount calculations, and supports any currency pair. Automatically extracts currency codes from country names and conversation context. Understands pronouns like "their currency" when referring to previously mentioned countries. Uses standard 3-letter ISO currency codes (e.g., USD, EUR, GBP, JPY, PKR).
tracing:
random_sampling: 100


@@ -0,0 +1,57 @@
version: v0.3.0
agents:
- id: weather_agent
url: http://host.docker.internal:10510
- id: flight_agent
url: http://host.docker.internal:10520
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY # smaller, faster, cheaper model for extracting entities like location
listeners:
- type: agent
name: travel_booking_service
port: 8001
router: plano_orchestrator_v1
agents:
- id: weather_agent
description: |
WeatherAgent is a specialized AI assistant for real-time weather information and forecasts. It provides accurate weather data for any city worldwide using the Open-Meteo API, helping travelers plan their trips with up-to-date weather conditions.
Capabilities:
* Get real-time weather conditions and multi-day forecasts for any city worldwide using Open-Meteo API (free, no API key needed)
* Provides current temperature
* Provides multi-day forecasts
* Provides weather conditions
* Provides sunrise/sunset times
* Provides detailed weather information
* Understands conversation context to resolve location references from previous messages
* Handles weather-related questions including "What's the weather in [city]?", "What's the forecast for [city]?", "How's the weather in [city]?"
* When queries include both weather and other travel questions (e.g., flights, currency), this agent answers ONLY the weather part
- id: flight_agent
description: |
FlightAgent is an AI-powered tool specialized in providing live flight information between airports. It leverages the FlightAware AeroAPI to deliver real-time flight status, gate information, and delay updates.
Capabilities:
* Get live flight information between airports using FlightAware AeroAPI
* Shows real-time flight status
* Shows scheduled/estimated/actual departure and arrival times
* Shows gate and terminal information
* Shows delays
* Shows aircraft type
* Shows flight status
* Automatically resolves city names to airport codes (IATA/ICAO)
* Understands conversation context to infer origin/destination from follow-up questions
* Handles flight-related questions including "What flights go from [city] to [city]?", "Do flights go to [city]?", "Are there direct flights from [city]?"
* When queries include both flight and other travel questions (e.g., weather, currency), this agent answers ONLY the flight part
tracing:
random_sampling: 100


@@ -1,11 +1,44 @@
services:
jaeger:
build:
context: ../../shared/jaeger
container_name: jaeger
restart: always
ports:
- "16686:16686"
- "4317:4317"
- "4318:4318"
- "16686:16686" # Jaeger UI
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
weather-agent:
build:
context: .
dockerfile: Dockerfile
container_name: weather-agent
restart: always
ports:
- "10510:10510"
environment:
- LLM_GATEWAY_ENDPOINT=http://host.docker.internal:12000/v1
command: ["uv", "run", "python", "src/travel_agents/weather_agent.py"]
extra_hosts:
- "host.docker.internal:host-gateway"
flight-agent:
build:
context: .
dockerfile: Dockerfile
container_name: flight-agent
restart: always
ports:
- "10520:10520"
environment:
- LLM_GATEWAY_ENDPOINT=http://host.docker.internal:12000/v1
- AEROAPI_KEY=${AEROAPI_KEY:-ESVFX7TJLxB7OTuayUv0zTQBryA3tOPr}
command: ["uv", "run", "python", "src/travel_agents/flight_agent.py"]
extra_hosts:
- "host.docker.internal:host-gateway"
open-web-ui:
image: dyrnq/open-webui:main
restart: always
@@ -15,3 +48,6 @@ services:
- DEFAULT_MODEL=gpt-4o-mini
- ENABLE_OPENAI_API=true
- OPENAI_API_BASE_URL=http://host.docker.internal:8001/v1
depends_on:
- weather-agent
- flight-agent


@@ -7,10 +7,11 @@ requires-python = ">=3.10"
dependencies = [
"click>=8.2.1",
"pydantic>=2.11.7",
"fastapi>=0.104.1",
"uvicorn>=0.24.0",
"openai>=2.13.0",
"fastapi>=0.115.0",
"uvicorn>=0.30.0",
"openai>=1.0.0",
"httpx>=0.24.0",
"opentelemetry-api>=1.20.0",
]
[project.scripts]
@@ -19,3 +20,6 @@ travel_agents = "travel_agents:main"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/travel_agents"]


@@ -1,48 +0,0 @@
import click
@click.command()
@click.option("--host", "host", default="localhost", help="Host to bind server to")
@click.option("--port", "port", type=int, default=8000, help="Port for server")
@click.option(
"--agent",
"agent",
required=True,
help="Agent name: weather, flight, or currency",
)
def main(host, port, agent):
"""Start a travel agent REST server."""
agent_map = {
"weather": ("travel_agents.weather_agent", 10510),
"flight": ("travel_agents.flight_agent", 10520),
"currency": ("travel_agents.currency_agent", 10530),
}
if agent not in agent_map:
print(f"Error: Unknown agent '{agent}'")
print(f"Available agents: {', '.join(agent_map.keys())}")
return
module_name, default_port = agent_map[agent]
if port == 8000:
port = default_port
print(f"Starting {agent} agent REST server on {host}:{port}")
if agent == "weather":
from travel_agents.weather_agent import start_server
start_server(host=host, port=port)
elif agent == "flight":
from travel_agents.flight_agent import start_server
start_server(host=host, port=port)
elif agent == "currency":
from travel_agents.currency_agent import start_server
start_server(host=host, port=port)
if __name__ == "__main__":
main()


@@ -1,4 +0,0 @@
from . import main
if __name__ == "__main__":
main()


@@ -1,36 +0,0 @@
from pydantic import BaseModel
from typing import List, Optional, Dict, Any
class ChatMessage(BaseModel):
role: str
content: str
class ChatCompletionRequest(BaseModel):
model: str
messages: List[ChatMessage]
temperature: Optional[float] = 1.0
max_tokens: Optional[int] = None
top_p: Optional[float] = 1.0
frequency_penalty: Optional[float] = 0.0
presence_penalty: Optional[float] = 0.0
stream: Optional[bool] = False
stop: Optional[List[str]] = None
class ChatCompletionResponse(BaseModel):
id: str
object: str = "chat.completion"
created: int
model: str
choices: List[Dict[str, Any]]
usage: Dict[str, int]
class ChatCompletionStreamResponse(BaseModel):
id: str
object: str = "chat.completion.chunk"
created: int
model: str
choices: List[Dict[str, Any]]


@@ -1,584 +0,0 @@
import json
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
import os
import logging
import time
import uuid
import uvicorn
import httpx
from typing import Optional
from urllib.parse import quote
from .api import (
ChatCompletionRequest,
ChatCompletionStreamResponse,
)
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - [CURRENCY_AGENT] - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
# Configuration for archgw LLM gateway
LLM_GATEWAY_ENDPOINT = os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1")
CURRENCY_MODEL = "openai/gpt-4o"
CURRENCY_EXTRACTION_MODEL = "openai/gpt-4o-mini"
# HTTP client for API calls
http_client = httpx.AsyncClient(timeout=10.0)
# Initialize OpenAI client for archgw
archgw_client = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT,
api_key="EMPTY",
)
# System prompt for currency agent
SYSTEM_PROMPT = """You are a professional travel planner assistant. Your role is to provide accurate, clear, and helpful information about weather, flights, and currency exchange based on the structured data provided to you.
CRITICAL INSTRUCTIONS:
1. DATA STRUCTURE:
WEATHER DATA:
- You will receive weather data as JSON in a system message
- The data contains a "location" field (string) and a "forecast" array
- Each forecast entry has: date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, condition, sunrise, sunset
- Some fields may be null/None - handle these gracefully
FLIGHT DATA:
- You will receive flight information in a system message
- Flight data includes: airline, flight number, departure time, arrival time, origin airport, destination airport, aircraft type, status, gate, terminal
- Information may include both scheduled and estimated times
- Some fields may be unavailable - handle these gracefully
CURRENCY DATA:
- You will receive currency exchange data as JSON in a system message
- The data contains: from_currency, to_currency, rate, date, and optionally original_amount and converted_amount
- Some fields may be null/None - handle these gracefully
2. WEATHER HANDLING:
- For single-day queries: Use temperature_c/temperature_f (current/primary temperature)
- For multi-day forecasts: Use temperature_max_c and temperature_min_c when available
- Always provide temperatures in both Celsius and Fahrenheit when available
- If temperature is null, say "temperature data unavailable" rather than making up numbers
- Use exact condition descriptions provided (e.g., "Clear sky", "Rainy", "Partly Cloudy")
- Add helpful context when appropriate (e.g., "perfect for outdoor activities" for clear skies)
3. FLIGHT HANDLING:
- Present flight information clearly with airline name and flight number
- Include departure and arrival times with time zones when provided
- Mention origin and destination airports with their codes
- Include gate and terminal information when available
- Note aircraft type if relevant to the query
- Highlight any status updates (delays, early arrivals, etc.)
- For multiple flights, list them in chronological order by departure time
- If specific details are missing, acknowledge this rather than inventing information
4. CURRENCY HANDLING:
- Present exchange rates clearly with both currency codes and names when helpful
- Include the date of the exchange rate
- If an amount was provided, show both the original and converted amounts
- Use clear formatting (e.g., "100 USD = 92.50 EUR" or "1 USD = 0.925 EUR")
- If rate data is unavailable, acknowledge this politely
5. MULTI-PART QUERIES:
- Users may ask about weather, flights, and currency in one message
- Answer ALL parts of the query that you have data for
- Organize your response logically - typically weather first, then flights, then currency, or based on the query order
- Provide complete information for each topic without mentioning other agents
- If you receive data for only one topic but the user asked about multiple, answer what you can with the provided data
6. ERROR HANDLING:
- If weather forecast contains an "error" field, acknowledge the issue politely
- If temperature or condition is null/None, mention that specific data is unavailable
- If flight details are incomplete, state which information is unavailable
- If currency rate is unavailable, mention that specific data is unavailable
- Never invent or guess weather, flight, or currency data - only use what's provided
- If location couldn't be determined, acknowledge this but still provide available data
7. RESPONSE FORMAT:
For Weather:
- Single-day queries: Provide current conditions, temperature, and condition
- Multi-day forecasts: List each day with date, day name, high/low temps, and condition
- Include sunrise/sunset times when available and relevant
For Flights:
- List flights with clear numbering or bullet points
- Include key details: airline, flight number, departure/arrival times, airports
- Add gate, terminal, and status information when available
- For multiple flights, organize chronologically
For Currency:
- Show exchange rate clearly: "1 [FROM] = [RATE] [TO]"
- If amount provided: "[AMOUNT] [FROM] = [CONVERTED] [TO]"
- Include the date of the exchange rate
General:
- Use natural, conversational language
- Be concise but complete
- Format dates and times clearly
- Use bullet points or numbered lists for clarity
8. LOCATION HANDLING:
- Always mention location names from the data
- For flights, clearly state origin and destination cities/airports
- For currency, use country/city context to resolve currency references
- If locations differ from what the user asked, acknowledge this politely
9. RESPONSE STYLE:
- Be friendly and professional
- Use natural language, not technical jargon
- Provide information in a logical, easy-to-read format
- When answering multi-part queries, create a cohesive response that addresses all aspects
Remember: Only use the data provided. Never fabricate weather, flight, or currency information. If data is missing, clearly state what's unavailable. Answer all parts of the user's query that you have data for."""
CURRENCY_EXTRACTION_PROMPT = """You are a currency information extraction assistant. Your ONLY job is to extract currency-related information from user messages and convert it to standard 3-letter ISO currency codes.
CRITICAL RULES:
1. Extract currency codes (3-letter ISO codes like USD, EUR, GBP, JPY, PKR, etc.) from the message AND conversation context
2. Extract any mentioned amounts or numbers that might be currency amounts
3. PAY ATTENTION TO CONVERSATION CONTEXT:
- If previous messages mention a country/city, use that context to resolve pronouns like "their", "that country", "there", etc.
- Example: If previous message was "What's the weather in Lahore, Pakistan?" and current message is "What is their currency exchange rate with USD?", then "their" = Pakistan = PKR
- Look for country names in the conversation history to infer currencies
4. If country names or regions are mentioned (in current message OR conversation context), convert them to their standard currency codes:
- United States/USA/US USD
- Europe/Eurozone/France/Germany/Italy/Spain/etc. EUR
- United Kingdom/UK/Britain GBP
- Japan JPY
- China CNY
- India INR
- Pakistan PKR
- Australia AUD
- Canada CAD
- Switzerland CHF
- South Korea KRW
- Singapore SGD
- Hong Kong HKD
- Brazil BRL
- Mexico MXN
- And any other countries you know the currency for
5. Determine the FROM currency (source) and TO currency (target) based on context:
- "from X to Y" → from_currency=X, to_currency=Y
- "X to Y" → from_currency=X, to_currency=Y
- "convert X to Y" → from_currency=X, to_currency=Y
- "X in Y" → from_currency=X, to_currency=Y
- "rate for X" or "X rate" → to_currency=X (assume USD as base)
- "their currency with USD" or "their currency to USD" → from_currency=country_from_context, to_currency=USD
- "X dollars/euros/pounds/etc." → from_currency=X
6. If only one currency is mentioned, determine if it's the source or target based on context
7. ALWAYS return currency codes, never country names in the currency fields
8. Return your response as a JSON object with the following structure:
{
"from_currency": "USD" or null,
"to_currency": "EUR" or null,
"amount": 100.0 or null
}
9. If you cannot determine a currency, use null for that field
10. Use standard 3-letter ISO currency codes ONLY
11. Ignore error messages, HTML tags, and assistant responses
12. Extract from the most recent user message BUT use conversation context to resolve references
13. Default behavior: If only one currency is mentioned without context, assume it's the target currency and use USD as the source
Examples with context:
- Conversation: "What's the weather in Lahore, Pakistan?" Current: "What is their currency exchange rate with USD?" → {"from_currency": "PKR", "to_currency": "USD", "amount": null}
- Conversation: "Tell me about Tokyo" Current: "What's their currency rate?" → {"from_currency": "JPY", "to_currency": "USD", "amount": null}
- "What's the exchange rate from USD to EUR?" → {"from_currency": "USD", "to_currency": "EUR", "amount": null}
- "Convert 100 dollars to euros" → {"from_currency": "USD", "to_currency": "EUR", "amount": 100.0}
- "How much is 50 GBP in Japanese yen?" → {"from_currency": "GBP", "to_currency": "JPY", "amount": 50.0}
- "What's the rate for euros?" → {"from_currency": "USD", "to_currency": "EUR", "amount": null}
- "Convert money from United States to France" → {"from_currency": "USD", "to_currency": "EUR", "amount": null}
- "100 pounds to dollars" → {"from_currency": "GBP", "to_currency": "USD", "amount": 100.0}
Now extract the currency information from this message, considering the conversation context:"""
async def extract_currency_info_from_messages(messages):
"""Extract currency information from user messages using LLM, considering conversation context."""
# Get all messages for context (both user and assistant)
conversation_context = []
for msg in messages:
# Skip error messages and HTML tags
content = msg.content.strip()
content_lower = content.lower()
if any(
pattern in content_lower
for pattern in ["<", ">", "error:", "i apologize", "i'm having trouble"]
):
continue
conversation_context.append({"role": msg.role, "content": content})
# Get the most recent user message
user_messages = [msg for msg in messages if msg.role == "user"]
if not user_messages:
logger.warning("No user messages found")
return {"from_currency": "USD", "to_currency": "EUR", "amount": None}
# Get the most recent user message (skip error messages and HTML tags)
user_content = None
for msg in reversed(user_messages):
content = msg.content.strip()
# Skip messages with error patterns or HTML tags
content_lower = content.lower()
if any(
pattern in content_lower
for pattern in [
"<",
">",
"assistant:",
"error:",
"i apologize",
"i'm having trouble",
]
):
continue
user_content = content
break
if not user_content:
logger.warning("No valid user message found")
return {"from_currency": "USD", "to_currency": "EUR", "amount": None}
try:
logger.info(f"Extracting currency info from user message: {user_content[:200]}")
logger.info(
f"Using conversation context with {len(conversation_context)} messages"
)
llm_messages = [{"role": "system", "content": CURRENCY_EXTRACTION_PROMPT}]
context_messages = (
conversation_context[-10:]
if len(conversation_context) > 10
else conversation_context
)
for msg in context_messages:
llm_messages.append({"role": msg["role"], "content": msg["content"]})
response = await archgw_client.chat.completions.create(
model=CURRENCY_EXTRACTION_MODEL,
messages=llm_messages,
temperature=0.1,
max_tokens=200,
)
extracted_text = response.choices[0].message.content.strip()
try:
if "```json" in extracted_text:
extracted_text = (
extracted_text.split("```json")[1].split("```")[0].strip()
)
elif "```" in extracted_text:
extracted_text = extracted_text.split("```")[1].split("```")[0].strip()
currency_info = json.loads(extracted_text)
from_currency = currency_info.get("from_currency")
to_currency = currency_info.get("to_currency")
amount = currency_info.get("amount")
if not from_currency:
from_currency = "USD"
if not to_currency:
to_currency = "EUR"
result = {
"from_currency": from_currency,
"to_currency": to_currency,
"amount": amount,
}
logger.info(f"LLM extracted currency info: {result}")
return result
except json.JSONDecodeError as e:
logger.warning(
f"Failed to parse JSON from LLM response: {extracted_text}, error: {e}"
)
return {"from_currency": "USD", "to_currency": "EUR", "amount": None}
except Exception as e:
logger.error(f"Error extracting currency info with LLM: {e}, using defaults")
return {"from_currency": "USD", "to_currency": "EUR", "amount": None}
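The fence-stripping-and-parse step inside `extract_currency_info_from_messages` can be isolated as a small pure helper. A minimal sketch, assuming the model wraps its JSON in optional Markdown code fences; `parse_llm_json` and the sample strings are illustrative, not part of the demo (the fence is built from `"`" * 3` only so the snippet nests cleanly in docs):

```python
import json

FENCE = "`" * 3  # the three-backtick Markdown fence


def parse_llm_json(text: str) -> dict:
    """Strip an optional Markdown code fence from an LLM reply, then parse the JSON body."""
    text = text.strip()
    if FENCE + "json" in text:
        text = text.split(FENCE + "json")[1].split(FENCE)[0].strip()
    elif FENCE in text:
        text = text.split(FENCE)[1].split(FENCE)[0].strip()
    return json.loads(text)


# Fenced and bare replies parse to the same dict
fenced = FENCE + 'json\n{"from_currency": "GBP", "to_currency": "JPY", "amount": 50.0}\n' + FENCE
bare = '{"from_currency": "GBP", "to_currency": "JPY", "amount": 50.0}'
assert parse_llm_json(fenced) == parse_llm_json(bare)
```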
async def get_currency_exchange_rate(
from_currency: str, to_currency: str
) -> Optional[dict]:
"""Get currency exchange rate between two currencies using Frankfurter API.
Uses the Frankfurter API (api.frankfurter.dev) which provides free, open-source
currency data tracking reference exchange rates published by institutional sources.
No API keys required.
Args:
from_currency: Base currency code (e.g., "USD", "EUR")
to_currency: Target currency code (e.g., "EUR", "GBP")
Returns:
Dictionary with exchange rate data or None if error occurs
"""
try:
url = f"https://api.frankfurter.dev/v1/latest?base={from_currency}&symbols={to_currency}"
response = await http_client.get(url)
if response.status_code != 200:
logger.warning(
f"Currency API returned status {response.status_code} for {from_currency} to {to_currency}"
)
return None
data = response.json()
if "rates" not in data:
        logger.warning("Invalid API response structure: missing 'rates' field")
return None
if to_currency not in data["rates"]:
logger.warning(
f"Currency {to_currency} not found in API response for base {from_currency}"
)
return None
return {
"from_currency": from_currency,
"to_currency": to_currency,
"rate": data["rates"][to_currency],
"date": data.get("date"),
"base": data.get("base"),
}
except httpx.HTTPError as e:
logger.error(
f"HTTP error fetching currency rate from {from_currency} to {to_currency}: {e}"
)
return None
except json.JSONDecodeError as e:
logger.error(f"Failed to parse JSON response from currency API: {e}")
return None
except Exception as e:
logger.error(f"Unexpected error fetching currency rate: {e}")
return None
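The payload validation above reduces to a pure function over the response dict. A sketch under the assumption that the payload follows Frankfurter's documented shape; `extract_rate` and the sample dict are hypothetical, not a live API response:

```python
def extract_rate(payload: dict, to_currency: str):
    """Pull the target rate out of a Frankfurter-style payload, or None if absent."""
    rates = payload.get("rates")
    if not isinstance(rates, dict) or to_currency not in rates:
        return None
    return rates[to_currency]


# Hand-written sample mirroring the /v1/latest response shape
sample = {"base": "USD", "date": "2025-01-02", "rates": {"EUR": 0.91}}
assert extract_rate(sample, "EUR") == 0.91
assert extract_rate(sample, "JPY") is None
assert extract_rate({}, "EUR") is None
```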
# FastAPI app for REST server
app = FastAPI(title="Currency Exchange Agent", version="1.0.0")
async def prepare_currency_messages(request_body: ChatCompletionRequest):
"""Prepare messages with currency exchange data."""
# Extract currency information from conversation using LLM
currency_info = await extract_currency_info_from_messages(request_body.messages)
from_currency = currency_info["from_currency"]
to_currency = currency_info["to_currency"]
amount = currency_info.get("amount")
# Get currency exchange rate
rate_data = await get_currency_exchange_rate(from_currency, to_currency)
if rate_data:
currency_data = {
"from_currency": rate_data["from_currency"],
"to_currency": rate_data["to_currency"],
"rate": rate_data["rate"],
"date": rate_data.get("date"),
}
# If an amount was mentioned, calculate the conversion
if amount is not None:
converted_amount = amount * rate_data["rate"]
currency_data["original_amount"] = amount
currency_data["converted_amount"] = round(converted_amount, 2)
else:
logger.warning(
f"Could not fetch currency rate for {from_currency} to {to_currency}"
)
currency_data = {
"from_currency": from_currency,
"to_currency": to_currency,
"rate": None,
"error": "Could not retrieve exchange rate",
}
# Create system message with currency data
currency_context = f"""
Current currency exchange data:
{json.dumps(currency_data, indent=2)}
Use this data to answer the user's currency exchange query.
"""
response_messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "assistant", "content": currency_context},
]
# Add conversation history
for msg in request_body.messages:
response_messages.append({"role": msg.role, "content": msg.content})
return response_messages
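The amount-conversion step in `prepare_currency_messages` is a multiply-and-round. A trivial standalone sketch (`convert_amount` is an illustrative name, not part of the demo):

```python
def convert_amount(amount: float, rate: float) -> float:
    """Apply an exchange rate and round to 2 decimal places, as the agent does."""
    return round(amount * rate, 2)


assert convert_amount(100.0, 0.91) == 91.0
assert convert_amount(50.0, 187.65) == 9382.5
```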
@app.post("/v1/chat/completions")
async def chat_completion_http(request: Request, request_body: ChatCompletionRequest):
"""HTTP endpoint for chat completions with streaming support."""
logger.info(f"Received currency request with {len(request_body.messages)} messages")
traceparent_header = request.headers.get("traceparent")
if traceparent_header:
logger.info(f"Received traceparent header: {traceparent_header}")
    return StreamingResponse(
        stream_chat_completions(request_body, traceparent_header),
        media_type="text/event-stream",
    )
async def stream_chat_completions(
request_body: ChatCompletionRequest, traceparent_header: str = None
):
"""Generate streaming chat completions."""
# Prepare messages with currency exchange data
response_messages = await prepare_currency_messages(request_body)
try:
logger.info(
f"Calling archgw at {LLM_GATEWAY_ENDPOINT} to generate currency response"
)
# Prepare extra headers
extra_headers = {"x-envoy-max-retries": "3"}
if traceparent_header:
extra_headers["traceparent"] = traceparent_header
response_stream = await archgw_client.chat.completions.create(
model=CURRENCY_MODEL,
messages=response_messages,
temperature=request_body.temperature or 0.7,
max_tokens=request_body.max_tokens or 1000,
stream=True,
extra_headers=extra_headers,
)
completion_id = f"chatcmpl-{uuid.uuid4().hex[:8]}"
created_time = int(time.time())
collected_content = []
async for chunk in response_stream:
if chunk.choices and chunk.choices[0].delta.content:
content = chunk.choices[0].delta.content
collected_content.append(content)
stream_chunk = ChatCompletionStreamResponse(
id=completion_id,
created=created_time,
model=request_body.model,
choices=[
{
"index": 0,
"delta": {"content": content},
"finish_reason": None,
}
],
)
yield f"data: {stream_chunk.model_dump_json()}\n\n"
full_response = "".join(collected_content)
updated_history = [{"role": "assistant", "content": full_response}]
final_chunk = ChatCompletionStreamResponse(
id=completion_id,
created=created_time,
model=request_body.model,
choices=[
{
"index": 0,
"delta": {},
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": json.dumps(updated_history),
},
}
],
)
yield f"data: {final_chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
except Exception as e:
logger.error(f"Error generating currency response: {e}")
error_chunk = ChatCompletionStreamResponse(
id=f"chatcmpl-{uuid.uuid4().hex[:8]}",
created=int(time.time()),
model=request_body.model,
choices=[
{
"index": 0,
"delta": {
"content": "I apologize, but I'm having trouble generating a currency exchange response right now. Please try again."
},
"finish_reason": "stop",
}
],
)
yield f"data: {error_chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
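A client consuming this endpoint sees OpenAI-style server-sent events (`data: {...}` lines terminated by `data: [DONE]`). A hedged client-side sketch that reassembles the assistant text from a hand-written sample stream; `parse_sse_content` is illustrative:

```python
import json


def parse_sse_content(raw: str) -> str:
    """Reassemble assistant text from OpenAI-style SSE lines ("data: {...}" / "data: [DONE]")."""
    parts = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)


raw = (
    'data: {"choices": [{"index": 0, "delta": {"content": "1 USD = "}}]}\n'
    'data: {"choices": [{"index": 0, "delta": {"content": "0.91 EUR"}}]}\n'
    'data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}\n'
    'data: [DONE]\n'
)
assert parse_sse_content(raw) == "1 USD = 0.91 EUR"
```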
@app.get("/health")
async def health_check():
"""Health check endpoint."""
return {"status": "healthy", "agent": "currency_exchange"}
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=10530)
def start_server(host: str = "localhost", port: int = 10530):
"""Start the currency agent server."""
uvicorn.run(
app,
host=host,
port=port,
log_config={
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s - [CURRENCY_AGENT] - %(levelname)s - %(message)s",
},
},
"handlers": {
"default": {
"formatter": "default",
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
},
},
"root": {
"level": "INFO",
"handlers": ["default"],
},
},
)

File diff suppressed because it is too large
@@ -12,12 +12,7 @@ from datetime import datetime, timedelta
import httpx
from typing import Optional
from urllib.parse import quote
from .api import (
ChatCompletionRequest,
ChatCompletionResponse,
ChatCompletionStreamResponse,
)
from opentelemetry.propagate import extract, inject
# Set up logging
logging.basicConfig(
@@ -26,458 +21,16 @@ logging.basicConfig(
)
logger = logging.getLogger(__name__)
-# Configuration for archgw LLM gateway
-LLM_GATEWAY_ENDPOINT = os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1")
+# Configuration for plano LLM gateway
+LLM_GATEWAY_ENDPOINT = os.getenv(
+    "LLM_GATEWAY_ENDPOINT", "http://host.docker.internal:12001/v1"
+)
WEATHER_MODEL = "openai/gpt-4o"
LOCATION_MODEL = "openai/gpt-4o-mini"
# HTTP client for API calls
http_client = httpx.AsyncClient(timeout=10.0)
# System prompt for weather agent
SYSTEM_PROMPT = """You are a professional travel planner assistant. Your role is to provide accurate, clear, and helpful information about weather and flights based on the structured data provided to you.
CRITICAL INSTRUCTIONS:
1. DATA STRUCTURE:
WEATHER DATA:
- You will receive weather data as JSON in a system message
- The data contains a "location" field (string) and a "forecast" array
- Each forecast entry has: date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, condition, sunrise, sunset
- Some fields may be null/None - handle these gracefully
FLIGHT DATA:
- You will receive flight information in a system message
- Flight data includes: airline, flight number, departure time, arrival time, origin airport, destination airport, aircraft type, status, gate, terminal
- Information may include both scheduled and estimated times
- Some fields may be unavailable - handle these gracefully
2. WEATHER HANDLING:
- For single-day queries: Use temperature_c/temperature_f (current/primary temperature)
- For multi-day forecasts: Use temperature_max_c and temperature_min_c when available
- Always provide temperatures in both Celsius and Fahrenheit when available
- If temperature is null, say "temperature data unavailable" rather than making up numbers
- Use exact condition descriptions provided (e.g., "Clear sky", "Rainy", "Partly Cloudy")
- Add helpful context when appropriate (e.g., "perfect for outdoor activities" for clear skies)
3. FLIGHT HANDLING:
- Present flight information clearly with airline name and flight number
- Include departure and arrival times with time zones when provided
- Mention origin and destination airports with their codes
- Include gate and terminal information when available
- Note aircraft type if relevant to the query
- Highlight any status updates (delays, early arrivals, etc.)
- For multiple flights, list them in chronological order by departure time
- If specific details are missing, acknowledge this rather than inventing information
4. MULTI-PART QUERIES:
- Users may ask about both weather and flights in one message
- Answer ALL parts of the query that you have data for
- Organize your response logically - typically weather first, then flights, or vice versa based on the query
- Provide complete information for each topic without mentioning other agents
- If you receive data for only one topic but the user asked about multiple, answer what you can with the provided data
5. ERROR HANDLING:
- If weather forecast contains an "error" field, acknowledge the issue politely
- If temperature or condition is null/None, mention that specific data is unavailable
- If flight details are incomplete, state which information is unavailable
- Never invent or guess weather or flight data - only use what's provided
- If location couldn't be determined, acknowledge this but still provide available data
6. RESPONSE FORMAT:
For Weather:
- Single-day queries: Provide current conditions, temperature, and condition
- Multi-day forecasts: List each day with date, day name, high/low temps, and condition
- Include sunrise/sunset times when available and relevant
For Flights:
- List flights with clear numbering or bullet points
- Include key details: airline, flight number, departure/arrival times, airports
- Add gate, terminal, and status information when available
- For multiple flights, organize chronologically
General:
- Use natural, conversational language
- Be concise but complete
- Format dates and times clearly
- Use bullet points or numbered lists for clarity
7. LOCATION HANDLING:
- Always mention location names from the data
- For flights, clearly state origin and destination cities/airports
- If locations differ from what the user asked, acknowledge this politely
8. RESPONSE STYLE:
- Be friendly and professional
- Use natural language, not technical jargon
- Provide information in a logical, easy-to-read format
- When answering multi-part queries, create a cohesive response that addresses all aspects
Remember: Only use the data provided. Never fabricate weather or flight information. If data is missing, clearly state what's unavailable. Answer all parts of the user's query that you have data for."""
async def geocode_city(city: str) -> Optional[dict]:
"""Geocode a city name to latitude and longitude using Open-Meteo API."""
try:
url = f"https://geocoding-api.open-meteo.com/v1/search?name={quote(city)}&count=1&language=en&format=json"
response = await http_client.get(url)
if response.status_code != 200:
logger.warning(
f"Geocoding API returned status {response.status_code} for city: {city}"
)
return None
data = response.json()
if not data.get("results") or len(data["results"]) == 0:
logger.warning(f"No geocoding results found for city: {city}")
return None
result = data["results"][0]
return {
"latitude": result["latitude"],
"longitude": result["longitude"],
"name": result.get("name", city),
}
except Exception as e:
logger.error(f"Error geocoding city {city}: {e}")
return None
async def get_live_weather(
latitude: float, longitude: float, days: int = 1
) -> Optional[dict]:
"""Get live weather data from Open-Meteo API."""
try:
forecast_days = min(days, 16)
url = (
f"https://api.open-meteo.com/v1/forecast?"
f"latitude={latitude}&"
f"longitude={longitude}&"
f"current=temperature_2m&"
f"hourly=temperature_2m&"
f"daily=sunrise,sunset,temperature_2m_max,temperature_2m_min,weather_code&"
f"forecast_days={forecast_days}&"
f"timezone=auto"
)
response = await http_client.get(url)
if response.status_code != 200:
logger.warning(f"Weather API returned status {response.status_code}")
return None
return response.json()
except Exception as e:
logger.error(f"Error fetching weather data: {e}")
return None
def weather_code_to_condition(weather_code: int) -> str:
"""Convert WMO weather code to human-readable condition."""
# WMO Weather interpretation codes (WW)
if weather_code == 0:
return "Clear sky"
elif weather_code in [1, 2, 3]:
return "Partly Cloudy"
elif weather_code in [45, 48]:
return "Foggy"
elif weather_code in [51, 53, 55, 56, 57]:
return "Drizzle"
elif weather_code in [61, 63, 65, 66, 67]:
return "Rainy"
elif weather_code in [71, 73, 75, 77]:
return "Snowy"
elif weather_code in [80, 81, 82]:
return "Rainy"
elif weather_code in [85, 86]:
return "Snowy"
elif weather_code in [95, 96, 99]:
return "Stormy"
else:
return "Cloudy"
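The same WMO groupings can be written as a lookup table, which keeps each code range next to its label. An equivalent, illustrative alternative to the if/elif chain above (`WMO_CONDITIONS` and `condition_for` are assumed names, not part of the demo):

```python
# Table-driven form of the WMO weather-code mapping (same groupings as above).
WMO_CONDITIONS = {
    (0,): "Clear sky",
    (1, 2, 3): "Partly Cloudy",
    (45, 48): "Foggy",
    (51, 53, 55, 56, 57): "Drizzle",
    (61, 63, 65, 66, 67, 80, 81, 82): "Rainy",
    (71, 73, 75, 77, 85, 86): "Snowy",
    (95, 96, 99): "Stormy",
}


def condition_for(code: int) -> str:
    for codes, label in WMO_CONDITIONS.items():
        if code in codes:
            return label
    return "Cloudy"  # fallback, as in the original


assert condition_for(0) == "Clear sky"
assert condition_for(81) == "Rainy"
assert condition_for(42) == "Cloudy"
```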
async def get_weather_data(location: str, days: int = 1):
"""Get live weather data for a location using Open-Meteo API."""
geocode_result = await geocode_city(location)
if not geocode_result:
logger.warning(f"Could not geocode location: {location}, using fallback")
geocode_result = await geocode_city("New York")
if not geocode_result:
return {
"location": location,
"forecast": [
{
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"condition": "Unknown",
"error": "Could not retrieve weather data",
}
],
}
location_name = geocode_result["name"]
latitude = geocode_result["latitude"]
longitude = geocode_result["longitude"]
weather_data = await get_live_weather(latitude, longitude, days)
if not weather_data:
logger.warning("Could not fetch weather data for requested location")
return {
"location": location_name,
"forecast": [
{
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"condition": "Unknown",
"error": "Could not retrieve weather data",
}
],
}
current_temp = weather_data.get("current", {}).get("temperature_2m")
daily_data = weather_data.get("daily", {})
forecast = []
for i in range(min(days, len(daily_data.get("time", [])))):
date_str = daily_data["time"][i]
date_obj = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
temp_max = (
daily_data.get("temperature_2m_max", [None])[i]
if i < len(daily_data.get("temperature_2m_max", []))
else None
)
temp_min = (
daily_data.get("temperature_2m_min", [None])[i]
if i < len(daily_data.get("temperature_2m_min", []))
else None
)
weather_code = (
daily_data.get("weather_code", [0])[i]
if i < len(daily_data.get("weather_code", []))
else 0
)
sunrise = (
daily_data.get("sunrise", [None])[i]
if i < len(daily_data.get("sunrise", []))
else None
)
sunset = (
daily_data.get("sunset", [None])[i]
if i < len(daily_data.get("sunset", []))
else None
)
temp_c = (
temp_max if temp_max is not None else (current_temp if i == 0 else temp_min)
)
day_info = {
"date": date_str.split("T")[0],
"day_name": date_obj.strftime("%A"),
"temperature_c": round(temp_c, 1) if temp_c is not None else None,
"temperature_f": (
round(temp_c * 9 / 5 + 32, 1) if temp_c is not None else None
),
"temperature_max_c": round(temp_max, 1) if temp_max is not None else None,
"temperature_min_c": round(temp_min, 1) if temp_min is not None else None,
"condition": weather_code_to_condition(weather_code),
"sunrise": sunrise.split("T")[1] if sunrise else None,
"sunset": sunset.split("T")[1] if sunset else None,
}
forecast.append(day_info)
return {"location": location_name, "forecast": forecast}
LOCATION_EXTRACTION_PROMPT = """You are a location extraction assistant for WEATHER queries. Your ONLY job is to extract the geographic location (city, state, country, etc.) that the user is asking about for WEATHER information.
CRITICAL RULES:
1. Extract ONLY the location name associated with WEATHER questions - nothing else
2. Return just the location name in plain text (e.g., "London", "New York", "Paris, France")
3. **MULTI-PART QUERY HANDLING**: If the user mentions multiple locations in a multi-part query, extract ONLY the location mentioned in the WEATHER part
- Look for patterns like "weather in [location]", "forecast for [location]", "weather [location]"
- The location that appears WITH "weather" keywords is the weather location
- Example: "What's the weather in Seattle, and what is one flight that goes direct to Atlanta?" → Extract "Seattle" (appears with "weather in")
- Example: "What is the weather in Atlanta and what flight goes from Detroit to Atlanta?" → Extract "Atlanta" (appears with "weather in", even though Atlanta also appears in flight part)
- Example: "Weather in London and flights to Paris" → Extract "London" (weather location)
- Example: "What flight goes from Detroit to Atlanta and what's the weather in Atlanta?" → Extract "Atlanta" (appears with "weather in")
4. Look for patterns like "weather in [location]", "forecast for [location]", "weather [location]", "temperature in [location]"
5. Ignore error messages, HTML tags, and assistant responses
6. If no clear weather-related location is found, return exactly: "NOT_FOUND"
7. Clean the location name - remove words like "about", "for", "in", "the weather in", etc.
8. Return the location in a format suitable for geocoding (city name, or "City, State", or "City, Country")
Examples:
- "What's the weather in London?" → "London"
- "Tell me about the weather for New York" → "New York"
- "Weather forecast for Paris, France" → "Paris, France"
- "What's the weather in Seattle, and what is one flight that goes direct to Atlanta?" → "Seattle" (appears with "weather in")
- "What is the weather in Atlanta and what flight goes from Detroit to Atlanta?" → "Atlanta" (appears with "weather in")
- "Weather in Istanbul and flights to Seattle" → "Istanbul" (weather location)
- "What flight goes from Detroit to Atlanta and what's the weather in Atlanta?" → "Atlanta" (appears with "weather in")
- "I'm going to Seattle" → "Seattle" (if context suggests weather query)
- "What's happening?" → "NOT_FOUND"
Now extract the WEATHER location from this message:"""
async def extract_location_from_messages(messages):
"""Extract location from user messages using LLM, focusing on weather-related locations."""
user_messages = [msg for msg in messages if msg.role == "user"]
if not user_messages:
logger.warning("No user messages found, using default: New York")
return "New York"
# CRITICAL: Always preserve the FIRST user message (original query) for multi-agent scenarios
# When Plano processes multiple agents, it may add assistant responses that get filtered out,
# but we need to always use the original user query
original_user_message = user_messages[0].content.strip() if user_messages else None
# Try to find a valid recent user message first (for follow-up queries)
user_content = None
for msg in reversed(user_messages):
content = msg.content.strip()
content_lower = content.lower()
# Skip messages that are clearly JSON-encoded assistant responses or errors
# But be less aggressive - only skip if it's clearly not a user query
        if content.startswith("[{"):
# Likely JSON-encoded assistant response
continue
if any(
pattern in content_lower
for pattern in [
'"role": "assistant"',
'"role":"assistant"',
"error:",
]
):
continue
# Don't skip messages that just happen to contain these words naturally
user_content = content
break
# Fallback to original user message if no valid recent message found
if not user_content and original_user_message:
# Check if original message is valid (not JSON-encoded)
        if not original_user_message.startswith("[{"):
user_content = original_user_message
logger.info(f"Using original user message: {user_content[:200]}")
if not user_content:
logger.warning("No valid user message found, using default: New York")
return "New York"
try:
logger.info(
f"Extracting weather location from user message: {user_content[:200]}"
)
# Build context from conversation history
conversation_context = []
for msg in messages:
content = msg.content.strip()
content_lower = content.lower()
if any(
pattern in content_lower
for pattern in ["<", ">", "error:", "i apologize", "i'm having trouble"]
):
continue
conversation_context.append({"role": msg.role, "content": content})
# Use last 5 messages for context
context_messages = (
conversation_context[-5:]
if len(conversation_context) > 5
else conversation_context
)
llm_messages = [{"role": "system", "content": LOCATION_EXTRACTION_PROMPT}]
for msg in context_messages:
llm_messages.append({"role": msg["role"], "content": msg["content"]})
response = await archgw_client.chat.completions.create(
model=LOCATION_MODEL,
messages=llm_messages,
temperature=0.1,
max_tokens=50,
)
location = response.choices[0].message.content.strip()
location = location.strip("\"'`.,!?")
if not location or location.upper() == "NOT_FOUND":
# Fallback: Try regex extraction for weather patterns
weather_patterns = [
r"weather\s+(?:in|for)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",
r"forecast\s+(?:in|for)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",
r"weather\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",
]
for msg in reversed(context_messages):
if msg["role"] == "user":
content = msg["content"]
for pattern in weather_patterns:
match = re.search(pattern, content, re.IGNORECASE)
if match:
potential_location = match.group(1).strip()
logger.info(
f"Fallback regex extracted weather location: {potential_location}"
)
return potential_location
            logger.warning(
                "LLM could not extract location from message, using default: New York"
            )
return "New York"
logger.info(f"LLM extracted weather location: {location}")
return location
except Exception as e:
logger.error(f"Error extracting location with LLM: {e}, trying fallback regex")
# Fallback regex extraction
try:
for msg in reversed(messages):
if msg.role == "user":
content = msg.content
weather_patterns = [
r"weather\s+(?:in|for)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",
r"forecast\s+(?:in|for)\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)",
]
for pattern in weather_patterns:
match = re.search(pattern, content, re.IGNORECASE)
if match:
potential_location = match.group(1).strip()
logger.info(
f"Fallback regex extracted weather location: {potential_location}"
)
return potential_location
        except Exception:
            pass
logger.error("All extraction methods failed, using default: New York")
return "New York"
-# Initialize OpenAI client for archgw
-archgw_client = AsyncOpenAI(
+# Initialize OpenAI client for plano
+openai_client_via_plano = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT,
api_key="EMPTY",
)
@@ -485,60 +38,241 @@ archgw_client = AsyncOpenAI(
# FastAPI app for REST server
app = FastAPI(title="Weather Forecast Agent", version="1.0.0")
# HTTP client for API calls
http_client = httpx.AsyncClient(timeout=10.0)
async def prepare_weather_messages(request_body: ChatCompletionRequest):
"""Prepare messages with weather data."""
# Extract location from conversation using LLM
location = await extract_location_from_messages(request_body.messages)
# Determine if they want forecast (multi-day)
last_user_msg = ""
for msg in reversed(request_body.messages):
if msg.role == "user":
last_user_msg = msg.content.lower()
break
# Utility functions
def celsius_to_fahrenheit(temp_c: Optional[float]) -> Optional[float]:
"""Convert Celsius to Fahrenheit."""
return round(temp_c * 9 / 5 + 32, 1) if temp_c is not None else None
days = 5 if "forecast" in last_user_msg or "week" in last_user_msg else 1
# Get live weather data
weather_data = await get_weather_data(location, days)
def get_user_messages(messages: list) -> list:
"""Extract user messages from message list."""
return [msg for msg in messages if msg.get("role") == "user"]
# Create system message with weather data
weather_context = f"""
Current weather data for {weather_data['location']}:
{json.dumps(weather_data, indent=2)}
def get_last_user_content(messages: list) -> str:
"""Get the content of the most recent user message."""
for msg in reversed(messages):
if msg.get("role") == "user":
return msg.get("content", "").lower()
return ""
Use this data to answer the user's weather query.
"""
response_messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "assistant", "content": weather_context},
]
async def get_weather_data(request: Request, messages: list, days: int = 1):
"""Extract location from user's conversation and fetch weather data from Open-Meteo API.
# Add conversation history
for msg in request_body.messages:
response_messages.append({"role": msg.role, "content": msg.content})
This function does two things:
1. Uses an LLM to extract the location from the user's message
2. Fetches weather data for that location from Open-Meteo
return response_messages
Currently returns only current day weather. Want to add multi-day forecasts?
"""
instructions = """Extract the location for WEATHER queries. Return just the city name.
Rules:
1. For multi-part queries, extract ONLY the location mentioned with weather keywords ("weather in [location]")
2. If user says "there" or "that city", it typically refers to the DESTINATION city in travel contexts (not the origin)
3. For flight queries with weather, "there" means the destination city where they're traveling TO
4. Return plain text (e.g., "London", "New York", "Paris, France")
5. If no weather location found, return "NOT_FOUND"
Examples:
- "What's the weather in London?" → "London"
- "Flights from Seattle to Atlanta, and show me the weather there" → "Atlanta"
- "Can you get me flights from Seattle to Atlanta tomorrow, and also please show me the weather there" → "Atlanta"
- "What's the weather in Seattle, and what is one flight that goes direct to Atlanta?" → "Seattle"
- User asked about flights to Atlanta, then "what's the weather like there?" → "Atlanta"
- "I'm going to Seattle" → "Seattle"
- "What's happening?" → "NOT_FOUND"
Extract location:"""
try:
user_messages = [
msg.get("content") for msg in messages if msg.get("role") == "user"
]
if not user_messages:
location = "New York"
else:
ctx = extract(request.headers)
extra_headers = {}
inject(extra_headers, context=ctx)
# For location extraction, pass full conversation for context (e.g., "there" = previous destination)
response = await openai_client_via_plano.chat.completions.create(
model=LOCATION_MODEL,
messages=[
{"role": "system", "content": instructions},
*[
{"role": msg.get("role"), "content": msg.get("content")}
for msg in messages
],
],
temperature=0.1,
max_tokens=50,
extra_headers=extra_headers if extra_headers else None,
)
location = response.choices[0].message.content.strip().strip("\"'`.,!?")
logger.info(f"Location extraction result: '{location}'")
if not location or location.upper() == "NOT_FOUND":
location = "New York"
logger.info(f"Location not found, defaulting to: {location}")
except Exception as e:
logger.error(f"Error extracting location: {e}")
location = "New York"
logger.info(f"Fetching weather for location: '{location}' (days: {days})")
# Step 2: Fetch weather data for the extracted location
try:
# Geocode city to get coordinates
geocode_url = f"https://geocoding-api.open-meteo.com/v1/search?name={quote(location)}&count=1&language=en&format=json"
geocode_response = await http_client.get(geocode_url)
if geocode_response.status_code != 200 or not geocode_response.json().get(
"results"
):
logger.warning(f"Could not geocode {location}, using New York")
location = "New York"
geocode_url = f"https://geocoding-api.open-meteo.com/v1/search?name={quote(location)}&count=1&language=en&format=json"
geocode_response = await http_client.get(geocode_url)
geocode_data = geocode_response.json()
if not geocode_data.get("results"):
return {
"location": location,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
result = geocode_data["results"][0]
location_name = result.get("name", location)
latitude = result["latitude"]
longitude = result["longitude"]
logger.info(
f"Geocoded '{location}' to {location_name} ({latitude}, {longitude})"
)
# Get weather forecast
weather_url = (
f"https://api.open-meteo.com/v1/forecast?"
f"latitude={latitude}&longitude={longitude}&"
f"current=temperature_2m&"
f"daily=sunrise,sunset,temperature_2m_max,temperature_2m_min,weather_code&"
f"forecast_days={days}&timezone=auto"
)
weather_response = await http_client.get(weather_url)
if weather_response.status_code != 200:
return {
"location": location_name,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
weather_data = weather_response.json()
current_temp = weather_data.get("current", {}).get("temperature_2m")
daily = weather_data.get("daily", {})
# Build forecast for requested number of days
forecast = []
for i in range(days):
date_str = daily["time"][i]
date_obj = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
temp_max = (
daily.get("temperature_2m_max", [])[i]
if daily.get("temperature_2m_max")
else None
)
temp_min = (
daily.get("temperature_2m_min", [])[i]
if daily.get("temperature_2m_min")
else None
)
weather_code = (
daily.get("weather_code", [0])[i] if daily.get("weather_code") else 0
)
sunrise = daily.get("sunrise", [])[i] if daily.get("sunrise") else None
sunset = daily.get("sunset", [])[i] if daily.get("sunset") else None
# Use current temp for today, otherwise use max temp
temp_c = (
temp_max
if temp_max is not None
# compare against None so a temperature of exactly 0.0 is not treated as missing
else (current_temp if i == 0 and current_temp is not None else temp_min)
)
forecast.append(
{
"date": date_str.split("T")[0],
"day_name": date_obj.strftime("%A"),
"temperature_c": round(temp_c, 1) if temp_c is not None else None,
"temperature_f": celsius_to_fahrenheit(temp_c),
"temperature_max_c": round(temp_max, 1)
if temp_max is not None
else None,
"temperature_min_c": round(temp_min, 1)
if temp_min is not None
else None,
"weather_code": weather_code,
"sunrise": sunrise.split("T")[1] if sunrise else None,
"sunset": sunset.split("T")[1] if sunset else None,
}
)
return {"location": location_name, "forecast": forecast}
except Exception as e:
logger.error(f"Error getting weather data: {e}")
return {
"location": location,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
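The function above converts temperatures with a `celsius_to_fahrenheit` helper defined earlier in this file (not shown in this hunk) and returns raw WMO `weather_code` values. A minimal sketch of both helpers, assuming the standard WMO code ranges that the weather agent's prompt spells out later; `describe_weather_code` is an illustrative name, not part of the demo:

```python
def celsius_to_fahrenheit(temp_c):
    """Convert Celsius to Fahrenheit, passing None through (fields may be null)."""
    if temp_c is None:
        return None
    return round(temp_c * 9 / 5 + 32, 1)


def describe_weather_code(code):
    """Map a WMO weather code to a rough text description (hypothetical helper)."""
    if code is None:
        return "unknown"
    if code == 0:
        return "clear"
    if 1 <= code <= 3:
        return "partly cloudy"
    if 45 <= code <= 48:
        return "fog"
    if 51 <= code <= 67:
        return "rain"
    if 71 <= code <= 86:
        return "snow"
    if 95 <= code <= 99:
        return "thunderstorm"
    return "unknown"
```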
@app.post("/v1/chat/completions")
async def handle_request(request: Request):
"""HTTP endpoint for chat completions with streaming support."""
request_body = await request.json()
messages = request_body.get("messages", [])
logger.info(
"messages detail json dumps: %s",
json.dumps(messages, indent=2),
)
traceparent_header = request.headers.get("traceparent")
if traceparent_header:
logger.info(f"Received traceparent header: {traceparent_header}")
return StreamingResponse(
invoke_weather_agent(request, request_body, traceparent_header),
media_type="text/plain",
headers={
"content-type": "text/event-stream",
@ -546,85 +280,100 @@ async def chat_completion_http(request: Request, request_body: ChatCompletionReq
)
async def invoke_weather_agent(
request: Request, request_body: dict, traceparent_header: str = None
):
"""Generate streaming chat completions."""
messages = request_body.get("messages", [])
# Detect if user wants multi-day forecast
last_user_msg = get_last_user_content(messages)
days = 1
if "forecast" in last_user_msg or "week" in last_user_msg:
days = 7
elif "tomorrow" in last_user_msg:
days = 2
# Extract specific number of days if mentioned (e.g., "5 day forecast")
import re
day_match = re.search(r"(\d{1,2})\s+day", last_user_msg)
if day_match:
requested_days = int(day_match.group(1))
days = min(requested_days, 16) # API supports max 16 days
# Get live weather data (location extraction happens inside this function)
weather_data = await get_weather_data(request, messages, days)
# Create weather context to append to user message
forecast_type = "forecast" if days > 1 else "current weather"
weather_context = f"""
Weather data for {weather_data['location']} ({forecast_type}):
{json.dumps(weather_data, indent=2)}"""
# System prompt for weather agent
instructions = """You are a weather assistant in a multi-agent system. You will receive weather data in JSON format with these fields:
- "location": City name
- "forecast": Array of weather objects, each with date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, weather_code, sunrise, sunset
- weather_code: WMO code (0=clear, 1-3=partly cloudy, 45-48=fog, 51-67=rain, 71-86=snow, 95-99=thunderstorm)
Your task:
1. Present the weather/forecast clearly for the location
2. For single day: show current conditions
3. For multi-day: show each day with date and conditions
4. Include temperature in both Celsius and Fahrenheit
5. Describe conditions naturally based on weather_code
6. Use conversational language
Important: If the conversation includes information from other agents (like flight details), acknowledge and build upon that context naturally. Your primary focus is weather, but maintain awareness of the full conversation.
Remember: Only use the provided data. If fields are null, mention data is unavailable."""
# Build message history with weather data appended to the last user message
response_messages = [{"role": "system", "content": instructions}]
for i, msg in enumerate(messages):
# Append weather data to the last user message
if i == len(messages) - 1 and msg.get("role") == "user":
response_messages.append(
{"role": "user", "content": msg.get("content") + weather_context}
)
else:
response_messages.append(
{"role": msg.get("role"), "content": msg.get("content")}
)
try:
logger.info(
f"Calling archgw at {LLM_GATEWAY_ENDPOINT} to generate weather response"
)
ctx = extract(request.headers)
extra_headers = {"x-envoy-max-retries": "3"}
if traceparent_header:
extra_headers["traceparent"] = traceparent_header
inject(extra_headers, context=ctx)
stream = await openai_client_via_plano.chat.completions.create(
model=WEATHER_MODEL,
messages=response_messages,
temperature=request_body.get("temperature", 0.7),
max_tokens=request_body.get("max_tokens", 1000),
stream=True,
extra_headers=extra_headers,
)
async for chunk in stream:
if chunk.choices:
yield f"data: {chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
except Exception as e:
logger.error(f"Error generating weather response: {e}")
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", WEATHER_MODEL),
"choices": [
{
"index": 0,
"delta": {
@ -633,9 +382,8 @@ async def stream_chat_completions(
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
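The keyword-and-regex day-detection logic earlier in `invoke_weather_agent` can be factored into a small, testable helper. A sketch under the same assumptions (16-day API cap, default of 1 day); `detect_forecast_days` is a hypothetical name, not a function in the demo:

```python
import re


def detect_forecast_days(last_user_msg: str, max_days: int = 16) -> int:
    """Infer how many forecast days the user wants from their last message."""
    msg = last_user_msg.lower()
    days = 1
    if "forecast" in msg or "week" in msg:
        days = 7
    elif "tomorrow" in msg:
        days = 2
    # An explicit "N day" request overrides the keyword defaults, capped at max_days
    match = re.search(r"(\d{1,2})\s+day", msg)
    if match:
        days = min(int(match.group(1)), max_days)
    return days
```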
@ -672,3 +420,7 @@ def start_server(host: str = "localhost", port: int = 10510):
},
},
)
if __name__ == "__main__":
start_server(host="0.0.0.0", port=10510)
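The endpoint above streams OpenAI-style chunks as server-sent events (`data: {...}` lines terminated by `data: [DONE]`). A client-side sketch for parsing such a stream once buffered into a string; `parse_sse_chunks` is illustrative and not part of the demo:

```python
import json


def parse_sse_chunks(payload: str) -> list:
    """Parse 'data: {...}' SSE lines into dicts, stopping at the [DONE] sentinel."""
    chunks = []
    for line in payload.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunks.append(json.loads(data))
    return chunks
```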


@ -1,45 +0,0 @@
#!/bin/bash
set -e
WAIT_FOR_PIDS=()
log() {
timestamp=$(python3 -c 'from datetime import datetime; print(datetime.now().strftime("%Y-%m-%d %H:%M:%S,%f")[:23])')
message="$*"
echo "$timestamp - $message"
}
cleanup() {
log "Caught signal, terminating all agent processes ..."
for PID in "${WAIT_FOR_PIDS[@]}"; do
if kill $PID 2> /dev/null; then
log "killed process: $PID"
fi
done
exit 1
}
trap cleanup EXIT
log "Starting weather agent on port 10510..."
uv run python -m travel_agents --host 0.0.0.0 --port 10510 --agent weather &
WAIT_FOR_PIDS+=($!)
log "Starting flight agent on port 10520..."
uv run python -m travel_agents --host 0.0.0.0 --port 10520 --agent flight &
WAIT_FOR_PIDS+=($!)
log "Starting currency agent on port 10530..."
uv run python -m travel_agents --host 0.0.0.0 --port 10530 --agent currency &
WAIT_FOR_PIDS+=($!)
log "All agents started successfully!"
log " - Weather Agent: http://localhost:10510"
log " - Flight Agent: http://localhost:10520"
log " - Currency Agent: http://localhost:10530"
log ""
log "Waiting for agents to run..."
for PID in "${WAIT_FOR_PIDS[@]}"; do
wait "$PID"
done


@ -7,18 +7,6 @@ Content-Type: application/json
{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a professional travel planner assistant. Your role is to provide accurate, clear, and helpful information about weather and flights based on the structured data provided to you.\n\nCRITICAL INSTRUCTIONS:\n\n1. DATA STRUCTURE:\n \n WEATHER DATA:\n - You will receive weather data as JSON in a system message\n - The data contains a \"location\" field (string) and a \"forecast\" array\n - Each forecast entry has: date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, condition, sunrise, sunset\n - Some fields may be null/None - handle these gracefully\n \n FLIGHT DATA:\n - You will receive flight information in a system message\n - Flight data includes: airline, flight number, departure time, arrival time, origin airport, destination airport, aircraft type, status, gate, terminal\n - Information may include both scheduled and estimated times\n - Some fields may be unavailable - handle these gracefully\n\n2. WEATHER HANDLING:\n - For single-day queries: Use temperature_c/temperature_f (current/primary temperature)\n - For multi-day forecasts: Use temperature_max_c and temperature_min_c when available\n - Always provide temperatures in both Celsius and Fahrenheit when available\n - If temperature is null, say \"temperature data unavailable\" rather than making up numbers\n - Use exact condition descriptions provided (e.g., \"Clear sky\", \"Rainy\", \"Partly Cloudy\")\n - Add helpful context when appropriate (e.g., \"perfect for outdoor activities\" for clear skies)\n\n3. 
FLIGHT HANDLING:\n - Present flight information clearly with airline name and flight number\n - Include departure and arrival times with time zones when provided\n - Mention origin and destination airports with their codes\n - Include gate and terminal information when available\n - Note aircraft type if relevant to the query\n - Highlight any status updates (delays, early arrivals, etc.)\n - For multiple flights, list them in chronological order by departure time\n - If specific details are missing, acknowledge this rather than inventing information\n\n4. MULTI-PART QUERIES:\n - Users may ask about both weather and flights in one message\n - Answer ALL parts of the query that you have data for\n - Organize your response logically - typically weather first, then flights, or vice versa based on the query\n - Provide complete information for each topic without mentioning other agents\n - If you receive data for only one topic but the user asked about multiple, answer what you can with the provided data\n\n5. ERROR HANDLING:\n - If weather forecast contains an \"error\" field, acknowledge the issue politely\n - If temperature or condition is null/None, mention that specific data is unavailable\n - If flight details are incomplete, state which information is unavailable\n - Never invent or guess weather or flight data - only use what's provided\n - If location couldn't be determined, acknowledge this but still provide available data\n\n6. 
RESPONSE FORMAT:\n \n For Weather:\n - Single-day queries: Provide current conditions, temperature, and condition\n - Multi-day forecasts: List each day with date, day name, high/low temps, and condition\n - Include sunrise/sunset times when available and relevant\n \n For Flights:\n - List flights with clear numbering or bullet points\n - Include key details: airline, flight number, departure/arrival times, airports\n - Add gate, terminal, and status information when available\n - For multiple flights, organize chronologically\n \n General:\n - Use natural, conversational language\n - Be concise but complete\n - Format dates and times clearly\n - Use bullet points or numbered lists for clarity\n\n7. LOCATION HANDLING:\n - Always mention location names from the data\n - For flights, clearly state origin and destination cities/airports\n - If locations differ from what the user asked, acknowledge this politely\n\n8. RESPONSE STYLE:\n - Be friendly and professional\n - Use natural language, not technical jargon\n - Provide information in a logical, easy-to-read format\n - When answering multi-part queries, create a cohesive response that addresses all aspects\n\nRemember: Only use the data provided. Never fabricate weather or flight information. If data is missing, clearly state what's unavailable. Answer all parts of the user's query that you have data for."
},
{
"role": "system",
"content": "Current weather data for Seattle:\n\n{\n \"location\": \"Seattle\",\n \"forecast\": [\n {\n \"date\": \"2025-12-22\",\n \"day_name\": \"Monday\",\n \"temperature_c\": 8.3,\n \"temperature_f\": 46.9,\n \"temperature_max_c\": 8.3,\n \"temperature_min_c\": 2.8,\n \"condition\": \"Rainy\",\n \"sunrise\": \"07:55\",\n \"sunset\": \"16:20\"\n }\n ]\n}\n\nUse this data to answer the user's weather query."
},
{
"role": "system",
"content": "Here are some direct flights from Seattle to Atlanta on December 23, 2025:\n\n1. **Delta Airlines Flight DL552**\n - **Departure:** Scheduled at 3:47 PM (Seattle Time), from Seattle-Tacoma Intl (SEA)\n - **Arrival:** Scheduled at 8:31 PM (Atlanta Time), at Hartsfield-Jackson Intl (ATL)\n - **Aircraft:** Boeing 737-900 (B739)\n - **Status:** Scheduled\n - **Terminal at Atlanta:** S\n - **Estimated arrival slightly early**: 8:26 PM\n\n2. **Delta Airlines Flight DL542**\n - **Departure:** Scheduled at 12:00 PM (Seattle Time), Gate A4, from Seattle-Tacoma Intl (SEA)\n - **Arrival:** Scheduled at 4:49 PM (Atlanta Time), at Hartsfield-Jackson Intl (ATL)\n - **Aircraft:** Boeing 737-900 (B739)\n - **Status:** Scheduled\n - **Gate at Atlanta:** E10, Terminal: S\n - **Estimated early arrival**: 4:44 PM\n\n3. **Delta Airlines Flight DL554**\n - **Departure:** Scheduled at 10:15 AM (Seattle Time), Gate A10, from Seattle-Tacoma Intl (SEA)\n - **Arrival:** Scheduled at 4:05 PM (Atlanta Time), at Hartsfield-Jackson Intl (ATL)\n - **Aircraft:** Boeing 737-900 (B739)\n - **Status:** Scheduled\n - **Gate at Atlanta:** B19, Terminal: S\n - **Estimated late arrival**: 4:06 PM\n\n4. **Alaska Airlines Flight AS334**\n - **Departure:** Scheduled at 9:16 AM (Seattle Time), Gate C20, from Seattle-Tacoma Intl (SEA)\n - **Arrival:** Scheduled at 5:08 PM (Atlanta Time), at Hartsfield-Jackson Intl (ATL)\n - **Aircraft:** Boeing 737-900 (B739)\n - **Status:** Scheduled\n - **Gate at Atlanta:** C5, Terminal: N\n\nThese are just a few of the direct flights available. Please let me know if you need more details on any other specific flight."
},
{
"role": "user",
"content": "What's the weather in Seattle?"


@ -1 +1 @@
docs.archgw.com
docs.planoai.dev


@ -0,0 +1,93 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterable
from typing import TYPE_CHECKING
if TYPE_CHECKING:
# Only for type-checkers; Sphinx is only required in the docs build environment.
from sphinx.application import Sphinx # type: ignore[import-not-found]
@dataclass(frozen=True)
class LlmsTxtDoc:
docname: str
title: str
text: str
def _iter_docs(app: Sphinx) -> Iterable[LlmsTxtDoc]:
env = app.env
# Sphinx internal pages that shouldn't be included.
excluded = {"genindex", "search"}
for docname in sorted(d for d in env.found_docs if d not in excluded):
title_node = env.titles.get(docname)
title = title_node.astext().strip() if title_node else docname
doctree = env.get_doctree(docname)
text = doctree.astext().strip()
yield LlmsTxtDoc(docname=docname, title=title, text=text)
def _render_llms_txt(app: Sphinx) -> str:
now = datetime.now(timezone.utc).isoformat()
project = str(getattr(app.config, "project", "")).strip()
release = str(getattr(app.config, "release", "")).strip()
header = f"{project} {release}".strip() or "Documentation"
docs = list(_iter_docs(app))
lines: list[str] = []
lines.append(header)
lines.append("llms.txt (auto-generated)")
lines.append(f"Generated (UTC): {now}")
lines.append("")
lines.append("Table of contents")
for d in docs:
lines.append(f"- {d.title} ({d.docname})")
lines.append("")
for d in docs:
lines.append(d.title)
lines.append("-" * max(3, len(d.title)))
lines.append(f"Doc: {d.docname}")
lines.append("")
if d.text:
lines.append(d.text)
else:
lines.append("(empty)")
lines.append("")
lines.append("---")
lines.append("")
return "\n".join(lines).replace("\r\n", "\n").strip() + "\n"
def _on_build_finished(app: Sphinx, exception: Exception | None) -> None:
if exception is not None:
return
# Only generate for HTML-like builders where app.outdir is a website root.
if getattr(app.builder, "format", None) != "html":
return
# Per repo convention, place generated artifacts under an `includes/` folder.
out_path = Path(app.outdir) / "includes" / "llms.txt"
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(_render_llms_txt(app), encoding="utf-8")
def setup(app: Sphinx) -> dict[str, object]:
app.connect("build-finished", _on_build_finished)
return {
"version": "0.1.0",
"parallel_read_safe": True,
"parallel_write_safe": True,
}
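To wire this extension into a docs build, its module name goes into `extensions` in `conf.py`. A sketch, assuming the module lives in an `_ext/` folder and is importable as `llms_txt` (the actual module name and path in this repo may differ):

```python
# docs/source/conf.py (sketch; module name/path are assumptions)
import os
import sys

# Make the folder containing the extension module importable
sys.path.insert(0, os.path.abspath("_ext"))

extensions = [
    "llms_txt",  # hypothetical module name for the llms.txt generator
]
```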


@ -0,0 +1,6 @@
/* Prevent sphinxawesome-theme's Tailwind utility `dark:invert` from inverting the header logo. */
.dark header img[alt="Logo"],
.dark #left-sidebar img[alt="Logo"] {
--tw-invert: invert(0%) !important;
filter: none !important;
}


@ -1,70 +0,0 @@
.. _arch_agent_guide:
Agentic Apps
=============
Arch helps you build personalized agentic applications by calling application-specific (API) functions via user prompts.
This involves any predefined functions or APIs you want to expose to users to perform tasks, gather information,
or manipulate data. This capability is generally referred to as :ref:`function calling <function_calling>`, where
you can support “agentic” apps tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.
Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation with the user to
gather any missing parameters and makes API calls so that you can focus on writing business logic. Arch does this via its
purpose-built `Arch-Function <https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68>`_ -
the fastest (200ms p50 - 12x faster than GPT-4o) and cheapest (44x cheaper than GPT-4o) function calling LLM that matches or outperforms
frontier LLMs.
.. image:: includes/agent/function-calling-flow.jpg
:width: 100%
:align: center
Single Function Call
--------------------
In the most common scenario, users will request a single action via prompts, and Arch efficiently processes the
request by extracting relevant parameters, validating the input, and calling the designated function or API. Here
is how you would go about enabling this scenario with Arch:
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 19-49
:caption: Prompt Target Example Configuration
Step 2: Process Request Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handling those parameters is straightforward:
.. literalinclude:: includes/agent/parameter_handling.py
:language: python
:linenos:
:caption: Parameter handling with Flask
Parallel & Multiple Function Calling
------------------------------------
In more complex use cases, users may request multiple actions or need multiple APIs/functions to be called
simultaneously or sequentially. With Arch, you can handle these scenarios efficiently using parallel or multiple
function calling. This allows your application to engage in a broader range of interactions, such as updating
different datasets, triggering events across systems, or collecting results from multiple services in one prompt.
Arch-FC1B is built to manage these parallel tasks efficiently, ensuring low latency and high throughput, even
when multiple functions are invoked. It provides two mechanisms to handle these cases:
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When enabling multiple function calling, define the prompt targets in a way that supports multiple functions or
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
the user's intent.
Example of Multiple Prompt Targets in YAML:
.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 19-49
:caption: Prompt Target Example Configuration


@ -1,90 +0,0 @@
.. _arch_multi_turn_guide:
Multi-Turn
==========
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to
re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone and adds latency and token cost for
common scenarios that can be managed more efficiently.
Arch is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate agents in minutes.
Below are some conversational examples that you can build via Arch. Each example is enriched with annotations (via **[Arch]**) that illustrate how Arch
processes conversational messages on your behalf.
.. Note::
The following section assumes that you have some knowledge about the core concepts of Arch, such as :ref:`prompt_targets <arch_overview_prompt_handling>`.
If you haven't familiarized yourself with Arch's concepts, we recommend you first read the :ref:`tech overview <tech_overview>` section.
Additionally, the conversation examples below assume the usage of the following :ref:`arch_config.yaml <multi_turn_subsection_prompt_target>` file.
Example 1: Adjusting Retrieval
------------------------------
.. code-block:: text
User: What are the benefits of renewable energy?
**[Arch]**: Check if there is an available <prompt_target> that can handle this user query.
**[Arch]**: Found "get_info_for_energy_source" prompt_target in arch_config.yaml. Forward prompt to the endpoint configured in "get_info_for_energy_source"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.
User: Include cost considerations in the response.
**[Arch]**: Follow-up detected. Forward prompt history to the "get_info_for_energy_source" prompt_target and post the following parameters consideration="cost"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind. While the initial setup costs can be high, long-term savings from reduced fuel expenses and government incentives make it cost-effective.
Example 2: Switching Intent
---------------------------
.. code-block:: text
User: What are the symptoms of diabetes?
**[Arch]**: Check if there is an available <prompt_target> that can handle this user query.
**[Arch]**: Found "diseases_symptoms" prompt_target in arch_config.yaml. Forward disease=diabetes to "diseases_symptoms" prompt target
...
Assistant: Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision.
User: How is it diagnosed?
**[Arch]**: New intent detected.
**[Arch]**: Found "disease_diagnoses" prompt_target in arch_config.yaml. Forward disease=diabetes to "disease_diagnoses" prompt target
...
Assistant: Diabetes is diagnosed through blood tests like fasting blood sugar, A1C, or an oral glucose tolerance test.
Build Multi-Turn RAG Apps
--------------------------
The following section describes how you can easily add support for multi-turn scenarios via Arch. You process and manage multi-turn prompts
just like you manage single-turn ones. Arch handles the complexity of detecting the correct intent based on the last user prompt and
the conversational history, extracts relevant parameters needed by downstream APIs, and dispatches calls to any upstream LLMs to summarize the
response from your APIs.
.. _multi_turn_subsection_prompt_target:
Step 1: Define Arch Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/multi_turn/prompt_targets_multi_turn.yaml
:language: yaml
:caption: Arch Config
:linenos:
Step 2: Process Request in Flask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handle parameters across multiple turns just as you would for a single-turn request:
.. literalinclude:: includes/multi_turn/multi_turn_rag.py
:language: python
:caption: Parameter handling with Flask
:linenos:
Demo App
~~~~~~~~
For your convenience, we've built a `demo app <https://github.com/katanemo/archgw/tree/main/demos/samples_python/multi_turn_rag_agent>`_
that you can test and modify locally for multi-turn RAG scenarios.
.. figure:: includes/multi_turn/mutli-turn-example.png
:width: 100%
:align: center
Example multi-turn user conversation showing adjusting retrieval


@ -1,52 +0,0 @@
.. _arch_rag_guide:
RAG Apps
========
The following section describes how Arch can help you build faster, smarter and more accurate
Retrieval-Augmented Generation (RAG) applications, including fast and accurate RAG in multi-turn
conversational scenarios.
What is Retrieval-Augmented Generation (RAG)?
---------------------------------------------
RAG applications combine retrieval-based methods with generative AI models to provide more accurate,
contextually relevant, and reliable outputs. These applications leverage external data sources to augment
the capabilities of Large Language Models (LLMs), enabling them to retrieve and integrate specific information
rather than relying solely on the LLM's internal knowledge.
Parameter Extraction for RAG
----------------------------
To build RAG (Retrieval Augmented Generation) applications, you can configure prompt targets with parameters,
enabling Arch to retrieve critical information in a structured way for processing. This approach improves the
retrieval quality and speed of your application. By extracting parameters from the conversation, you can pull
the appropriate chunks from a vector database or SQL-like data store to enhance accuracy. With Arch, you can
streamline data retrieval and processing to build more efficient and precise RAG applications.
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/rag/prompt_targets.yaml
:language: yaml
:caption: Prompt Targets
:linenos:
Step 2: Process Request Parameters in Flask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handling those parameters is straightforward:
.. literalinclude:: includes/rag/parameter_handling.py
:language: python
:caption: Parameter handling with Flask
:linenos:
Multi-Turn RAG (Follow-up Questions)
-------------------------------------
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to
re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone, and adds significant latency to the
user experience.
Arch is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate RAG apps in
minutes. For additional details on how to build multi-turn RAG applications please refer to our :ref:`multi-turn <arch_multi_turn_guide>` docs.


@ -0,0 +1,76 @@
.. _agents:
Agents
======
Agents are autonomous systems that handle wide-ranging, open-ended tasks by calling models in a loop until the work is complete. Unlike deterministic :ref:`prompt targets <prompt_target>`, agents have access to tools, reason about which actions to take, and adapt their behavior based on intermediate results—making them ideal for complex workflows that require multi-step reasoning, external API calls, and dynamic decision-making.
Plano helps developers build and scale multi-agent systems by managing the orchestration layer—deciding which agent(s) or LLM(s) should handle each request, and in what sequence—while developers focus on implementing agent logic in any language or framework they choose.
Agent Orchestration
-------------------
**Plano-Orchestrator** is a family of state-of-the-art routing and orchestration models that decide which agent(s) should handle each request, and in what sequence. Built for real-world multi-agent deployments, it analyzes user intent and conversation context to make precise routing and orchestration decisions while remaining efficient enough for low-latency production use across general chat, coding, and long-context multi-turn conversations.
This allows development teams to:
* **Scale multi-agent systems**: Route requests across multiple specialized agents without hardcoding routing logic in application code.
* **Improve performance**: Direct requests to the most appropriate agent based on intent, reducing unnecessary handoffs and improving response quality.
* **Enhance debuggability**: Centralized routing decisions are observable through Plano's tracing and logging, making it easier to understand why a particular agent was selected.
Inner Loop vs. Outer Loop
--------------------------
Plano distinguishes between the **inner loop** (agent implementation logic) and the **outer loop** (orchestration and routing):
Inner Loop (Agent Logic)
^^^^^^^^^^^^^^^^^^^^^^^^^
The inner loop is where your agent lives—the business logic that decides which tools to call, how to interpret results, and when the task is complete. You implement this in any language or framework:
* **Python agents**: Using frameworks like LangChain, LlamaIndex, CrewAI, or custom Python code.
* **JavaScript/TypeScript agents**: Using frameworks like LangChain.js or custom Node.js implementations.
* **Any other AI framework**: Agents are just HTTP services that Plano can route to.
Your agent controls:
* Which tools or APIs to call in response to a prompt.
* How to interpret tool results and decide next steps.
* When to call the LLM for reasoning or summarization.
* When the task is complete and what response to return.
.. note::
**Making LLM Calls from Agents**
When your agent needs to call an LLM for reasoning, summarization, or completion, you should route those calls through Plano's Model Proxy rather than calling LLM providers directly. This gives you:
* **Consistent responses**: Normalized response formats across all :ref:`LLM providers <llm_providers>`, whether you're using OpenAI, Anthropic, Azure OpenAI, or any OpenAI-compatible provider.
* **Rich agentic signals**: Automatic capture of function calls, tool usage, reasoning steps, and model behavior—surfaced through traces and metrics without instrumenting your agent code.
* **Smart model routing**: Leverage :ref:`model-based, alias-based, or preference-aligned routing <llm_providers>` to dynamically select the best model for each task based on cost, performance, or custom policies.
By routing LLM calls through the Model Proxy, your agents remain decoupled from specific providers and can benefit from centralized policy enforcement, observability, and intelligent routing—all managed in the outer loop. For a step-by-step guide, see :ref:`llm_router` in the LLM Router guide.
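As an illustrative sketch (the endpoint path reflects the default egress listener mentioned elsewhere in these docs; the model name is a placeholder), an agent's LLM call through the Model Proxy is just an OpenAI-compatible HTTP request to Plano's local endpoint:

.. code-block:: python

    import json
    import urllib.request

    # Plano's default egress (Model Proxy) endpoint -- adjust if you bind a different port
    PLANO_MODEL_PROXY = "http://127.0.0.1:12000/v1/chat/completions"

    def build_llm_request(model, messages):
        """Build an OpenAI-compatible chat request aimed at the Model Proxy."""
        payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
        return urllib.request.Request(
            PLANO_MODEL_PROXY,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    req = build_llm_request(
        "gpt-4o-mini",  # placeholder; use any model configured in Plano
        [{"role": "user", "content": "Summarize the tool results above."}],
    )
    # urllib.request.urlopen(req) would send the call through Plano, which applies
    # routing, retries, and tracing before the request reaches the upstream provider.

Because the agent only ever targets Plano's local endpoint, swapping providers or routing policies requires no change to agent code.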
Outer Loop (Orchestration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The outer loop is Plano's orchestration layer—it manages the lifecycle of requests across agents and LLMs:
* **Intent analysis**: Plano-Orchestrator analyzes incoming prompts to determine user intent and conversation context.
* **Routing decisions**: Routes requests to the appropriate agent(s) or LLM(s) based on capabilities, context, and availability.
* **Sequencing**: Determines whether multiple agents need to collaborate and in what order.
* **Lifecycle management**: Handles retries, failover, circuit breaking, and load balancing across agent instances.
By managing the outer loop, Plano allows you to:
* Add new agents without changing routing logic in existing agents.
* Run multiple versions or variants of agents for A/B testing or canary deployments.
* Apply consistent :ref:`filter chains <filter_chain>` (guardrails, context enrichment) before requests reach agents.
* Monitor and debug multi-agent workflows through centralized observability.
Key Benefits
------------
* **Language and framework agnostic**: Write agents in any language; Plano orchestrates them via HTTP.
* **Reduced complexity**: Agents focus on task logic; Plano handles routing, retries, and cross-cutting concerns.
* **Better observability**: Centralized tracing shows which agents were called, in what sequence, and why.
* **Easier scaling**: Add more agent instances or new agent types without refactoring existing code.


@ -0,0 +1,74 @@
.. _filter_chain:
Filter Chains
==============
Filter chains are Plano's way of capturing **reusable workflow steps** in the dataplane, without duplicating logic or coupling it into application code. A filter chain is an ordered list of **mutations** that a request flows through before reaching its final destination, such as an agent, an LLM, or a tool backend. Each filter is a network-addressable service/path that can:
1. Inspect the incoming prompt, metadata, and conversation state.
2. Mutate or enrich the request (for example, rewrite queries or build context).
3. Short-circuit the flow and return a response early (for example, block a request on a compliance failure).
4. Emit structured logs and traces so you can debug and continuously improve your agents.
In other words, filter chains provide a lightweight programming model over HTTP for building reusable steps
in your agent architectures.
Typical Use Cases
-----------------
Without a dataplane programming model, teams tend to spread logic like query rewriting, compliance checks,
context building, and routing decisions across many agents and frameworks. This quickly becomes hard to reason
about and even harder to evolve.
Filter chains show up most often in patterns like:
* **Guardrails and Compliance**: Enforcing content policies, stripping or masking sensitive data, and blocking obviously unsafe or off-topic requests before they reach an agent.
* **Query rewriting, RAG, and Memory**: Rewriting user queries for retrieval, normalizing entities, and assembling RAG context envelopes while pulling in relevant memory (for example, conversation history, user profiles, or prior tool results) before calling a model or tool.
* **Cross-cutting Observability**: Injecting correlation IDs, sampling traces, or logging enriched request metadata at consistent points in the request path.
Because these behaviors live in the dataplane rather than inside individual agents, you define them once, attach them to many agents and prompt targets, and can add, remove, or reorder them without changing application code.
Configuration example
---------------------
The example below shows a configuration where an agent uses a filter chain with two filters: a query rewriter and a context builder that prepares retrieval context before the agent runs.
.. literalinclude:: ../../source/resources/includes/plano_config_agents_filters.yaml
:language: yaml
:linenos:
:emphasize-lines: 7-14, 37-39
:caption: Example Configuration
In this setup:
* The ``filters`` section defines the reusable filters, each running as its own HTTP/MCP service.
* The ``listeners`` section wires the ``rag_agent`` behind an ``agent`` listener and attaches a ``filter_chain`` with ``query_rewriter`` followed by ``context_builder``.
* When a request arrives at ``agent_1``, Plano executes the filters in order before handing control to ``rag_agent``.
Filter Chain Programming Model (HTTP and MCP)
---------------------------------------------
Filters are implemented as simple RESTful endpoints reachable via HTTP. You can write a filter as a plain HTTP service, or configure it to use the `Model Context Protocol (MCP) <https://modelcontextprotocol.io/>`_, which makes it easy to write filters in any language.
When defining a filter in Plano configuration, the following fields are optional:
* ``type``: Controls the filter runtime. Use ``mcp`` for Model Context Protocol filters, or ``http`` for plain HTTP filters. Defaults to ``mcp``.
* ``transport``: Controls how Plano talks to the filter (defaults to ``streamable-http`` for efficient streaming interactions over HTTP). You can omit this for standard HTTP transport.
* ``tool``: Names the MCP tool Plano will invoke (by default, the filter ``id``). You can omit this if the tool name matches your filter id.
In practice, you typically only need to specify ``id`` and ``url`` to get started. Plano's sensible defaults mean a filter can be as simple as an HTTP endpoint. If you want to customize the runtime or protocol, those fields are there, but they're optional.
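For illustration, a minimal filter definition might look like the following (the ids, ports, and paths are hypothetical; ``type`` is shown only to illustrate the defaults described above):

.. code-block:: yaml

    filters:
      # Minimal form: id + url; type, transport, and tool fall back to defaults
      - id: query_rewriter
        url: http://localhost:8081/rewrite

      # Explicit form: a plain HTTP filter
      - id: context_builder
        url: http://localhost:8082/context
        type: http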
Filters communicate the outcome of their work via HTTP status codes:
* **HTTP 200 (Success)**: The filter successfully processed the request. If the filter mutated the request (e.g., rewrote a query or enriched context), those mutations are passed downstream.
* **HTTP 4xx (User Error)**: The request violates a filter's rules or constraints—for example, content moderation policies or compliance checks. The request is terminated, and the error is returned to the caller. This is *not* a fatal error; it represents expected user-facing policy enforcement.
* **HTTP 5xx (Fatal Error)**: An unexpected failure in the filter itself (for example, a crash or misconfiguration). Plano will surface the error back to the caller and record it in logs and traces.
These semantics allow filters to enforce guardrails and policies (4xx) without blocking the entire system, while still surfacing critical failures (5xx) for investigation.
If any filter fails or decides to terminate the request early (for example, after a policy violation), Plano will
surface that outcome back to the caller and record it in logs and traces. This makes filter chains a safe and
powerful abstraction for evolving your agent workflows over time.
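The contract above can be sketched as a tiny filter function (a toy example with a made-up compliance rule; a real filter receives the request payload over HTTP or MCP and returns these statuses as HTTP responses):

.. code-block:: python

    def apply_filter(request):
        """Toy filter: enforce a policy (4xx) or mutate the request (200)."""
        prompt = request.get("prompt", "")
        if "ssn" in prompt.lower():
            # 4xx: expected policy enforcement -- terminates the request, not fatal
            return 403, {"error": "request blocked by compliance filter"}
        # 200: the mutated request is passed downstream to the next filter or agent
        request["prompt"] = prompt.strip()
        return 200, request

    status, body = apply_filter({"prompt": "  what is our refund policy?  "})
    # An unhandled exception in a real filter service would surface as a 5xx
    # and be recorded in Plano's logs and traces.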


@ -1,27 +1,16 @@
version: v0.1.0
version: v0.2.0
listeners:
ingress_traffic:
address: 0.0.0.0
port: 10000
message_format: openai
timeout: 30s
# Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
llm_providers:
model_providers:
- access_key: $OPENAI_API_KEY
model: openai/gpt-4o
default: true
# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.
prompt_targets:
- name: information_extraction
default: true


@ -0,0 +1,79 @@
.. _plano_overview_listeners:
Listeners
---------
**Listeners** are a top-level primitive in Plano that bind network traffic to the dataplane. They simplify the
configuration required to accept incoming connections from downstream clients (edge) and to expose a unified egress
endpoint for calls from your applications to upstream LLMs.
Plano builds on Envoy's Listener subsystem to streamline connection management for developers. It hides most of
Envoy's complexity behind sensible defaults and a focused configuration surface, so you can bind listeners without
deep knowledge of Envoy's configuration model while still getting secure, reliable, and performant connections.
Listeners are modular building blocks: you can configure only inbound listeners (for edge proxying and guardrails),
only outbound/model-proxy listeners (for LLM routing from your services), or both together. This lets you fit Plano
cleanly into existing architectures, whether you need it at the edge, behind the firewall, or across the full
request path.
Network Topology
^^^^^^^^^^^^^^^^
The diagram below shows how inbound and outbound traffic flow through Plano and how listeners relate to agents,
prompt targets, and upstream LLMs:
.. image:: /_static/img/network-topology-ingress-egress.png
:width: 100%
:align: center
Inbound (Agent & Prompt Target)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Developers configure **inbound listeners** to accept connections from clients such as web frontends, backend
services, or other gateways. An inbound listener acts as the primary entry point for prompt traffic, handling
initial connection setup, TLS termination, guardrails, and forwarding incoming traffic to the appropriate prompt
targets or agents.
There are two primary types of inbound connections exposed via listeners:
* **Agent Inbound (Edge)**: Clients (web/mobile apps or other services) connect to Plano, send prompts, and receive
responses. This is typically your public/edge listener where Plano applies guardrails, routing, and orchestration
before returning results to the caller.
* **Prompt Target Inbound (Edge)**: Your application server calls Plano's internal listener targeting
:ref:`prompt targets <prompt_target>` that can invoke tools and LLMs directly on its behalf.
Inbound listeners are where you attach :ref:`Filter Chains <filter_chain>` so that safety and context-building happen
consistently at the edge.
Outbound (Model Proxy & Egress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plano also exposes an **egress listener** that your applications call when sending requests to upstream LLM providers
or self-hosted models. From your application's perspective this looks like a single OpenAI-compatible HTTP endpoint
(for example, ``http://127.0.0.1:12000/v1``), while Plano handles provider selection, retries, and failover behind
the scenes.
Under the hood, Plano opens outbound HTTP(S) connections to upstream LLM providers using its unified API surface and
smart model routing. For more details on how Plano talks to models and how providers are configured, see
:ref:`LLM providers <llm_providers>`.
Configure Listeners
^^^^^^^^^^^^^^^^^^^
Listeners are configured via the ``listeners`` block in your Plano configuration. You can define one or more inbound
listeners (for example, ``type:edge``) or one or more outbound/model listeners (for example, ``type:model``), or both
in the same deployment.
To configure an inbound (edge) listener, add a ``listeners`` block to your configuration file and define at least one
listener with address, port, and protocol details:
.. literalinclude:: ./includes/plano_config.yaml
:language: yaml
:linenos:
:lines: 1-13
:emphasize-lines: 3-7
:caption: Example Configuration
When you start Plano, you specify the listener address/port to bind for downstream traffic. Plano also exposes a
predefined internal listener (``127.0.0.1:12000``) that you can use to proxy egress calls originating from your
application to LLMs (API-based or hosted) via prompt targets.


@ -3,7 +3,7 @@
Client Libraries
================
Arch provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Arch's gateway endpoints.
Plano provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Plano's gateway endpoints.
Supported Clients
------------------
@ -16,7 +16,7 @@ Supported Clients
Gateway Endpoints
-----------------
Arch exposes two main endpoints:
Plano exposes three main endpoints:
.. list-table::
:header-rows: 1
@ -26,13 +26,15 @@ Arch exposes two main endpoints:
- Purpose
* - ``http://127.0.0.1:12000/v1/chat/completions``
- OpenAI-compatible chat completions (LLM Gateway)
* - ``http://127.0.0.1:12000/v1/responses``
- OpenAI Responses API with :ref:`conversational state management <managing_conversational_state>` (LLM Gateway)
* - ``http://127.0.0.1:12000/v1/messages``
- Anthropic-compatible messages (LLM Gateway)
OpenAI (Python) SDK
-------------------
The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint.
The OpenAI SDK works with any provider through Plano's OpenAI-compatible endpoint.
**Installation:**
@ -46,7 +48,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint
from openai import OpenAI
# Point to Arch's LLM Gateway
# Point to Plano's LLM Gateway
client = OpenAI(
api_key="test-key", # Can be any value for local testing
base_url="http://127.0.0.1:12000/v1"
@ -96,7 +98,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint
**Using with Non-OpenAI Models:**
The OpenAI SDK can be used with any provider configured in Arch:
The OpenAI SDK can be used with any provider configured in Plano:
.. code-block:: python
@ -124,10 +126,92 @@ The OpenAI SDK can be used with any provider configured in Arch:
]
)
OpenAI Responses API (Conversational State)
-------------------------------------------
The OpenAI Responses API (``v1/responses``) enables multi-turn conversations with automatic state management. Plano handles conversation history for you, so you don't need to manually include previous messages in each request.
See :ref:`managing_conversational_state` for detailed configuration and storage backend options.
**Installation:**
.. code-block:: bash
pip install openai
**Basic Multi-Turn Conversation:**
.. code-block:: python
from openai import OpenAI
# Point to Plano's LLM Gateway
client = OpenAI(
api_key="test-key",
base_url="http://127.0.0.1:12000/v1"
)
# First turn - creates a new conversation
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "My name is Alice"}
]
)
# Extract response_id for conversation continuity
response_id = response.id
print(f"Assistant: {response.choices[0].message.content}")
# Second turn - continues the conversation
# Plano automatically retrieves and merges previous context
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "What's my name?"}
],
metadata={"response_id": response_id} # Reference previous conversation
)
print(f"Assistant: {response.choices[0].message.content}")
# Output: "Your name is Alice"
**Using with Any Provider:**
The Responses API works with any LLM provider configured in Plano:
.. code-block:: python
# Multi-turn conversation with Claude
response = client.chat.completions.create(
model="claude-3-5-sonnet-20241022",
messages=[
{"role": "user", "content": "Let's discuss quantum physics"}
]
)
response_id = response.id
# Continue conversation - Plano manages state regardless of provider
response = client.chat.completions.create(
model="claude-3-5-sonnet-20241022",
messages=[
{"role": "user", "content": "Tell me more about entanglement"}
],
metadata={"response_id": response_id}
)
**Key Benefits:**
* **Reduced payload size**: No need to send full conversation history in each request
* **Provider flexibility**: Use any configured LLM provider with state management
* **Automatic context merging**: Plano handles conversation continuity behind the scenes
* **Production-ready storage**: Configure :ref:`PostgreSQL or memory storage <managing_conversational_state>` based on your needs
Anthropic (Python) SDK
----------------------
The Anthropic SDK works with any provider through Arch's Anthropic-compatible endpoint.
The Anthropic SDK works with any provider through Plano's Anthropic-compatible endpoint.
**Installation:**
@ -141,7 +225,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en
import anthropic
# Point to Arch's LLM Gateway
# Point to Plano's LLM Gateway
client = anthropic.Anthropic(
api_key="test-key", # Can be any value for local testing
base_url="http://127.0.0.1:12000"
@ -192,7 +276,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en
**Using with Non-Anthropic Models:**
The Anthropic SDK can be used with any provider configured in Arch:
The Anthropic SDK can be used with any provider configured in Plano:
.. code-block:: python
@ -284,7 +368,7 @@ For direct HTTP requests or integration with any programming language:
Cross-Client Compatibility
--------------------------
One of Arch's key features is cross-client compatibility. You can:
One of Plano's key features is cross-client compatibility. You can:
**Use OpenAI SDK with Claude Models:**


@ -1,16 +1,16 @@
.. _llm_providers:
LLM Providers
=============
**LLM Providers** are a top-level primitive in Arch, helping developers centrally define, secure, observe,
and manage the usage of their LLMs. Arch builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_
to manage egress traffic to LLMs, which includes intelligent routing, retry and fail-over mechanisms,
ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly
switch between LLM providers or upgrade LLM versions, simplifying the integration and scaling of LLMs
across applications.
Model (LLM) Providers
=====================
**Model Providers** are a top-level primitive in Plano, helping developers centrally define, secure, observe,
and manage the usage of their models. Plano builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_ to manage egress traffic to models, which includes intelligent routing, retry and fail-over mechanisms,
ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly switch between model providers or upgrade model versions, simplifying the integration and scaling of models across applications.
Today, we are enabling you to connect to 11+ different AI providers through a unified interface with advanced routing and management capabilities.
Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Arch provides seamless integration with enterprise-grade features.
Today, you can connect to 15+ different AI providers through a unified interface with advanced routing and management capabilities.
Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Plano provides seamless integration with enterprise-grade features.
.. note::
Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.
Core Capabilities
-----------------
@ -18,29 +18,29 @@ Core Capabilities
**Multi-Provider Support**
Connect to any combination of providers simultaneously (see :ref:`supported_providers` for full details):
- **First-Class Providers**: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
- **OpenAI-Compatible Providers**: Any provider implementing the OpenAI Chat Completions API standard
- First-Class Providers: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
- OpenAI-Compatible Providers: Any provider implementing the OpenAI Chat Completions API standard
**Intelligent Routing**
Three powerful routing approaches to optimize model selection:
- **Model-based Routing**: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
- **Alias-based Routing**: Semantic routing using custom aliases (see :ref:`model_aliases`)
- **Preference-aligned Routing**: Intelligent routing using the Arch-Router model (see :ref:`preference_aligned_routing`)
- Model-based Routing: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
- Alias-based Routing: Semantic routing using custom aliases (see :ref:`model_aliases`)
- Preference-aligned Routing: Intelligent routing using the Plano-Router model (see :ref:`preference_aligned_routing`)
**Unified Client Interface**
Use your preferred client library without changing existing code (see :ref:`client_libraries` for details):
- **OpenAI Python SDK**: Full compatibility with all providers
- **Anthropic Python SDK**: Native support with cross-provider capabilities
- **cURL & HTTP Clients**: Direct REST API access for any programming language
- **Custom Integrations**: Standard HTTP interfaces for seamless integration
- OpenAI Python SDK: Full compatibility with all providers
- Anthropic Python SDK: Native support with cross-provider capabilities
- cURL & HTTP Clients: Direct REST API access for any programming language
- Custom Integrations: Standard HTTP interfaces for seamless integration
Key Benefits
------------
- **Provider Flexibility**: Switch between providers without changing client code
- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Arch-Router-1.5B <https://huggingface.co/katanemo/Arch-Router-1.5B>`_) strategies
- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Plano-Router-1.5B <https://huggingface.co/katanemo/Plano-Router-1.5B>`_) strategies
- **Cost Optimization**: Route requests to cost-effective models based on complexity
- **Performance Optimization**: Use fast models for simple tasks, powerful models for complex reasoning
- **Environment Management**: Configure different models for different environments


@ -3,27 +3,21 @@
Supported Providers & Configuration
===================================
Arch provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.
Plano provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.
.. note::
**Model Support:** Arch supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.
**Model Support:** Plano supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.
Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.
Configuration Structure
-----------------------
All providers are configured in the ``llm_providers`` section of your ``arch_config.yaml`` file:
All providers are configured in the ``llm_providers`` section of your ``plano_config.yaml`` file:
.. code-block:: yaml
version: v0.1
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
# Provider configurations go here
- model: provider/model-name
@ -50,7 +44,7 @@ Any provider that implements the OpenAI API interface can be configured using cu
Supported API Endpoints
------------------------
Arch supports the following standardized endpoints across providers:
Plano supports the following standardized endpoints across providers:
.. list-table::
:header-rows: 1
@ -65,6 +59,9 @@ Arch supports the following standardized endpoints across providers:
* - ``/v1/messages``
- Anthropic-style messages
- Anthropic SDK, cURL, custom clients
* - ``/v1/responses``
- Unified response endpoint for agentic apps
- All SDKs, cURL, custom clients
First-Class Providers
---------------------
@ -78,7 +75,7 @@ OpenAI
**Authentication:** API Key - Get your OpenAI API key from `OpenAI Platform <https://platform.openai.com/api-keys>`_.
**Supported Chat Models:** All OpenAI chat models including GPT-5, GPT-4o, GPT-4, GPT-3.5-turbo, and all future releases.
**Supported Chat Models:** All OpenAI chat models including GPT-5.2, GPT-5, GPT-4o, and all future releases.
.. list-table::
:header-rows: 1
@ -87,21 +84,18 @@ OpenAI
* - Model Name
- Model ID for Config
- Description
* - GPT-5.2
- ``openai/gpt-5.2``
- Next-generation model (use any model name from OpenAI's API)
* - GPT-5
- ``openai/gpt-5``
- Next-generation model (use any model name from OpenAI's API)
* - GPT-4o
- ``openai/gpt-4o``
- Latest multimodal model
* - GPT-4o mini
- ``openai/gpt-4o-mini``
- Fast, cost-effective model
* - GPT-4
- ``openai/gpt-4``
* - GPT-4o
- ``openai/gpt-4o``
- High-capability reasoning model
* - GPT-3.5 Turbo
- ``openai/gpt-3.5-turbo``
- Balanced performance and cost
* - o3-mini
- ``openai/o3-mini``
- Reasoning-focused model (preview)
@ -115,15 +109,15 @@ OpenAI
llm_providers:
# Latest models (examples - use any OpenAI chat model)
- model: openai/gpt-4o-mini
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
# Use any model name from OpenAI's API
- model: openai/gpt-5
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
Anthropic
@ -135,7 +129,7 @@ Anthropic
**Authentication:** API Key - Get your Anthropic API key from `Anthropic Console <https://console.anthropic.com/settings/keys>`_.
**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, and all future releases.
**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4.5, Claude Opus 4.5, Claude Haiku 4.5, and all future releases.
.. list-table::
:header-rows: 1
@ -144,24 +138,18 @@ Anthropic
* - Model Name
- Model ID for Config
- Description
* - Claude Sonnet 4
- ``anthropic/claude-sonnet-4``
- Next-generation model (use any model name from Anthropic's API)
* - Claude 3.5 Sonnet
- ``anthropic/claude-3-5-sonnet-20241022``
- Latest high-performance model
* - Claude 3.5 Haiku
- ``anthropic/claude-3-5-haiku-20241022``
- Fast and efficient model
* - Claude 3 Opus
- ``anthropic/claude-3-opus-20240229``
* - Claude Opus 4.5
- ``anthropic/claude-opus-4-5``
- Most capable model for complex tasks
* - Claude 3 Sonnet
- ``anthropic/claude-3-sonnet-20240229``
* - Claude Sonnet 4.5
- ``anthropic/claude-sonnet-4-5``
- Balanced performance model
* - Claude 3 Haiku
- ``anthropic/claude-3-haiku-20240307``
- Fastest model
* - Claude Haiku 4.5
- ``anthropic/claude-haiku-4-5``
- Fast and efficient model
* - Claude Sonnet 3.5
- ``anthropic/claude-sonnet-3-5``
- Complex agents and coding
**Configuration Examples:**
@ -169,14 +157,14 @@ Anthropic
llm_providers:
# Latest models (examples - use any Anthropic chat model)
- model: anthropic/claude-3-5-sonnet-20241022
- model: anthropic/claude-opus-4-5
access_key: $ANTHROPIC_API_KEY
- model: anthropic/claude-3-5-haiku-20241022
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
# Use any model name from Anthropic's API
- model: anthropic/claude-sonnet-4
- model: anthropic/claude-haiku-4-5
access_key: $ANTHROPIC_API_KEY
DeepSeek
@ -267,7 +255,7 @@ Groq
**Authentication:** API Key - Get your Groq API key from `Groq Console <https://console.groq.com/keys>`_.
**Supported Chat Models:** All Groq chat models including Llama 3, Mixtral, Gemma, and all future releases.
**Supported Chat Models:** All Groq chat models including Llama 4, GPT OSS, Mixtral, Gemma, and all future releases.
.. list-table::
:header-rows: 1
@ -276,25 +264,28 @@ Groq
* - Model Name
- Model ID for Config
- Description
* - Llama 3.1 8B
- ``groq/llama3-8b-8192``
* - Llama 4 Maverick 17B
- ``groq/llama-4-maverick-17b-128e-instruct``
- Fast inference Llama model
* - Llama 3.1 70B
- ``groq/llama3-70b-8192``
- Larger Llama model
* - Mixtral 8x7B
- ``groq/mixtral-8x7b-32768``
- Mixture of experts model
* - Llama 4 Scout 8B
- ``groq/llama-4-scout-8b-128e-instruct``
- Smaller Llama model
* - GPT OSS 20B
- ``groq/gpt-oss-20b``
- Open source GPT model
**Configuration Examples:**
.. code-block:: yaml
llm_providers:
- model: groq/llama3-8b-8192
- model: groq/llama-4-maverick-17b-128e-instruct
access_key: $GROQ_API_KEY
- model: groq/mixtral-8x7b-32768
- model: groq/llama-4-scout-8b-128e-instruct
access_key: $GROQ_API_KEY
- model: groq/gpt-oss-20b
access_key: $GROQ_API_KEY
Google Gemini
@ -306,7 +297,7 @@ Google Gemini
**Authentication:** API Key - Get your Google AI API key from `Google AI Studio <https://aistudio.google.com/app/apikey>`_.
**Supported Chat Models:** All Google Gemini chat models including Gemini 1.5 Pro, Gemini 1.5 Flash, and all future releases.
**Supported Chat Models:** All Google Gemini chat models including Gemini 3 Pro, Gemini 3 Flash, and all future releases.
.. list-table::
:header-rows: 1
@ -315,11 +306,11 @@ Google Gemini
* - Model Name
- Model ID for Config
- Description
* - Gemini 1.5 Pro
- ``gemini/gemini-1.5-pro``
* - Gemini 3 Pro
- ``gemini/gemini-3-pro``
- Advanced reasoning and creativity
* - Gemini 1.5 Flash
- ``gemini/gemini-1.5-flash``
* - Gemini 3 Flash
- ``gemini/gemini-3-flash``
- Fast and efficient model
**Configuration Examples:**
@ -327,10 +318,10 @@ Google Gemini
.. code-block:: yaml
llm_providers:
- model: gemini/gemini-1.5-pro
- model: gemini/gemini-3-pro
access_key: $GOOGLE_API_KEY
- model: gemini/gemini-1.5-flash
- model: gemini/gemini-3-flash
access_key: $GOOGLE_API_KEY
Together AI
@ -524,7 +515,7 @@ Amazon Bedrock
**Provider Prefix:** ``amazon_bedrock/``
**API Endpoint:** Arch automatically constructs the endpoint as:
**API Endpoint:** Plano automatically constructs the endpoint as:
- Non-streaming: ``/model/{model-id}/converse``
- Streaming: ``/model/{model-id}/converse-stream``
@ -723,7 +714,7 @@ Configure routing preferences for dynamic model selection:
.. code-block:: yaml
llm_providers:
- model: openai/gpt-4o
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
routing_preferences:
- name: complex_reasoning
@ -731,7 +722,7 @@ Configure routing preferences for dynamic model selection:
- name: code_review
description: reviewing and analyzing existing code for bugs and improvements
- model: anthropic/claude-3-5-sonnet-20241022
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
routing_preferences:
- name: creative_writing
@ -741,15 +732,15 @@ Model Selection Guidelines
--------------------------
**For Production Applications:**
- **High Performance**: OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet
- **Cost-Effective**: OpenAI GPT-4o mini, Anthropic Claude 3.5 Haiku
- **High Performance**: OpenAI GPT-5.2, Anthropic Claude Sonnet 4.5
- **Cost-Effective**: OpenAI GPT-5, Anthropic Claude Haiku 4.5
- **Code Tasks**: DeepSeek Coder, Together AI Code Llama
- **Local Deployment**: Ollama with Llama 3.1 or Code Llama
**For Development/Testing:**
- **Fast Iteration**: Groq models (optimized inference)
- **Local Testing**: Ollama models
- **Cost Control**: Smaller models like GPT-4o mini or Mistral Small
- **Cost Control**: Smaller models like GPT-4o or Mistral Small
See Also
--------
@ -1,15 +1,17 @@
.. _prompt_target:
Prompt Target
==============
=============
A Prompt Target is a deterministic, task-specific backend function or API endpoint that your application calls via Plano.
Unlike agents (which handle wide-ranging, open-ended tasks), prompt targets are designed for focused, specific workloads where Plano can add value through input clarification and validation.
**Prompt Targets** are a core concept in Arch, empowering developers to clearly define how user prompts are interpreted, processed, and routed within their generative AI applications. Prompts can seamlessly be routed either to specialized AI agents capable of handling sophisticated, context-driven tasks or to targeted tools provided by your application, offering users a fast, precise, and personalized experience.
Plano helps by:
This section covers the essentials of prompt targets—what they are, how to configure them, their practical uses, and recommended best practices—to help you fully utilize this feature in your applications.
* **Clarifying and validating input**: Plano enriches incoming prompts with metadata (e.g., detecting follow-ups or clarifying requests) and can extract structured parameters from natural language before passing them to your backend.
* **Enabling high determinism**: Since the task is specific and well-defined, Plano can reliably extract the information your backend needs without ambiguity.
* **Reducing backend work**: Your backend receives clean, validated, structured inputs—so you can focus on business logic instead of parsing and validation.
What Are Prompt Targets?
------------------------
Prompt targets are endpoints within Arch that handle specific types of user prompts. They act as the bridge between user inputs and your backend agents or tools (APIs), enabling Arch to route, process, and manage prompts efficiently. Defining prompt targets helps you decouple your application's core logic from processing and handling complexities, leading to clearer code organization, better scalability, and easier maintenance.
For example, a prompt target might be "schedule a meeting" (specific task, deterministic inputs like date, time, attendees) or "retrieve documents" (well-defined RAG query with clear intent). Prompt targets are typically called from your application code via Plano's internal listener.
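To make the "schedule a meeting" example concrete, here is a minimal sketch of what such a prompt target could look like. The target name, endpoint, and parameter fields below are illustrative assumptions for this guide, not part of any shipped configuration — adapt them to your own backend:

```yaml
prompt_targets:
  - name: schedule_meeting
    description: schedules a meeting with the given attendees at a specific date and time
    parameters:
      - name: date
        type: str
        description: the date of the meeting
        required: true
      - name: time
        type: str
        description: the start time of the meeting
        required: true
      - name: attendees
        type: list
        description: people to invite to the meeting
        required: false
    endpoint:
      name: api_server
      path: /schedule
```

Because the task is deterministic and the inputs are well defined, the gateway can extract `date`, `time`, and `attendees` from natural language and hand your `/schedule` endpoint a clean, structured request.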
.. table::
@ -33,16 +35,11 @@ Below are the key features of prompt targets that empower developers to build ef
- **Input Management**: Specify required and optional parameters for each target.
- **Tools Integration**: Seamlessly connect prompts to backend APIs or functions.
- **Error Handling**: Direct errors to designated handlers for streamlined troubleshooting.
- **Metadata Enrichment**: Attach additional context to prompts for enhanced processing.
Configuring Prompt Targets
--------------------------
Configuring prompt targets involves defining them in Arch's configuration file. Each Prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required.
- **Multi-Turn Support**: Manage follow-up prompts and clarifications in conversational flows.
Basic Configuration
~~~~~~~~~~~~~~~~~~~
A prompt target configuration includes the following elements:
Configuring prompt targets involves defining them in Plano's configuration file. Each prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required. A prompt target configuration includes the following elements:
.. vale Vale.Spelling = NO
@ -55,8 +52,8 @@ A prompt target configuration includes the following elements:
Defining Parameters
~~~~~~~~~~~~~~~~~~~
Parameters are the pieces of information that Arch needs to extract from the user's prompt to perform the desired action.
Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Arch can support:
Parameters are the pieces of information that Plano needs to extract from the user's prompt to perform the desired action.
Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Plano can support:
.. table::
:width: 100%
@ -98,50 +95,92 @@ Example Configuration For Tools
name: api_server
path: /weather
Example Configuration For Agents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _plano_multi_turn_guide:
.. code-block:: yaml
:caption: Agent Orchestration Configuration Example
Multi-Turn
~~~~~~~~~~
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to a previous response, developers must
rewrite prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, and error-prone, and it adds latency and token cost for
common scenarios that can be managed more efficiently.
overrides:
use_agent_orchestrator: true
Plano is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate agents in minutes.
Below are some conversational examples that you can build via Plano. Each example is enriched with annotations (via **[Plano]**) that illustrate how Plano
processes conversational messages on your behalf.
prompt_targets:
- name: sales_agent
description: handles queries related to sales and purchases
Example 1: Adjusting Retrieval
------------------------------
- name: issues_and_repairs
description: handles issues, repairs, or refunds
.. code-block:: text
- name: escalate_to_human
description: escalates to human agent
User: What are the benefits of renewable energy?
**[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
**[Plano]**: Found "get_info_for_energy_source" prompt_target in arch_config.yaml. Forward prompt to the endpoint configured in "get_info_for_energy_source"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.
.. note::
Today, you can use Arch to coordinate more specific agentic scenarios via tools and function calling, or use it for high-level agent routing and hand off scenarios. In the future, we plan to offer you the ability to combine these two approaches for more complex scenarios. Please see `github issues <https://github.com/katanemo/archgw/issues/442>`_ for more details.
User: Include cost considerations in the response.
**[Plano]**: Follow-up detected. Forward the prompt history to the "get_info_for_energy_source" prompt_target with the parameter consideration="cost"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind. While the initial setup costs can be high, long-term savings from reduced fuel expenses and government incentives make it cost-effective.
Routing Logic
-------------
Prompt targets determine where and how user prompts are processed. Arch uses intelligent routing logic to ensure that prompts are directed to the appropriate targets based on their intent and context.
Default Targets
~~~~~~~~~~~~~~~
For general-purpose prompts that do not match any specific prompt target, Arch routes them to a designated default target. This is useful for handling open-ended queries like document summarization or information extraction.
Example 2: Switching Intent
---------------------------
.. code-block:: text
Intent Matching
~~~~~~~~~~~~~~~
Arch analyzes the user's prompt to determine its intent and matches it with the most suitable prompt target based on the name and description defined in the configuration.
User: What are the symptoms of diabetes?
**[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
**[Plano]**: Found "diseases_symptoms" prompt_target in arch_config.yaml. Forward disease=diabetes to "diseases_symptoms" prompt target
...
Assistant: Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision.
For example:
User: How is it diagnosed?
**[Plano]**: New intent detected.
**[Plano]**: Found "disease_diagnoses" prompt_target in arch_config.yaml. Forward disease=diabetes to "disease_diagnoses" prompt target
...
Assistant: Diabetes is diagnosed through blood tests like fasting blood sugar, A1C, or an oral glucose tolerance test.
.. code-block:: bash
Prompt: "Can you reboot the router?"
Matching Target: reboot_device (based on description matching "reboot devices")
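To build intuition for description-based matching, the toy sketch below scores a prompt against each target's description by keyword overlap. This is purely illustrative of the concept — the real gateway uses purpose-built models for intent detection, not string matching, and the target names here are hypothetical:

```python
import re

# Toy stopword list so filler words don't create spurious matches.
STOPWORDS = {"the", "a", "an", "and", "or", "for", "such", "as", "can", "you", "is", "in", "what"}

def tokenize(text):
    """Lowercase word tokens with stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def match_prompt_target(prompt, targets):
    """Return the target name whose description best overlaps the prompt, or None.

    `targets` maps a target name to its configured description. Returning None
    would fall through to a default target for open-ended queries.
    """
    prompt_words = tokenize(prompt)
    best_name, best_score = None, 0
    for name, description in targets.items():
        score = len(prompt_words & tokenize(description))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

targets = {
    "reboot_device": "reboot devices such as routers and modems",
    "get_weather": "get the current weather for a location",
}
print(match_prompt_target("Can you reboot the router?", targets))  # → reboot_device
```

In practice the name and description you write for each target are the signal the router uses, so keep them specific and distinct from one another.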
Build Multi-Turn RAG Apps
-------------------------
The following section describes how you can easily add support for multi-turn scenarios via Plano. You process and manage multi-turn prompts
just like you manage single-turn ones. Plano handles the complexity of detecting the correct intent based on the last user prompt and
the conversational history, extracts the relevant parameters needed by downstream APIs, and dispatches calls to any upstream LLMs to summarize the
response from your APIs.
.. _multi_turn_subsection_prompt_target:
Step 1: Define Plano Config
---------------------------
.. literalinclude:: ../build_with_plano/includes/multi_turn/prompt_targets_multi_turn.yaml
:language: yaml
:caption: Plano Config
:linenos:
Step 2: Process Request in Flask
--------------------------------
Once the prompt targets are configured as above, handle parameters across multi-turn conversations as if each were a single-turn request:
.. literalinclude:: ../build_with_plano/includes/multi_turn/multi_turn_rag.py
:language: python
:caption: Parameter handling with Flask
:linenos:
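Conceptually, your handler can treat the parameters extracted by the gateway as a plain, validated dictionary regardless of how many turns it took to collect them. The sketch below illustrates this with the energy-source conversation from Example 1; the field names and payload shape are assumptions for illustration, not Plano's exact wire format:

```python
def handle_energy_info(params):
    """Build a retrieval query from parameters extracted by the gateway.

    Whether `consideration` arrived in the first turn or in a follow-up
    ("Include cost considerations..."), the handler sees the same flat dict.
    """
    source = params.get("energy_source", "renewable energy")
    query = f"benefits of {source}"
    if params.get("consideration"):
        query += f", including {params['consideration']} considerations"
    return query

# First turn: only the energy source was extracted.
print(handle_energy_info({"energy_source": "solar"}))
# → benefits of solar
# Follow-up turn: the gateway re-dispatches with the new parameter included.
print(handle_energy_info({"energy_source": "solar", "consideration": "cost"}))
# → benefits of solar, including cost considerations
```

Because the gateway carries conversational state for you, the handler stays a pure function of its inputs, which keeps it easy to test.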
Demo App
--------
For your convenience, we've built a `demo app <https://github.com/katanemo/archgw/tree/main/demos/samples_python/multi_turn_rag_agent>`_
that you can test and modify locally for multi-turn RAG scenarios.
.. figure:: ../build_with_plano/includes/multi_turn/mutli-turn-example.png
:width: 100%
:align: center
Example multi-turn user conversation showing adjusting retrieval
Summary
--------
Prompt targets are essential for defining how user prompts are handled within your generative AI applications using Arch.
By carefully configuring prompt targets, you can ensure that prompts are accurately routed, necessary parameters are extracted, and backend services are invoked seamlessly. This modular approach not only simplifies your application's architecture but also enhances scalability, maintainability, and overall user experience.
~~~~~~~
By carefully designing prompt targets as deterministic, task-specific entry points, you ensure that prompts are routed to the right workload, necessary parameters are cleanly extracted and validated, and backend services are invoked with structured inputs. This clear separation between prompt handling and business logic simplifies your architecture, makes behavior more predictable and testable, and improves the scalability and maintainability of your agentic applications.