Update docs to Plano (#639)

This commit is contained in:
Salman Paracha 2025-12-23 17:14:50 -08:00 committed by GitHub
parent 15fbb6c3af
commit e224cba3e3
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
139 changed files with 4407 additions and 24735 deletions

View file

@ -0,0 +1,93 @@
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterable
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only for type-checkers; Sphinx is only required in the docs build environment.
    from sphinx.application import Sphinx  # type: ignore[import-not-found]


@dataclass(frozen=True)
class LlmsTxtDoc:
    docname: str
    title: str
    text: str


def _iter_docs(app: Sphinx) -> Iterable[LlmsTxtDoc]:
    env = app.env
    # Sphinx internal pages that shouldn't be included.
    excluded = {"genindex", "search"}
    for docname in sorted(d for d in env.found_docs if d not in excluded):
        title_node = env.titles.get(docname)
        title = title_node.astext().strip() if title_node else docname
        doctree = env.get_doctree(docname)
        text = doctree.astext().strip()
        yield LlmsTxtDoc(docname=docname, title=title, text=text)


def _render_llms_txt(app: Sphinx) -> str:
    now = datetime.now(timezone.utc).isoformat()
    project = str(getattr(app.config, "project", "")).strip()
    release = str(getattr(app.config, "release", "")).strip()
    header = f"{project} {release}".strip() or "Documentation"
    docs = list(_iter_docs(app))

    lines: list[str] = []
    lines.append(header)
    lines.append("llms.txt (auto-generated)")
    lines.append(f"Generated (UTC): {now}")
    lines.append("")
    lines.append("Table of contents")
    for d in docs:
        lines.append(f"- {d.title} ({d.docname})")
    lines.append("")
    for d in docs:
        lines.append(d.title)
        lines.append("-" * max(3, len(d.title)))
        lines.append(f"Doc: {d.docname}")
        lines.append("")
        if d.text:
            lines.append(d.text)
        else:
            lines.append("(empty)")
        lines.append("")
        lines.append("---")
        lines.append("")
    return "\n".join(lines).replace("\r\n", "\n").strip() + "\n"


def _on_build_finished(app: Sphinx, exception: Exception | None) -> None:
    if exception is not None:
        return
    # Only generate for HTML-like builders where app.outdir is a website root.
    if getattr(app.builder, "format", None) != "html":
        return
    # Per repo convention, place generated artifacts under an `includes/` folder.
    out_path = Path(app.outdir) / "includes" / "llms.txt"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(_render_llms_txt(app), encoding="utf-8")


def setup(app: Sphinx) -> dict[str, object]:
    app.connect("build-finished", _on_build_finished)
    return {
        "version": "0.1.0",
        "parallel_read_safe": True,
        "parallel_write_safe": True,
    }
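
For reference, an extension like this is enabled from ``conf.py``. A minimal sketch, assuming the module is saved somewhere Sphinx can import it (the ``_ext`` directory and ``llms_txt`` module name below are illustrative, not taken from this commit):

# conf.py (sketch): register the llms.txt generator shown above.
# The _ext path and module name are assumptions; adjust to the actual file location.
import os
import sys

sys.path.insert(0, os.path.abspath("_ext"))

extensions = [
    "llms_txt",  # hypothetical module name for the extension above
]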

View file

@ -0,0 +1,6 @@
/* Prevent sphinxawesome-theme's Tailwind utility `dark:invert` from inverting the header logo. */
.dark header img[alt="Logo"],
.dark #left-sidebar img[alt="Logo"] {
  --tw-invert: invert(0%) !important;
  filter: none !important;
}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 15 KiB

After

Width:  |  Height:  |  Size: 2.9 KiB


File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 49 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 181 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 289 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 389 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 107 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 264 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 281 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 365 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 193 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 382 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 692 KiB

After

Width:  |  Height:  |  Size: 8.6 MiB


View file

@ -1,70 +0,0 @@
.. _arch_agent_guide:
Agentic Apps
=============
Arch helps you build personalized agentic applications by calling application-specific (API) functions via user prompts.
This involves any predefined functions or APIs you want to expose to users to perform tasks, gather information,
or manipulate data. This capability is generally referred to as :ref:`function calling <function_calling>`, where
you can support “agentic” apps tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.
Arch analyzes prompts, extracts critical information, engages in lightweight conversation with the user to
gather any missing parameters, and makes API calls so that you can focus on writing business logic. Arch does this via its
purpose-built `Arch-Function <https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68>`_ -
the fastest (200ms p50 - 12x faster than GPT-4o) and cheapest (44x cheaper than GPT-4o) function calling LLM that matches or outperforms
frontier LLMs.
.. image:: includes/agent/function-calling-flow.jpg
:width: 100%
:align: center
Single Function Call
--------------------
In the most common scenario, users will request a single action via prompts, and Arch efficiently processes the
request by extracting relevant parameters, validating the input, and calling the designated function or API. Here
is how you would go about enabling this scenario with Arch:
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 19-49
:caption: Prompt Target Example Configuration
Step 2: Process Request Parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handling those parameters is straightforward:
.. literalinclude:: includes/agent/parameter_handling.py
:language: python
:linenos:
:caption: Parameter handling with Flask
Parallel & Multiple Function Calling
------------------------------------
In more complex use cases, users may request multiple actions or need multiple APIs/functions to be called
simultaneously or sequentially. With Arch, you can handle these scenarios efficiently using parallel or multiple
function calling. This allows your application to engage in a broader range of interactions, such as updating
different datasets, triggering events across systems, or collecting results from multiple services in one prompt.
Arch-FC1B is built to manage these parallel tasks efficiently, ensuring low latency and high throughput, even
when multiple functions are invoked. It provides two mechanisms to handle these cases:
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When enabling multiple function calling, define the prompt targets in a way that supports multiple functions or
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
the user's intent.
Example of Multiple Prompt Targets in YAML:
.. literalinclude:: includes/agent/function-calling-agent.yaml
:language: yaml
:linenos:
:emphasize-lines: 19-49
:caption: Prompt Target Example Configuration

View file

@ -1,90 +0,0 @@
.. _arch_multi_turn_guide:
Multi-Turn
==========
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to
re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone, and adds latency and token cost for
common scenarios that can be managed more efficiently.
Arch is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate agents in minutes.
Below are some conversational examples that you can build via Arch. Each example is enriched with annotations (via **[Arch]**) that illustrate how Arch
processes conversational messages on your behalf.
.. Note::
The following section assumes that you have some knowledge about the core concepts of Arch, such as :ref:`prompt_targets <arch_overview_prompt_handling>`.
If you haven't familiarized yourself with Arch's concepts, we recommend you read the :ref:`tech overview <tech_overview>` section first.
Additionally, the conversation examples below assume the usage of the following :ref:`arch_config.yaml <multi_turn_subsection_prompt_target>` file.
Example 1: Adjusting Retrieval
------------------------------
.. code-block:: text
User: What are the benefits of renewable energy?
**[Arch]**: Check if there is an available <prompt_target> that can handle this user query.
**[Arch]**: Found "get_info_for_energy_source" prompt_target in arch_config.yaml. Forward prompt to the endpoint configured in "get_info_for_energy_source"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.
User: Include cost considerations in the response.
**[Arch]**: Follow-up detected. Forward prompt history to the "get_info_for_energy_source" prompt_target and post the following parameters consideration="cost"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind. While the initial setup costs can be high, long-term savings from reduced fuel expenses and government incentives make it cost-effective.
Example 2: Switching Intent
---------------------------
.. code-block:: text
User: What are the symptoms of diabetes?
**[Arch]**: Check if there is an available <prompt_target> that can handle this user query.
**[Arch]**: Found "diseases_symptoms" prompt_target in arch_config.yaml. Forward disease=diabeteres to "diseases_symptoms" prompt target
...
Assistant: Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision.
User: How is it diagnosed?
**[Arch]**: New intent detected.
**[Arch]**: Found "disease_diagnoses" prompt_target in arch_config.yaml. Forward disease=diabeteres to "disease_diagnoses" prompt target
...
Assistant: Diabetes is diagnosed through blood tests like fasting blood sugar, A1C, or an oral glucose tolerance test.
Build Multi-Turn RAG Apps
--------------------------
The following section describes how you can easily add support for multi-turn scenarios via Arch. You process and manage multi-turn prompts
just like you manage single-turn ones. Arch handles the complexity of detecting the correct intent based on the last user prompt and
the conversational history, extracts relevant parameters needed by downstream APIs, and dispatches calls to any upstream LLMs to summarize the
response from your APIs.
.. _multi_turn_subsection_prompt_target:
Step 1: Define Arch Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/multi_turn/prompt_targets_multi_turn.yaml
:language: yaml
:caption: Arch Config
:linenos:
Step 2: Process Request in Flask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handle parameters across multi-turn interactions as if they were a single-turn request:
.. literalinclude:: includes/multi_turn/multi_turn_rag.py
:language: python
:caption: Parameter handling with Flask
:linenos:
Demo App
~~~~~~~~
For your convenience, we've built a `demo app <https://github.com/katanemo/archgw/tree/main/demos/samples_python/multi_turn_rag_agent>`_
that you can test and modify locally for multi-turn RAG scenarios.
.. figure:: includes/multi_turn/mutli-turn-example.png
:width: 100%
:align: center
Example multi-turn user conversation showing adjusting retrieval

View file

@ -1,52 +0,0 @@
.. _arch_rag_guide:
RAG Apps
========
The following section describes how Arch can help you build faster, smarter and more accurate
Retrieval-Augmented Generation (RAG) applications, including fast and accurate RAG in multi-turn
conversational scenarios.
What is Retrieval-Augmented Generation (RAG)?
---------------------------------------------
RAG applications combine retrieval-based methods with generative AI models to provide more accurate,
contextually relevant, and reliable outputs. These applications leverage external data sources to augment
the capabilities of Large Language Models (LLMs), enabling them to retrieve and integrate specific information
rather than relying solely on the LLM's internal knowledge.
Parameter Extraction for RAG
----------------------------
To build RAG (Retrieval Augmented Generation) applications, you can configure prompt targets with parameters,
enabling Arch to retrieve critical information in a structured way for processing. This approach improves the
retrieval quality and speed of your application. By extracting parameters from the conversation, you can pull
the appropriate chunks from a vector database or SQL-like data store to enhance accuracy. With Arch, you can
streamline data retrieval and processing to build more efficient and precise RAG applications.
Step 1: Define Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: includes/rag/prompt_targets.yaml
:language: yaml
:caption: Prompt Targets
:linenos:
Step 2: Process Request Parameters in Flask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the prompt targets are configured as above, handling those parameters is straightforward:
.. literalinclude:: includes/rag/parameter_handling.py
:language: python
:caption: Parameter handling with Flask
:linenos:
Multi-Turn RAG (Follow-up Questions)
-------------------------------------
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to
re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone, and adds significant latency to the
user experience.
Arch is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate RAG apps in
minutes. For additional details on how to build multi-turn RAG applications, please refer to our :ref:`multi-turn <arch_multi_turn_guide>` docs.

View file

Before

Width:  |  Height:  |  Size: 297 KiB

After

Width:  |  Height:  |  Size: 297 KiB


View file

@ -0,0 +1,76 @@
.. _agents:
Agents
======
Agents are autonomous systems that handle wide-ranging, open-ended tasks by calling models in a loop until the work is complete. Unlike deterministic :ref:`prompt targets <prompt_target>`, agents have access to tools, reason about which actions to take, and adapt their behavior based on intermediate results—making them ideal for complex workflows that require multi-step reasoning, external API calls, and dynamic decision-making.
Plano helps developers build and scale multi-agent systems by managing the orchestration layer—deciding which agent(s) or LLM(s) should handle each request, and in what sequence—while developers focus on implementing agent logic in any language or framework they choose.
Agent Orchestration
-------------------
**Plano-Orchestrator** is a family of state-of-the-art routing and orchestration models that decide which agent(s) should handle each request, and in what sequence. Built for real-world multi-agent deployments, it analyzes user intent and conversation context to make precise routing and orchestration decisions while remaining efficient enough for low-latency production use across general chat, coding, and long-context multi-turn conversations.
This allows development teams to:
* **Scale multi-agent systems**: Route requests across multiple specialized agents without hardcoding routing logic in application code.
* **Improve performance**: Direct requests to the most appropriate agent based on intent, reducing unnecessary handoffs and improving response quality.
* **Enhance debuggability**: Centralized routing decisions are observable through Plano's tracing and logging, making it easier to understand why a particular agent was selected.
Inner Loop vs. Outer Loop
--------------------------
Plano distinguishes between the **inner loop** (agent implementation logic) and the **outer loop** (orchestration and routing):
Inner Loop (Agent Logic)
^^^^^^^^^^^^^^^^^^^^^^^^^
The inner loop is where your agent lives—the business logic that decides which tools to call, how to interpret results, and when the task is complete. You implement this in any language or framework:
* **Python agents**: Using frameworks like LangChain, LlamaIndex, CrewAI, or custom Python code.
* **JavaScript/TypeScript agents**: Using frameworks like LangChain.js or custom Node.js implementations.
* **Any other framework**: Agents are just HTTP services that Plano can route to.
Your agent controls:
* Which tools or APIs to call in response to a prompt.
* How to interpret tool results and decide next steps.
* When to call the LLM for reasoning or summarization.
* When the task is complete and what response to return.
.. note::
**Making LLM Calls from Agents**
When your agent needs to call an LLM for reasoning, summarization, or completion, you should route those calls through Plano's Model Proxy rather than calling LLM providers directly. This gives you:
* **Consistent responses**: Normalized response formats across all :ref:`LLM providers <llm_providers>`, whether you're using OpenAI, Anthropic, Azure OpenAI, or any OpenAI-compatible provider.
* **Rich agentic signals**: Automatic capture of function calls, tool usage, reasoning steps, and model behavior—surfaced through traces and metrics without instrumenting your agent code.
* **Smart model routing**: Leverage :ref:`model-based, alias-based, or preference-aligned routing <llm_providers>` to dynamically select the best model for each task based on cost, performance, or custom policies.
By routing LLM calls through the Model Proxy, your agents remain decoupled from specific providers and can benefit from centralized policy enforcement, observability, and intelligent routing—all managed in the outer loop. For a step-by-step guide, see :ref:`llm_router` in the LLM Router guide.
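To make this concrete, below is a minimal inner-loop sketch that routes an LLM call through Plano's Model Proxy. It assumes the default egress endpoint (``http://127.0.0.1:12000/v1``) shown in the Listeners and Client Libraries sections; the model name and helper function are illustrative, not prescribed by Plano.

.. code-block:: python

    from openai import OpenAI

    # Inside the agent's inner loop: call the LLM through Plano's Model Proxy
    # rather than a provider endpoint. The base_url assumes the default egress
    # listener; the model name is only an example.
    client = OpenAI(api_key="test-key", base_url="http://127.0.0.1:12000/v1")

    def summarize_tool_result(tool_output: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Summarize this tool output for the user."},
                {"role": "user", "content": tool_output},
            ],
        )
        return response.choices[0].message.content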
Outer Loop (Orchestration)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The outer loop is Plano's orchestration layer—it manages the lifecycle of requests across agents and LLMs:
* **Intent analysis**: Plano-Orchestrator analyzes incoming prompts to determine user intent and conversation context.
* **Routing decisions**: Routes requests to the appropriate agent(s) or LLM(s) based on capabilities, context, and availability.
* **Sequencing**: Determines whether multiple agents need to collaborate and in what order.
* **Lifecycle management**: Handles retries, failover, circuit breaking, and load balancing across agent instances.
By managing the outer loop, Plano allows you to:
* Add new agents without changing routing logic in existing agents.
* Run multiple versions or variants of agents for A/B testing or canary deployments.
* Apply consistent :ref:`filter chains <filter_chain>` (guardrails, context enrichment) before requests reach agents.
* Monitor and debug multi-agent workflows through centralized observability.
Key Benefits
------------
* **Language and framework agnostic**: Write agents in any language; Plano orchestrates them via HTTP.
* **Reduced complexity**: Agents focus on task logic; Plano handles routing, retries, and cross-cutting concerns.
* **Better observability**: Centralized tracing shows which agents were called, in what sequence, and why.
* **Easier scaling**: Add more agent instances or new agent types without refactoring existing code.

View file

@ -0,0 +1,74 @@
.. _filter_chain:
Filter Chains
==============
Filter chains are Plano's way of capturing **reusable workflow steps** in the dataplane, without duplicating logic or coupling it into application code. A filter chain is an ordered list of **mutations** that a request flows through before reaching its final destination, such as an agent, an LLM, or a tool backend. Each filter is a network-addressable service/path that can:
1. Inspect the incoming prompt, metadata, and conversation state.
2. Mutate or enrich the request (for example, rewrite queries or build context).
3. Short-circuit the flow and return a response early (for example, block a request on a compliance failure).
4. Emit structured logs and traces so you can debug and continuously improve your agents.
In other words, filter chains provide a lightweight programming model over HTTP for building reusable steps
in your agent architectures.
Typical Use Cases
-----------------
Without a dataplane programming model, teams tend to spread logic like query rewriting, compliance checks,
context building, and routing decisions across many agents and frameworks. This quickly becomes hard to reason
about and even harder to evolve.
Filter chains show up most often in patterns like:
* **Guardrails and Compliance**: Enforcing content policies, stripping or masking sensitive data, and blocking obviously unsafe or off-topic requests before they reach an agent.
* **Query rewriting, RAG, and Memory**: Rewriting user queries for retrieval, normalizing entities, and assembling RAG context envelopes while pulling in relevant memory (for example, conversation history, user profiles, or prior tool results) before calling a model or tool.
* **Cross-cutting Observability**: Injecting correlation IDs, sampling traces, or logging enriched request metadata at consistent points in the request path.
Because these behaviors live in the dataplane rather than inside individual agents, you define them once, attach them to many agents and prompt targets, and can add, remove, or reorder them without changing application code.
Configuration example
---------------------
The example below shows a configuration where an agent uses a filter chain with two filters: a query rewriter,
and a context builder that prepares retrieval context before the agent runs.
.. literalinclude:: ../../source/resources/includes/plano_config_agents_filters.yaml
:language: yaml
:linenos:
:emphasize-lines: 7-14, 37-39
:caption: Example Configuration
In this setup:
* The ``filters`` section defines the reusable filters, each running as its own HTTP/MCP service.
* The ``listeners`` section wires the ``rag_agent`` behind an ``agent`` listener and attaches a ``filter_chain`` with ``query_rewriter`` followed by ``context_builder``.
* When a request arrives at ``agent_1``, Plano executes the filters in order before handing control to ``rag_agent``.
Filter Chain Programming Model (HTTP and MCP)
---------------------------------------------
Filters are implemented as simple RESTful endpoints reachable via HTTP. If you want to use the `Model Context Protocol (MCP) <https://modelcontextprotocol.io/>`_, you can configure that as well, which makes it easy to write filters in any language. However, you can also write a filter as a plain HTTP service.
When defining a filter in Plano configuration, the following fields are optional:
* ``type``: Controls the filter runtime. Use ``mcp`` for Model Context Protocol filters, or ``http`` for plain HTTP filters. Defaults to ``mcp``.
* ``transport``: Controls how Plano talks to the filter (defaults to ``streamable-http`` for efficient streaming interactions over HTTP). You can omit this for standard HTTP transport.
* ``tool``: Names the MCP tool Plano will invoke (by default, the filter ``id``). You can omit this if the tool name matches your filter id.
In practice, you typically only need to specify ``id`` and ``url`` to get started. Plano's sensible defaults mean a filter can be as simple as an HTTP endpoint. If you want to customize the runtime or protocol, those fields are there, but they're optional.
Filters communicate the outcome of their work via HTTP status codes:
* **HTTP 200 (Success)**: The filter successfully processed the request. If the filter mutated the request (e.g., rewrote a query or enriched context), those mutations are passed downstream.
* **HTTP 4xx (User Error)**: The request violates a filter's rules or constraints—for example, content moderation policies or compliance checks. The request is terminated, and the error is returned to the caller. This is *not* a fatal error; it represents expected user-facing policy enforcement.
* **HTTP 5xx (Fatal Error)**: An unexpected failure in the filter itself (for example, a crash or misconfiguration). Plano will surface the error back to the caller and record it in logs and traces.
These semantics allow filters to enforce guardrails and policies (4xx) without blocking the entire system, while still surfacing critical failures (5xx) for investigation.
If any filter fails or decides to terminate the request early (for example, after a policy violation), Plano will
surface that outcome back to the caller and record it in logs and traces. This makes filter chains a safe and
powerful abstraction for evolving your agent workflows over time.
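As a sketch of the status-code contract described above, the Python (Flask) filter below returns 200 with a possibly mutated body, 400 for a policy rejection, and lets unexpected failures surface as 5xx. The JSON field names are assumptions made for illustration; this page does not define the exact payload Plano sends to a filter.

.. code-block:: python

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    BLOCKED_TERMS = {"ssn", "credit card"}  # toy compliance rule for illustration

    @app.route("/filters/query_rewriter", methods=["POST"])
    def query_rewriter():
        body = request.get_json(force=True) or {}
        prompt = body.get("prompt", "")  # assumed field name, not a Plano contract

        # HTTP 4xx: expected policy enforcement; the request is terminated.
        if any(term in prompt.lower() for term in BLOCKED_TERMS):
            return jsonify({"error": "request blocked by compliance filter"}), 400

        # HTTP 200: pass the (possibly mutated) request downstream.
        body["prompt"] = prompt.strip()
        return jsonify(body), 200

    if __name__ == "__main__":
        app.run(port=8080)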

View file

@ -1,27 +1,16 @@
version: v0.1.0
version: v0.2.0
listeners:
ingress_traffic:
address: 0.0.0.0
port: 10000
message_format: openai
timeout: 30s
# Centralized way to manage LLMs: keys, retry logic, failover, and limits
llm_providers:
model_providers:
- access_key: $OPENAI_API_KEY
model: openai/gpt-4o
default: true
# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.
prompt_targets:
- name: information_extraction
default: true

View file

@ -0,0 +1,79 @@
.. _plano_overview_listeners:
Listeners
---------
**Listeners** are a top-level primitive in Plano that bind network traffic to the dataplane. They simplify the
configuration required to accept incoming connections from downstream clients (edge) and to expose a unified egress
endpoint for calls from your applications to upstream LLMs.
Plano builds on Envoy's Listener subsystem to streamline connection management for developers. It hides most of
Envoy's complexity behind sensible defaults and a focused configuration surface, so you can bind listeners without
deep knowledge of Envoy's configuration model while still getting secure, reliable, and performant connections.
Listeners are modular building blocks: you can configure only inbound listeners (for edge proxying and guardrails),
only outbound/model-proxy listeners (for LLM routing from your services), or both together. This lets you fit Plano
cleanly into existing architectures, whether you need it at the edge, behind the firewall, or across the full
request path.
Network Topology
^^^^^^^^^^^^^^^^
The diagram below shows how inbound and outbound traffic flow through Plano and how listeners relate to agents,
prompt targets, and upstream LLMs:
.. image:: /_static/img/network-topology-ingress-egress.png
:width: 100%
:align: center
Inbound (Agent & Prompt Target)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Developers configure **inbound listeners** to accept connections from clients such as web frontends, backend
services, or other gateways. An inbound listener acts as the primary entry point for prompt traffic, handling
initial connection setup, TLS termination, guardrails, and forwarding incoming traffic to the appropriate prompt
targets or agents.
There are two primary types of inbound connections exposed via listeners:
* **Agent Inbound (Edge)**: Clients (web/mobile apps or other services) connect to Plano, send prompts, and receive
responses. This is typically your public/edge listener where Plano applies guardrails, routing, and orchestration
before returning results to the caller.
* **Prompt Target Inbound (Edge)**: Your application server calls Plano's internal listener targeting
:ref:`prompt targets <prompt_target>` that can invoke tools and LLMs directly on its behalf.
Inbound listeners are where you attach :ref:`Filter Chains <filter_chain>` so that safety and context-building happen
consistently at the edge.
Outbound (Model Proxy & Egress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Plano also exposes an **egress listener** that your applications call when sending requests to upstream LLM providers
or self-hosted models. From your application's perspective this looks like a single OpenAI-compatible HTTP endpoint
(for example, ``http://127.0.0.1:12000/v1``), while Plano handles provider selection, retries, and failover behind
the scenes.
Under the hood, Plano opens outbound HTTP(S) connections to upstream LLM providers using its unified API surface and
smart model routing. For more details on how Plano talks to models and how providers are configured, see
:ref:`LLM providers <llm_providers>`.
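For example, an application can point any OpenAI-compatible client at the egress listener and let Plano handle provider selection behind it. A minimal sketch, reusing the default endpoint mentioned above (the model name is illustrative):

.. code-block:: python

    from openai import OpenAI

    # The egress listener looks like a single OpenAI-compatible endpoint.
    client = OpenAI(api_key="test-key", base_url="http://127.0.0.1:12000/v1")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; Plano routes to the configured provider
        messages=[{"role": "user", "content": "Hello from behind the egress listener"}],
    )
    print(response.choices[0].message.content)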
Configure Listeners
^^^^^^^^^^^^^^^^^^^
Listeners are configured via the ``listeners`` block in your Plano configuration. You can define one or more inbound
listeners (for example, ``type:edge``) or one or more outbound/model listeners (for example, ``type:model``), or both
in the same deployment.
To configure an inbound (edge) listener, add a ``listeners`` block to your configuration file and define at least one
listener with address, port, and protocol details:
.. literalinclude:: ./includes/plano_config.yaml
:language: yaml
:linenos:
:lines: 1-13
:emphasize-lines: 3-7
:caption: Example Configuration
When you start Plano, you specify a listener address/port that you want to bind downstream. Plano also exposes a
predefined internal listener (``127.0.0.1:12000``) that you can use to proxy egress calls originating from your
application to LLMs (API-based or hosted) via prompt targets.

View file

@ -3,7 +3,7 @@
Client Libraries
================
Arch provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Arch's gateway endpoints.
Plano provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Plano's gateway endpoints.
Supported Clients
------------------
@ -16,7 +16,7 @@ Supported Clients
Gateway Endpoints
-----------------
Arch exposes two main endpoints:
Plano exposes three main endpoints:
.. list-table::
:header-rows: 1
@ -26,13 +26,15 @@ Arch exposes two main endpoints:
- Purpose
* - ``http://127.0.0.1:12000/v1/chat/completions``
- OpenAI-compatible chat completions (LLM Gateway)
* - ``http://127.0.0.1:12000/v1/responses``
- OpenAI Responses API with :ref:`conversational state management <managing_conversational_state>` (LLM Gateway)
* - ``http://127.0.0.1:12000/v1/messages``
- Anthropic-compatible messages (LLM Gateway)
OpenAI (Python) SDK
-------------------
The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint.
The OpenAI SDK works with any provider through Plano's OpenAI-compatible endpoint.
**Installation:**
@ -46,7 +48,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint
from openai import OpenAI
# Point to Arch's LLM Gateway
# Point to Plano's LLM Gateway
client = OpenAI(
api_key="test-key", # Can be any value for local testing
base_url="http://127.0.0.1:12000/v1"
@ -96,7 +98,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint
**Using with Non-OpenAI Models:**
The OpenAI SDK can be used with any provider configured in Arch:
The OpenAI SDK can be used with any provider configured in Plano:
.. code-block:: python
@ -124,10 +126,92 @@ The OpenAI SDK can be used with any provider configured in Arch:
]
)
OpenAI Responses API (Conversational State)
-------------------------------------------
The OpenAI Responses API (``v1/responses``) enables multi-turn conversations with automatic state management. Plano handles conversation history for you, so you don't need to manually include previous messages in each request.
See :ref:`managing_conversational_state` for detailed configuration and storage backend options.
**Installation:**
.. code-block:: bash
pip install openai
**Basic Multi-Turn Conversation:**
.. code-block:: python
from openai import OpenAI
# Point to Plano's LLM Gateway
client = OpenAI(
api_key="test-key",
base_url="http://127.0.0.1:12000/v1"
)
# First turn - creates a new conversation
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "My name is Alice"}
]
)
# Extract response_id for conversation continuity
response_id = response.id
print(f"Assistant: {response.choices[0].message.content}")
# Second turn - continues the conversation
# Plano automatically retrieves and merges previous context
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "user", "content": "What's my name?"}
],
metadata={"response_id": response_id} # Reference previous conversation
)
print(f"Assistant: {response.choices[0].message.content}")
# Output: "Your name is Alice"
**Using with Any Provider:**
The Responses API works with any LLM provider configured in Plano:
.. code-block:: python
# Multi-turn conversation with Claude
response = client.chat.completions.create(
model="claude-3-5-sonnet-20241022",
messages=[
{"role": "user", "content": "Let's discuss quantum physics"}
]
)
response_id = response.id
# Continue conversation - Plano manages state regardless of provider
response = client.chat.completions.create(
model="claude-3-5-sonnet-20241022",
messages=[
{"role": "user", "content": "Tell me more about entanglement"}
],
metadata={"response_id": response_id}
)
**Key Benefits:**
* **Reduced payload size**: No need to send full conversation history in each request
* **Provider flexibility**: Use any configured LLM provider with state management
* **Automatic context merging**: Plano handles conversation continuity behind the scenes
* **Production-ready storage**: Configure :ref:`PostgreSQL or memory storage <managing_conversational_state>` based on your needs
Anthropic (Python) SDK
----------------------
The Anthropic SDK works with any provider through Arch's Anthropic-compatible endpoint.
The Anthropic SDK works with any provider through Plano's Anthropic-compatible endpoint.
**Installation:**
@ -141,7 +225,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en
import anthropic
# Point to Arch's LLM Gateway
# Point to Plano's LLM Gateway
client = anthropic.Anthropic(
api_key="test-key", # Can be any value for local testing
base_url="http://127.0.0.1:12000"
@ -192,7 +276,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en
**Using with Non-Anthropic Models:**
The Anthropic SDK can be used with any provider configured in Arch:
The Anthropic SDK can be used with any provider configured in Plano:
.. code-block:: python
@ -284,7 +368,7 @@ For direct HTTP requests or integration with any programming language:
Cross-Client Compatibility
--------------------------
One of Arch's key features is cross-client compatibility. You can:
One of Plano's key features is cross-client compatibility. You can:
**Use OpenAI SDK with Claude Models:**

View file

@ -1,16 +1,16 @@
.. _llm_providers:
LLM Providers
=============
**LLM Providers** are a top-level primitive in Arch, helping developers centrally define, secure, observe,
and manage the usage of their LLMs. Arch builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_
to manage egress traffic to LLMs, which includes intelligent routing, retry and fail-over mechanisms,
ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly
switch between LLM providers or upgrade LLM versions, simplifying the integration and scaling of LLMs
across applications.
Model (LLM) Providers
=====================
**Model Providers** are a top-level primitive in Plano, helping developers centrally define, secure, observe,
and manage the usage of their models. Plano builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_ to manage egress traffic to models, which includes intelligent routing, retry and fail-over mechanisms,
ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly switch between model providers or upgrade model versions, simplifying the integration and scaling of models across applications.
Today, we are enabling you to connect to 11+ different AI providers through a unified interface with advanced routing and management capabilities.
Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Arch provides seamless integration with enterprise-grade features.
Today, we enable you to connect to 15+ different AI providers through a unified interface with advanced routing and management capabilities.
Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Plano provides seamless integration with enterprise-grade features.
.. note::
Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.
Core Capabilities
-----------------
@ -18,29 +18,29 @@ Core Capabilities
**Multi-Provider Support**
Connect to any combination of providers simultaneously (see :ref:`supported_providers` for full details):
- **First-Class Providers**: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
- **OpenAI-Compatible Providers**: Any provider implementing the OpenAI Chat Completions API standard
- First-Class Providers: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
- OpenAI-Compatible Providers: Any provider implementing the OpenAI Chat Completions API standard
**Intelligent Routing**
Three powerful routing approaches to optimize model selection:
- **Model-based Routing**: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
- **Alias-based Routing**: Semantic routing using custom aliases (see :ref:`model_aliases`)
- **Preference-aligned Routing**: Intelligent routing using the Arch-Router model (see :ref:`preference_aligned_routing`)
- Model-based Routing: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
- Alias-based Routing: Semantic routing using custom aliases (see :ref:`model_aliases`)
- Preference-aligned Routing: Intelligent routing using the Plano-Router model (see :ref:`preference_aligned_routing`)
**Unified Client Interface**
Use your preferred client library without changing existing code (see :ref:`client_libraries` for details):
- **OpenAI Python SDK**: Full compatibility with all providers
- **Anthropic Python SDK**: Native support with cross-provider capabilities
- **cURL & HTTP Clients**: Direct REST API access for any programming language
- **Custom Integrations**: Standard HTTP interfaces for seamless integration
- OpenAI Python SDK: Full compatibility with all providers
- Anthropic Python SDK: Native support with cross-provider capabilities
- cURL & HTTP Clients: Direct REST API access for any programming language
- Custom Integrations: Standard HTTP interfaces for seamless integration
Key Benefits
------------
- **Provider Flexibility**: Switch between providers without changing client code
- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Arch-Router-1.5B <https://huggingface.co/katanemo/Arch-Router-1.5B>`_) strategies
- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Plano-Router-1.5B <https://huggingface.co/katanemo/Plano-Router-1.5B>`_) strategies
- **Cost Optimization**: Route requests to cost-effective models based on complexity
- **Performance Optimization**: Use fast models for simple tasks, powerful models for complex reasoning
- **Environment Management**: Configure different models for different environments

View file

@ -3,27 +3,21 @@
Supported Providers & Configuration
===================================
Arch provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.
Plano provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.
.. note::
**Model Support:** Arch supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.
**Model Support:** Plano supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.
Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.
Configuration Structure
-----------------------
All providers are configured in the ``llm_providers`` section of your ``arch_config.yaml`` file:
All providers are configured in the ``llm_providers`` section of your ``plano_config.yaml`` file:
.. code-block:: yaml
version: v0.1
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
# Provider configurations go here
- model: provider/model-name
@ -50,7 +44,7 @@ Any provider that implements the OpenAI API interface can be configured using cu
Supported API Endpoints
------------------------
Arch supports the following standardized endpoints across providers:
Plano supports the following standardized endpoints across providers:
.. list-table::
:header-rows: 1
@ -65,6 +59,9 @@ Arch supports the following standardized endpoints across providers:
* - ``/v1/messages``
- Anthropic-style messages
- Anthropic SDK, cURL, custom clients
* - ``/v1/responses``
- Unified response endpoint for agentic apps
- All SDKs, cURL, custom clients
First-Class Providers
---------------------
@ -78,7 +75,7 @@ OpenAI
**Authentication:** API Key - Get your OpenAI API key from `OpenAI Platform <https://platform.openai.com/api-keys>`_.
**Supported Chat Models:** All OpenAI chat models including GPT-5, GPT-4o, GPT-4, GPT-3.5-turbo, and all future releases.
**Supported Chat Models:** All OpenAI chat models including GPT-5.2, GPT-5, GPT-4o, and all future releases.
.. list-table::
:header-rows: 1
@ -87,21 +84,18 @@ OpenAI
* - Model Name
- Model ID for Config
- Description
* - GPT-5.2
- ``openai/gpt-5.2``
- Next-generation model (use any model name from OpenAI's API)
* - GPT-5
- ``openai/gpt-5``
- Next-generation model (use any model name from OpenAI's API)
* - GPT-4o
- ``openai/gpt-4o``
- Latest multimodal model
* - GPT-4o mini
- ``openai/gpt-4o-mini``
- Fast, cost-effective model
* - GPT-4
- ``openai/gpt-4``
* - GPT-4o
- ``openai/gpt-4o``
- High-capability reasoning model
* - GPT-3.5 Turbo
- ``openai/gpt-3.5-turbo``
- Balanced performance and cost
* - o3-mini
- ``openai/o3-mini``
- Reasoning-focused model (preview)
@ -115,15 +109,15 @@ OpenAI
llm_providers:
# Latest models (examples - use any OpenAI chat model)
- model: openai/gpt-4o-mini
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
# Use any model name from OpenAI's API
- model: openai/gpt-5
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
Anthropic
@ -135,7 +129,7 @@ Anthropic
**Authentication:** API Key - Get your Anthropic API key from `Anthropic Console <https://console.anthropic.com/settings/keys>`_.
**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, and all future releases.
**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4.5, Claude Opus 4.5, Claude Haiku 4.5, and all future releases.
.. list-table::
:header-rows: 1
@ -144,24 +138,18 @@ Anthropic
* - Model Name
- Model ID for Config
- Description
* - Claude Sonnet 4
- ``anthropic/claude-sonnet-4``
- Next-generation model (use any model name from Anthropic's API)
* - Claude 3.5 Sonnet
- ``anthropic/claude-3-5-sonnet-20241022``
- Latest high-performance model
* - Claude 3.5 Haiku
- ``anthropic/claude-3-5-haiku-20241022``
- Fast and efficient model
* - Claude 3 Opus
- ``anthropic/claude-3-opus-20240229``
* - Claude Opus 4.5
- ``anthropic/claude-opus-4-5``
- Most capable model for complex tasks
* - Claude 3 Sonnet
- ``anthropic/claude-3-sonnet-20240229``
* - Claude Sonnet 4.5
- ``anthropic/claude-sonnet-4-5``
- Balanced performance model
* - Claude 3 Haiku
- ``anthropic/claude-3-haiku-20240307``
- Fastest model
* - Claude Haiku 4.5
- ``anthropic/claude-haiku-4-5``
- Fast and efficient model
* - Claude Sonnet 3.5
- ``anthropic/claude-sonnet-3-5``
- Complex agents and coding
**Configuration Examples:**
@ -169,14 +157,14 @@ Anthropic
llm_providers:
# Latest models (examples - use any Anthropic chat model)
- model: anthropic/claude-3-5-sonnet-20241022
- model: anthropic/claude-opus-4-5
access_key: $ANTHROPIC_API_KEY
- model: anthropic/claude-3-5-haiku-20241022
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
# Use any model name from Anthropic's API
- model: anthropic/claude-sonnet-4
- model: anthropic/claude-haiku-4-5
access_key: $ANTHROPIC_API_KEY
DeepSeek
@ -267,7 +255,7 @@ Groq
**Authentication:** API Key - Get your Groq API key from `Groq Console <https://console.groq.com/keys>`_.
**Supported Chat Models:** All Groq chat models including Llama 3, Mixtral, Gemma, and all future releases.
**Supported Chat Models:** All Groq chat models including Llama 4, GPT OSS, Mixtral, Gemma, and all future releases.
.. list-table::
:header-rows: 1
@ -276,25 +264,28 @@ Groq
* - Model Name
- Model ID for Config
- Description
* - Llama 3.1 8B
- ``groq/llama3-8b-8192``
* - Llama 4 Maverick 17B
- ``groq/llama-4-maverick-17b-128e-instruct``
- Fast inference Llama model
* - Llama 3.1 70B
- ``groq/llama3-70b-8192``
- Larger Llama model
* - Mixtral 8x7B
- ``groq/mixtral-8x7b-32768``
- Mixture of experts model
* - Llama 4 Scout 8B
- ``groq/llama-4-scout-8b-128e-instruct``
- Smaller Llama model
* - GPT OSS 20B
- ``groq/gpt-oss-20b``
- Open source GPT model
**Configuration Examples:**
.. code-block:: yaml
llm_providers:
- model: groq/llama3-8b-8192
- model: groq/llama-4-maverick-17b-128e-instruct
access_key: $GROQ_API_KEY
- model: groq/mixtral-8x7b-32768
- model: groq/llama-4-scout-8b-128e-instruct
access_key: $GROQ_API_KEY
- model: groq/gpt-oss-20b
access_key: $GROQ_API_KEY
Google Gemini
@ -306,7 +297,7 @@ Google Gemini
**Authentication:** API Key - Get your Google AI API key from `Google AI Studio <https://aistudio.google.com/app/apikey>`_.
**Supported Chat Models:** All Google Gemini chat models including Gemini 1.5 Pro, Gemini 1.5 Flash, and all future releases.
**Supported Chat Models:** All Google Gemini chat models including Gemini 3 Pro, Gemini 3 Flash, and all future releases.
.. list-table::
:header-rows: 1
@ -315,11 +306,11 @@ Google Gemini
* - Model Name
- Model ID for Config
- Description
* - Gemini 1.5 Pro
- ``gemini/gemini-1.5-pro``
* - Gemini 3 Pro
- ``gemini/gemini-3-pro``
- Advanced reasoning and creativity
* - Gemini 1.5 Flash
- ``gemini/gemini-1.5-flash``
* - Gemini 3 Flash
- ``gemini/gemini-3-flash``
- Fast and efficient model
**Configuration Examples:**
@ -327,10 +318,10 @@ Google Gemini
.. code-block:: yaml
llm_providers:
- model: gemini/gemini-1.5-pro
- model: gemini/gemini-3-pro
access_key: $GOOGLE_API_KEY
- model: gemini/gemini-1.5-flash
- model: gemini/gemini-3-flash
access_key: $GOOGLE_API_KEY
Together AI
@ -524,7 +515,7 @@ Amazon Bedrock
**Provider Prefix:** ``amazon_bedrock/``
**API Endpoint:** Arch automatically constructs the endpoint as:
**API Endpoint:** Plano automatically constructs the endpoint as:
- Non-streaming: ``/model/{model-id}/converse``
- Streaming: ``/model/{model-id}/converse-stream``
@ -723,7 +714,7 @@ Configure routing preferences for dynamic model selection:
.. code-block:: yaml
llm_providers:
- model: openai/gpt-4o
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
routing_preferences:
- name: complex_reasoning
@ -731,7 +722,7 @@ Configure routing preferences for dynamic model selection:
- name: code_review
description: reviewing and analyzing existing code for bugs and improvements
- model: anthropic/claude-3-5-sonnet-20241022
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
routing_preferences:
- name: creative_writing
@ -741,15 +732,15 @@ Model Selection Guidelines
--------------------------
**For Production Applications:**
- **High Performance**: OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet
- **Cost-Effective**: OpenAI GPT-4o mini, Anthropic Claude 3.5 Haiku
- **High Performance**: OpenAI GPT-5.2, Anthropic Claude Sonnet 4.5
- **Cost-Effective**: OpenAI GPT-5, Anthropic Claude Haiku 4.5
- **Code Tasks**: DeepSeek Coder, Together AI Code Llama
- **Local Deployment**: Ollama with Llama 3.1 or Code Llama
**For Development/Testing:**
- **Fast Iteration**: Groq models (optimized inference)
- **Local Testing**: Ollama models
- **Cost Control**: Smaller models like GPT-4o mini or Mistral Small
- **Cost Control**: Smaller models like GPT-4o or Mistral Small
See Also
--------

View file

@ -1,15 +1,17 @@
.. _prompt_target:
Prompt Target
==============
=============
A Prompt Target is a deterministic, task-specific backend function or API endpoint that your application calls via Plano.
Unlike agents (which handle wide-ranging, open-ended tasks), prompt targets are designed for focused, specific workloads where Plano can add value through input clarification and validation.
**Prompt Targets** are a core concept in Arch, empowering developers to clearly define how user prompts are interpreted, processed, and routed within their generative AI applications. Prompts can seamlessly be routed either to specialized AI agents capable of handling sophisticated, context-driven tasks or to targeted tools provided by your application, offering users a fast, precise, and personalized experience.
Plano helps by:
This section covers the essentials of prompt targets—what they are, how to configure them, their practical uses, and recommended best practices—to help you fully utilize this feature in your applications.
* **Clarifying and validating input**: Plano enriches incoming prompts with metadata (e.g., detecting follow-ups or clarifying requests) and can extract structured parameters from natural language before passing them to your backend.
* **Enabling high determinism**: Since the task is specific and well-defined, Plano can reliably extract the information your backend needs without ambiguity.
* **Reducing backend work**: Your backend receives clean, validated, structured inputs—so you can focus on business logic instead of parsing and validation.
What Are Prompt Targets?
------------------------
Prompt targets are endpoints within Arch that handle specific types of user prompts. They act as the bridge between user inputs and your backend agents or tools (APIs), enabling Arch to route, process, and manage prompts efficiently. Defining prompt targets helps you decouple your application's core logic from processing and handling complexities, leading to clearer code organization, better scalability, and easier maintenance.
For example, a prompt target might be "schedule a meeting" (specific task, deterministic inputs like date, time, attendees) or "retrieve documents" (well-defined RAG query with clear intent). Prompt targets are typically called from your application code via Plano's internal listener.
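As a rough illustration of that call path, the sketch below posts a prompt to Plano's ingress listener. The ``10000`` port and OpenAI message format come from the example configuration shown earlier in these docs; the endpoint path and payload shape are assumptions for illustration, not a definitive contract.

.. code-block:: python

    import requests

    # Assumed endpoint shape: the example ingress listener uses
    # message_format: openai, so an OpenAI-style chat payload is posted here.
    resp = requests.post(
        "http://127.0.0.1:10000/v1/chat/completions",
        json={
            "messages": [
                {"role": "user", "content": "How is the weather in Seattle today?"}
            ]
        },
        timeout=30,
    )
    print(resp.json())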
.. table::
@ -33,16 +35,11 @@ Below are the key features of prompt targets that empower developers to build ef
- **Input Management**: Specify required and optional parameters for each target.
- **Tools Integration**: Seamlessly connect prompts to backend APIs or functions.
- **Error Handling**: Direct errors to designated handlers for streamlined troubleshooting.
- **Metadata Enrichment**: Attach additional context to prompts for enhanced processing.
Configuring Prompt Targets
--------------------------
Configuring prompt targets involves defining them in Arch's configuration file. Each Prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required.
- **Multi-Turn Support**: Manage follow-up prompts and clarifications in conversational flows.
Basic Configuration
~~~~~~~~~~~~~~~~~~~
A prompt target configuration includes the following elements:
Configuring prompt targets involves defining them in Plano's configuration file. Each Prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required. A prompt target configuration includes the following elements:
.. vale Vale.Spelling = NO
@ -55,8 +52,8 @@ A prompt target configuration includes the following elements:
Defining Parameters
~~~~~~~~~~~~~~~~~~~
Parameters are the pieces of information that Arch needs to extract from the user's prompt to perform the desired action.
Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Arch can support:
Parameters are the pieces of information that Plano needs to extract from the user's prompt to perform the desired action.
Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Plano can support:
.. table::
:width: 100%
@ -98,50 +95,92 @@ Example Configuration For Tools
name: api_server
path: /weather
Example Configuration For Agents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _plano_multi_turn_guide:
.. code-block:: yaml
:caption: Agent Orchestration Configuration Example
Multi-Turn
~~~~~~~~~~
Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle
``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to
re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone, and adds latency and token cost for
common scenarios that can be managed more efficiently.
overrides:
use_agent_orchestrator: true
Plano is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate agents in minutes.
Below are some conversational examples that you can build via Plano. Each example is enriched with annotations (via **[Plano]**) that illustrate how Plano
processes conversational messages on your behalf.
prompt_targets:
- name: sales_agent
description: handles queries related to sales and purchases
Example 1: Adjusting Retrieval
- name: issues_and_repairs
description: handles issues, repairs, or refunds
.. code-block:: text
- name: escalate_to_human
description: escalates to human agent
User: What are the benefits of renewable energy?
**[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
**[Plano]**: Found "get_info_for_energy_source" prompt_target in arch_config.yaml. Forward prompt to the endpoint configured in "get_info_for_energy_source"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.
.. note::
Today, you can use Arch to coordinate more specific agentic scenarios via tools and function calling, or use it for high-level agent routing and hand off scenarios. In the future, we plan to offer you the ability to combine these two approaches for more complex scenarios. Please see `github issues <https://github.com/katanemo/archgw/issues/442>`_ for more details.
User: Include cost considerations in the response.
**[Plano]**: Follow-up detected. Forward prompt history to the "get_info_for_energy_source" prompt_target and post the following parameters consideration="cost"
...
Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind. While the initial setup costs can be high, long-term savings from reduced fuel expenses and government incentives make it cost-effective.
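As a rough illustration of the receiving side (the route and field names below are hypothetical and simply mirror the annotations above), a prompt target like ``get_info_for_energy_source`` can be a plain HTTP handler that receives the parameters Plano extracts from the conversation:

.. code-block:: python

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Toy knowledge base standing in for your retrieval layer.
    FACTS = {
        None: "Renewable energy reduces emissions and provides sustainable power sources like solar and wind.",
        "cost": "Initial setup costs can be high, but long-term savings and incentives make it cost-effective.",
    }

    @app.route("/agent/energy_source_info", methods=["POST"])
    def get_info_for_energy_source():
        # Plano posts the parameters it extracted, e.g. {"consideration": "cost"}
        # on the follow-up turn shown above; the field is absent on the first turn.
        body = request.get_json(force=True)
        consideration = body.get("consideration")
        return jsonify({"context": FACTS.get(consideration, FACTS[None])})

    if __name__ == "__main__":
        app.run(port=18083)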
Routing Logic
-------------
Prompt targets determine where and how user prompts are processed. Arch uses intelligent routing logic to ensure that prompts are directed to the appropriate targets based on their intent and context.
Default Targets
~~~~~~~~~~~~~~~
For general-purpose prompts that do not match any specific prompt target, Arch routes them to a designated default target. This is useful for handling open-ended queries like document summarization or information extraction.
Example 2: Switching Intent
---------------------------
.. code-block:: text
Intent Matching
~~~~~~~~~~~~~~~
Arch analyzes the user's prompt to determine its intent and matches it with the most suitable prompt target based on the name and description defined in the configuration.
User: What are the symptoms of diabetes?
**[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
**[Plano]**: Found "diseases_symptoms" prompt_target in arch_config.yaml. Forward disease=diabeteres to "diseases_symptoms" prompt target
...
Assistant: Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision.
For example:
User: How is it diagnosed?
**[Plano]**: New intent detected.
**[Plano]**: Found "disease_diagnoses" prompt_target in arch_config.yaml. Forward disease=diabeteres to "disease_diagnoses" prompt target
...
Assistant: Diabetes is diagnosed through blood tests like fasting blood sugar, A1C, or an oral glucose tolerance test.
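A sketch of how this intent switch can map to two separate handlers (the route paths are illustrative; the ``disease`` field mirrors the parameter shown in the annotations above):

.. code-block:: python

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/agent/diseases_symptoms", methods=["POST"])
    def diseases_symptoms():
        disease = request.get_json(force=True).get("disease", "")
        return jsonify({"context": f"Common symptoms of {disease} include ..."})

    @app.route("/agent/disease_diagnoses", methods=["POST"])
    def disease_diagnoses():
        # The second turn ("How is it diagnosed?") is routed here instead, with the
        # extracted parameter carried over from the conversation history.
        disease = request.get_json(force=True).get("disease", "")
        return jsonify({"context": f"{disease} is diagnosed through blood tests ..."})

    if __name__ == "__main__":
        app.run(port=18084)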
.. code-block:: bash
Prompt: "Can you reboot the router?"
Matching Target: reboot_device (based on description matching "reboot devices")
Build Multi-Turn RAG Apps
-------------------------
The following section describes how you can easily add support for multi-turn scenarios via Plano. You process and manage multi-turn prompts
just like you manage single-turn ones. Plano handles the complexity of detecting the correct intent based on the last user prompt and
the conversational history, extracts relevant parameters needed by downstream APIs, and dispatches calls to any upstream LLMs to summarize the
response from your APIs.
.. _multi_turn_subsection_prompt_target:
Step 1: Define Plano Config
---------------------------
.. literalinclude:: ../build_with_plano/includes/multi_turn/prompt_targets_multi_turn.yaml
:language: yaml
:caption: Plano Config
:linenos:
Step 2: Process Request in Flask
--------------------------------
Once the prompt targets are configured as above, you handle parameters across multi-turn interactions just as you would for a single-turn request:
.. literalinclude:: ../build_with_plano/includes/multi_turn/multi_turn_rag.py
:language: python
:caption: Parameter handling with Flask
:linenos:
Demo App
--------
For your convenience, we've built a `demo app <https://github.com/katanemo/archgw/tree/main/demos/samples_python/multi_turn_rag_agent>`_
that you can test and modify locally for multi-turn RAG scenarios.
.. figure:: ../build_with_plano/includes/multi_turn/mutli-turn-example.png
:width: 100%
:align: center
Example multi-turn user conversation showing adjusting retrieval
Summary
--------
Prompt targets are essential for defining how user prompts are handled within your generative AI applications using Arch.
By carefully configuring prompt targets, you can ensure that prompts are accurately routed, necessary parameters are extracted, and backend services are invoked seamlessly. This modular approach not only simplifies your application's architecture but also enhances scalability, maintainability, and overall user experience.
~~~~~~~
By carefully designing prompt targets as deterministic, task-specific entry points, you ensure that prompts are routed to the right workload, necessary parameters are cleanly extracted and validated, and backend services are invoked with structured inputs. This clear separation between prompt handling and business logic simplifies your architecture, makes behavior more predictable and testable, and improves the scalability and maintainability of your agentic applications.
View file
@ -1,53 +0,0 @@
.. _error_target:
Error Target
=============
**Error targets** are designed to capture and manage specific issues or exceptions that occur during Arch's function or system execution.
These endpoints receive errors forwarded from Arch when issues arise, such as improper function/API calls, guardrail violations, or other processing errors.
The errors are communicated to the application via headers like ``X-Arch-[ERROR-TYPE]``, enabling you to respond appropriately and handle errors gracefully.
Key Concepts
------------
- **Error Type**: Categorizes the nature of the error, such as "ValidationError" or "RuntimeError." These error types help in identifying what kind of issue occurred and provide context for troubleshooting.
- **Error Message**: A clear, human-readable message describing the error. This should provide enough detail to inform users or developers of the root cause or required action.
- **Parameter-Specific Errors**: Errors that arise due to invalid or missing parameters when invoking a function. These errors are critical for ensuring the correctness of inputs.
Error Header Example
--------------------
.. code-block:: bash
:caption: Error Header Example
HTTP/1.1 400 Bad Request
X-Arch-Error-Type: FunctionValidationError
X-Arch-Error-Message: Tools call parsing failure
X-Arch-Target-Prompt: createUser
Content-Type: application/json
"messages": [
{
"role": "user",
"content": "Please create a user with the following ID: 1234"
},
{
"role": "system",
"content": "Expected a string for 'user_id', but got an integer."
}
]
Best Practices and Tips
-----------------------
- **Graceful Degradation**: If an error occurs, fail gracefully by providing fallback logic or alternative flows when possible.
- **Log Errors**: Always log errors on the server side for later analysis.
- **Client-Side Handling**: Make sure the client can interpret error responses and provide meaningful feedback to the user. Clients should not display raw error codes or stack traces but rather handle them gracefully.
View file
@ -1,37 +0,0 @@
.. _arch_overview_listeners:
Listener
---------
**Listener** is a top level primitive in Arch, which simplifies the configuration required to bind incoming
connections from downstream clients, and for egress connections to LLMs (hosted or API)
Arch builds on Envoy's Listener subsystem to streamline connection management for developers. Arch minimizes
the complexity of Envoy's listener setup by using best practices and exposing only essential settings,
making it easier for developers to bind connections without deep knowledge of Envoy's configuration model. This
simplification ensures that connections are secure, reliable, and optimized for performance.
Downstream (Ingress)
^^^^^^^^^^^^^^^^^^^^^^
Developers can configure Arch to accept connections from downstream clients. A downstream listener acts as the
primary entry point for incoming traffic, handling initial connection setup, including network filtering, guardrails,
and additional network security checks. For more details on prompt security and safety,
see :ref:`here <arch_overview_prompt_handling>`.
Upstream (Egress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Arch automatically configures a listener to route requests from your application to upstream LLM API providers (or hosts).
When you start Arch, it creates a listener for egress traffic based on the presence of the ``listener`` configuration
section in the configuration file. Arch binds itself to a local address such as ``127.0.0.1:12000/v1`` or a DNS-based
address like ``arch.local:12000/v1`` for outgoing traffic. For more details on LLM providers, read :ref:`here <llm_providers>`.
Configure Listener
^^^^^^^^^^^^^^^^^^
To configure a Downstream (Ingress) Listener, simply add the ``listener`` directive to your configuration file:
.. literalinclude:: ../includes/arch_config.yaml
:language: yaml
:linenos:
:lines: 1-18
:emphasize-lines: 3-7
:caption: Example Configuration
View file
@ -1,45 +0,0 @@
.. _model_serving:
Model Serving
=============
Arch is a set of `two` self-contained processes that are designed to run alongside your application
servers (or on a separate host connected via a network). The first process is designated to manage low-level
networking and HTTP related concerns, and the other process is for model serving, which helps Arch make
intelligent decisions about the incoming prompts. The model server is designed to call the purpose-built
LLMs in Arch.
.. image:: /_static/img/arch-system-architecture.jpg
:align: center
:width: 40%
Arch is designed to be deployed in your cloud VPC, on an on-premises host, and can work on devices that don't
have a GPU. Note, GPU devices are needed for fast and cost-efficient use, so that Arch (the model server, specifically)
can process prompts quickly and forward control back to the application host. There are three modes in which Arch
can be configured to run its **model server** subsystem:
Local Serving (CPU - Moderate)
------------------------------
The following bash commands enable you to configure the model server subsystem in Arch to run locally on device
and only use CPU devices. This will be the slowest option but can be useful in dev/test scenarios where GPUs
might not be available.
.. code-block:: console
$ archgw up --local-cpu
Cloud Serving (GPU - Blazing Fast)
----------------------------------
The command below instructs Arch to intelligently use GPUs locally for fast intent detection, but default to
cloud serving for function calling and guardrails scenarios to dramatically improve the speed and overall performance
of your applications.
.. code-block:: console
$ archgw up
.. Note::
Arch's model serving in the cloud is priced at $0.05M/token (156x cheaper than GPT-4o) with average latency
of 200ms (10x faster than GPT-4o). Please refer to our :ref:`Get Started <quickstart>` to know
how to generate API keys for model serving
View file
@ -1,127 +0,0 @@
.. _arch_overview_prompt_handling:
Prompts
=======
Arch's primary design point is to securely accept, process and handle prompts. To do that effectively,
Arch relies on Envoy's HTTP `connection management <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/http/http_connection_management>`_,
subsystem and its **prompt handler** subsystem engineered with purpose-built LLMs to
implement critical functionality on behalf of developers so that you can stay focused on business logic.
Arch's **prompt handler** subsystem interacts with the **model subsystem** through Envoy's cluster manager system to ensure robust, resilient and fault-tolerant experience in managing incoming prompts.
.. seealso::
Read more about the :ref:`model subsystem <model_serving>` and how the LLMs are hosted in Arch.
Messages
--------
Arch accepts messages directly from the body of the HTTP request in a format that follows the `Hugging Face Messages API <https://huggingface.co/docs/text-generation-inference/en/messages_api>`_.
This design allows developers to pass a list of messages, where each message is represented as a dictionary
containing two key-value pairs:
- **Role**: Defines the role of the message sender, such as "user" or "assistant".
- **Content**: Contains the actual text of the message.
Prompt Guard
-----------------
Arch is engineered with `Arch-Guard <https://huggingface.co/collections/katanemo/arch-guard-6702bdc08b889e4bce8f446d>`_, an industry leading safety layer, powered by a
compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts -
ensuring that unauthorized or harmful behaviors are intercepted early in the process.
To add jailbreak guardrails, see example below:
.. literalinclude:: ../includes/arch_config.yaml
:language: yaml
:linenos:
:lines: 1-25
:emphasize-lines: 21-25
:caption: Example Configuration
.. Note::
As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard,
and add support for additional safety checks defined by developers and hazardous categories like, violent crimes, privacy, hate,
etc. To offer feedback on our roadmap, please visit our `github page <https://github.com/orgs/katanemo/projects/1>`_
Prompt Targets
--------------
Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation
and identifies where to forward the conversation to via its ``prompt target`` primitive. Prompt targets are endpoints
that receive prompts that are processed by Arch. For example, Arch enriches incoming prompts with metadata like knowing
when a user's intent has changed so that you can build faster, more accurate RAG apps.
Configuring ``prompt_targets`` is simple. See example below:
.. literalinclude:: ../includes/arch_config.yaml
:language: yaml
:linenos:
:emphasize-lines: 39-53
:caption: Example Configuration
.. seealso::
Check :ref:`Prompt Target <prompt_target>` for more details!
Intent Matching
^^^^^^^^^^^^^^^
Arch uses fast text embedding and intent recognition approaches to first detect the intent of each incoming prompt.
This intent matching phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint.
Arch's intent matching framework considers both the name and description of each prompt target, and uses a composite matching score between embedding similarity and intent classification scores to enhance accuracy in forwarding decisions.
- **Intent Recognition**: NLI techniques further refine the matching process by evaluating the semantic alignment between the prompt and potential targets.
- **Text Embedding**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match, ensuring that the prompt is handled by the correct downstream service.
Agentic Apps via Prompt Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To support agentic apps, like scheduling travel plans or sharing comments on a document - via prompts, Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of prompts) needed by a downstream backend API or function call before calling it directly.
For more details on how you can build agentic applications using Arch, see our full guide :ref:`here <arch_agent_guide>`:
.. Note::
`Arch-Function <https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68>`_ is a collection of dedicated agentic models engineered in Arch to extract information from a (set of) prompts and executes necessary backend API calls.
This allows for efficient handling of agentic tasks, such as scheduling data retrieval, by dynamically interacting with backend services.
Arch-Function achieves state-of-the-art performance, comparable with frontier models like Claude Sonnet 3.5 and GPT-4, while being 44x cheaper ($0.10M/token hosted) and 10x faster (p50 latencies of 200ms).
Prompting LLMs
--------------
Arch is a single piece of software that is designed to manage both ingress and egress prompt traffic, drawing its distributed proxy nature from the robust `Envoy <https://envoyproxy.io>`_.
This makes it extremely efficient and capable of handling upstream connections to LLMs.
If your application is originating calls to an API-based LLM, simply use the OpenAI client and configure it with Arch.
By sending traffic through Arch, you can propagate traces, manage and monitor traffic, apply rate limits, and utilize a large set of traffic management capabilities in a centralized way.
.. Attention::
When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the
``llm_providers`` configuration section in the ``arch_config.yml`` file. Arch binds itself to a local address such as
``127.0.0.1:12000``.
Example: Using OpenAI Client with Arch as an Egress Gateway
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: python
import openai
# Set the OpenAI API base URL to the Arch gateway endpoint
openai.api_base = "http://127.0.0.1:12000"
# No need to set openai.api_key since it's configured in Arch's gateway
# Use the OpenAI client as usual
response = openai.Completion.create(
model="text-davinci-003",
prompt="What is the capital of France?"
)
print("OpenAI Response:", response.choices[0].text.strip())
In these examples, the OpenAI client is used to send traffic directly through the Arch egress proxy to the LLM of your choice, such as OpenAI.
The OpenAI client is configured to route traffic via Arch by setting the proxy to ``127.0.0.1:12000``, assuming Arch is running locally and bound to that address and port.
This setup allows you to take advantage of Arch's advanced traffic management features while interacting with LLM APIs like OpenAI.
View file
@ -1,170 +0,0 @@
.. _lifecycle_of_a_request:
Request Lifecycle
=================
Below we describe the events in the lifecycle of a request passing through an Arch gateway instance. We first
describe how Arch fits into the request path and then the internal events that take place following
the arrival of a request at Arch from downstream clients. We follow the request until the corresponding
dispatch upstream and the response path.
.. image:: /_static/img/network-topology-ingress-egress.jpg
:width: 100%
:align: center
Terminology
-----------
We recommend that you get familiar with some of the :ref:`terminology <arch_terminology>` used in Arch
before reading this section.
Network topology
----------------
How a request flows through the components in a network (including Arch) depends on the network's topology.
Arch can be used in a wide variety of networking topologies. We focus on the inner operation of Arch below,
but briefly we address how Arch relates to the rest of the network in this section.
- **Downstream(Ingress)** listeners take requests from downstream clients like a web UI or clients that forward
prompts to your local application; responses from the application flow back through Arch to the downstream.
- **Upstream(Egress)** listeners take requests from the application and forward them to LLMs.
.. image:: /_static/img/network-topology-ingress-egress.jpg
:width: 100%
:align: center
In practice, Arch can be deployed on the edge and as an internal load balancer between AI agents. A request path may
traverse multiple Arch gateways:
.. image:: /_static/img/network-topology-agent.jpg
:width: 100%
:align: center
High level architecture
-----------------------
Arch is a set of **two** self-contained processes that are designed to run alongside your application servers
(or on a separate server connected to your application servers via a network). The first process is designated
to manage HTTP-level networking and connection management concerns (protocol management, request id generation,
header sanitization, etc.), and the other process is for **model serving**, which helps Arch make intelligent
decisions about the incoming prompts. The model server hosts the purpose-built LLMs to
manage several critical, but undifferentiated, prompt related tasks on behalf of developers.
The request processing path in Arch has three main parts:
* :ref:`Listener subsystem <arch_overview_listeners>` which handles **downstream** and **upstream** request
processing. It is responsible for managing the downstream (ingress) and the upstream (egress) request
lifecycle. The downstream and upstream HTTP/2 codec lives here.
* :ref:`Prompt handler subsystem <arch_overview_prompt_handling>` which is responsible for selecting and
forwarding prompts to ``prompt_targets``, and establishes the lifecycle of any **upstream** connection to a
hosted endpoint that implements domain-specific business logic for incoming prompts. This is where knowledge
of targets and endpoint health, load balancing and connection pooling exists.
* :ref:`Model serving subsystem <model_serving>` which helps Arch make intelligent decisions about the
incoming prompts. The model server is designed to call the purpose-built LLMs in Arch.
The three subsystems are bridged with the HTTP router filter and the cluster manager subsystems of Envoy.
Also, Arch utilizes `Envoy event-based thread model <https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310>`_.
A main thread is responsible for the server lifecycle, configuration processing, stats, etc. and some number of
:ref:`worker threads <arch_overview_threading>` process requests. All threads operate around an event loop (`libevent <https://libevent.org/>`_)
and any given downstream TCP connection will be handled by exactly one worker thread for its lifetime. Each worker
thread maintains its own pool of TCP connections to upstream endpoints.
Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
enables scaling to very high core count CPUs.
Configuration
-------------
Today, Arch only supports a static bootstrap configuration file for simplicity:
.. literalinclude:: ../includes/arch_config.yaml
:language: yaml
Request Flow (Ingress)
----------------------
A brief outline of the lifecycle of a request and response using the example configuration above:
1. **TCP Connection Establishment**:
A TCP connection from downstream is accepted by an Arch listener running on a worker thread.
The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS,
decrypts incoming data for processing.
2. **Prompt Guardrails Check**:
Arch first checks the incoming prompts for guardrails such as jailbreak attempts. This ensures
that harmful or unwanted behaviors are detected early in the request processing pipeline.
3. **Intent Matching**:
The decrypted data stream is de-framed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs
intent matching via its **prompt-handler** subsystem using the name and description of the defined prompt targets,
determining which endpoint should handle the prompt.
4. **Parameter Gathering with Arch-Function**:
If a prompt target requires specific parameters, Arch engages Arch-FC to extract the necessary details
from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.
5. **API Call Execution**:
Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified,
load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint.
6. **Default Summarization by Upstream LLM**:
By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization.
This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval Augmented Generation)
and agentic applications.
7. **Error Handling and Forwarding**:
Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to
designated error targets. Error details are communicated through specific headers to the application:
- ``X-Function-Error-Code``: Code indicating the type of function call error.
- ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
- Additional headers carry messages and timestamps to aid in debugging and logging.
8. **Response Handling**:
The upstream endpoint's TLS transport socket encrypts the response, which is then proxied back downstream.
Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
Request Flow (Egress)
---------------------
A brief outline of the lifecycle of a request and response in the context of egress traffic from an application to Large Language Models (LLMs) via Arch:
1. **HTTP Connection Establishment to LLM**:
Arch initiates an HTTP connection to the upstream LLM service. This connection is handled by Arch's egress listener
running on a worker thread. The connection typically uses a secure transport protocol such as HTTPS, ensuring the
prompt data is encrypted before being sent to the LLM service.
2. **Rate Limiting**:
Before sending the request to the LLM, Arch applies rate-limiting policies to ensure that the upstream LLM service
is not overwhelmed by excessive traffic. Rate limits are enforced per client or service, ensuring fair usage and
preventing accidental or malicious overload. If the rate limit is exceeded, Arch may return an appropriate HTTP
error (e.g., 429 Too Many Requests) without sending the prompt to the LLM.
3. **Load Balancing to (hosted) LLM Endpoints**:
After passing the rate-limiting checks, Arch routes the prompt to the appropriate LLM endpoint.
If multiple LLM provider instances are available, load balancing is performed to distribute traffic evenly
across the instances. Arch checks the health of the LLM endpoints using circuit breakers and health checks,
ensuring that the prompt is only routed to a healthy, responsive instance.
4. **Response Reception and Forwarding**:
Once the LLM processes the prompt, Arch receives the response from the LLM service. The response is typically a
generated text, completion, or summarization. Upon reception, Arch decrypts (if necessary) and handles the response,
passing it through any egress processing pipeline defined by the application, such as logging or additional response filtering.
Post-request processing
^^^^^^^^^^^^^^^^^^^^^^^^
Once a request completes, the stream is destroyed. The following also takes place:
* The post-request :ref:`monitoring <monitoring>` stats are updated (e.g. timing, active requests, upgrades, health checks).
Some statistics are updated earlier however, during request processing. Stats are batched and written by the main
thread periodically.
* :ref:`Access logs <arch_access_logging>` are written to the access log
* :ref:`Trace <arch_overview_tracing>` spans are finalized. If our example request was traced, a
trace span, describing the duration and details of the request would be created by the HCM when
processing request headers and then finalized by the HCM during post-request processing.
View file
@ -1,50 +0,0 @@
.. _arch_terminology:
Terminology
============
A few definitions before we dive into the main architecture documentation. Also note, Arch borrows from Envoy's terminology
to keep things consistent in logs and traces, and introduces and clarifies concepts as they relate to LLM applications.
**Agent**: An application that uses LLMs to handle wide-ranging tasks from users via prompts. This could be as simple
as retrieving or summarizing data from an API, or being able to trigger complex actions like adjusting ad campaigns, or
changing travel plans via prompts.
**Arch Config**: Arch operates based on a configuration that controls the behavior of a single instance of the Arch gateway.
This is where you enable capabilities like LLM routing, fast function calling (via prompt_targets), applying guardrails, and enabling critical
features like metrics and tracing. For the full configuration reference of `arch_config.yaml` see :ref:`here <configuration_reference>`.
**Downstream(Ingress)**: A downstream client (web application, etc.) connects to Arch, sends prompts, and receives responses.
**Upstream(Egress)**: An upstream host that receives connections and prompts from Arch, and returns context or responses for a prompt.
.. image:: /_static/img/network-topology-ingress-egress.jpg
:width: 100%
:align: center
**Listener**: A :ref:`listener <arch_overview_listeners>` is a named network location (e.g., port, address, path etc.) that Arch
listens on to process prompts before forwarding them to your application server endpoints. Arch enables you to configure one listener
for downstream connections (like port 80, 443) and creates a separate internal listener for calls that initiate from your application
code to LLMs.
.. Note::
When you start Arch, you specify a listener address/port that you want to bind downstream. But, Arch uses a predefined port
that you can use (``127.0.0.1:12000``) to proxy egress calls originating from your application to LLMs (API-based or hosted).
For more details, check out :ref:`LLM providers <llm_providers>`.
**Prompt Target**: Arch offers a primitive called :ref:`prompt target <prompt_target>` to help separate business logic from
undifferentiated work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Arch.
For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt so that you
can build faster, more accurate retrieval (RAG) apps. To support agentic apps, like scheduling travel plans or sharing comments on a
document - via prompts, Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of
prompts) needed by a downstream backend API or function call before calling it directly.
**Model Serving**: Arch is a set of `two` self-contained processes that are designed to run alongside your application servers
(or on a separate host connected via a network). The :ref:`model serving <model_serving>` process helps Arch make intelligent decisions
about the incoming prompts. The model server is designed to call the (fast) purpose-built LLMs in Arch.
**Error Target**: :ref:`Error targets <error_target>` are those endpoints that receive forwarded errors from Arch when issues arise,
such as failing to properly call a function/API, detecting violations of guardrails, or encountering other processing errors.
These errors are communicated to the application via headers ``X-Arch-[ERROR-TYPE]``, allowing it to handle the errors gracefully
and take appropriate actions.
View file
@ -5,6 +5,8 @@
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
import os
import sys
from dataclasses import asdict
from sphinx.application import Sphinx
@ -12,10 +14,10 @@ from sphinx.util.docfields import Field
from sphinxawesome_theme import ThemeOptions
from sphinxawesome_theme.postprocess import Icons
project = "Arch Docs"
project = "Plano Docs"
copyright = "2025, Katanemo Labs, Inc"
author = "Katanemo Labs, Inc"
release = " v0.3.22"
release = " v0.4"
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
@ -34,6 +36,8 @@ extensions = [
"sphinx.ext.viewcode",
"sphinx_sitemap",
"sphinx_design",
# Local extensions
"llms_txt",
]
# Paths that contain templates, relative to this directory.
@ -43,6 +47,9 @@ templates_path = ["_templates"]
# to ignore when looking for source files.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# Allow importing extensions from docs/source/_ext (robust to current working directory)
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "_ext")))
# -- Options for HTML output -------------------------------------------------
html_theme = "sphinxawesome_theme" # You can change the theme to 'sphinx_rtd_theme' or another of your choice.
@ -72,7 +79,7 @@ theme_options = ThemeOptions(
awesome_external_links=True,
extra_header_link_icons={
"repository on GitHub": {
"link": "https://github.com/katanemo/arch",
"link": "https://github.com/katanemo/plano",
"icon": (
'<svg height="26px" style="margin-top:-2px;display:inline" '
'viewBox="0 0 45 44" '
@ -107,6 +114,7 @@ html_theme_options = asdict(theme_options)
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
html_css_files = ["css/custom.css"]
pygments_style = "lovelace"
pygments_style_dark = "github-dark"
@ -115,7 +123,7 @@ sitemap_url_scheme = "{link}"
# Add this configuration at the bottom of your conf.py
html_context = {
"google_analytics_id": "G-K2LXXSX6HB", # Replace with your Google Analytics tracking ID
"google_analytics_id": "G-EH2VW19FXE", # Replace with your Google Analytics tracking ID
}
templates_path = ["_templates"]
@ -142,5 +150,3 @@ def setup(app: Sphinx) -> None:
)
],
)
app.add_css_file("_static/custom.css")
View file
@ -1,70 +0,0 @@
.. _intro_to_arch:
Intro to Arch
=============
AI demos are easy to build. But past the thrill of a quick hack, you are left building, maintaining and scaling low-level plumbing code for agents that slows down AI innovation.
For example:
- You want to build specialized agents, but get stuck writing **routing and handoff** code.
- You get bogged down with prompt engineering work to **clarify user intent and validate inputs**.
- You want to **quickly and safely use new LLMs** but get stuck writing integration code.
- You waste cycles writing and maintaining **observability** code, when it can be transparent.
- You want to **apply guardrails**, but have to write custom code for each prompt and LLM.
Arch is designed to solve these problems by providing a unified, out-of-process architecture that integrates with your existing application stack, enabling you to focus on building high-level features rather than plumbing — all without locking you into a framework.
.. figure:: /_static/img/arch_network_diagram_high_level.png
:width: 100%
:align: center
High-level network flow of where Arch Gateway sits in your agentic stack. Designed for both ingress and egress prompt traffic.
`Arch <https://github.com/katanemo/arch>`_ is a smart edge and AI gateway for AI-native apps - built by the contributors of Envoy Proxy with the belief that:
*Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests
including secure handling, intelligent routing, robust observability, and integration with backend (API)
systems for personalization - all outside business logic.*
In practice, achieving the above goal is incredibly difficult. Arch attempts to do so by providing the following high level features:
**Out-of-process architecture, built on** `Envoy <http://envoyproxy.io/>`_:
Arch takes a dependency on Envoy and is a self-contained process that is designed to run alongside your application servers.
Arch uses Envoy's HTTP connection management subsystem, HTTP L7 filtering and telemetry capabilities to extend the functionality exclusively for prompts and LLMs.
This gives Arch several advantages:
* Arch builds on Envoy's proven success. Envoy is used at massive scale by the leading technology companies of our time including `AirBnB <https://www.airbnb.com>`_, `Dropbox <https://www.dropbox.com>`_, `Google <https://www.google.com>`_, `Reddit <https://www.reddit.com>`_, `Stripe <https://www.stripe.com>`_, etc. It's battle-tested and scales linearly with usage and enables developers to focus on what really matters: application features and business logic.
* Arch works with any application language. A single Arch deployment can act as gateway for AI applications written in Python, Java, C++, Go, Php, etc.
* Arch can be deployed and upgraded quickly across your infrastructure transparently without the horrid pain of deploying library upgrades in your applications.
**Engineered with Fast Task-Specific LLMs (TLMs):** Arch is engineered with specialized LLMs that are designed for the fast, cost-effective and accurate handling of prompts.
These LLMs are designed to be best-in-class for critical tasks like:
* **Function Calling:** Arch helps you easily personalize your applications by enabling calls to application-specific (API) operations via user prompts.
This involves any predefined functions or APIs you want to expose to users to perform tasks, gather information, or manipulate data.
With function calling, you have flexibility to support "agentic" experiences tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.
Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation to gather any missing parameters and makes API calls so that you can focus on writing business logic.
For more details, read :ref:`Function Calling <function_calling>`.
* **Prompt Guard:** Arch helps you improve the safety of your application by applying prompt guardrails in a centralized way for better governance hygiene.
With prompt guardrails you can prevent ``jailbreak attempts`` present in user's prompts without having to write a single line of code.
To learn more about how to configure guardrails available in Arch, read :ref:`Prompt Guard <prompt_guard>`.
**Traffic Management:** Arch offers several capabilities for LLM calls originating from your applications, including smart retries on errors from upstream LLMs, and automatic cut-over to other LLMs configured in Arch for continuous availability and disaster recovery scenarios.
Arch extends Envoy's `cluster subsystem <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager>`_ to manage upstream connections to LLMs so that you can build resilient AI applications.
**Front/edge Gateway:** There is substantial benefit in using the same software at the edge (observability, traffic shaping algorithms, applying guardrails, etc.) as for outbound LLM inference use cases.
Arch has the feature set that makes it exceptionally well suited as an edge gateway for AI applications.
This includes TLS termination, applying guardrail early in the process, intelligent parameter gathering from prompts, and prompt-based routing to backend APIs.
**Best-In Class Monitoring:** Arch offers several monitoring metrics that help you understand three critical aspects of
your application: latency, token usage, and error rates by an upstream LLM provider. Latency measures the speed at which
your application is responding to users, which includes metrics like time to first token (TFT), time per output token (TOT)
metrics, and the total latency as perceived by users.
**End-to-End Tracing:** Arch propagates trace context using the W3C Trace Context standard, specifically through the ``traceparent`` header.
This allows each component in the system to record its part of the request flow, enabling end-to-end tracing across the entire application.
By using OpenTelemetry, Arch ensures that developers can capture this trace data consistently and in a format compatible with various observability tools.
For more details, read :ref:`Tracing <arch_overview_tracing>`.
View file
@ -0,0 +1,56 @@
.. _intro_to_plano:
Intro to Plano
==============
Building agentic demos is easy. Delivering agentic applications safely, reliably, and repeatably to production is hard. After a quick hack, you end up building the "hidden AI middleware" to reach production: routing logic to reach the right agent, guardrail hooks for safety and moderation, evaluation and observability glue for continuous learning, and model/provider quirks — scattered across frameworks and application code.
Plano solves this by moving core delivery concerns into a unified, out-of-process dataplane. Core capabilities:
- **🚦 Orchestration:** Low-latency orchestration between agents, with the ability to add new agents without changing app code. When routing lives inside app code, it becomes hard to evolve and easy to duplicate. Moving orchestration into a centrally managed dataplane lets you change strategies without touching your agents, improving performance and reducing maintenance burden while avoiding tight coupling.
- **🛡️ Guardrails & Memory Hooks:** Apply jailbreak protection, content policies, and context workflows (e.g., rewriting, retrieval, redaction) once via :ref:`Filter Chains <filter_chain>` at the dataplane. Instead of re-implementing these in every agentic service, you get centralized governance, reduced code duplication, and consistent behavior across your stack.
- **🔗 Model Agility:** Route by model, alias (semantic names), or automatically via preferences so agents stay decoupled from specific providers. Swap or add models without refactoring prompts, tool-calling, or streaming handlers throughout your codebase by using Plano's smart routing and unified API.
- **🕵 Agentic Signals™:** Zero-code capture of behavior signals, traces, and metrics consistently across every agent. Rather than stitching together logging and metrics per framework, Plano surfaces traces, token usage, and learning signals in one place so you can iterate safely.
Built by core contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy.io/>`_, Plano gives you a production-grade foundation for agentic applications. It helps **developers** stay focused on the core logic of their agents, helps **product teams** shorten feedback loops for learning, and helps **engineering teams** standardize policy and safety across agents and LLMs. Plano is grounded in open protocols (de facto: OpenAI-style v1/responses, de jure: MCP) and proven patterns like sidecar deployments, so it plugs in cleanly while remaining robust, scalable, and flexible.
In practice, achieving the above goal is incredibly difficult. Plano attempts to do so by providing the following high level features:
.. figure:: /_static/img/plano_network_diagram_high_level.png
:width: 100%
:align: center
High-level network flow of where Plano sits in your agentic stack. Designed for both ingress and egress prompt traffic.
**Engineered with Task-Specific LLMs (TLMs):** Plano is engineered with specialized LLMs that are designed for fast, cost-effective and accurate handling of prompts.
These LLMs are designed to be best-in-class for critical tasks like:
* **Agent Orchestration:** `Plano-Orchestrator <https://huggingface.co/collections/katanemo/plano-orchestrator>`_ is a family of state-of-the-art routing and orchestration models that decide which agent(s) or LLM(s) should handle each request, and in what sequence. Built for real-world multi-agent deployments, it analyzes user intent and conversation context to make precise routing and orchestration decisions while remaining efficient enough for low-latency production use across general chat, coding, and long-context multi-turn conversations.
* **Function Calling:** Plano lets you expose application-specific (API) operations as tools so that your agents can update records, fetch data, or trigger deterministic workflows via prompts. Under the hood this is backed by Arch-Function-Chat; for more details, read :ref:`Function Calling <function_calling>`.
* **Guardrails:** Plano helps you improve the safety of your application by applying prompt guardrails in a centralized way for better governance hygiene.
With prompt guardrails you can prevent ``jailbreak attempts`` present in user's prompts without having to write a single line of code.
To learn more about how to configure guardrails available in Plano, read :ref:`Prompt Guard <prompt_guard>`.
**Model Proxy:** Plano offers several capabilities for LLM calls originating from your applications, including smart retries on errors from upstream LLMs and automatic cut-over to other LLMs configured in Plano for continuous availability and disaster recovery scenarios. From your application's perspective you keep using an OpenAI-compatible API, while Plano owns resiliency and failover policies in one place.
Plano extends Envoy's `cluster subsystem <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager>`_ to manage upstream connections to LLMs so that you can build resilient, provider-agnostic AI applications.
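For example, here is a minimal sketch of pointing an OpenAI client at Plano, assuming the local egress address ``127.0.0.1:12000`` referenced elsewhere in these docs and the ``openai/gpt-4o`` provider shown in the quickstart config:

.. code-block:: python

    from openai import OpenAI

    # Point the OpenAI-compatible client at Plano's local model proxy.
    # Plano holds the real provider keys, so a placeholder key is fine here.
    client = OpenAI(base_url="http://127.0.0.1:12000/v1", api_key="placeholder")

    response = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(response.choices[0].message.content)

Because the interface stays OpenAI-compatible, swapping the underlying provider becomes a Plano configuration change rather than an application change.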
**Edge Proxy:** There is substantial benefit in using the same software at the edge (observability, traffic shaping algorithms, applying guardrails, etc.) as for outbound LLM inference use cases. Plano has the feature set that makes it exceptionally well suited as an edge gateway for AI applications.
This includes TLS termination, applying guardrails early in the request flow, and intelligently deciding which agent(s) or LLM(s) should handle each request and in what sequence. In practice, you configure listeners and policies once, and every inbound and outbound call flows through the same hardened gateway.
**Zero-Code Agent Signals™ & Tracing:** Zero-code capture of behavior signals, traces, and metrics consistently across every agent. Plano propagates trace context using the W3C Trace Context standard, specifically through the ``traceparent`` header. This allows each component in the system to record its part of the request flow, enabling end-to-end tracing across the entire application. By using OpenTelemetry, Plano ensures that developers can capture this trace data consistently and in a format compatible with various observability tools.
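As a sketch (again assuming the local proxy address used elsewhere in these docs), an application can pass its own ``traceparent`` header on a request through Plano and have it propagated upstream:

.. code-block:: python

    import requests

    # W3C Trace Context header: version - trace-id - parent-span-id - trace-flags.
    headers = {
        "Content-Type": "application/json",
        "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    }
    payload = {
        "model": "openai/gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    }
    resp = requests.post(
        "http://127.0.0.1:12000/v1/chat/completions",
        json=payload,
        headers=headers,
        timeout=30,
    )
    print(resp.status_code)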
**Best-In Class Monitoring:** Plano offers several monitoring metrics that help you understand three critical aspects of your application: latency, token usage, and error rates by an upstream LLM provider. Latency measures the speed at which your application is responding to users, which includes metrics like time to first token (TFT), time per output token (TOT) metrics, and the total latency as perceived by users.
**Out-of-process architecture, built on** `Envoy <http://envoyproxy.io/>`_:
Plano takes a dependency on Envoy and is a self-contained process that is designed to run alongside your application servers. Plano uses Envoy's HTTP connection management subsystem, HTTP L7 filtering and telemetry capabilities to extend the functionality exclusively for prompts and LLMs.
This gives Plano several advantages:
* Plano builds on Envoy's proven success. Envoy is used at massive scale by the leading technology companies of our time including `AirBnB <https://www.airbnb.com>`_, `Dropbox <https://www.dropbox.com>`_, `Google <https://www.google.com>`_, `Reddit <https://www.reddit.com>`_, `Stripe <https://www.stripe.com>`_, etc. It's battle-tested and scales linearly with usage and enables developers to focus on what really matters: application features and business logic.
* Plano works with any application language. A single Plano deployment can act as a gateway for AI applications written in Python, Java, C++, Go, PHP, etc.
* Plano can be deployed and upgraded quickly across your infrastructure transparently without the horrid pain of deploying library upgrades in your applications.
View file
@ -1,38 +1,38 @@
.. _overview:
Overview
============
`Arch <https://github.com/katanemo/arch>`_ is a smart edge and AI gateway for AI agents - one that is natively designed to handle and process prompts, not just network traffic.
========
`Plano <https://github.com/katanemo/plano>`_ is delivery infrastructure for agentic apps. A models-native proxy server and data plane designed to help you build agents faster, and deliver them reliably to production.
Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy.io/>`_, Arch handles the *pesky low-level work* in building agentic apps — like applying guardrails, clarifying vague user input, routing prompts to the right agent, and unifying access to any LLM. Its a protocol-friendly and framework-agnostic infrastructure layer designed to help you build and ship agentic apps faster.
Plano pulls out the rote plumbing work (the "hidden AI middleware") and decouples you from brittle, ever-changing framework abstractions. It centralizes what shouldn't be bespoke in every codebase like agent routing and orchestration, rich agentic signals and traces for continuous improvement, guardrail filters for safety and moderation, and smart LLM routing APIs for UX and DX agility. Use any language or AI framework, and ship agents to production faster with Plano.
In this documentation, you will learn how to quickly set up Arch to trigger API calls via prompts, apply prompt guardrails without writing any application-level logic,
simplify the interaction with upstream LLMs, and improve observability all while simplifying your application development process.
Built by core contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy.io/>`_, Plano gives you a production-grade foundation for agentic applications. It helps **developers** stay focused on the core logic of their agents, helps **product teams** shorten feedback loops for learning, and helps **engineering teams** standardize policy and safety across agents and LLMs. Plano is grounded in open protocols (de facto: OpenAI-style v1/responses, de jure: MCP) and proven patterns like sidecar deployments, so it plugs in cleanly while remaining robust, scalable, and flexible.
.. figure:: /_static/img/arch_network_diagram_high_level.png
In this documentation, you'll learn how to set up Plano quickly, trigger API calls via prompts, apply guardrails without tight coupling with application code, simplify model and provider integration, and improve observability — so that you can focus on what matters most: the core product logic of your agents.
.. figure:: /_static/img/plano_network_diagram_high_level.png
:width: 100%
:align: center
High-level network flow of where Arch Gateway sits in your agentic stack. Designed for both ingress and egress prompt traffic.
High-level network flow of where Plano sits in your agentic stack. Designed for both ingress and egress traffic.
Get Started
-----------
This section introduces you to Arch and helps you get set up quickly:
This section introduces you to Plano and helps you get set up quickly:
.. grid:: 3
.. grid-item-card:: :octicon:`apps` Overview
:link: overview.html
Overview of Arch and Doc navigation
Overview of Plano and Doc navigation
.. grid-item-card:: :octicon:`book` Intro to Arch
:link: intro_to_arch.html
.. grid-item-card:: :octicon:`book` Intro to Plano
:link: intro_to_plano.html
Explore Arch's features and developer workflow
Explore Plano's features and developer workflow
.. grid-item-card:: :octicon:`rocket` Quickstart
:link: quickstart.html
@ -43,61 +43,61 @@ This section introduces you to Arch and helps you get set up quickly:
Concepts
--------
Deep dive into essential ideas and mechanisms behind Arch:
Deep dive into essential ideas and mechanisms behind Plano:
.. grid:: 3
.. grid-item-card:: :octicon:`package` Tech Overview
:link: ../concepts/tech_overview/tech_overview.html
.. grid-item-card:: :octicon:`package` Agents
:link: ../concepts/agents.html
Learn about the technology stack
Learn about how to build and scale agents with Plano
.. grid-item-card:: :octicon:`webhook` LLM Providers
.. grid-item-card:: :octicon:`webhook` Model Providers
:link: ../concepts/llm_providers/llm_providers.html
Explore Arch's LLM integration options
Explore Plano's LLM integration options
.. grid-item-card:: :octicon:`workflow` Prompt Target
:link: ../concepts/prompt_target.html
Understand how Arch handles prompts
Understand how Plano handles prompts
Guides
------
Step-by-step tutorials for practical Arch use cases and scenarios:
Step-by-step tutorials for practical Plano use cases and scenarios:
.. grid:: 3
.. grid-item-card:: :octicon:`shield-check` Prompt Guard
.. grid-item-card:: :octicon:`shield-check` Guardrails
:link: ../guides/prompt_guard.html
Instructions on securing and validating prompts
.. grid-item-card:: :octicon:`code-square` Function Calling
:link: ../guides/function_calling.html
.. grid-item-card:: :octicon:`code-square` LLM Routing
:link: ../guides/llm_router.html
A guide to effective function calling
A guide to effective model selection strategies
.. grid-item-card:: :octicon:`issue-opened` Observability
:link: ../guides/observability/observability.html
.. grid-item-card:: :octicon:`issue-opened` State Management
:link: ../guides/state.html
Learn to monitor and troubleshoot Arch
Learn to manage conversation and application state
Build with Arch
---------------
Build with Plano
----------------
For developers extending and customizing Arch for specialized needs:
End-to-end examples demonstrating how to build agentic applications using Plano:
.. grid:: 2
.. grid-item-card:: :octicon:`dependabot` Agentic Workflow
:link: ../build_with_arch/agent.html
.. grid-item-card:: :octicon:`dependabot` Build Agentic Apps
:link: ../get_started/quickstart.html#build-agentic-apps-with-plano
Discover how to create and manage custom agents within Arch
Discover how to create and manage custom agents within Plano
.. grid-item-card:: :octicon:`stack` RAG Application
:link: ../build_with_arch/rag.html
.. grid-item-card:: :octicon:`stack` Build Multi-LLM Apps
:link: ../get_started/quickstart.html#use-plano-as-a-model-proxy-gateway
Integrate RAG for knowledge-driven responses
Learn how to route LLM calls through Plano for enhanced control and observability
View file
@ -1,10 +1,18 @@
.. _quickstart:
Quickstart
================
==========
Follow this guide to learn how to quickly set up Arch and integrate it into your generative AI applications.
Follow this guide to learn how to quickly set up Plano and integrate it into your generative AI applications. You can:
- :ref:`Build agents <quickstart_agents>` for multi-step workflows (e.g., travel assistants with flights and hotels).
- :ref:`Call deterministic APIs via prompt targets <quickstart_prompt_targets>` to turn instructions directly into function calls.
- :ref:`Use Plano as a model proxy (Gateway) <llm_routing_quickstart>` to standardize access to multiple LLM providers.
.. note::
This quickstart assumes basic familiarity with agents and prompt targets from the Concepts section. For background, see :ref:`Agents <agents>` and :ref:`Prompt Target <prompt_target>`.
The full agent and backend API implementations used here are available in the `plano-quickstart repository <https://github.com/plano-ai/plano-quickstart>`_. This guide focuses on wiring and configuring Plano (orchestration, prompt targets, and the model proxy), not application code.
Prerequisites
-------------
@ -15,32 +23,113 @@ Before you begin, ensure you have the following:
2. `Docker Compose <https://docs.docker.com/compose/install/>`_ (v2.29)
3. `Python <https://www.python.org/downloads/>`_ (v3.10+)
Arch's CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:
Plano's CLI allows you to manage and interact with Plano efficiently. To install the CLI, simply run the following command:
.. tip::
We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that ``archgw`` and its dependencies do not interfere with other packages on your system.
We recommend that developers create a new Python virtual environment to isolate dependencies before installing Plano. This ensures that ``plano`` and its dependencies do not interfere with other packages on your system.
.. code-block:: console
$ python -m venv venv
$ source venv/bin/activate # On Windows, use: venv\Scripts\activate
$ pip install archgw==0.3.22
$ pip install plano==0.4.0
Build AI Agent with Arch Gateway
--------------------------------
Build Agentic Apps with Plano
-----------------------------
In the following quickstart, we will show you how easy it is to build an AI agent with the Arch gateway. We will build a currency exchange agent using the following simple steps. For this demo, we will use `https://api.frankfurter.dev/` to fetch the latest prices for currencies and assume USD as the base currency.
Plano helps you build agentic applications in two complementary ways:
Step 1. Create arch config file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* **Orchestrate agents**: Let Plano decide which agent or LLM should handle each request and in what sequence.
* **Call deterministic backends**: Use prompt targets to turn natural-language prompts into structured, validated API calls.
Create ``arch_config.yaml`` file with the following content:
.. _quickstart_agents:
Building agents with Plano orchestration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Agents are where your business logic lives (the "inner loop"). Plano takes care of the "outer loop"—routing, sequencing, and managing calls across agents and LLMs.
At a high level, building agents with Plano looks like this:
1. **Implement your agent** in your framework of choice (Python, JS/TS, etc.), exposing it as an HTTP service.
2. **Route LLM calls through Plano's Model Proxy**, so all models share a consistent interface and observability.
3. **Configure Plano to orchestrate**: define which agent(s) can handle which kinds of prompts, and let Plano decide when to call an agent vs. an LLM.
This quickstart uses a simplified version of the Travel Booking Assistant; for the full multi-agent walkthrough, see :ref:`Orchestration <agent_routing>`.
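To make the first step concrete, here is a minimal sketch of an agent exposed as an HTTP service. It assumes FastAPI and the ``openai`` package, and reads an ``LLM_GATEWAY_ENDPOINT`` environment variable to reach Plano's model proxy; the framework choice and helper names are illustrative, not required by Plano.

.. code-block:: python

    import os

    from fastapi import FastAPI
    from openai import OpenAI
    from pydantic import BaseModel

    app = FastAPI()

    # All LLM calls go through Plano's model proxy so every model shares one interface.
    client = OpenAI(
        api_key="--",  # provider keys are managed by Plano, not by the agent
        base_url=os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1"),
    )


    class ChatRequest(BaseModel):
        # Mirrors the OpenAI chat completions request shape that Plano forwards.
        model: str | None = None
        messages: list[dict]


    @app.post("/v1/chat/completions")
    def chat_completions(req: ChatRequest):
        # The agent's "inner loop" lives here: call models or tools as needed,
        # then return an OpenAI-compatible response to Plano.
        response = client.chat.completions.create(
            model="openai/gpt-4o",
            messages=req.messages,
        )
        return response.model_dump()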
Step 1. Minimal orchestration config
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here is a minimal configuration that wires Plano-Orchestrator to two HTTP services: one for flights and one for hotels.
.. code-block:: yaml
version: v0.1.0
version: v0.1.0
agents:
- id: flight_agent
url: http://host.docker.internal:10520 # your flights service
- id: hotel_agent
url: http://host.docker.internal:10530 # your hotels service
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
listeners:
- type: agent
name: travel_assistant
port: 8001
router: plano_orchestrator_v1
agents:
- id: flight_agent
description: Search for flights and provide flight status.
- id: hotel_agent
description: Find hotels and check availability.
tracing:
random_sampling: 100
Step 2. Start your agents and Plano
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run your ``flight_agent`` and ``hotel_agent`` services (see :ref:`Orchestration <agent_routing>` for a full Travel Booking example), then start Plano with the config above:
.. code-block:: console
$ plano up plano_config.yaml
Plano will start the orchestrator and expose an agent listener on port ``8001``.
Step 3. Send a prompt and let Plano route
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now send a request to Plano using the OpenAI-compatible chat completions API—the orchestrator will analyze the prompt and route it to the right agent based on intent:
.. code-block:: bash
$ curl --header 'Content-Type: application/json' \
--data '{"messages": [{"role": "user","content": "Find me flights from SFO to JFK tomorrow"}], "model": "openai/gpt-4o"}' \
http://localhost:8001/v1/chat/completions
You can then ask a follow-up like "Also book me a hotel near JFK" and Plano-Orchestrator will route to ``hotel_agent``—your agents stay focused on business logic while Plano handles routing.
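The same request can also be made from Python with the OpenAI client pointed at the agent listener; a minimal sketch, assuming the ``openai`` package is installed:

.. code-block:: python

    from openai import OpenAI

    # Point the client at Plano's agent listener defined in the config above.
    client = OpenAI(api_key="--", base_url="http://localhost:8001/v1")

    response = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Find me flights from SFO to JFK tomorrow"}],
    )
    print(response.choices[0].message.content)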
.. _quickstart_prompt_targets:
Deterministic API calls with prompt targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next, we'll show Plano's deterministic API calling using a single prompt target. We'll build a currency exchange backend powered by `https://api.frankfurter.dev/`, assuming USD as the base currency.
Step 1. Create plano config file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create ``plano_config.yaml`` file with the following content:
.. code-block:: yaml
version: v0.1.0
listeners:
ingress_traffic:
@ -49,19 +138,13 @@ Create ``arch_config.yaml`` file with the following content:
message_format: openai
timeout: 30s
llm_providers:
model_providers:
- access_key: $OPENAI_API_KEY
model: openai/gpt-4o
system_prompt: |
You are a helpful assistant.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.
prompt_targets:
- name: currency_exchange
description: Get currency exchange rate from USD to other currencies
@ -88,16 +171,16 @@ Create ``arch_config.yaml`` file with the following content:
endpoint: api.frankfurter.dev:443
protocol: https
Step 2. Start arch gateway with currency conversion config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Step 2. Start Plano with currency conversion config
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sh
$ archgw up arch_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.1.5
$ plano up plano_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting plano cli version: 0.1.5
...
2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
2024-12-05 16:56:28,485 - cli.main - INFO - Starting arch model server and arch gateway
2024-12-05 16:56:28,485 - cli.main - INFO - Starting plano model server and plano gateway
...
2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!
@ -106,7 +189,7 @@ Once the gateway is up, you can start interacting with it at port 10000 using th
Some sample queries you can ask include: ``what is currency rate for gbp?`` or ``show me list of currencies for conversion``.
Step 3. Interacting with gateway using curl command
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here is a sample curl command you can use to interact:
@ -129,15 +212,17 @@ And to get the list of supported currencies:
"Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish Złoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."
Use Arch Gateway as LLM Router
------------------------------
.. _llm_routing_quickstart:
Step 1. Create arch config file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use Plano as a Model Proxy (Gateway)
------------------------------------
Arch operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, etc. Below is an example configuration that defines OpenAI and Mistral LLM providers.
Step 1. Create plano config file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create ``arch_config.yaml`` file with the following content:
Plano operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, etc. Below is an example configuration that defines OpenAI and Mistral LLM providers.
Create ``plano_config.yaml`` file with the following content:
.. code-block:: yaml
@ -150,7 +235,7 @@ Create ``arch_config.yaml`` file with the following content:
message_format: openai
timeout: 30s
llm_providers:
model_providers:
- access_key: $OPENAI_API_KEY
model: openai/gpt-4o
default: true
@ -158,19 +243,19 @@ Create ``arch_config.yaml`` file with the following content:
- access_key: $MISTRAL_API_KEY
model: mistral/ministral-3b-latest
Step 2. Start arch gateway
~~~~~~~~~~~~~~~~~~~~~~~~~~
Step 2. Start Plano
~~~~~~~~~~~~~~~~~~~
Once the config file is created, ensure that you have environment variables set up for ``MISTRAL_API_KEY`` and ``OPENAI_API_KEY`` (or these are defined in a ``.env`` file).
Start the Arch gateway:
Start Plano:
.. code-block:: console
$ archgw up arch_config.yaml
2024-12-05 11:24:51,288 - cli.main - INFO - Starting archgw cli version: 0.1.5
$ plano up plano_config.yaml
2024-12-05 11:24:51,288 - cli.main - INFO - Starting plano cli version: 0.1.5
2024-12-05 11:24:51,825 - cli.utils - INFO - Schema validation successful!
2024-12-05 11:24:51,825 - cli.main - INFO - Starting arch model server and arch gateway
2024-12-05 11:24:51,825 - cli.main - INFO - Starting plano
...
2024-12-05 11:25:16,131 - cli.core - INFO - Container is healthy!
@ -178,9 +263,9 @@ Step 3: Interact with LLM
~~~~~~~~~~~~~~~~~~~~~~~~~
Step 3.1: Using OpenAI Python client
++++++++++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Make outbound calls via the Arch gateway:
Make outbound calls via the Plano gateway:
.. code-block:: python
@ -188,14 +273,14 @@ Make outbound calls via the Arch gateway:
# Use the OpenAI client as usual
client = OpenAI(
# No need to set a specific openai.api_key since it's configured in Arch's gateway
# No need to set a specific openai.api_key since it's configured in Plano's gateway
api_key='--',
# Set the OpenAI API base URL to the Arch gateway endpoint
# Set the OpenAI API base URL to the Plano gateway endpoint
base_url="http://127.0.0.1:12000/v1"
)
response = client.chat.completions.create(
# we select model from arch_config file
# we select model from plano_config file
model="--",
messages=[{"role": "user", "content": "What is the capital of France?"}],
)
@ -203,7 +288,7 @@ Make outbound calls via the Arch gateway:
print("OpenAI Response:", response.choices[0].message.content)
Step 3.2: Using curl command
++++++++++++++++++++++++++++
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
@ -225,38 +310,13 @@ Step 3.2: Using curl command
],
}
You can override model selection using the ``x-arch-llm-provider-hint`` header. For example, to use Mistral, use the following curl command:
.. code-block:: bash
$ curl --header 'Content-Type: application/json' \
--header 'x-arch-llm-provider-hint: ministral-3b' \
--data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
http://localhost:12000/v1/chat/completions
{
...
"model": "ministral-3b-latest",
"choices": [
{
"messages": {
"role": "assistant",
"content": "The capital of France is Paris. It is the most populous city in France and is known for its iconic landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. Paris is also a major global center for art, fashion, gastronomy, and culture.",
},
...
}
],
...
}
Next Steps
==========
Congratulations! You've successfully set up Arch and made your first prompt-based request. To further enhance your GenAI applications, explore the following resources:
Congratulations! You've successfully set up Plano and made your first prompt-based request. To further enhance your GenAI applications, explore the following resources:
- :ref:`Full Documentation <overview>`: Comprehensive guides and references.
- `GitHub Repository <https://github.com/katanemo/arch>`_: Access the source code, contribute, and track updates.
- `Support <https://github.com/katanemo/arch#contact>`_: Get help and connect with the Arch community .
- `GitHub Repository <https://github.com/katanemo/plano>`_: Access the source code, contribute, and track updates.
- `Support <https://github.com/katanemo/plano#contact>`_: Get help and connect with the Plano community.
With Arch, building scalable, fast, and personalized GenAI applications has never been easier. Dive deeper into Arch's capabilities and start creating innovative AI-driven experiences today!
With Plano, building scalable, fast, and personalized GenAI applications has never been easier. Dive deeper into Plano's capabilities and start creating innovative AI-driven experiences today!
@ -1,105 +0,0 @@
.. _agent_routing:
Agent Routing and Hand Off
===========================
Agent Routing and Hand Off is a key feature in Arch that enables intelligent routing of user prompts to specialized AI agents or human agents based on the nature and complexity of the user's request.
This capability significantly enhances the efficiency and personalization of interactions, ensuring each prompt receives the most appropriate and effective handling. The following section describes
the workflow, configuration, and implementation of Agent routing and hand off in Arch.
#. **Agent Selection**
When a user submits a prompt, Arch analyzes the input to determine the intent and complexity. Based on the analysis, Arch selects the most suitable agent configured within your application to handle the specific category of the user's request—such as sales inquiries, technical issues, or complex scenarios requiring human attention.
#. **Prompt Routing**
After selecting the appropriate agent, Arch routes the user's prompt to the designated agent's endpoint and waits for the agent to respond back with the processed output or further instructions.
#. **Hand Off**
Based on follow-up queries from the user, Arch repeats the process of analysis, agent selection, and routing to ensure a seamless hand off between AI agents as needed.
.. code-block:: yaml
:caption: Agent Routing and Hand Off Configuration Example
prompt_targets:
- name: sales_agent
description: Handles queries related to sales and purchases
- name: issues_and_repairs
description: handles issues, repairs, or refunds
- name: escalate_to_human
description: escalates to human agent
.. code-block:: python
:caption: Agent Routing and Hand Off Implementation Example via FastAPI
class Agent:
def __init__(self, role: str, instructions: str):
self.system_prompt = f"You are a {role}.\n{instructions}"
def handle(self, req: ChatCompletionsRequest):
messages = [{"role": "system", "content": self.get_system_prompt()}] + [
message.model_dump() for message in req.messages
]
return call_openai(messages, req.stream) #call_openai is a placeholder for the actual API call
def get_system_prompt(self) -> str:
return self.system_prompt
# Define your agents
AGENTS = {
"sales_agent": Agent(
role="sales agent",
instructions=(
"Always answer in a sentence or less.\n"
"Follow the following routine with the user:\n"
"1. Engage\n"
"2. Quote ridiculous price\n"
"3. Reveal caveat if user agrees."
),
),
"issues_and_repairs": Agent(
role="issues and repairs agent",
instructions="Propose a solution, offer refund if necessary.",
),
"escalate_to_human": Agent(
role="human escalation agent", instructions="Escalate issues to a human."
),
"unknown_agent": Agent(
role="general assistant", instructions="Assist the user in general queries."
),
}
#handle the request from arch gateway
@app.post("/v1/chat/completions")
def completion_api(req: ChatCompletionsRequest, request: Request):
agent_name = req.metadata.get("agent-name", "unknown_agent")
agent = AGENTS.get(agent_name)
logger.info(f"Routing to agent: {agent_name}")
return agent.handle(req)
.. note::
The above example demonstrates a simple implementation of Agent Routing and Hand Off using FastAPI. For the full implementation of this example
please see our `GitHub demo <https://github.com/katanemo/archgw/tree/main/demos/use_cases/orchestrating_agents>`_.
Example Use Cases
-----------------
Agent Routing and Hand Off is particularly beneficial in scenarios such as:
- **Customer Support**: Routing common customer queries to automated support agents, while escalating complex or sensitive issues to human support staff.
- **Sales and Marketing**: Automatically directing potential leads and sales inquiries to specialized sales agents for timely and targeted follow-ups.
- **Technical Assistance**: Managing user-reported issues, repairs, or refunds by assigning them to the correct technical or support agent efficiently.
Best Practices and Tips
------------------------
When implementing Agent Routing and Hand Off in your applications, consider these best practices:
- Clearly define agent responsibilities: Ensure each agent or human endpoint has a clear, specific description of the prompts they handle, reducing mis-routing.
- Monitor and optimize routes: Regularly review how prompts are routed to adjust and optimize agent definitions and configurations.
.. note::
To observe traffic to and from agents, please read more about :ref:`observability <observability>` in Arch.
By carefully configuring and managing your Agent routing and hand off, you can significantly improve your application's responsiveness, performance, and overall user satisfaction.
@ -3,7 +3,7 @@
Function Calling
================
**Function Calling** is a powerful feature in Arch that allows your application to dynamically execute backend functions or services based on user prompts.
**Function Calling** is a powerful feature in Plano that allows your application to dynamically execute backend functions or services based on user prompts.
This enables seamless integration between natural language interactions and backend operations, turning user inputs into actionable results.
@ -18,15 +18,15 @@ Function Calling Workflow
#. **Prompt Parsing**
When a user submits a prompt, Arch analyzes it to determine the intent. Based on this intent, the system identifies whether a function needs to be invoked and which parameters should be extracted.
When a user submits a prompt, Plano analyzes it to determine the intent. Based on this intent, the system identifies whether a function needs to be invoked and which parameters should be extracted.
#. **Parameter Extraction**
Archs advanced natural language processing capabilities automatically extract parameters from the prompt that are necessary for executing the function. These parameters can include text, numbers, dates, locations, or other relevant data points.
Plano's advanced natural language processing capabilities automatically extract parameters from the prompt that are necessary for executing the function. These parameters can include text, numbers, dates, locations, or other relevant data points.
#. **Function Invocation**
Once the necessary parameters have been extracted, Arch invokes the relevant backend function. This function could be an API, a database query, or any other form of backend logic. The function is executed with the extracted parameters to produce the desired output.
Once the necessary parameters have been extracted, Plano invokes the relevant backend function. This function could be an API, a database query, or any other form of backend logic. The function is executed with the extracted parameters to produce the desired output.
#. **Response Handling**
@ -34,7 +34,7 @@ Function Calling Workflow
Arch-Function
-------------------------
-------------
The `Arch-Function <https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68>`_ collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for **function calling** tasks.
The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts.
Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution is crucial.
@ -64,11 +64,11 @@ Key Features
Implementing Function Calling
-----------------------------
Heres a step-by-step guide to configuring function calling within your Arch setup:
Here's a step-by-step guide to configuring function calling within your Plano setup:
Step 1: Define the Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, create or identify the backend function you want Arch to call. This could be an API endpoint, a script, or any other executable backend logic.
First, create or identify the backend function you want Plano to call. This could be an API endpoint, a script, or any other executable backend logic.
.. code-block:: python
@ -96,8 +96,8 @@ First, create or identify the backend function you want Arch to call. This could
Step 2: Configure Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next, map the function to a prompt target, defining the intent and parameters that Arch will extract from the users prompt.
Specify the parameters your function needs and how Arch should interpret these.
Next, map the function to a prompt target, defining the intent and parameters that Plano will extract from the user's prompt.
Specify the parameters your function needs and how Plano should interpret these.
.. code-block:: yaml
:caption: Prompt Target Example Configuration
@ -121,22 +121,22 @@ Specify the parameters your function needs and how Arch should interpret these.
.. Note::
For a complete reference of attributes that you can configure in a prompt target, see :ref:`here <defining_prompt_target_parameters>`.
Step 3: Arch Takes Over
~~~~~~~~~~~~~~~~~~~~~~~
Once you have defined the functions and configured the prompt targets, Arch Gateway takes care of the remaining work.
Step 3: Plano Takes Over
~~~~~~~~~~~~~~~~~~~~~~~~
Once you have defined the functions and configured the prompt targets, Plano takes care of the remaining work.
It will automatically validate parameters, ensure that required parameters (e.g., location) are present in the prompt, and apply validation rules where necessary.
.. figure:: /_static/img/arch_network_diagram_high_level.png
.. figure:: /_static/img/plano_network_diagram_high_level.png
:width: 100%
:align: center
High-level network flow of where Arch Gateway sits in your agentic stack. Managing incoming and outgoing prompt traffic
High-level network flow showing where Plano sits in your agentic stack, managing incoming and outgoing prompt traffic
Once a downstream function (API) is called, Arch Gateway takes the response and sends it an upstream LLM to complete the request (for summarization, Q/A, text generation tasks).
For more details on how Arch Gateway enables you to centralize usage of LLMs, please read :ref:`LLM providers <llm_providers>`.
Once a downstream function (API) is called, Plano takes the response and sends it to an upstream LLM to complete the request (for summarization, Q/A, text generation tasks).
For more details on how Plano enables you to centralize usage of LLMs, please read :ref:`LLM providers <llm_providers>`.
By completing these steps, you enable Arch to manage the process from validation to response, ensuring users receive consistent, reliable results - and that you are focused
By completing these steps, you enable Plano to manage the process from validation to response, ensuring users receive consistent, reliable results - and that you are focused
on the stuff that matters most.
Example Use Cases
@ -152,7 +152,7 @@ Here are some common use cases where Function Calling can be highly beneficial:
Best Practices and Tips
-----------------------
When integrating function calling into your generative AI applications, keep these tips in mind to get the most out of our Arch-Function models:
When integrating function calling into your generative AI applications, keep these tips in mind to get the most out of our Plano-Function models:
- **Keep it clear and simple**: Your function names and parameters should be straightforward and easy to understand. Think of it like explaining a task to a smart colleague - the clearer you are, the better the results.
@ -16,12 +16,6 @@ llm_providers:
# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.
prompt_targets:
- name: information_extraction
default: true
@ -3,130 +3,199 @@
LLM Routing
==============================================================
With the rapid proliferation of large language models (LLM) — each optimized for different strengths, style, or latency/cost profile — routing has become an essential technique to operationalize the use of different models.
Arch provides three distinct routing approaches to meet different use cases:
1. **Model-based Routing**: Direct routing to specific models using provider/model names
2. **Alias-based Routing**: Semantic routing using custom aliases that map to underlying models
3. **Preference-aligned Routing**: Intelligent routing using the Arch-Router model based on context and user-defined preferences
This enables optimal performance, cost efficiency, and response quality by matching requests with the most suitable model from your available LLM fleet.
With the rapid proliferation of large language models (LLMs) — each optimized for different strengths, style, or latency/cost profile — routing has become an essential technique to operationalize the use of different models. Plano provides three distinct routing approaches to meet different use cases: :ref:`Model-based routing <model_based_routing>`, :ref:`Alias-based routing <alias_based_routing>`, and :ref:`Preference-aligned routing <preference_aligned_routing>`. This enables optimal performance, cost efficiency, and response quality by matching requests with the most suitable model from your available LLM fleet.
.. note::
For details on supported model providers, configuration options, and client libraries, see :ref:`LLM Providers <llm_providers>`.
Routing Methods
---------------
Model-based Routing
.. _model_based_routing:
Model-based routing
~~~~~~~~~~~~~~~~~~~
Direct routing allows you to specify exact provider and model combinations using the format ``provider/model-name``:
- Use provider-specific names like ``openai/gpt-4o`` or ``anthropic/claude-3-5-sonnet-20241022``
- Use provider-specific names like ``openai/gpt-5.2`` or ``anthropic/claude-sonnet-4-5``
- Provides full control and transparency over which model handles each request
- Ideal for production workloads where you want predictable routing behavior
Alias-based Routing
Configuration
^^^^^^^^^^^^^
Configure your LLM providers with specific provider/model names:
.. code-block:: yaml
:caption: Model-based Routing Configuration
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
Client usage
^^^^^^^^^^^^
Clients specify exact models:
.. code-block:: python
# Direct provider/model specification
response = client.chat.completions.create(
model="openai/gpt-5.2",
messages=[{"role": "user", "content": "Hello!"}]
)
response = client.chat.completions.create(
model="anthropic/claude-sonnet-4-5",
messages=[{"role": "user", "content": "Write a story"}]
)
.. _alias_based_routing:
Alias-based routing
~~~~~~~~~~~~~~~~~~~
Alias-based routing lets you create semantic model names that decouple your application from specific providers:
- Use meaningful names like ``fast-model``, ``reasoning-model``, or ``arch.summarize.v1`` (see :ref:`model_aliases`)
- Use meaningful names like ``fast-model``, ``reasoning-model``, or ``plano.summarize.v1`` (see :ref:`model_aliases`)
- Maps semantic names to underlying provider models for easier experimentation and provider switching
- Ideal for applications that want abstraction from specific model names while maintaining control
Configuration
^^^^^^^^^^^^^
Configure semantic aliases that map to underlying models:
.. code-block:: yaml
:caption: Alias-based Routing Configuration
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
model_aliases:
# Model aliases - friendly names that map to actual provider names
fast-model:
target: gpt-5.2
reasoning-model:
target: gpt-5
creative-model:
target: claude-sonnet-4-5
Client usage
^^^^^^^^^^^^
Clients use semantic names:
.. code-block:: python
# Using semantic aliases
response = client.chat.completions.create(
model="fast-model", # Routes to best available fast model
messages=[{"role": "user", "content": "Quick summary please"}]
)
response = client.chat.completions.create(
model="reasoning-model", # Routes to best reasoning model
messages=[{"role": "user", "content": "Solve this complex problem"}]
)
.. _preference_aligned_routing:
Preference-aligned Routing (Arch-Router)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Preference-aligned routing (Arch-Router)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Traditional LLM routing approaches face significant limitations: they evaluate performance using benchmarks that often fail to capture human preferences, select from fixed model pools, and operate as "black boxes" without practical mechanisms for encoding user preferences.
Preference-aligned routing uses the `Arch-Router <https://huggingface.co/katanemo/Arch-Router-1.5B>`_ model to pick the best LLM based on domain, action, and your configured preferences instead of hard-coding a model.
Arch's preference-aligned routing addresses these challenges by applying a fundamental engineering principle: decoupling. The framework separates route selection (matching queries to human-readable policies) from model assignment (mapping policies to specific LLMs). This separation allows you to define routing policies using descriptive labels like ``Domain: 'finance', Action: 'analyze_earnings_report'`` rather than cryptic identifiers, while independently configuring which models handle each policy.
- **Domain**: High-level topic of the request (e.g., legal, healthcare, programming).
- **Action**: What the user wants to do (e.g., summarize, generate code, translate).
- **Routing preferences**: Your mapping from (domain, action) to preferred models.
The `Arch-Router <https://huggingface.co/katanemo/Arch-Router-1.5B>`_ model automatically selects the most appropriate LLM based on:
Arch-Router analyzes each prompt to infer domain and action, then applies your preferences to select a model. This decouples **routing policy** (how to choose) from **model assignment** (what to run), making routing transparent, controllable, and easy to extend as you add or swap models.
- Domain Analysis: Identifies the subject matter (e.g., legal, healthcare, programming)
- Action Classification: Determines the type of operation (e.g., summarization, code generation, translation)
- User-Defined Preferences: Maps domains and actions to preferred models using transparent, configurable routing decisions
- Human Preference Alignment: Uses domain-action mappings that capture subjective evaluation criteria, ensuring routing aligns with real-world user needs rather than just benchmark scores
Configuration
^^^^^^^^^^^^^
This approach supports seamlessly adding new models without retraining and is ideal for dynamic, context-aware routing that adapts to request content and intent.
To configure preference-aligned dynamic routing, define routing preferences that map domains and actions to specific models:
.. code-block:: yaml
:caption: Preference-Aligned Dynamic Routing Configuration
Model-based Routing Workflow
----------------------------
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
For direct model routing, the process is straightforward:
llm_providers:
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
default: true
#. **Client Request**
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
routing_preferences:
- name: code understanding
description: understand and explain existing code snippets, functions, or libraries
- name: complex reasoning
description: deep analysis, mathematical problem solving, and logical reasoning
The client specifies the exact model using provider/model format (``openai/gpt-4o``).
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
routing_preferences:
- name: creative writing
description: creative content generation, storytelling, and writing assistance
- name: code generation
description: generating new code snippets, functions, or boilerplate based on user prompts
#. **Provider Validation**
Client usage
^^^^^^^^^^^^
Arch validates that the specified provider and model are configured and available.
Clients can let the router decide or still specify aliases:
#. **Direct Routing**
.. code-block:: python
The request is sent directly to the specified model without analysis or decision-making.
# Let Arch-Router choose based on content
response = client.chat.completions.create(
messages=[{"role": "user", "content": "Write a creative story about space exploration"}]
# No model specified - router will analyze and choose claude-sonnet-4-5
)
#. **Response Handling**
The response is returned to the client with optional metadata about the routing decision.
Alias-based Routing Workflow
-----------------------------
For alias-based routing, the process includes name resolution:
#. **Client Request**
The client specifies a semantic alias name (``reasoning-model``).
#. **Alias Resolution**
Arch resolves the alias to the actual provider/model name based on configuration.
#. **Model Selection**
If the alias maps to multiple models, Arch selects one based on availability and load balancing.
#. **Request Forwarding**
The request is forwarded to the resolved model.
#. **Response Handling**
The response is returned with optional metadata about the alias resolution.
.. _preference_aligned_routing_workflow:
Preference-aligned Routing Workflow (Arch-Router)
-------------------------------------------------
For preference-aligned dynamic routing, the process involves intelligent analysis:
#. **Prompt Analysis**
When a user submits a prompt without specifying a model, the Arch-Router analyzes it to determine the domain (subject matter) and action (type of operation requested).
#. **Model Selection**
Based on the analyzed intent and your configured routing preferences, the Router selects the most appropriate model from your available LLM fleet.
#. **Request Forwarding**
Once the optimal model is identified, our gateway forwards the original prompt to the selected LLM endpoint. The routing decision is transparent and can be logged for monitoring and optimization purposes.
#. **Response Handling**
After the selected model processes the request, the response is returned through the gateway. The gateway can optionally add routing metadata or performance metrics to help you understand and optimize your routing decisions.
Arch-Router
-------------------------
-----------
The `Arch-Router <https://huggingface.co/katanemo/Arch-Router-1.5B>`_ is a state-of-the-art **preference-based routing model** specifically designed to address the limitations of traditional LLM routing. This compact 1.5B model delivers production-ready performance with low latency and high accuracy while solving key routing challenges.
**Addressing Traditional Routing Limitations:**
@ -159,145 +228,6 @@ In summary, Arch-Router demonstrates:
- **Production-Ready Performance**: Optimized for low-latency, high-throughput applications in multi-model environments.
Implementing Routing
--------------------
**Model-based Routing**
For direct model routing, configure your LLM providers with specific provider/model names:
.. code-block:: yaml
:caption: Model-based Routing Configuration
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
- model: anthropic/claude-3-5-sonnet-20241022
access_key: $ANTHROPIC_API_KEY
Clients specify exact models:
.. code-block:: python
# Direct provider/model specification
response = client.chat.completions.create(
model="openai/gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)
response = client.chat.completions.create(
model="anthropic/claude-3-5-sonnet-20241022",
messages=[{"role": "user", "content": "Write a story"}]
)
**Alias-based Routing**
Configure semantic aliases that map to underlying models:
.. code-block:: yaml
:caption: Alias-based Routing Configuration
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
- model: anthropic/claude-3-5-sonnet-20241022
access_key: $ANTHROPIC_API_KEY
model_aliases:
# Model aliases - friendly names that map to actual provider names
fast-model:
target: gpt-4o-mini
reasoning-model:
target: gpt-4o
creative-model:
target: claude-3-5-sonnet-20241022
Clients use semantic names:
.. code-block:: python
# Using semantic aliases
response = client.chat.completions.create(
model="fast-model", # Routes to best available fast model
messages=[{"role": "user", "content": "Quick summary please"}]
)
response = client.chat.completions.create(
model="reasoning-model", # Routes to best reasoning model
messages=[{"role": "user", "content": "Solve this complex problem"}]
)
**Preference-aligned Routing (Arch-Router)**
To configure preference-aligned dynamic routing, you need to define routing preferences that map domains and actions to specific models:
.. code-block:: yaml
:caption: Preference-Aligned Dynamic Routing Configuration
listeners:
egress_traffic:
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
routing_preferences:
- name: code understanding
description: understand and explain existing code snippets, functions, or libraries
- name: complex reasoning
description: deep analysis, mathematical problem solving, and logical reasoning
- model: anthropic/claude-3-5-sonnet-20241022
access_key: $ANTHROPIC_API_KEY
routing_preferences:
- name: creative writing
description: creative content generation, storytelling, and writing assistance
- name: code generation
description: generating new code snippets, functions, or boilerplate based on user prompts
Clients can let the router decide or use aliases:
.. code-block:: python
# Let Arch-Router choose based on content
response = client.chat.completions.create(
messages=[{"role": "user", "content": "Write a creative story about space exploration"}]
# No model specified - router will analyze and choose claude-3-5-sonnet-20241022
)
Combining Routing Methods
-------------------------
@ -307,17 +237,17 @@ You can combine static model selection with dynamic routing preferences for maxi
:caption: Hybrid Routing Configuration
llm_providers:
- model: openai/gpt-4o-mini
- model: openai/gpt-5.2
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
- model: openai/gpt-5
access_key: $OPENAI_API_KEY
routing_preferences:
- name: complex_reasoning
description: deep analysis and complex problem solving
- model: anthropic/claude-3-5-sonnet-20241022
- model: anthropic/claude-sonnet-4-5
access_key: $ANTHROPIC_API_KEY
routing_preferences:
- name: creative_tasks
@ -326,14 +256,14 @@ You can combine static model selection with dynamic routing preferences for maxi
model_aliases:
# Model aliases - friendly names that map to actual provider names
fast-model:
target: gpt-4o-mini
target: gpt-5.2
reasoning-model:
target: gpt-4o
target: gpt-5
# Aliases that can also participate in dynamic routing
creative-model:
target: claude-3-5-sonnet-20241022
target: claude-sonnet-4-5
This configuration allows clients to:
@ -341,7 +271,7 @@ This configuration allows clients to:
2. **Let the router decide**: No model specified, router analyzes content
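With this hybrid setup, a client can pin an exact model or go through an alias on the same listener, and letting the router decide works exactly as shown in the preference-aligned example above. A sketch, assuming the client points at the egress listener on port ``12000``:

.. code-block:: python

    from openai import OpenAI

    client = OpenAI(api_key="--", base_url="http://localhost:12000/v1")

    # Pin an exact provider/model from llm_providers.
    pinned = client.chat.completions.create(
        model="openai/gpt-5.2",
        messages=[{"role": "user", "content": "Summarize this changelog entry."}],
    )

    # Or use a friendly alias from model_aliases.
    aliased = client.chat.completions.create(
        model="reasoning-model",
        messages=[{"role": "user", "content": "Walk through this proof step by step."}],
    )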
Example Use Cases
-------------------------
-----------------
Here are common scenarios where Arch-Router excels:
- **Coding Tasks**: Distinguish between code generation requests ("write a Python function"), debugging needs ("fix this error"), and code optimization ("make this faster"), routing each to appropriately specialized models.
@ -352,9 +282,8 @@ Here are common scenarios where Arch-Router excels:
- **Conversational Routing**: Track conversation context to identify when topics shift between domains or when the type of assistance needed changes mid-conversation.
Best practicesm
-------------------------
Best practices
--------------
- **💡Consistent Naming:** Route names should align with their descriptions.
- ❌ Bad:
@ -379,18 +308,15 @@ Best practicesm
- **💡Nouns Descriptor:** Preference-based routers perform better with noun-centric descriptors, as they offer more stable and semantically rich signals for matching.
- **💡Domain Inclusion:** for best user experience, you should always include domain route. This help the router fall back to domain when action is not
- **💡Domain Inclusion:** for best user experience, you should always include a domain route. This helps the router fall back to domain when action is not confidently inferred.
.. Unsupported Features
.. -------------------------
Unsupported Features
--------------------
.. The following features are **not supported** by the Arch-Router model:
The following features are **not supported** by the Arch-Router model:
.. - **❌ Multi-Modality:**
.. The model is not trained to process raw image or audio inputs. While it can handle textual queries *about* these modalities (e.g., "generate an image of a cat"), it cannot interpret encoded multimedia data directly.
- **Multi-modality**: The model is not trained to process raw image or audio inputs. It can handle textual queries *about* these modalities (e.g., "generate an image of a cat"), but cannot interpret encoded multimedia data directly.
.. - **❌ Function Calling:**
.. This model is designed for **semantic preference matching**, not exact intent classification or tool execution. For structured function invocation, use models in the **Arch-Function-Calling** collection.
- **Function calling**: Arch-Router is designed for **semantic preference matching**, not exact intent classification or tool execution. For structured function invocation, use models in the Plano Function Calling collection instead.
.. - **❌ System Prompt Dependency:**
.. Arch-Router routes based solely on the users conversation history. It does not use or rely on system prompts for routing decisions.
- **System prompt dependency**: Arch-Router routes based solely on the user's conversation history. It does not use or rely on system prompts for routing decisions.
@ -3,14 +3,14 @@
Access Logging
==============
Access logging in Arch refers to the logging of detailed information about each request and response that flows through Arch.
It provides visibility into the traffic passing through Arch, which is crucial for monitoring, debugging, and analyzing the
Access logging in Plano refers to the logging of detailed information about each request and response that flows through Plano.
It provides visibility into the traffic passing through Plano, which is crucial for monitoring, debugging, and analyzing the
behavior of AI applications and their interactions.
Key Features
^^^^^^^^^^^^
* **Per-Request Logging**:
Each request that passes through Arch is logged. This includes important metadata such as HTTP method,
Each request that passes through Plano is logged. This includes important metadata such as HTTP method,
path, response status code, request duration, upstream host, and more.
* **Integration with Monitoring Tools**:
Access logs can be exported to centralized logging systems (e.g., ELK stack or Fluentd) or used to feed monitoring and alerting systems.
@ -19,24 +19,24 @@ Key Features
How It Works
^^^^^^^^^^^^
Arch gateway exposes access logs for every call it manages on your behalf. By default these access logs can be found under ``~/archgw_logs``. For example:
Plano exposes access logs for every call it manages on your behalf. By default these access logs can be found under ``~/plano_logs``. For example:
.. code-block:: console
$ tail -F ~/archgw_logs/access_*.log
$ tail -F ~/plano_logs/access_*.log
==> /Users/adilhafeez/archgw_logs/access_llm.log <==
==> /Users/username/plano_logs/access_llm.log <==
[2024-10-10T03:55:49.537Z] "POST /v1/chat/completions HTTP/1.1" 0 DC 0 0 770 - "-" "OpenAI/Python 1.51.0" "469793af-b25f-9b57-b265-f376e8d8c586" "api.openai.com" "162.159.140.245:443"
==> /Users/adilhafeez/archgw_logs/access_internal.log <==
==> /Users/username/plano_logs/access_internal.log <==
[2024-10-10T03:56:03.906Z] "POST /embeddings HTTP/1.1" 200 - 52 21797 54 53 "-" "-" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "model_server" "192.168.65.254:51000"
[2024-10-10T03:56:03.961Z] "POST /zeroshot HTTP/1.1" 200 - 106 218 87 87 "-" "-" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "model_server" "192.168.65.254:51000"
[2024-10-10T03:56:04.050Z] "POST /v1/chat/completions HTTP/1.1" 200 - 1301 614 441 441 "-" "-" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "model_server" "192.168.65.254:51000"
[2024-10-10T03:56:04.492Z] "POST /hallucination HTTP/1.1" 200 - 556 127 104 104 "-" "-" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "model_server" "192.168.65.254:51000"
[2024-10-10T03:56:04.598Z] "POST /insurance_claim_details HTTP/1.1" 200 - 447 125 17 17 "-" "-" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "api_server" "192.168.65.254:18083"
==> /Users/adilhafeez/archgw_logs/access_ingress.log <==
[2024-10-10T03:56:03.905Z] "POST /v1/chat/completions HTTP/1.1" 200 - 463 1022 1695 984 "-" "OpenAI/Python 1.51.0" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "arch_llm_listener" "0.0.0.0:12000"
==> /Users/username/plano_logs/access_ingress.log <==
[2024-10-10T03:56:03.905Z] "POST /v1/chat/completions HTTP/1.1" 200 - 463 1022 1695 984 "-" "OpenAI/Python 1.51.0" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "plano_llm_listener" "0.0.0.0:12000"
Log Format
@ -58,6 +58,6 @@ For example for following request:
.. code-block:: console
[2024-10-10T03:56:03.905Z] "POST /v1/chat/completions HTTP/1.1" 200 - 463 1022 1695 984 "-" "OpenAI/Python 1.51.0" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "arch_llm_listener" "0.0.0.0:12000"
[2024-10-10T03:56:03.905Z] "POST /v1/chat/completions HTTP/1.1" 200 - 463 1022 1695 984 "-" "OpenAI/Python 1.51.0" "604197fe-2a5b-95a2-9367-1d6b30cfc845" "plano_llm_listener" "0.0.0.0:12000"
Total duration was 1695ms, and the upstream service took 984ms to process the request. Bytes received and sent were 463 and 1022 respectively.
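If you want those numbers programmatically, a small sketch along these lines works; the field positions follow the example above, so adjust it if you customize the log format:

.. code-block:: python

    import shlex

    line = (
        '[2024-10-10T03:56:03.905Z] "POST /v1/chat/completions HTTP/1.1" 200 - 463 1022 '
        '1695 984 "-" "OpenAI/Python 1.51.0" "604197fe-2a5b-95a2-9367-1d6b30cfc845" '
        '"plano_llm_listener" "0.0.0.0:12000"'
    )

    # shlex keeps the quoted fields (method/path, user agent, request id, ...) intact.
    fields = shlex.split(line)
    timestamp, request = fields[0], fields[1]
    status = fields[2]
    bytes_received, bytes_sent = int(fields[4]), int(fields[5])
    total_ms, upstream_ms = int(fields[6]), int(fields[7])

    print(f"{request} -> {status}, total {total_ms}ms (upstream {upstream_ms}ms)")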
@ -8,11 +8,11 @@ and instrumentation for generating, collecting, processing, and exporting teleme
metrics, and logs. Its flexible design supports a wide range of backends and seamlessly integrates with
modern application tools.
Arch acts a *source* for several monitoring metrics related to **prompts** and **LLMs** natively integrated
Plano acts as a *source* for several monitoring metrics related to **agents** and **LLMs**, natively integrated
via `OpenTelemetry <https://opentelemetry.io/>`_ to help you understand three critical aspects of your application:
latency, token usage, and error rates by an upstream LLM provider. Latency measures the speed at which your application
is responding to users, which includes metrics like time to first token (TFT), time per output token (TOT) metrics, and
the total latency as perceived by users. Below are some screenshots how Arch integrates natively with tools like
the total latency as perceived by users. Below are some screenshots showing how Plano integrates natively with tools like
`Grafana <https://grafana.com/grafana/dashboards/>`_ via `Prometheus <https://prometheus.io/>`_
@ -32,7 +32,7 @@ Metrics Dashboard (via Grafana)
Configure Monitoring
~~~~~~~~~~~~~~~~~~~~
Arch gateway publishes stats endpoint at http://localhost:19901/stats. As noted above, Arch is a source for metrics. To view and manipulate dashbaords, you will
Plano publishes a stats endpoint at http://localhost:19901/stats. As noted above, Plano is a source for metrics. To view and manipulate dashboards, you will
need to configure `Prometheus <https://prometheus.io/>`_ (as a metrics store) and `Grafana <https://grafana.com/grafana/dashboards/>`_ for dashboards. Below
are some sample configuration files for both, respectively.
@ -51,7 +51,7 @@ are some sample configuration files for both, respectively.
timeout: 10s
api_version: v2
scrape_configs:
- job_name: archgw
- job_name: plano
honor_timestamps: true
scrape_interval: 15s
scrape_timeout: 10s
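For a quick check without a full Prometheus setup, you can also read the stats endpoint directly; a sketch using only the standard library, assuming the endpoint returns plain-text ``name: value`` lines and that filtering on ``llm``/``prompt`` matches the counters you care about:

.. code-block:: python

    from urllib.request import urlopen

    # Plano's stats endpoint (see above); plain-text "name: value" lines are assumed here.
    with urlopen("http://localhost:19901/stats") as resp:
        stats = resp.read().decode()

    # Print only the LLM/prompt related counters to keep the output readable.
    for line in stats.splitlines():
        if "llm" in line or "prompt" in line:
            print(line)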
@ -17,9 +17,9 @@ requests in an AI application. With tracing, you can capture a detailed view of
through various services and components, which is crucial for **debugging**, **performance optimization**,
and understanding complex AI agent architectures like Co-pilots.
**Arch** propagates trace context using the W3C Trace Context standard, specifically through the
**Plano** propagates trace context using the W3C Trace Context standard, specifically through the
``traceparent`` header. This allows each component in the system to record its part of the request
flow, enabling **end-to-end tracing** across the entire application. By using OpenTelemetry, Arch ensures
flow, enabling **end-to-end tracing** across the entire application. By using OpenTelemetry, Plano ensures
that developers can capture this trace data consistently and in a format compatible with various observability
tools.
@ -41,9 +41,9 @@ Benefits of Using ``Traceparent`` Headers
How to Initiate A Trace
-----------------------
1. **Enable Tracing Configuration**: Simply add the ``random_sampling`` in ``tracing`` section to 100`` flag to in the :ref:`listener <arch_overview_listeners>` config
1. **Enable Tracing Configuration**: Set ``random_sampling`` to ``100`` in the ``tracing`` section of the :ref:`listener <plano_overview_listeners>` config.
2. **Trace Context Propagation**: Arch automatically propagates the ``traceparent`` header. When a request is received, Arch will:
2. **Trace Context Propagation**: Plano automatically propagates the ``traceparent`` header. When a request is received, Plano will:
- Generate a new ``traceparent`` header if one is not present.
- Extract the trace context from the ``traceparent`` header if it exists.
@ -57,7 +57,7 @@ How to Initiate A Trace
Trace Propagation
-----------------
Arch uses the W3C Trace Context standard for trace propagation, which relies on the ``traceparent`` header.
Plano uses the W3C Trace Context standard for trace propagation, which relies on the ``traceparent`` header.
This header carries tracing information in a standardized format, enabling interoperability between different
tracing systems.
@ -77,7 +77,7 @@ Instrumentation
~~~~~~~~~~~~~~~
To integrate AI tracing, your application needs to follow a few simple steps. The steps
below are very common practice, and not unique to Arch, when you reading tracing headers and export
below are very common practice, and not unique to Plano, when reading tracing headers and exporting
`spans <https://docs.lightstep.com/docs/understand-distributed-tracing>`_ for distributed tracing.
- Read the ``traceparent`` header from incoming requests.
@ -148,66 +148,6 @@ Handle incoming requests:
print(f"Payment service response: {response.content}")
AI Agent Tracing Visualization Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is an example of tracing for an AI-powered customer support system.
A customer interacts with AI agents, which forward their requests through different
specialized services and external systems.
::
+--------------------------+
| Customer Interaction |
+--------------------------+
|
v
+--------------------------+ +--------------------------+
| Agent 1 (Main - Arch) | ----> | External Payment Service |
+--------------------------+ +--------------------------+
| |
v v
+--------------------------+ +--------------------------+
| Agent 2 (Support - Arch)| ----> | Internal Tech Support |
+--------------------------+ +--------------------------+
| |
v v
+--------------------------+ +--------------------------+
| Agent 3 (Orders- Arch) | ----> | Inventory Management |
+--------------------------+ +--------------------------+
Trace Breakdown:
****************
- Customer Interaction:
- Span 1: Customer initiates a request via the AI-powered chatbot for billing support (e.g., asking for payment details).
- AI Agent 1 (Main - Arch):
- Span 2: AI Agent 1 (Main) processes the request and identifies it as related to billing, forwarding the request
to an external payment service.
- Span 3: AI Agent 1 determines that additional technical support is needed for processing and forwards the request
to AI Agent 2.
- External Payment Service:
- Span 4: The external payment service processes the payment-related request (e.g., verifying payment status) and sends
the response back to AI Agent 1.
- AI Agent 2 (Tech - Arch):
- Span 5: AI Agent 2, responsible for technical queries, processes a request forwarded from AI Agent 1 (e.g., checking for
any account issues).
- Span 6: AI Agent 2 forwards the query to Internal Tech Support for further investigation.
- Internal Tech Support:
- Span 7: Internal Tech Support processes the request (e.g., resolving account access issues) and responds to AI Agent 2.
- AI Agent 3 (Orders - Arch):
- Span 8: AI Agent 3 handles order-related queries. AI Agent 1 forwards the request to AI Agent 3 after payment verification
is completed.
- Span 9: AI Agent 3 forwards a request to the Inventory Management system to confirm product availability for a pending order.
- Inventory Management:
- Span 10: The Inventory Management system checks stock and availability and returns the information to AI Agent 3.
Integrating with Tracing Tools
------------------------------
@ -292,11 +232,11 @@ To send tracing data to `Datadog <https://docs.datadoghq.com/getting_started/tra
Langtrace
~~~~~~~~~
Langtrace is an observability tool designed specifically for large language models (LLMs). It helps you capture, analyze, and understand how LLMs are used in your applications including those built using Arch.
Langtrace is an observability tool designed specifically for large language models (LLMs). It helps you capture, analyze, and understand how LLMs are used in your applications including those built using Plano.
To send tracing data to `Langtrace <https://docs.langtrace.ai/supported-integrations/llm-tools/arch>`_:
1. **Configure Arch**: Make sure Arch is installed and setup correctly. For more information, refer to the `installation guide <https://github.com/katanemo/archgw?tab=readme-ov-file#prerequisites>`_.
1. **Configure Plano**: Make sure Plano is installed and set up correctly. For more information, refer to the `installation guide <https://github.com/katanemo/archgw?tab=readme-ov-file#prerequisites>`_.
2. **Install Langtrace**: Install the Langtrace SDK.:
@ -348,7 +288,7 @@ Best Practices
Summary
----------
By leveraging the ``traceparent`` header for trace context propagation, Arch enables developers to implement
By leveraging the ``traceparent`` header for trace context propagation, Plano enables developers to implement
tracing efficiently. This approach simplifies the process of collecting and analyzing tracing data in common
tools like AWS X-Ray and Datadog, enhancing observability and facilitating faster debugging and optimization.
@ -0,0 +1,350 @@
.. _agent_routing:
Orchestration
==============
Building multi-agent systems allows you to route requests across multiple specialized agents, each designed to handle specific types of tasks.
Plano makes it easy to build and scale these systems by managing the orchestration layer—deciding which agent(s) should handle each request—while you focus on implementing individual agent logic.
This guide shows you how to configure and implement multi-agent orchestration in Plano using a real-world example: a **Travel Booking Assistant** that routes queries to specialized agents for weather and flights.
How It Works
------------
Plano's orchestration layer analyzes incoming prompts and routes them to the most appropriate agent based on user intent and conversation context. The workflow is:
1. **User submits a prompt**: The request arrives at Plano's agent listener.
2. **Agent selection**: Plano uses an LLM to analyze the prompt and determine user intent and complexity. By default, this uses `Plano-Orchestrator-30B-A3B <https://huggingface.co/collections/katanemo/plano-orchestrator>`_, which offers performance of foundation models at 1/10th the cost. The LLM routes the request to the most suitable agent configured in your system—such as a weather agent or flight agent.
3. **Agent handles request**: Once the selected agent receives the request object from Plano, it manages its own :ref:`inner loop <agents>` until the task is complete. This means the agent autonomously calls models, invokes tools, processes data, and reasons about next steps—all within its specialized domain—before returning the final response.
4. **Seamless handoffs**: For multi-turn conversations, Plano repeats the intent analysis for each follow-up query, enabling smooth handoffs between agents as the conversation evolves.
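Because intent analysis runs on every turn, a handoff is simply the growing conversation being sent back to the same listener. A sketch, assuming the agent listener runs on port ``8001`` as in the quickstart and the agent IDs match the example below:

.. code-block:: python

    from openai import OpenAI

    # The base_url points at Plano's agent listener; port 8001 mirrors the quickstart.
    client = OpenAI(api_key="--", base_url="http://localhost:8001/v1")

    history = [{"role": "user", "content": "What's the weather in Paris this weekend?"}]

    # Turn 1: Plano-Orchestrator routes this to weather_agent.
    first = client.chat.completions.create(model="openai/gpt-4o", messages=history)
    history.append({"role": "assistant", "content": first.choices[0].message.content})

    # Turn 2: the follow-up shifts intent, so the same call is routed to flight_agent.
    history.append({"role": "user", "content": "Great - now find me flights from SFO to CDG on Friday."})
    second = client.chat.completions.create(model="openai/gpt-4o", messages=history)
    print(second.choices[0].message.content)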
Example: Travel Booking Assistant
----------------------------------
Let's walk through a complete multi-agent system: a Travel Booking Assistant that helps users plan trips by providing weather forecasts and flight information. This system uses two specialized agents:
* **Weather Agent**: Provides real-time weather conditions and multi-day forecasts
* **Flight Agent**: Searches for flights between airports with real-time tracking
Configuration
-------------
Configure your agents in the ``listeners`` section of your ``plano_config.yaml``:
.. literalinclude:: ../resources/includes/agents/agents_config.yaml
:language: yaml
:linenos:
:caption: Travel Booking Multi-Agent Configuration
**Key Configuration Elements:**
* **agent listener**: A listener of ``type: agent`` tells Plano to perform intent analysis and routing for incoming requests.
* **agents list**: Define each agent with an ``id`` and a ``description`` (used for routing decisions).
* **router**: The ``plano_orchestrator_v1`` router uses Plano-Orchestrator to analyze user intent and select the appropriate agent.
* **filter_chain**: Optionally attach :ref:`filter chains <filter_chain>` to agents for guardrails, query rewriting, or context enrichment.
**Writing Effective Agent Descriptions**
Agent descriptions are critical—they're used by Plano-Orchestrator to make routing decisions. Effective descriptions should include:
* **Clear introduction**: A concise statement explaining what the agent is and its primary purpose
* **Capabilities section**: A bulleted list of specific capabilities, including:
* What APIs or data sources it uses (e.g., "Open-Meteo API", "FlightAware AeroAPI")
* What information it provides (e.g., "current temperature", "multi-day forecasts", "gate information")
* How it handles context (e.g., "Understands conversation context to resolve location references")
* What question patterns it handles (e.g., "What's the weather in [city]?")
* How it handles multi-part queries (e.g., "When queries include both weather and flights, this agent answers ONLY the weather part")
Here's an example of a well-structured agent description:
.. code-block:: yaml
- id: weather_agent
description: |
WeatherAgent is a specialized AI assistant for real-time weather information
and forecasts. It provides accurate weather data for any city worldwide using
the Open-Meteo API, helping travelers plan their trips with up-to-date weather
conditions.
Capabilities:
* Get real-time weather conditions and multi-day forecasts for any city worldwide
* Provides current temperature, weather conditions, sunrise/sunset times
* Provides detailed weather information including multi-day forecasts
* Understands conversation context to resolve location references from previous messages
* Handles weather-related questions including "What's the weather in [city]?"
* When queries include both weather and other travel questions (e.g., flights),
this agent answers ONLY the weather part
.. note::
We will soon support "Agents as Tools" via Model Context Protocol (MCP), enabling agents to dynamically discover and invoke other agents as tools. Track progress on `GitHub Issue #646 <https://github.com/katanemo/archgw/issues/646>`_.
Implementation
--------------
Agents are HTTP services that receive routed requests from Plano. Each agent implements the OpenAI Chat Completions API format, making them compatible with standard LLM clients.
Agent Structure
^^^^^^^^^^^^^^^
Let's examine the Weather Agent implementation:
.. literalinclude:: ../resources/includes/agents/weather.py
:language: python
:linenos:
:lines: 262-283
:caption: Weather Agent - Core Structure
**Key Points:**
* Agents expose a ``/v1/chat/completions`` endpoint that matches OpenAI's API format
* They use Plano's LLM gateway (via ``LLM_GATEWAY_ENDPOINT``) for all LLM calls
* They receive the full conversation history in ``request_body.messages``
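To make that shape concrete, here is a stripped-down sketch of an agent service (not the demo's actual implementation) that exposes the OpenAI-compatible endpoint and relays a streamed completion through Plano's gateway:

.. code-block:: python

   # Minimal agent sketch: OpenAI-compatible endpoint, LLM calls routed through Plano.
   import os

   from fastapi import FastAPI, Request
   from fastapi.responses import StreamingResponse
   from openai import AsyncOpenAI

   LLM_GATEWAY_ENDPOINT = os.getenv("LLM_GATEWAY_ENDPOINT", "http://localhost:12000/v1")
   llm = AsyncOpenAI(base_url=LLM_GATEWAY_ENDPOINT, api_key="EMPTY")

   app = FastAPI()

   @app.post("/v1/chat/completions")
   async def chat_completions(request: Request):
       body = await request.json()

       async def generate():
           # Forward the conversation through Plano's gateway and relay the SSE chunks.
           stream = await llm.chat.completions.create(
               model="openai/gpt-4o",
               messages=body.get("messages", []),
               stream=True,
           )
           async for chunk in stream:
               yield f"data: {chunk.model_dump_json()}\n\n"
           yield "data: [DONE]\n\n"

       return StreamingResponse(generate(), media_type="text/event-stream")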
Information Extraction with LLMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Agents use LLMs to extract structured information from natural language queries. This enables them to understand user intent and extract parameters needed for API calls.
The Weather Agent extracts location information:
.. literalinclude:: ../resources/includes/agents/weather.py
:language: python
:linenos:
:lines: 73-119
:caption: Weather Agent - Location Extraction
The Flight Agent extracts more complex information—origin, destination, and dates:
.. literalinclude:: ../resources/includes/agents/flights.py
:language: python
:linenos:
:lines: 69-120
:caption: Flight Agent - Flight Information Extraction
**Key Points:**
* Use smaller, faster models (like ``gpt-4o-mini``) for extraction tasks
* Include conversation context to handle follow-up questions and pronouns
* Use structured prompts with clear output formats (JSON)
* Handle edge cases with fallback values
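A condensed sketch of this extraction pattern (reusing the ``openai_client_via_plano`` client from the examples above; the prompt, model, and fallback value are illustrative):

.. code-block:: python

   import json

   async def extract_location(messages: list) -> str:
       """Extract a city name from recent turns, falling back to a default."""
       recent = [{"role": m["role"], "content": m["content"]} for m in messages[-5:]]
       response = await openai_client_via_plano.chat.completions.create(
           model="openai/gpt-4o-mini",  # small, fast model for extraction
           messages=[
               {"role": "system", "content": 'Return JSON only: {"location": "<city>" or null}'},
               *recent,
           ],
           temperature=0.1,
           max_tokens=50,
       )
       try:
           location = json.loads(response.choices[0].message.content).get("location")
       except (json.JSONDecodeError, TypeError):
           location = None
       return location or "New York"  # default fallback, as in the demo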
Calling External APIs
^^^^^^^^^^^^^^^^^^^^^^
After extracting information, agents call external APIs to fetch real-time data:
.. literalinclude:: ../resources/includes/agents/weather.py
:language: python
:linenos:
:lines: 136-197
:caption: Weather Agent - External API Call
The Flight Agent calls FlightAware's AeroAPI:
.. literalinclude:: ../resources/includes/agents/flights.py
:language: python
:linenos:
:lines: 156-260
:caption: Flight Agent - External API Call
**Key Points:**
* Use async HTTP clients (like ``httpx.AsyncClient``) for non-blocking API calls
* Transform external API responses into consistent, structured formats
* Handle errors gracefully with fallback values
* Cache or validate data when appropriate (e.g., airport code validation)
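As a rough sketch of this pattern with the Open-Meteo API mentioned above (the coordinates and fields are illustrative):

.. code-block:: python

   import httpx

   http_client = httpx.AsyncClient(timeout=10.0)

   async def fetch_current_weather(latitude: float, longitude: float) -> dict:
       """Fetch current conditions from Open-Meteo, falling back gracefully on errors."""
       try:
           resp = await http_client.get(
               "https://api.open-meteo.com/v1/forecast",
               params={"latitude": latitude, "longitude": longitude, "current_weather": True},
           )
           resp.raise_for_status()
           return resp.json()
       except httpx.HTTPError as exc:
           return {"error": f"Could not retrieve weather data: {exc}"}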
Preparing Context and Generating Responses
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Agents combine extracted information, API data, and conversation history to generate responses:
.. literalinclude:: ../resources/includes/agents/weather.py
:language: python
:linenos:
:lines: 290-370
:caption: Weather Agent - Context Preparation and Response Generation
**Key Points:**
* Use system messages to provide structured data to the LLM
* Include full conversation history for context-aware responses
* Stream responses for better user experience
* Route all LLM calls through Plano's gateway for consistent behavior and observability
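For instance, the request to the LLM might be assembled like this (``SYSTEM_PROMPT``, ``weather_data``, and ``messages`` stand in for values produced by the earlier steps):

.. code-block:: python

   import json

   response_messages = [
       {"role": "system", "content": SYSTEM_PROMPT},
       # Structured API data goes to the LLM as an additional system message.
       {"role": "system", "content": f"Weather data (JSON):\n{json.dumps(weather_data, indent=2)}"},
       # Full conversation history keeps follow-up questions context-aware.
       *[{"role": m["role"], "content": m["content"]} for m in messages],
   ]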
Best Practices
--------------
**Write Clear Agent Descriptions**
Agent descriptions are used by Plano-Orchestrator to make routing decisions. Be specific about what each agent handles:
.. code-block:: yaml
# Good - specific and actionable
- id: flight_agent
description: Get live flight information between airports using FlightAware AeroAPI. Shows real-time flight status, scheduled/estimated/actual departure and arrival times, gate and terminal information, delays, aircraft type, and flight status. Automatically resolves city names to airport codes (IATA/ICAO). Understands conversation context to infer origin/destination from follow-up questions.
# Less ideal - too vague
- id: flight_agent
description: Handles flight queries
**Use Conversation Context Effectively**
Include conversation history in your extraction and response generation:
.. code-block:: python
# Include conversation context for extraction
conversation_context = []
for msg in messages:
conversation_context.append({"role": msg.role, "content": msg.content})
# Use recent context (last 10 messages)
context_messages = conversation_context[-10:] if len(conversation_context) > 10 else conversation_context
**Route LLM Calls Through Plano's Model Proxy**
Always route LLM calls through Plano's :ref:`Model Proxy <llm_providers>` for consistent responses, smart routing, and rich observability:
.. code-block:: python
openai_client_via_plano = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT, # Plano's LLM gateway
api_key="EMPTY",
)
response = await openai_client_via_plano.chat.completions.create(
model="openai/gpt-4o",
messages=messages,
stream=True,
)
**Handle Errors Gracefully**
Provide fallback values and clear error messages:
.. code-block:: python
async def get_weather_data(request: Request, messages: list, days: int = 1):
try:
# ... extraction and API logic ...
location = response.choices[0].message.content.strip().strip("\"'`.,!?")
if not location or location.upper() == "NOT_FOUND":
location = "New York" # Fallback to default
return weather_data
except Exception as e:
logger.error(f"Error getting weather data: {e}")
return {"location": "New York", "weather": {"error": "Could not retrieve weather data"}}
**Use Appropriate Models for Tasks**
Use smaller, faster models for extraction tasks and larger models for final responses:
.. code-block:: python
# Extraction: Use smaller, faster model
LOCATION_MODEL = "openai/gpt-4o-mini"
# Final response: Use larger, more capable model
WEATHER_MODEL = "openai/gpt-4o"
**Stream Responses**
Stream responses for better user experience:
.. code-block:: python
async def invoke_weather_agent(request: Request, request_body: dict, traceparent_header: str = None):
# ... prepare messages with weather data ...
stream = await openai_client_via_plano.chat.completions.create(
model=WEATHER_MODEL,
messages=response_messages,
temperature=request_body.get("temperature", 0.7),
max_tokens=request_body.get("max_tokens", 1000),
stream=True,
extra_headers=extra_headers,
)
async for chunk in stream:
if chunk.choices:
yield f"data: {chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
Common Use Cases
----------------
Multi-agent orchestration is particularly powerful for:
**Travel and Booking Systems**
Route queries to specialized agents for weather and flights:
.. code-block:: yaml
agents:
- id: weather_agent
description: Get real-time weather conditions and forecasts
- id: flight_agent
description: Search for flights and provide flight status
**Customer Support**
Route common queries to automated support agents while escalating complex issues:
.. code-block:: yaml
agents:
- id: tier1_support
description: Handles common FAQs, password resets, and basic troubleshooting
- id: tier2_support
description: Handles complex technical issues requiring deep product knowledge
- id: human_escalation
description: Escalates sensitive issues or unresolved problems to human agents
**Sales and Marketing**
Direct leads and inquiries to specialized sales agents:
.. code-block:: yaml
agents:
- id: product_recommendation
description: Recommends products based on user needs and preferences
- id: pricing_agent
description: Provides pricing information and quotes
- id: sales_closer
description: Handles final negotiations and closes deals
**Technical Documentation and Support**
Combine RAG agents for documentation lookup with specialized troubleshooting agents:
.. code-block:: yaml
agents:
- id: docs_agent
description: Retrieves relevant documentation and guides
filter_chain:
- query_rewriter
- context_builder
- id: troubleshoot_agent
description: Diagnoses and resolves technical issues step by step
Next Steps
----------
* Learn more about :ref:`agents <agents>` and the inner vs. outer loop model
* Explore :ref:`filter chains <filter_chain>` for adding guardrails and context enrichment
* See :ref:`observability <observability>` for monitoring multi-agent workflows
* Review the :ref:`LLM Providers <llm_providers>` guide for model routing within agents
* Check out the complete `Travel Booking demo <https://github.com/katanemo/plano/tree/main/demos/use_cases/travel_booking>`_ on GitHub
.. note::
To observe traffic to and from agents, please read more about :ref:`observability <observability>` in Plano.
By carefully configuring and managing your agent routing and handoffs, you can significantly improve your application's responsiveness, performance, and overall user satisfaction.

View file

@ -1,66 +1,118 @@
.. _prompt_guard:
Prompt Guard
=============
Guardrails
==========
**Prompt guard** is a security and validation feature offered in Arch to protect agents, by filtering and analyzing prompts before they reach your application logic.
In applications where prompts generate responses or execute specific actions based on user inputs, prompt guard minimizes risks like malicious inputs (or misaligned outputs).
By adding a layer of input scrutiny, prompt guards ensures safer, more reliable, and accurate interactions with agents.
**Guardrails** are Plano's way of applying safety and validation checks to prompts before they reach your application logic. They are typically implemented as
filters in a :ref:`Filter Chain <filter_chain>` attached to an agent, so every request passes through a consistent processing layer.
Why Guardrails
--------------
Guardrails are essential for maintaining control over AI-driven applications. They help enforce organizational policies, ensure compliance with regulations
(like GDPR or HIPAA), and protect users from harmful or inappropriate content. In applications where prompts generate responses or trigger actions, guardrails
minimize risks like malicious inputs, off-topic queries, or misaligned outputs—adding a consistent layer of input scrutiny that makes interactions safer,
more reliable, and easier to reason about.
Why Prompt Guard
----------------
.. vale Vale.Spelling = NO
- **Prompt Sanitization via Arch-Guard**
- **Jailbreak Prevention**: Detects and filters inputs that might attempt jailbreak attacks, like alternating LLM intended behavior, exposing the system prompt, or bypassing ethnics safety.
- **Dynamic Error Handling**
- **Automatic Correction**: Applies error-handling techniques to suggest corrections for minor input errors, such as typos or misformatted data.
- **Feedback Mechanism**: Provides informative error messages to users, helping them understand how to correct input mistakes or adhere to guidelines.
.. Note::
Today, Arch offers support for jailbreak via Arch-Guard. We will be adding support for additional guards in Q1, 2025 (including response guardrails)
What Is Arch-Guard
~~~~~~~~~~~~~~~~~~
`Arch-Guard <https://huggingface.co/collections/katanemo/arch-guard-6702bdc08b889e4bce8f446d>`_ is a robust classifier model specifically trained on a diverse corpus of prompt attacks.
It excels at detecting explicitly malicious prompts, providing an essential layer of security for LLM applications.
By embedding Arch-Guard within the Arch architecture, we empower developers to build robust, LLM-powered applications while prioritizing security and safety. With Arch-Guard, you can navigate the complexities of prompt management with confidence, knowing you have a reliable defense against malicious input.
- **Jailbreak Prevention**: Detect and filter inputs that attempt to change LLM behavior, expose system prompts, or bypass safety policies.
- **Domain and Topicality Enforcement**: Ensure that agents only respond to prompts within an approved domain (for example, finance-only or healthcare-only use cases) and reject unrelated queries.
- **Dynamic Error Handling**: Provide clear error messages when requests violate policy, helping users correct their inputs.
Example Configuration
~~~~~~~~~~~~~~~~~~~~~
Here is an example of using Arch-Guard in Arch:
How Guardrails Work
-------------------
.. literalinclude:: includes/arch_config.yaml
:language: yaml
:linenos:
:lines: 22-26
:caption: Arch-Guard Example Configuration
Guardrails can be implemented as either in-process MCP filters or as HTTP-based filters. HTTP filters are external services that receive the request over HTTP, validate it, and return a response to allow or reject the request. This makes it easy to use filters written in any language or run them as independent services.
How Arch-Guard Works
----------------------
Each filter receives the chat messages, evaluates them against policy, and either lets the request continue or raises a ``ToolError`` (or returns an error response) to reject it with a helpful error message.
#. **Pre-Processing Stage**
The example below shows an input guard for TechCorp's customer support system that validates queries are within the company's domain:
As a request or prompt is received, Arch Guard first performs validation. If any violations are detected, the input is flagged, and a tailored error message may be returned.
.. code-block:: python
:caption: Example domain validation guard using FastMCP
#. **Error Handling and Feedback**
from typing import List
from fastmcp.exceptions import ToolError
from . import mcp  # ChatMessage and validate_with_llm are assumed to be provided by the surrounding module
If the prompt contains errors or does not meet certain criteria, the user receives immediate feedback or correction suggestions, enhancing usability and reducing the chance of repeated input mistakes.
@mcp.tool
async def input_guards(messages: List[ChatMessage]) -> List[ChatMessage]:
"""Validates queries are within TechCorp's domain."""
Benefits of Using Arch Guard
------------------------------
# Get the user's query
user_query = next(
(msg.content for msg in reversed(messages) if msg.role == "user"),
""
)
- **Enhanced Security**: Protects against injection attacks, harmful content, and misuse, securing both system and user data.
# Use an LLM to validate the query scope (simplified)
is_valid = await validate_with_llm(user_query)
- **Better User Experience**: Clear feedback and error correction improve user interactions by guiding them to correct input formats and constraints.
if not is_valid:
raise ToolError(
"I can only assist with questions related to TechCorp and its services. "
"Please ask about TechCorp's products, pricing, SLAs, or technical support."
)
return messages
Summary
-------
To wire this guardrail into Plano, define the filter and add it to your agent's filter chain:
Prompt guard is an essential tool for any prompt-based system that values security, accuracy, and compliance.
By implementing Prompt Guard, developers can provide a robust layer of input validation and security, leading to better-performing, reliable, and safer applications.
.. code-block:: yaml
:caption: Plano configuration with input guard filter
filters:
- id: input_guards
url: http://localhost:10500
listeners:
- type: agent
name: agent_1
port: 8001
router: plano_orchestrator_v1
agents:
- id: rag_agent
description: virtual assistant for retrieval augmented generation tasks
filter_chain:
- input_guards
When a request arrives at ``agent_1``, Plano invokes the ``input_guards`` filter first. If validation passes, the request continues to
the agent. If validation fails (``ToolError`` raised), Plano returns an error response to the caller.
Testing the Guardrail
---------------------
Here's an example of the guardrail in action, rejecting a query about Apple Corporation (outside TechCorp's domain):
.. code-block:: bash
:caption: Request that violates the guardrail policy
curl -X POST http://localhost:8001/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [
{
"role": "user",
"content": "what is sla for apple corporation?"
}
],
"stream": false
}'
.. code-block:: json
:caption: Error response from the guardrail
{
"error": "ClientError",
"agent": "input_guards",
"status": 400,
"agent_response": "I apologize, but I can only assist with questions related to TechCorp and its services. Your query appears to be outside this scope. The query is about SLA for Apple Corporation, which is unrelated to TechCorp.\n\nPlease ask me about TechCorp's products, services, pricing, SLAs, or technical support."
}
This prevents out-of-scope queries from reaching your agent while providing clear feedback to users about why their request was rejected.

View file

@ -0,0 +1,255 @@
.. _managing_conversational_state:
Conversational State
=====================
The OpenAI Responses API (``v1/responses``) is designed for multi-turn conversations where context needs to persist across requests. Plano provides a unified ``v1/responses`` API that works with **any LLM provider**—OpenAI, Anthropic, Azure OpenAI, DeepSeek, or any OpenAI-compatible provider—while automatically managing conversational state for you.
Unlike the traditional Chat Completions API where you manually manage conversation history by including all previous messages in each request, Plano handles state management behind the scenes. This means you can use the Responses API with any model provider, and Plano will persist conversation context across requests—making it ideal for building conversational agents that remember context without bloating every request with full message history.
How It Works
------------
When a client calls the Responses API:
1. **First request**: Plano generates a unique response ID and stores the conversation state (messages, model, provider, timestamp).
2. **Subsequent requests**: The client includes the ``previous_response_id`` from the previous response. Plano retrieves the stored conversation state, merges it with the new input, and sends the combined context to the LLM.
3. **Response**: The LLM sees the full conversation history without the client needing to resend all previous messages.
This pattern dramatically reduces bandwidth and makes it easier to build multi-turn agents—Plano handles the state plumbing so you can focus on agent logic.
**Example Using OpenAI Python SDK:**
.. code-block:: python
from openai import OpenAI
# Point to Plano's Model Proxy endpoint
client = OpenAI(
api_key="test-key",
base_url="http://127.0.0.1:12000/v1"
)
# First turn - Plano creates a new conversation state
response = client.responses.create(
model="claude-sonnet-4-5", # Works with any configured provider
input="My name is Alice and I like Python"
)
# Save the response_id for conversation continuity
resp_id = response.id
print(f"Assistant: {response.output_text}")
# Second turn - Plano automatically retrieves previous context
resp2 = client.responses.create(
model="claude-sonnet-4-5", # Make sure its configured in plano_config.yaml
input="Please list all the messages you have received in our conversation, numbering each one.",
previous_response_id=resp_id,
)
print(f"Assistant: {resp2.output_text}")
# The reply lists both earlier turns, since Plano merged the stored history into this request
Notice how the second request only includes the new user message—Plano automatically merges it with the stored conversation history before sending to the LLM.
Configuration Overview
----------------------
State storage is configured in the ``state_storage`` section of your ``plano_config.yaml``:
.. literalinclude:: ../resources/includes/arch_config_state_storage_example.yaml
:language: yaml
:lines: 21-30
:linenos:
:emphasize-lines: 3,6-10
Plano supports two storage backends:
* **Memory**: Fast, ephemeral storage for development and testing. State is lost when Plano restarts.
* **PostgreSQL**: Durable, production-ready storage with support for Supabase and self-hosted PostgreSQL instances.
.. note::
If you don't configure ``state_storage``, conversation state management is **disabled**. The Responses API will still work, but clients must manually include full conversation history in each request (similar to the Chat Completions API behavior).
Memory Storage (Development)
----------------------------
Memory storage keeps conversation state in-memory using a thread-safe ``HashMap``. It's perfect for local development, demos, and testing, but all state is lost when Plano restarts.
**Configuration**
Add this to your ``plano_config.yaml``:
.. code-block:: yaml
state_storage:
type: memory
That's it. No additional setup required.
**When to Use Memory Storage**
* Local development and debugging
* Demos and proof-of-concepts
* Automated testing environments
* Single-instance deployments where persistence isn't critical
**Limitations**
* State is lost on restart
* Not suitable for production workloads
* Cannot scale across multiple Plano instances
PostgreSQL Storage (Production)
--------------------------------
PostgreSQL storage provides durable, production-grade conversation state management. It works with both self-hosted PostgreSQL and Supabase (PostgreSQL-as-a-service), making it ideal for scaling multi-agent systems in production.
Prerequisites
^^^^^^^^^^^^^
Before configuring PostgreSQL storage, you need:
1. A PostgreSQL database (version 12 or later)
2. Database credentials (host, user, password)
3. The ``conversation_states`` table created in your database
**Setting Up the Database**
Run the SQL schema to create the required table:
.. literalinclude:: ../resources/db_setup/conversation_states.sql
:language: sql
:linenos:
**Using psql:**
.. code-block:: bash
psql $DATABASE_URL -f docs/db_setup/conversation_states.sql
**Using Supabase Dashboard:**
1. Log in to your Supabase project
2. Navigate to the SQL Editor
3. Copy and paste the SQL from ``docs/db_setup/conversation_states.sql``
4. Run the query
Configuration
^^^^^^^^^^^^^
Once the database table is created, configure Plano to use PostgreSQL storage:
.. code-block:: yaml
state_storage:
type: postgres
connection_string: "postgresql://user:password@host:5432/database"
**Using Environment Variables**
You should **never** hardcode credentials. Use environment variables instead:
.. code-block:: yaml
state_storage:
type: postgres
connection_string: "postgresql://myuser:$DB_PASSWORD@db.example.com:5432/postgres"
Then set the environment variable before running Plano:
.. code-block:: bash
export DB_PASSWORD="your-secure-password"
# Run Plano or config validation
./plano
.. warning::
**Special Characters in Passwords**: If your password contains special characters like ``#``, ``@``, or ``&``, you must URL-encode them in the connection string. For example, ``MyPass#123`` becomes ``MyPass%23123``.
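If you assemble connection strings programmatically, the Python standard library can handle the encoding (the password and host below are placeholders):

.. code-block:: python

   from urllib.parse import quote_plus

   password = quote_plus("MyPass#123")  # -> "MyPass%23123"
   connection_string = f"postgresql://myuser:{password}@db.example.com:5432/postgres"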
Supabase Connection Strings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Supabase requires different connection strings depending on your network setup. Most users should use the **Session Pooler** connection string.
**IPv4 Networks (Most Common)**
Use the Session Pooler connection string (port 5432):
.. code-block:: text
postgresql://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
**IPv6 Networks**
Use the direct connection (port 5432):
.. code-block:: text
postgresql://postgres:[PASSWORD]@db.[PROJECT-REF].supabase.co:5432/postgres
**Finding Your Connection String**
1. Go to your Supabase project dashboard
2. Navigate to **Settings → Database → Connection Pooling**
3. Copy the **Session mode** connection string
4. Replace ``[YOUR-PASSWORD]`` with your actual database password
5. URL-encode special characters in the password
**Example Configuration**
.. code-block:: yaml
state_storage:
type: postgres
connection_string: "postgresql://postgres.myproject:$DB_PASSWORD@aws-0-us-west-2.pooler.supabase.com:5432/postgres"
Then set the environment variable:
.. code-block:: bash
# If your password is "MyPass#123", encode it as "MyPass%23123"
export DB_PASSWORD="MyPass%23123"
Troubleshooting
---------------
**"Table 'conversation_states' does not exist"**
Run the SQL schema from ``docs/db_setup/conversation_states.sql`` against your database.
**Connection errors with Supabase**
* Verify you're using the correct connection string format (Session Pooler for IPv4)
* Check that your password is URL-encoded if it contains special characters
* Ensure your Supabase project hasn't paused due to inactivity (free tier)
**Permission errors**
Ensure your database user has the following permissions:
.. code-block:: sql
GRANT SELECT, INSERT, UPDATE, DELETE ON conversation_states TO your_user;
**State not persisting across requests**
* Verify ``state_storage`` is configured in your ``plano_config.yaml``
* Check Plano logs for state storage initialization messages
* Ensure the client is sending the ``previous_response_id`` returned by the previous response
Best Practices
--------------
1. **Use environment variables for credentials**: Never hardcode database passwords in configuration files.
2. **Start with memory storage for development**: Switch to PostgreSQL when moving to production.
3. **Implement cleanup policies**: Prevent unbounded growth by regularly archiving or deleting old conversations (see the sketch after this list).
4. **Monitor storage usage**: Track conversation state table size and query performance in production.
5. **Test failover scenarios**: Ensure your application handles storage backend failures gracefully.
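As a sketch of a cleanup policy (assuming the ``psycopg`` driver; the seven-day retention window matches the example used elsewhere in these docs), a scheduled job could run:

.. code-block:: python

   import psycopg

   RETENTION = "7 days"  # adjust to your retention policy

   def cleanup_old_conversations(connection_string: str) -> None:
       """Delete conversation states not updated within the retention window."""
       with psycopg.connect(connection_string) as conn:
           conn.execute(
               "DELETE FROM conversation_states WHERE updated_at < NOW() - %s::interval",
               (RETENTION,),
           )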
Next Steps
----------
* Learn more about building :ref:`agents <agents>` that leverage conversational state
* Explore :ref:`filter chains <filter_chain>` for enriching conversation context
* See the :ref:`LLM Providers <llm_providers>` guide for configuring model routing

View file

@ -1,17 +1,15 @@
Welcome to Arch!
================
Welcome to Plano!
=================
.. image:: /_static/img/arch-logo.png
.. image:: /_static/img/PlanoTagline.svg
:width: 100%
:align: center
.. raw:: html
`Plano <https://github.com/katanemo/plano>`_ is delivery infrastructure for agentic apps. A models-native proxy server and data plane designed to help you build agents faster, and deliver them reliably to production.
<a href="https://www.producthunt.com/posts/arch-3?embed=true&utm_source=badge-top-post-badge&utm_medium=badge&utm_souce=badge-arch&#0045;3" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/top-post-badge.svg?post_id=565761&theme=dark&period=daily&t=1742433071161" alt="Arch - Build&#0032;fast&#0044;&#0032;hyper&#0045;personalized&#0032;agents&#0032;with&#0032;intelligent&#0032;infra | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>
Plano pulls out the rote plumbing work (aka “hidden AI middleware”) and decouples you from brittle, ever-changing framework abstractions. It centralizes what shouldn't be bespoke in every codebase, like agent routing and orchestration, rich agentic signals and traces for continuous improvement, guardrail filters for safety and moderation, and smart LLM routing APIs for UX and DX agility. Use any language or AI framework, and ship agents to production faster with Plano.
`Arch <https://github.com/katanemo/arch>`_ is a models-native edge and LLM proxy/gateway for AI agents - one that is natively designed to handle and process prompts, not just network traffic.
Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy.io/>`_, Arch handles the *pesky low-level work* in building agentic apps — like applying guardrails, clarifying vague user input, routing prompts to the right agent, and unifying access to any LLM. Its a language and framework friendly infrastructure layer designed to help you build and ship agentic apps faster.
Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy.io/>`_, Plano **helps developers** focus more on the core product logic of agents, **product teams** accelerate feedback loops for reinforcement learning, and **engineering teams** standardize policies and access controls across every agent and LLM for safer, more reliable scaling.
.. tab-set::
@ -23,7 +21,7 @@ Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy
:maxdepth: 2
get_started/overview
get_started/intro_to_arch
get_started/intro_to_plano
get_started/quickstart
.. tab-item:: Concepts
@ -33,7 +31,9 @@ Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy
:titlesonly:
:maxdepth: 2
concepts/tech_overview/tech_overview
concepts/listeners
concepts/agents
concepts/filter_chain
concepts/llm_providers/llm_providers
concepts/prompt_target
@ -44,22 +44,12 @@ Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy
:titlesonly:
:maxdepth: 2
guides/prompt_guard
guides/agent_routing
guides/function_calling
guides/orchestration
guides/llm_router
guides/function_calling
guides/observability/observability
.. tab-item:: Build with Arch
.. toctree::
:caption: Build with Arch
:titlesonly:
:maxdepth: 2
build_with_arch/agent
build_with_arch/rag
build_with_arch/multi_turn
guides/prompt_guard
guides/state
.. tab-item:: Resources
@ -68,5 +58,7 @@ Built by contributors to the widely adopted `Envoy Proxy <https://www.envoyproxy
:titlesonly:
:maxdepth: 2
resources/tech_overview/tech_overview
resources/deployment
resources/configuration_reference
resources/llms_txt

View file

@ -3,11 +3,11 @@
Configuration Reference
=======================
The following is a complete reference of the ``arch_config.yml`` that controls the behavior of a single instance of
The following is a complete reference of the ``plano_config.yml`` that controls the behavior of a single instance of
the Arch gateway. This is where you enable capabilities like routing to upstream LLM providers, defining prompt_targets
where prompts get routed to, applying guardrails, and enabling critical agent observability features.
.. literalinclude:: includes/arch_config_full_reference.yaml
:language: yaml
:linenos:
:caption: :download:`Arch Configuration - Full Reference <includes/arch_config_full_reference.yaml>`
:caption: :download:`Plano Configuration - Full Reference <includes/arch_config_full_reference.yaml>`

View file

@ -0,0 +1,109 @@
# Database Setup for Conversation State Storage
This directory contains SQL scripts needed to set up database tables for storing conversation state when using the OpenAI Responses API.
## Prerequisites
- PostgreSQL database (Supabase or self-hosted)
- Database connection credentials
- `psql` CLI tool or database admin access
## Setup Instructions
### Option 1: Using psql
```bash
psql $DATABASE_URL -f docs/db_setup/conversation_states.sql
```
### Option 2: Using Supabase Dashboard
1. Log in to your Supabase project dashboard
2. Navigate to the SQL Editor
3. Copy and paste the contents of `conversation_states.sql`
4. Run the query
### Option 3: Direct Database Connection
Connect to your PostgreSQL database using your preferred client and execute the SQL from `conversation_states.sql`.
## Verification
After running the setup, verify the table was created:
```sql
SELECT tablename FROM pg_tables WHERE tablename = 'conversation_states';
```
You should see `conversation_states` in the results.
## Configuration
After setting up the database table, configure your application to use Supabase storage by setting the appropriate environment variable or configuration parameter with your database connection string.
### Supabase Connection String
**Important:** Supabase requires different connection strings depending on your network:
- **IPv4 Networks (Most Common)**: Use the **Session Pooler** connection string (port 5432):
```
postgresql://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
- **IPv6 Networks**: Use the direct connection (port 5432):
```
postgresql://postgres:[PASSWORD]@db.[PROJECT-REF].supabase.co:5432/postgres
```
**How to get your connection string:**
1. Go to your Supabase project dashboard
2. Settings → Database → Connection Pooling
3. Copy the **Session mode** connection string
4. Replace `[YOUR-PASSWORD]` with your actual database password
5. URL-encode special characters in the password (e.g., `#` becomes `%23`)
**Example:**
```bash
# If your password is "MyPass#123", encode it as "MyPass%23123"
export DATABASE_URL="postgresql://postgres.myproject:MyPass%23123@aws-0-us-west-2.pooler.supabase.com:5432/postgres"
```
### Testing the Connection
To test your connection string works:
```bash
export TEST_DATABASE_URL="your-connection-string-here"
cd crates/brightstaff
cargo test supabase -- --nocapture
```
## Table Schema
The `conversation_states` table stores:
- `response_id` (TEXT, PRIMARY KEY): Unique identifier for each conversation
- `input_items` (JSONB): Array of conversation messages and context
- `created_at` (BIGINT): Unix timestamp when conversation started
- `model` (TEXT): Model name used for the conversation
- `provider` (TEXT): LLM provider name
- `updated_at` (TIMESTAMP): Last update time (auto-managed)
## Maintenance
### Cleanup Old Conversations
To prevent unbounded growth, consider periodically cleaning up old conversation states:
```sql
-- Delete conversations older than 7 days
DELETE FROM conversation_states
WHERE updated_at < NOW() - INTERVAL '7 days';
```
You can automate this with a cron job or database trigger.
## Troubleshooting
If you encounter errors on first use:
- **"Table 'conversation_states' does not exist"**: Run the setup SQL
- **Connection errors**: Verify your DATABASE_URL is correct
- **Permission errors**: Ensure your database user has CREATE TABLE privileges

View file

@ -0,0 +1,31 @@
-- Conversation State Storage Table
-- This table stores conversational context for the OpenAI Responses API
-- Run this SQL against your PostgreSQL/Supabase database before enabling conversation state storage
CREATE TABLE IF NOT EXISTS conversation_states (
response_id TEXT PRIMARY KEY,
input_items JSONB NOT NULL,
created_at BIGINT NOT NULL,
model TEXT NOT NULL,
provider TEXT NOT NULL,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Indexes for common query patterns
CREATE INDEX IF NOT EXISTS idx_conversation_states_created_at
ON conversation_states(created_at);
CREATE INDEX IF NOT EXISTS idx_conversation_states_provider
ON conversation_states(provider);
-- Optional: Add a policy for automatic cleanup of old conversations
-- Uncomment and adjust the retention period as needed
-- CREATE INDEX IF NOT EXISTS idx_conversation_states_updated_at
-- ON conversation_states(updated_at);
COMMENT ON TABLE conversation_states IS 'Stores conversation history for OpenAI Responses API continuity';
COMMENT ON COLUMN conversation_states.response_id IS 'Unique identifier for the conversation state';
COMMENT ON COLUMN conversation_states.input_items IS 'JSONB array of conversation messages and context';
COMMENT ON COLUMN conversation_states.created_at IS 'Unix timestamp (seconds) when the conversation started';
COMMENT ON COLUMN conversation_states.model IS 'Model name used for this conversation';
COMMENT ON COLUMN conversation_states.provider IS 'LLM provider (e.g., openai, anthropic, bedrock)';

View file

@ -3,17 +3,17 @@
Deployment
==========
This guide shows how to deploy Arch directly using Docker without the archgw CLI, including basic runtime checks for routing and health monitoring.
This guide shows how to deploy Plano directly using Docker without the ``plano`` CLI, including basic runtime checks for routing and health monitoring.
Docker Deployment
-----------------
Below is a minimal, production-ready example showing how to deploy the Arch Docker image directly and run basic runtime checks. Adjust image names, tags, and the ``arch_config.yaml`` path to match your environment.
Below is a minimal, production-ready example showing how to deploy the Plano Docker image directly and run basic runtime checks. Adjust image names, tags, and the ``plano_config.yaml`` path to match your environment.
.. note::
You will need to pass all required environment variables that are referenced in your ``arch_config.yaml`` file.
You will need to pass all required environment variables that are referenced in your ``plano_config.yaml`` file.
For ``arch_config.yaml``, you can use any sample configuration defined earlier in the documentation. For example, you can try the :ref:`LLM Routing <llm_router>` sample config.
For ``plano_config.yaml``, you can use any sample configuration defined earlier in the documentation. For example, you can try the :ref:`LLM Routing <llm_router>` sample config.
Docker Compose Setup
~~~~~~~~~~~~~~~~~~~~
@ -24,14 +24,14 @@ Create a ``docker-compose.yml`` file with the following configuration:
# docker-compose.yml
services:
archgw:
image: katanemo/archgw:0.3.22
container_name: archgw
plano:
image: katanemo/plano:0.4.0
container_name: plano
ports:
- "10000:10000" # ingress (client -> arch)
- "12000:12000" # egress (arch -> upstream/llm proxy)
- "10000:10000" # ingress (client -> plano)
- "12000:12000" # egress (plano -> upstream/llm proxy)
volumes:
- ./arch_config.yaml:/app/arch_config.yaml:ro
- ./plano_config.yaml:/app/plano_config.yaml:ro
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY:?error}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:?error}
@ -39,7 +39,7 @@ Create a ``docker-compose.yml`` file with the following configuration:
Starting the Stack
~~~~~~~~~~~~~~~~~~
Start the services from the directory containing ``docker-compose.yml`` and ``arch_config.yaml``:
Start the services from the directory containing ``docker-compose.yml`` and ``plano_config.yaml``:
.. code-block:: bash
@ -51,7 +51,7 @@ Check container health and logs:
.. code-block:: bash
docker compose ps
docker compose logs -f archgw
docker compose logs -f plano
Runtime Tests
-------------
@ -65,7 +65,7 @@ Test the chat completion endpoint with automatic routing:
.. code-block:: bash
# Request handled by the gateway. 'model: "none"' lets Arch decide routing
# Request handled by the gateway. 'model: "none"' lets Plano decide routing
curl --header 'Content-Type: application/json' \
--data '{"messages":[{"role":"user","content":"tell me a joke"}], "model":"none"}' \
http://localhost:12000/v1/chat/completions | jq .model
@ -74,7 +74,7 @@ Expected output:
.. code-block:: json
"gpt-4o-2024-08-06"
"gpt-5.2"
Model-Based Routing
~~~~~~~~~~~~~~~~~~~
@ -84,14 +84,14 @@ Test explicit provider and model routing:
.. code-block:: bash
curl -s -H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":"Explain quantum computing"}], "model":"anthropic/claude-3-5-sonnet-20241022"}' \
-d '{"messages":[{"role":"user","content":"Explain quantum computing"}], "model":"anthropic/claude-sonnet-4-5"}' \
http://localhost:12000/v1/chat/completions | jq .model
Expected output:
.. code-block:: json
"claude-3-5-sonnet-20241022"
"claude-sonnet-4-5"
Troubleshooting
---------------
@ -100,19 +100,19 @@ Common Issues and Solutions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Environment Variables**
Ensure all environment variables (``OPENAI_API_KEY``, ``ANTHROPIC_API_KEY``, etc.) used by ``arch_config.yaml`` are set before starting services.
Ensure all environment variables (``OPENAI_API_KEY``, ``ANTHROPIC_API_KEY``, etc.) used by ``plano_config.yaml`` are set before starting services.
**TLS/Connection Errors**
If you encounter TLS or connection errors to upstream providers:
- Check DNS resolution
- Verify proxy settings
- Confirm correct protocol and port in your ``arch_config`` endpoints
- Confirm correct protocol and port in your ``plano_config`` endpoints
**Verbose Logging**
To enable more detailed logs for debugging:
- Run archgw with a higher component log level
- Run plano with a higher component log level
- See the :ref:`Observability <observability>` guide for logging and monitoring details
- Rebuild the image if required with updated log configuration

View file

@ -0,0 +1,57 @@
version: v0.3.0
agents:
- id: weather_agent
url: http://host.docker.internal:10510
- id: flight_agent
url: http://host.docker.internal:10520
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY # smaller, faster, cheaper model for extracting entities like location
listeners:
- type: agent
name: travel_booking_service
port: 8001
router: plano_orchestrator_v1
agents:
- id: weather_agent
description: |
WeatherAgent is a specialized AI assistant for real-time weather information and forecasts. It provides accurate weather data for any city worldwide using the Open-Meteo API, helping travelers plan their trips with up-to-date weather conditions.
Capabilities:
* Get real-time weather conditions and multi-day forecasts for any city worldwide using Open-Meteo API (free, no API key needed)
* Provides current temperature
* Provides multi-day forecasts
* Provides weather conditions
* Provides sunrise/sunset times
* Provides detailed weather information
* Understands conversation context to resolve location references from previous messages
* Handles weather-related questions including "What's the weather in [city]?", "What's the forecast for [city]?", "How's the weather in [city]?"
* When queries include both weather and other travel questions (e.g., flights, currency), this agent answers ONLY the weather part
- id: flight_agent
description: |
FlightAgent is an AI-powered tool specialized in providing live flight information between airports. It leverages the FlightAware AeroAPI to deliver real-time flight status, gate information, and delay updates.
Capabilities:
* Get live flight information between airports using FlightAware AeroAPI
* Shows real-time flight status
* Shows scheduled/estimated/actual departure and arrival times
* Shows gate and terminal information
* Shows delays
* Shows aircraft type
* Shows flight status
* Automatically resolves city names to airport codes (IATA/ICAO)
* Understands conversation context to infer origin/destination from follow-up questions
* Handles flight-related questions including "What flights go from [city] to [city]?", "Do flights go to [city]?", "Are there direct flights from [city]?"
* When queries include both flight and other travel questions (e.g., weather, currency), this agent answers ONLY the flight part
tracing:
random_sampling: 100

View file

@ -0,0 +1,475 @@
import json
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
import os
import logging
import time
import uuid
import uvicorn
from datetime import datetime, timedelta
import httpx
from typing import Optional
from opentelemetry.propagate import extract, inject
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - [FLIGHT_AGENT] - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
# Configuration
LLM_GATEWAY_ENDPOINT = os.getenv(
"LLM_GATEWAY_ENDPOINT", "http://host.docker.internal:12000/v1"
)
FLIGHT_MODEL = "openai/gpt-4o"
EXTRACTION_MODEL = "openai/gpt-4o-mini"
# FlightAware AeroAPI configuration
AEROAPI_BASE_URL = "https://aeroapi.flightaware.com/aeroapi"
AEROAPI_KEY = os.getenv("AEROAPI_KEY", "")  # set via environment; avoid hardcoding API keys
# HTTP client for API calls
http_client = httpx.AsyncClient(timeout=30.0)
# Initialize OpenAI client
openai_client_via_plano = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT,
api_key="EMPTY",
)
# System prompt for flight agent
SYSTEM_PROMPT = """You are a travel planning assistant specializing in flight information in a multi-agent system. You will receive flight data in JSON format with these fields:
- "airline": Full airline name (e.g., "Delta Air Lines")
- "flight_number": Flight identifier (e.g., "DL123")
- "departure_time": ISO 8601 timestamp for scheduled departure (e.g., "2025-12-24T23:00:00Z")
- "arrival_time": ISO 8601 timestamp for scheduled arrival (e.g., "2025-12-25T04:40:00Z")
- "origin": Origin airport IATA code (e.g., "ATL")
- "destination": Destination airport IATA code (e.g., "SEA")
- "aircraft_type": Aircraft model code (e.g., "A21N", "B739")
- "status": Flight status (e.g., "Scheduled", "Delayed")
- "terminal_origin": Departure terminal (may be null)
- "gate_origin": Departure gate (may be null)
Your task:
1. Read the JSON flight data carefully
2. Present each flight clearly with: airline, flight number, departure/arrival times (convert from ISO format to readable time), airports, and aircraft type
3. Organize flights chronologically by departure time
4. Convert ISO timestamps to readable format (e.g., "11:00 PM" or "23:00")
5. Include terminal/gate info when available
6. Use natural, conversational language
Important: If the conversation includes information from other agents (like weather details), acknowledge and build upon that context naturally. Your primary focus is flights, but maintain awareness of the full conversation.
Remember: All the data you need is in the JSON. Use it directly."""
async def extract_flight_route(messages: list, request: Request) -> dict:
"""Extract origin, destination, and date from conversation using LLM."""
extraction_prompt = """Extract flight origin, destination cities, and travel date from the conversation.
Rules:
1. Look for patterns: "flight from X to Y", "flights to Y", "fly from X"
2. Extract dates like "tomorrow", "next week", "December 25", "12/25", "on Monday"
3. Use conversation context to fill in missing details
4. Return JSON: {"origin": "City" or null, "destination": "City" or null, "date": "YYYY-MM-DD" or null}
Examples:
- "Flight from Seattle to Atlanta tomorrow" -> {"origin": "Seattle", "destination": "Atlanta", "date": "2025-12-24"}
- "What flights go to New York?" -> {"origin": null, "destination": "New York", "date": null}
- "Flights to Miami on Christmas" -> {"origin": null, "destination": "Miami", "date": "2025-12-25"}
- "Show me flights from LA to NYC next Monday" -> {"origin": "LA", "destination": "NYC", "date": "2025-12-30"}
Today is December 23, 2025. Extract flight route and date:"""
try:
ctx = extract(request.headers)
extra_headers = {}
inject(extra_headers, context=ctx)
response = await openai_client_via_plano.chat.completions.create(
model=EXTRACTION_MODEL,
messages=[
{"role": "system", "content": extraction_prompt},
*[
{"role": msg.get("role"), "content": msg.get("content")}
for msg in messages[-5:]
],
],
temperature=0.1,
max_tokens=100,
extra_headers=extra_headers if extra_headers else None,
)
result = response.choices[0].message.content.strip()
if "```json" in result:
result = result.split("```json")[1].split("```")[0].strip()
elif "```" in result:
result = result.split("```")[1].split("```")[0].strip()
route = json.loads(result)
return {
"origin": route.get("origin"),
"destination": route.get("destination"),
"date": route.get("date"),
}
except Exception as e:
logger.error(f"Error extracting flight route: {e}")
return {"origin": None, "destination": None, "date": None}
async def resolve_airport_code(city_name: str, request: Request) -> Optional[str]:
"""Convert city name to airport code using LLM."""
if not city_name:
return None
try:
ctx = extract(request.headers)
extra_headers = {}
inject(extra_headers, context=ctx)
response = await openai_client_via_plano.chat.completions.create(
model=EXTRACTION_MODEL,
messages=[
{
"role": "system",
"content": "Convert city names to primary airport IATA codes. Return only the 3-letter code. Examples: Seattle→SEA, Atlanta→ATL, New York→JFK, London→LHR",
},
{"role": "user", "content": city_name},
],
temperature=0.1,
max_tokens=10,
extra_headers=extra_headers if extra_headers else None,
)
code = response.choices[0].message.content.strip().upper()
code = code.strip("\"'`.,!? \n\t")
return code if len(code) == 3 else None
except Exception as e:
logger.error(f"Error resolving airport code for {city_name}: {e}")
return None
async def get_flights(
origin_code: str, dest_code: str, travel_date: Optional[str] = None
) -> Optional[dict]:
"""Get flights between two airports using FlightAware API.
Args:
origin_code: Origin airport IATA code
dest_code: Destination airport IATA code
travel_date: Travel date in YYYY-MM-DD format, defaults to today
Note: FlightAware API limits searches to 2 days in the future.
"""
try:
# Use provided date or default to today
if travel_date:
search_date = travel_date
else:
search_date = datetime.now().strftime("%Y-%m-%d")
# Validate date is not too far in the future (FlightAware limit: 2 days)
search_date_obj = datetime.strptime(search_date, "%Y-%m-%d")
today = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
days_ahead = (search_date_obj - today).days
if days_ahead > 2:
logger.warning(
f"Requested date {search_date} is {days_ahead} days ahead, exceeds FlightAware 2-day limit"
)
return {
"origin_code": origin_code,
"destination_code": dest_code,
"flights": [],
"count": 0,
"error": f"FlightAware API only provides flight data up to 2 days in the future. The requested date ({search_date}) is {days_ahead} days ahead. Please search for today, tomorrow, or the day after.",
}
url = f"{AEROAPI_BASE_URL}/airports/{origin_code}/flights/to/{dest_code}"
headers = {"x-apikey": AEROAPI_KEY}
params = {
"start": f"{search_date}T00:00:00Z",
"end": f"{search_date}T23:59:59Z",
"connection": "nonstop",
"max_pages": 1,
}
response = await http_client.get(url, headers=headers, params=params)
if response.status_code != 200:
logger.error(
f"FlightAware API error {response.status_code}: {response.text}"
)
return None
data = response.json()
flights = []
# Log raw API response for debugging
logger.info(f"FlightAware API returned {len(data.get('flights', []))} flights")
for idx, flight_group in enumerate(
data.get("flights", [])[:5]
): # Limit to 5 flights
# FlightAware API nests data in segments array
segments = flight_group.get("segments", [])
if not segments:
continue
flight = segments[0] # Get first segment (direct flights only have one)
# Extract airport codes from nested objects
flight_origin = None
flight_dest = None
if isinstance(flight.get("origin"), dict):
flight_origin = flight["origin"].get("code_iata")
if isinstance(flight.get("destination"), dict):
flight_dest = flight["destination"].get("code_iata")
# Build flight object
flights.append(
{
"airline": flight.get("operator"),
"flight_number": flight.get("ident_iata") or flight.get("ident"),
"departure_time": flight.get("scheduled_out"),
"arrival_time": flight.get("scheduled_in"),
"origin": flight_origin,
"destination": flight_dest,
"aircraft_type": flight.get("aircraft_type"),
"status": flight.get("status"),
"terminal_origin": flight.get("terminal_origin"),
"gate_origin": flight.get("gate_origin"),
}
)
return {
"origin_code": origin_code,
"destination_code": dest_code,
"flights": flights,
"count": len(flights),
}
except Exception as e:
logger.error(f"Error fetching flights: {e}")
return None
app = FastAPI(title="Flight Information Agent", version="1.0.0")
@app.post("/v1/chat/completions")
async def handle_request(request: Request):
"""HTTP endpoint for chat completions with streaming support."""
request_body = await request.json()
messages = request_body.get("messages", [])
return StreamingResponse(
invoke_flight_agent(request, request_body),
media_type="text/plain",
headers={"content-type": "text/event-stream"},
)
async def invoke_flight_agent(request: Request, request_body: dict):
"""Generate streaming chat completions."""
messages = request_body.get("messages", [])
# Step 1: Extract origin, destination, and date
route = await extract_flight_route(messages, request)
origin = route.get("origin")
destination = route.get("destination")
travel_date = route.get("date")
# Step 2: Short circuit if missing origin or destination
if not origin or not destination:
missing = []
if not origin:
missing.append("origin city")
if not destination:
missing.append("destination city")
error_message = f"I need both origin and destination cities to search for flights. Please provide the {' and '.join(missing)}. For example: 'Flights from Seattle to Atlanta'"
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", FLIGHT_MODEL),
"choices": [
{
"index": 0,
"delta": {"content": error_message},
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
return
# Step 3: Resolve airport codes
origin_code = await resolve_airport_code(origin, request)
dest_code = await resolve_airport_code(destination, request)
if not origin_code or not dest_code:
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", FLIGHT_MODEL),
"choices": [
{
"index": 0,
"delta": {
"content": f"I couldn't find airport codes for {origin if not origin_code else destination}. Please check the city name."
},
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
return
# Step 4: Get live flight data
flight_data = await get_flights(origin_code, dest_code, travel_date)
# Determine date display for messages
date_display = travel_date if travel_date else "today"
if not flight_data or not flight_data.get("flights"):
# Check if there's a specific error message (e.g., date too far in future)
error_detail = flight_data.get("error") if flight_data else None
if error_detail:
no_flights_message = error_detail
else:
no_flights_message = f"No direct flights found from {origin} ({origin_code}) to {destination} ({dest_code}) for {date_display}."
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", FLIGHT_MODEL),
"choices": [
{
"index": 0,
"delta": {"content": no_flights_message},
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
return
# Step 5: Prepare context for LLM - append flight data to last user message
flight_context = f"""
Flight search results from {origin} ({origin_code}) to {destination} ({dest_code}):
Flight data in JSON format:
{json.dumps(flight_data, indent=2)}
Present these {len(flight_data.get('flights', []))} flight(s) to the user in a clear, readable format."""
# Build message history with flight data appended to the last user message
response_messages = [{"role": "system", "content": SYSTEM_PROMPT}]
for i, msg in enumerate(messages):
# Append flight data to the last user message
if i == len(messages) - 1 and msg.get("role") == "user":
response_messages.append(
{"role": "user", "content": msg.get("content") + flight_context}
)
else:
response_messages.append(
{"role": msg.get("role"), "content": msg.get("content")}
)
# Log what we're sending to the LLM for debugging
logger.info(f"Sending messages to LLM: {json.dumps(response_messages, indent=2)}")
# Step 6: Stream response
try:
ctx = extract(request.headers)
extra_headers = {"x-envoy-max-retries": "3"}
inject(extra_headers, context=ctx)
stream = await openai_client_via_plano.chat.completions.create(
model=FLIGHT_MODEL,
messages=response_messages,
temperature=request_body.get("temperature", 0.7),
max_tokens=request_body.get("max_tokens", 1000),
stream=True,
extra_headers=extra_headers,
)
async for chunk in stream:
if chunk.choices:
yield f"data: {chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
except Exception as e:
logger.error(f"Error generating flight response: {e}")
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", FLIGHT_MODEL),
"choices": [
{
"index": 0,
"delta": {
"content": "I apologize, but I'm having trouble retrieving flight information right now. Please try again."
},
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
@app.get("/health")
async def health_check():
"""Health check endpoint."""
return {"status": "healthy", "agent": "flight_information"}
def start_server(host: str = "localhost", port: int = 10520):
"""Start the REST server."""
uvicorn.run(
app,
host=host,
port=port,
log_config={
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s - [FLIGHT_AGENT] - %(levelname)s - %(message)s",
},
},
"handlers": {
"default": {
"formatter": "default",
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
},
},
"root": {
"level": "INFO",
"handlers": ["default"],
},
},
)
if __name__ == "__main__":
start_server(host="0.0.0.0", port=10520)

View file

@ -0,0 +1,426 @@
import json
import re
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
import os
import logging
import time
import uuid
import uvicorn
from datetime import datetime, timedelta
import httpx
from typing import Optional
from urllib.parse import quote
from opentelemetry.propagate import extract, inject
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - [WEATHER_AGENT] - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)
# Configuration for plano LLM gateway
LLM_GATEWAY_ENDPOINT = os.getenv(
"LLM_GATEWAY_ENDPOINT", "http://host.docker.internal:12001/v1"
)
WEATHER_MODEL = "openai/gpt-4o"
LOCATION_MODEL = "openai/gpt-4o-mini"
# Initialize OpenAI client for plano
openai_client_via_plano = AsyncOpenAI(
base_url=LLM_GATEWAY_ENDPOINT,
api_key="EMPTY",
)
# FastAPI app for REST server
app = FastAPI(title="Weather Forecast Agent", version="1.0.0")
# HTTP client for API calls
http_client = httpx.AsyncClient(timeout=10.0)
# Utility functions
def celsius_to_fahrenheit(temp_c: Optional[float]) -> Optional[float]:
"""Convert Celsius to Fahrenheit."""
return round(temp_c * 9 / 5 + 32, 1) if temp_c is not None else None
def get_user_messages(messages: list) -> list:
"""Extract user messages from message list."""
return [msg for msg in messages if msg.get("role") == "user"]
def get_last_user_content(messages: list) -> str:
"""Get the content of the most recent user message."""
for msg in reversed(messages):
if msg.get("role") == "user":
return msg.get("content", "").lower()
return ""
async def get_weather_data(request: Request, messages: list, days: int = 1):
"""Extract location from user's conversation and fetch weather data from Open-Meteo API.
This function does two things:
1. Uses an LLM to extract the location from the user's message
2. Fetches weather data for that location from Open-Meteo
    Returns the current day's weather by default; pass days > 1 for a multi-day forecast.
"""
instructions = """Extract the location for WEATHER queries. Return just the city name.
Rules:
1. For multi-part queries, extract ONLY the location mentioned with weather keywords ("weather in [location]")
2. If user says "there" or "that city", it typically refers to the DESTINATION city in travel contexts (not the origin)
3. For flight queries with weather, "there" means the destination city where they're traveling TO
4. Return plain text (e.g., "London", "New York", "Paris, France")
5. If no weather location found, return "NOT_FOUND"
Examples:
- "What's the weather in London?" -> "London"
- "Flights from Seattle to Atlanta, and show me the weather there" -> "Atlanta"
- "Can you get me flights from Seattle to Atlanta tomorrow, and also please show me the weather there" -> "Atlanta"
- "What's the weather in Seattle, and what is one flight that goes direct to Atlanta?" -> "Seattle"
- User asked about flights to Atlanta, then "what's the weather like there?" -> "Atlanta"
- "I'm going to Seattle" -> "Seattle"
- "What's happening?" -> "NOT_FOUND"
Extract location:"""
try:
user_messages = [
msg.get("content") for msg in messages if msg.get("role") == "user"
]
if not user_messages:
location = "New York"
else:
ctx = extract(request.headers)
extra_headers = {}
inject(extra_headers, context=ctx)
# For location extraction, pass full conversation for context (e.g., "there" = previous destination)
response = await openai_client_via_plano.chat.completions.create(
model=LOCATION_MODEL,
messages=[
{"role": "system", "content": instructions},
*[
{"role": msg.get("role"), "content": msg.get("content")}
for msg in messages
],
],
temperature=0.1,
max_tokens=50,
extra_headers=extra_headers if extra_headers else None,
)
location = response.choices[0].message.content.strip().strip("\"'`.,!?")
logger.info(f"Location extraction result: '{location}'")
if not location or location.upper() == "NOT_FOUND":
location = "New York"
logger.info(f"Location not found, defaulting to: {location}")
except Exception as e:
logger.error(f"Error extracting location: {e}")
location = "New York"
logger.info(f"Fetching weather for location: '{location}' (days: {days})")
# Step 2: Fetch weather data for the extracted location
try:
# Geocode city to get coordinates
geocode_url = f"https://geocoding-api.open-meteo.com/v1/search?name={quote(location)}&count=1&language=en&format=json"
geocode_response = await http_client.get(geocode_url)
if geocode_response.status_code != 200 or not geocode_response.json().get(
"results"
):
logger.warning(f"Could not geocode {location}, using New York")
location = "New York"
geocode_url = f"https://geocoding-api.open-meteo.com/v1/search?name={quote(location)}&count=1&language=en&format=json"
geocode_response = await http_client.get(geocode_url)
geocode_data = geocode_response.json()
if not geocode_data.get("results"):
return {
"location": location,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
result = geocode_data["results"][0]
location_name = result.get("name", location)
latitude = result["latitude"]
longitude = result["longitude"]
logger.info(
f"Geocoded '{location}' to {location_name} ({latitude}, {longitude})"
)
# Get weather forecast
weather_url = (
f"https://api.open-meteo.com/v1/forecast?"
f"latitude={latitude}&longitude={longitude}&"
f"current=temperature_2m&"
f"daily=sunrise,sunset,temperature_2m_max,temperature_2m_min,weather_code&"
f"forecast_days={days}&timezone=auto"
)
weather_response = await http_client.get(weather_url)
if weather_response.status_code != 200:
return {
"location": location_name,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
weather_data = weather_response.json()
current_temp = weather_data.get("current", {}).get("temperature_2m")
daily = weather_data.get("daily", {})
# Build forecast for requested number of days
forecast = []
for i in range(days):
date_str = daily["time"][i]
date_obj = datetime.fromisoformat(date_str.replace("Z", "+00:00"))
temp_max = (
daily.get("temperature_2m_max", [])[i]
if daily.get("temperature_2m_max")
else None
)
temp_min = (
daily.get("temperature_2m_min", [])[i]
if daily.get("temperature_2m_min")
else None
)
weather_code = (
daily.get("weather_code", [0])[i] if daily.get("weather_code") else 0
)
sunrise = daily.get("sunrise", [])[i] if daily.get("sunrise") else None
sunset = daily.get("sunset", [])[i] if daily.get("sunset") else None
# Use current temp for today, otherwise use max temp
temp_c = (
temp_max
if temp_max is not None
                else (current_temp if i == 0 and current_temp is not None else temp_min)
)
forecast.append(
{
"date": date_str.split("T")[0],
"day_name": date_obj.strftime("%A"),
"temperature_c": round(temp_c, 1) if temp_c is not None else None,
"temperature_f": celsius_to_fahrenheit(temp_c),
"temperature_max_c": round(temp_max, 1)
if temp_max is not None
else None,
"temperature_min_c": round(temp_min, 1)
if temp_min is not None
else None,
"weather_code": weather_code,
"sunrise": sunrise.split("T")[1] if sunrise else None,
"sunset": sunset.split("T")[1] if sunset else None,
}
)
return {"location": location_name, "forecast": forecast}
except Exception as e:
logger.error(f"Error getting weather data: {e}")
return {
"location": location,
"weather": {
"date": datetime.now().strftime("%Y-%m-%d"),
"day_name": datetime.now().strftime("%A"),
"temperature_c": None,
"temperature_f": None,
"weather_code": None,
"error": "Could not retrieve weather data",
},
}
@app.post("/v1/chat/completions")
async def handle_request(request: Request):
"""HTTP endpoint for chat completions with streaming support."""
request_body = await request.json()
messages = request_body.get("messages", [])
logger.info(
"messages detail json dumps: %s",
json.dumps(messages, indent=2),
)
traceparent_header = request.headers.get("traceparent")
return StreamingResponse(
invoke_weather_agent(request, request_body, traceparent_header),
media_type="text/plain",
headers={
"content-type": "text/event-stream",
},
)
async def invoke_weather_agent(
request: Request, request_body: dict, traceparent_header: str = None
):
"""Generate streaming chat completions."""
messages = request_body.get("messages", [])
# Detect if user wants multi-day forecast
last_user_msg = get_last_user_content(messages)
days = 1
if "forecast" in last_user_msg or "week" in last_user_msg:
days = 7
elif "tomorrow" in last_user_msg:
days = 2
# Extract specific number of days if mentioned (e.g., "5 day forecast")
day_match = re.search(r"(\d{1,2})\s+day", last_user_msg)
if day_match:
requested_days = int(day_match.group(1))
days = min(requested_days, 16) # API supports max 16 days
# Get live weather data (location extraction happens inside this function)
weather_data = await get_weather_data(request, messages, days)
# Create weather context to append to user message
forecast_type = "forecast" if days > 1 else "current weather"
weather_context = f"""
Weather data for {weather_data['location']} ({forecast_type}):
{json.dumps(weather_data, indent=2)}"""
# System prompt for weather agent
instructions = """You are a weather assistant in a multi-agent system. You will receive weather data in JSON format with these fields:
- "location": City name
- "forecast": Array of weather objects, each with date, day_name, temperature_c, temperature_f, temperature_max_c, temperature_min_c, weather_code, sunrise, sunset
- weather_code: WMO code (0=clear, 1-3=partly cloudy, 45-48=fog, 51-67=rain, 71-86=snow, 95-99=thunderstorm)
Your task:
1. Present the weather/forecast clearly for the location
2. For single day: show current conditions
3. For multi-day: show each day with date and conditions
4. Include temperature in both Celsius and Fahrenheit
5. Describe conditions naturally based on weather_code
6. Use conversational language
Important: If the conversation includes information from other agents (like flight details), acknowledge and build upon that context naturally. Your primary focus is weather, but maintain awareness of the full conversation.
Remember: Only use the provided data. If fields are null, mention data is unavailable."""
# Build message history with weather data appended to the last user message
response_messages = [{"role": "system", "content": instructions}]
for i, msg in enumerate(messages):
# Append weather data to the last user message
if i == len(messages) - 1 and msg.get("role") == "user":
response_messages.append(
{"role": "user", "content": msg.get("content") + weather_context}
)
else:
response_messages.append(
{"role": msg.get("role"), "content": msg.get("content")}
)
try:
ctx = extract(request.headers)
extra_headers = {"x-envoy-max-retries": "3"}
inject(extra_headers, context=ctx)
stream = await openai_client_via_plano.chat.completions.create(
model=WEATHER_MODEL,
messages=response_messages,
temperature=request_body.get("temperature", 0.7),
max_tokens=request_body.get("max_tokens", 1000),
stream=True,
extra_headers=extra_headers,
)
async for chunk in stream:
if chunk.choices:
yield f"data: {chunk.model_dump_json()}\n\n"
yield "data: [DONE]\n\n"
except Exception as e:
logger.error(f"Error generating weather response: {e}")
error_chunk = {
"id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
"object": "chat.completion.chunk",
"created": int(time.time()),
"model": request_body.get("model", WEATHER_MODEL),
"choices": [
{
"index": 0,
"delta": {
"content": "I apologize, but I'm having trouble retrieving weather information right now. Please try again."
},
"finish_reason": "stop",
}
],
}
yield f"data: {json.dumps(error_chunk)}\n\n"
yield "data: [DONE]\n\n"
@app.get("/health")
async def health_check():
"""Health check endpoint."""
return {"status": "healthy", "agent": "weather_forecast"}
def start_server(host: str = "localhost", port: int = 10510):
"""Start the REST server."""
uvicorn.run(
app,
host=host,
port=port,
log_config={
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s - [WEATHER_AGENT] - %(levelname)s - %(message)s",
},
},
"handlers": {
"default": {
"formatter": "default",
"class": "logging.StreamHandler",
"stream": "ext://sys.stdout",
},
},
"root": {
"level": "INFO",
"handlers": ["default"],
},
},
)
if __name__ == "__main__":
start_server(host="0.0.0.0", port=10510)

View file

@ -1,100 +1,110 @@
version: v0.1
# Arch Gateway configuration version
version: v0.3.0
# External HTTP agents - API type is controlled by request path (/v1/responses, /v1/messages, /v1/chat/completions)
agents:
- id: weather_agent # Example agent for weather
url: http://host.docker.internal:10510
- id: flight_agent # Example agent for flights
url: http://host.docker.internal:10520
# MCP filters applied to requests/responses (e.g., input validation, query rewriting)
filters:
- id: input_guards # Example filter for input validation
url: http://host.docker.internal:10500
# type: mcp (default)
# transport: streamable-http (default)
# tool: input_guards (default - same as filter id)
# LLM provider configurations with API keys and model routing
model_providers:
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
- model: anthropic/claude-sonnet-4-0
access_key: $ANTHROPIC_API_KEY
- model: mistral/ministral-3b-latest
access_key: $MISTRAL_API_KEY
# Model aliases - use friendly names instead of full provider model names
model_aliases:
fast-llm:
target: gpt-4o-mini
smart-llm:
target: gpt-4o
# HTTP listeners - entry points for agent routing, prompt targets, and direct LLM access
listeners:
ingress_traffic:
# Agent listener for routing requests to multiple agents
- type: agent
name: travel_booking_service
port: 8001
router: plano_orchestrator_v1
address: 0.0.0.0
port: 10000
message_format: openai
timeout: 5s
egress_traffic:
agents:
- id: rag_agent
description: virtual assistant for retrieval augmented generation tasks
filter_chain:
- input_guards
# Model listener for direct LLM access
- type: model
name: model_1
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 5s
# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
# Prompt listener for function calling (for prompt_targets)
- type: prompt
name: prompt_function_listener
address: 0.0.0.0
port: 10000
# This listener is used for prompt_targets and function calling
# Reusable service endpoints
endpoints:
app_server:
# value could be ip address or a hostname with port
# this could also be a list of endpoints for load balancing
# for example endpoint: [ ip1:port, ip2:port ]
endpoint: 127.0.0.1:80
# max time to wait for a connection to be established
connect_timeout: 0.005s
mistral_local:
endpoint: 127.0.0.1:8001
error_target:
endpoint: error_target_1
# Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
llm_providers:
- name: openai/gpt-4o
access_key: $OPENAI_API_KEY
model: openai/gpt-4o
default: true
- access_key: $MISTRAL_API_KEY
model: mistral/mistral-8x7b
- model: mistral/mistral-7b-instruct
base_url: http://mistral_local
# Model aliases - friendly names that map to actual provider names
model_aliases:
# Alias for summarization tasks -> fast/cheap model
arch.summarize.v1:
target: gpt-4o
# Alias for general purpose tasks -> latest model
arch.v1:
target: mistral-8x7b
# provides a way to override default settings for the arch system
overrides:
# By default Arch uses an NLI + embedding approach to match an incoming prompt to a prompt target.
# The intent matching threshold is kept at 0.80, you can override this behavior if you would like
prompt_target_intent_matching_threshold: 0.60
# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.
# Prompt targets for function calling and API orchestration
prompt_targets:
- name: information_extraction
default: true
description: handel all scenarios that are question and answer in nature. Like summarization, information extraction, etc.
endpoint:
name: app_server
path: /agent/summary
http_method: POST
# Arch uses the default LLM and treats the response from the endpoint as the prompt to send to the LLM
auto_llm_dispatch_on_response: true
# override system prompt for this prompt target
system_prompt: You are a helpful information extraction assistant. Use the information that is provided to you.
- name: reboot_network_device
description: Reboot a specific network device
endpoint:
name: app_server
path: /agent/action
- name: get_current_weather
description: Get current weather at a location.
parameters:
- name: device_id
type: str
description: Identifier of the network device to reboot.
- name: location
description: The location to get the weather for
required: true
- name: confirmation
type: bool
description: Confirmation flag to proceed with reboot.
default: false
enum: [true, false]
type: string
format: City, State
- name: days
description: the number of days for the request
required: true
type: int
endpoint:
name: app_server
path: /weather
http_method: POST
# OpenTelemetry tracing configuration
tracing:
# sampling rate. Note by default Arch works on OpenTelemetry compatible tracing.
sampling_rate: 0.1
# Random sampling percentage (1-100)
random_sampling: 100

View file

@ -1,15 +1,50 @@
agents:
- id: weather_agent
url: http://host.docker.internal:10510
- id: flight_agent
url: http://host.docker.internal:10520
endpoints:
app_server:
connect_timeout: 0.005s
endpoint: 127.0.0.1
port: 80
error_target:
endpoint: error_target_1
port: 80
flight_agent:
endpoint: host.docker.internal
port: 10520
protocol: http
input_guards:
endpoint: host.docker.internal
port: 10500
protocol: http
mistral_local:
endpoint: 127.0.0.1
port: 8001
weather_agent:
endpoint: host.docker.internal
port: 10510
protocol: http
filters:
- id: input_guards
url: http://host.docker.internal:10500
listeners:
- address: 0.0.0.0
agents:
- description: virtual assistant for retrieval augmented generation tasks
filter_chain:
- input_guards
id: rag_agent
name: travel_booking_service
port: 8001
router: plano_orchestrator_v1
type: agent
- address: 0.0.0.0
name: model_1
port: 12000
type: model
- address: 0.0.0.0
name: prompt_function_listener
port: 10000
type: prompt
- address: 0.0.0.0
model_providers:
- access_key: $OPENAI_API_KEY
@ -17,49 +52,44 @@ listeners:
model: gpt-4o
name: openai/gpt-4o
provider_interface: openai
- access_key: $OPENAI_API_KEY
model: gpt-4o-mini
name: openai/gpt-4o-mini
provider_interface: openai
- access_key: $ANTHROPIC_API_KEY
model: claude-sonnet-4-0
name: anthropic/claude-sonnet-4-0
provider_interface: anthropic
- access_key: $MISTRAL_API_KEY
model: mistral-8x7b
name: mistral/mistral-8x7b
provider_interface: mistral
- base_url: http://mistral_local
cluster_name: mistral_mistral_local
endpoint: mistral_local
model: mistral-7b-instruct
name: mistral/mistral-7b-instruct
port: 80
protocol: http
model: ministral-3b-latest
name: mistral/ministral-3b-latest
provider_interface: mistral
name: egress_traffic
port: 12000
timeout: 5s
timeout: 30s
type: model_listener
- address: 0.0.0.0
name: ingress_traffic
port: 10000
timeout: 5s
type: prompt_listener
model_aliases:
arch.summarize.v1:
fast-llm:
target: gpt-4o-mini
smart-llm:
target: gpt-4o
arch.v1:
target: mistral-8x7b
model_providers:
- access_key: $OPENAI_API_KEY
default: true
model: gpt-4o
name: openai/gpt-4o
provider_interface: openai
- access_key: $OPENAI_API_KEY
model: gpt-4o-mini
name: openai/gpt-4o-mini
provider_interface: openai
- access_key: $ANTHROPIC_API_KEY
model: claude-sonnet-4-0
name: anthropic/claude-sonnet-4-0
provider_interface: anthropic
- access_key: $MISTRAL_API_KEY
model: mistral-8x7b
name: mistral/mistral-8x7b
provider_interface: mistral
- base_url: http://mistral_local
cluster_name: mistral_mistral_local
endpoint: mistral_local
model: mistral-7b-instruct
name: mistral/mistral-7b-instruct
port: 80
protocol: http
model: ministral-3b-latest
name: mistral/ministral-3b-latest
provider_interface: mistral
- model: Arch-Function
name: arch-function
@ -67,45 +97,23 @@ model_providers:
- model: Plano-Orchestrator
name: plano-orchestrator
provider_interface: arch
overrides:
prompt_target_intent_matching_threshold: 0.6
prompt_guards:
input_guards:
jailbreak:
on_exception:
message: Looks like you're curious about my abilities, but I can only provide
assistance within my programmed parameters.
prompt_targets:
- auto_llm_dispatch_on_response: true
default: true
description: handel all scenarios that are question and answer in nature. Like summarization,
information extraction, etc.
- description: Get current weather at a location.
endpoint:
http_method: POST
name: app_server
path: /agent/summary
name: information_extraction
system_prompt: You are a helpful information extraction assistant. Use the information
that is provided to you.
- description: Reboot a specific network device
endpoint:
name: app_server
path: /agent/action
name: reboot_network_device
path: /weather
name: get_current_weather
parameters:
- description: Identifier of the network device to reboot.
name: device_id
- description: The location to get the weather for
format: City, State
name: location
required: true
type: str
- default: false
description: Confirmation flag to proceed with reboot.
enum:
- true
- false
name: confirmation
type: bool
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers
or purchasing decisions.
type: string
- description: the number of days for the request
name: days
required: true
type: int
tracing:
sampling_rate: 0.1
version: v0.1
random_sampling: 100
version: v0.3.0

View file

@ -1,14 +1,12 @@
version: v0.1
listeners:
egress_traffic:
- type: model
name: model_proxy_listener
address: 0.0.0.0
port: 12000
message_format: openai
timeout: 30s
llm_providers:
model_providers:
# OpenAI Models
- model: openai/gpt-5-mini-2025-08-07
access_key: $OPENAI_API_KEY

View file

@ -0,0 +1,41 @@
version: v0.3.0
agents:
- id: rag_agent
url: http://host.docker.internal:10505
filters:
- id: query_rewriter
url: http://host.docker.internal:10501
# type: mcp # default is mcp
# transport: streamable-http # default is streamable-http
# tool: query_rewriter # default name is the filter id
- id: context_builder
url: http://host.docker.internal:10502
model_providers:
- model: openai/gpt-4o-mini
access_key: $OPENAI_API_KEY
default: true
- model: openai/gpt-4o
access_key: $OPENAI_API_KEY
model_aliases:
fast-llm:
target: gpt-4o-mini
smart-llm:
target: gpt-4o
listeners:
- type: agent
name: agent_1
port: 8001
router: arch_agent_router
agents:
- id: rag_agent
description: virtual assistant for retrieval augmented generation tasks
filter_chain:
- query_rewriter
- context_builder
tracing:
random_sampling: 100

View file

@ -0,0 +1,6 @@
llms.txt
================
This project generates a single plaintext file containing the compiled text of all documentation pages, so that large-context models can reference the full Plano documentation in one place.
Open it here: `llms.txt <../includes/llms.txt>`_

View file

@ -0,0 +1,21 @@
.. _bright_staff:
Bright Staff
============
Bright Staff is Plano's memory-efficient, lightweight controller for agentic traffic. It sits inside the Plano
data plane and makes real-time decisions about how prompts are handled, forwarded, and processed.
Rather than running a separate "model server" subsystem, Plano relies on Envoy's HTTP connection management
and cluster subsystem to talk to different models and backends over HTTP(S). Bright Staff uses these primitives to:
* Inspect prompts, conversation state, and metadata.
* Decide which upstream model(s), tool backends, or APIs to call, and in what order.
* Coordinate retries, fallbacks, and traffic splitting across providers and models.
Plano is designed to run alongside your application servers in your cloud VPC, on-premises, or in local
development. It does not require a GPU itself; GPUs live where your models are hosted (third-party APIs or your
own deployments), and Plano reaches them via HTTP.
.. image:: /_static/img/plano-system-architecture.png
:align: center
:width: 40%

View file

@ -0,0 +1,145 @@
.. _lifecycle_of_a_request:
Request Lifecycle
=================
Below we describe the events in the lifecycle of a request passing through a Plano instance. We first
describe how Plano fits into the request path, then the internal events that take place once a request
arrives at Plano from a downstream client, following the request through the corresponding upstream
dispatch and the response path.
.. image:: /_static/img/network-topology-ingress-egress.png
:width: 100%
:align: center
Network topology
----------------
How a request flows through the components in a network (including Plano) depends on the network's topology.
Plano can be used in a wide variety of networking topologies. We focus on the inner operations of Plano below,
but briefly we address how Plano relates to the rest of the network in this section.
- **Downstream (Ingress)** listeners take requests from downstream clients, such as a web UI or services that forward
  prompts to your local application; responses from the application flow back through Plano to the downstream client.
- **Upstream (Egress)** listeners take requests from the application and forward them to LLMs.
High level architecture
-----------------------
Plano is a set of **two** self-contained processes that are designed to run alongside your application servers
(or on a separate server connected to your application servers via a network).
The first process manages HTTP-level networking and connection management concerns (protocol management, request id generation, header sanitization, etc.), and the other process is a **controller**, which helps Plano make intelligent decisions about incoming prompts. The controller hosts the purpose-built LLMs that manage several critical, but undifferentiated, prompt-related tasks on behalf of developers.
The request processing path in Plano has two main parts:
* :ref:`Listener subsystem <plano_overview_listeners>` which handles **downstream** and **upstream** request
  processing. It is responsible for managing the inbound (edge) and outbound (egress) request lifecycle. The downstream and upstream HTTP/2 codecs live here. This also includes the lifecycle of any **upstream** connection to an LLM provider or tool backend. The listener subsystem manages connection pools, load balancing, retries, and failover.
* :ref:`Bright Staff controller subsystem <bright_staff>` is Plano's memory-efficient, lightweight controller for agentic traffic. It sits inside the Plano data plane and makes real-time decisions about how prompts are handled, forwarded, and processed.
These two subsystems are bridged by the HTTP router filter and the cluster manager subsystem of Envoy.
Also, Plano utilizes `Envoy's event-based thread model <https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310>`_. A main thread is responsible for the server lifecycle, configuration processing, stats, etc., and some number of :ref:`worker threads <arch_overview_threading>` process requests. All threads operate around an event loop (`libevent <https://libevent.org/>`_) and any given downstream TCP connection will be handled by exactly one worker thread for its lifetime. Each worker thread maintains its own pool of TCP connections to upstream endpoints.
Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
enables scaling to very high core count CPUs.
Request Flow (Ingress)
----------------------
A brief outline of the lifecycle of a request and response, using the example configuration at the end of this page:
1. **TCP Connection Establishment**:
   A TCP connection from downstream is accepted by a Plano listener running on a worker thread.
The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS,
decrypts incoming data for processing.
2. **Routing Decision (Agent vs Prompt Target)**:
The decrypted data stream is de-framed by the HTTP/2 codec in Plano's HTTP connection manager. Plano performs
intent matching (via the Bright Staff controller and prompt-handling logic) using the configured agents and
:ref:`prompt targets <prompt_target>`, determining whether this request should be handled by an agent workflow
(with optional :ref:`Filter Chains <filter_chain>`) or by a deterministic prompt target.
3a. **Agent Path: Orchestration and Filter Chains**
If the request is routed to an **agent**, Plano executes any attached :ref:`Filter Chains <filter_chain>` first. These filters can apply guardrails, rewrite prompts, or enrich context (for example, RAG retrieval) before the agent runs. Once filters complete, the Bright Staff controller orchestrates which downstream tools, APIs, or LLMs the agent should call and in what sequence.
* Plano may call one or more backend APIs or tools on behalf of the agent.
* If an endpoint cluster is identified, load balancing is performed, circuit breakers are checked, and the request is proxied to the appropriate upstream endpoint.
* If no specific endpoint is required, the prompt is sent to an upstream LLM using Plano's model proxy for
completion or summarization.
For more on agent workflows and orchestration, see :ref:`Prompt Targets and Agents <prompt_target>` and
:ref:`Agent Filter Chains <filter_chain>`.
3b. **Prompt Target Path: Deterministic Tool/API Calls**
If the request is routed to a **prompt target**, Plano treats it as a deterministic, task-specific call.
Plano engages its function-calling and parameter-gathering capabilities to extract the necessary details
from the incoming prompt(s) and produce the structured inputs your backend expects.
* **Parameter Gathering**: Plano extracts and validates parameters defined on the prompt target (for example,
currency symbols, dates, or entity identifiers) so your backend does not need to parse natural language.
* **API Call Execution**: Plano then routes the call to the configured backend endpoint. If an endpoint cluster is identified, load balancing and circuit-breaker checks are applied before proxying the request upstream.
For more on how to design and configure prompt targets, see :ref:`Prompt Target <prompt_target>`.
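   To make this concrete, below is a minimal sketch (using FastAPI, as the example agents in this repository do) of a backend that could sit behind a prompt target such as the ``get_current_weather`` example, with ``location`` and ``days`` parameters and a ``/weather`` endpoint; the exact request shape is an illustrative assumption, not a contract defined here:

   .. code-block:: python

      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI()

      class WeatherParams(BaseModel):
          # Mirrors the parameters declared on the prompt target; Plano gathers
          # these from natural language before calling the endpoint.
          location: str
          days: int

      @app.post("/weather")
      async def get_current_weather(params: WeatherParams):
          # The backend receives structured inputs and never parses the prompt itself.
          return {"location": params.location, "days": params.days, "forecast": []}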
4. **Error Handling and Forwarding**:
   Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to
   designated error targets. Error details are communicated to the application through specific headers (a minimal sketch of reading them follows this list):
- ``X-Function-Error-Code``: Code indicating the type of function call error.
- ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
- Additional headers carry messages and timestamps to aid in debugging and logging.
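   For illustration, the sketch below shows how a downstream application might check these headers; the endpoint, port, and payload are taken from the sample configuration and are illustrative only:

   .. code-block:: python

      import httpx

      def check_for_gateway_errors(response: httpx.Response) -> None:
          # Plano surfaces failure details on response headers rather than in the body.
          function_error = response.headers.get("X-Function-Error-Code")
          guard_error = response.headers.get("X-Prompt-Guard-Error-Code")
          if function_error:
              print(f"Function call failed with code {function_error}")
          if guard_error:
              print(f"Prompt guardrail triggered with code {guard_error}")

      response = httpx.post(
          "http://localhost:10000/v1/chat/completions",  # ingress listener port from the sample config
          json={"messages": [{"role": "user", "content": "Reboot device sw-01"}]},
      )
      check_for_gateway_errors(response)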
5. **Response Handling**:
   The upstream endpoint's TLS transport socket encrypts the response, which is then proxied back downstream.
Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
Request Flow (Egress)
---------------------
A brief outline of the lifecycle of a request and response in the context of egress traffic from an application to Large Language Models (LLMs) via Plano:
1. **HTTP Connection Establishment to LLM**:
   Plano initiates an HTTP connection to the upstream LLM service. This connection is handled by Plano's egress listener running on a worker thread. The connection typically uses a secure transport protocol such as HTTPS, ensuring the prompt data is encrypted before being sent to the LLM service.
2. **Rate Limiting**:
Before sending the request to the LLM, Plano applies rate-limiting policies to ensure that the upstream LLM service is not overwhelmed by excessive traffic. Rate limits are enforced per client or service, ensuring fair usage and preventing accidental or malicious overload. If the rate limit is exceeded, Plano may return an appropriate HTTP error (e.g., 429 Too Many Requests) without sending the prompt to the LLM.
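   For example, a minimal client-side sketch of backing off when the gateway returns ``429 Too Many Requests``; the retry policy and the port (the sample model listener) are illustrative assumptions, not something Plano mandates:

   .. code-block:: python

      import time
      import httpx

      PLANO_MODEL_URL = "http://localhost:12000/v1/chat/completions"  # model listener from the sample config

      def chat_with_retry(payload: dict, retries: int = 3) -> httpx.Response:
          # Back off and retry when Plano's rate limiter rejects the request with 429.
          response = httpx.post(PLANO_MODEL_URL, json=payload)
          attempt = 0
          while response.status_code == 429 and attempt < retries:
              attempt += 1
              time.sleep(1.0 * attempt)  # simple linear backoff
              response = httpx.post(PLANO_MODEL_URL, json=payload)
          return response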
3. **Seamless Request Transformation and Smart Routing**:
After rate limiting, Plano normalizes the outgoing request into a provider-agnostic shape and applies smart routing decisions using the configured :ref:`LLM Providers <llm_providers>`. This includes translating client-specific conventions into a unified OpenAI-style contract, enriching or overriding parameters (for example, temperature or max tokens) based on policy, and choosing the best target model or provider using :ref:`model-based, alias-based, or preference-aligned routing <llm_providers>`.
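   To make the routing contract concrete, here is a minimal sketch of an application calling Plano's OpenAI-compatible interface; the port and the ``fast-llm`` alias come from the sample configurations and are assumptions about your deployment:

   .. code-block:: python

      from openai import OpenAI

      # Standard OpenAI client pointed at Plano's model listener; the gateway resolves
      # the alias, applies routing and parameter policy, and forwards the call upstream.
      client = OpenAI(base_url="http://localhost:12000/v1", api_key="EMPTY")

      completion = client.chat.completions.create(
          model="fast-llm",  # alias from the sample config, resolved by Plano (e.g. to gpt-4o-mini)
          messages=[{"role": "user", "content": "Summarize today's weather in Seattle."}],
      )
      print(completion.choices[0].message.content)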
4. **Load Balancing to (hosted) LLM Endpoints**:
After smart routing selects the target provider/model, Plano routes the prompt to the appropriate LLM endpoint.
If multiple LLM provider instances are available, load balancing is performed to distribute traffic evenly
across the instances. Plano checks the health of the LLM endpoints using circuit breakers and health checks,
ensuring that the prompt is only routed to a healthy, responsive instance.
5. **Response Reception and Forwarding**:
Once the LLM processes the prompt, Plano receives the response from the LLM service. The response is typically a generated text, completion, or summarization. Upon reception, Plano decrypts (if necessary) and handles the response, passing it through any egress processing pipeline defined by the application, such as logging or additional response filtering.
Post-request processing
^^^^^^^^^^^^^^^^^^^^^^^^
Once a request completes, the stream is destroyed. The following also takes place:
* The post-request :ref:`monitoring <monitoring>` statistics are updated (e.g. timing, active requests, upgrades, health checks).
  Some statistics are updated earlier, however, during request processing. Stats are batched and written by the main
thread periodically.
* :ref:`Access logs <arch_access_logging>` are written to the access log
* :ref:`Trace <arch_overview_tracing>` spans are finalized. If our example request was traced, a
  trace span, describing the duration and details of the request, would be created by the HTTP connection manager (HCM) when
  processing request headers and then finalized by the HCM during post-request processing.
Configuration
-------------
Today, Plano supports only a static bootstrap configuration file, for simplicity:
.. literalinclude:: ../../concepts/includes/plano_config.yaml
:language: yaml

View file

@ -6,10 +6,6 @@ Tech Overview
.. toctree::
:maxdepth: 2
terminology
threading_model
listener
prompt
model_serving
request_lifecycle
error_target
model_serving
threading_model

View file

@ -3,17 +3,17 @@
Threading Model
===============
Arch builds on top of Envoy's single process with multiple threads architecture.
Plano builds on top of Envoy's single process with multiple threads architecture.
A single *primary* thread controls various sporadic coordination tasks while some number of *worker*
threads perform filtering and forwarding.
Once a connection is accepted, the connection spends the rest of its lifetime bound to a single worker
thread. All the functionality around prompt handling from a downstream client is handled in a separate worker thread.
This allows the majority of Arch to be largely single threaded (embarrassingly parallel) with a small amount
This allows the majority of Plano to be largely single threaded (embarrassingly parallel) with a small amount
of more complex code handling coordination between the worker threads.
Generally, Arch is written to be 100% non-blocking.
Generally, Plano is written to be 100% non-blocking.
.. tip::