mirror of
https://github.com/katanemo/plano.git
synced 2026-05-02 12:22:43 +02:00
Update docs to Plano (#639)
This commit is contained in:
parent
15fbb6c3af
commit
e224cba3e3
139 changed files with 4407 additions and 24735 deletions
76
docs/source/concepts/agents.rst
Normal file

@ -0,0 +1,76 @@
.. _agents:

Agents
======

Agents are autonomous systems that handle wide-ranging, open-ended tasks by calling models in a loop until the work is complete. Unlike deterministic :ref:`prompt targets <prompt_target>`, agents have access to tools, reason about which actions to take, and adapt their behavior based on intermediate results—making them ideal for complex workflows that require multi-step reasoning, external API calls, and dynamic decision-making.

Plano helps developers build and scale multi-agent systems by managing the orchestration layer—deciding which agent(s) or LLM(s) should handle each request, and in what sequence—while developers focus on implementing agent logic in any language or framework they choose.

Agent Orchestration
-------------------

**Plano-Orchestrator** is a family of state-of-the-art routing and orchestration models that decide which agent(s) should handle each request, and in what sequence. Built for real-world multi-agent deployments, it analyzes user intent and conversation context to make precise routing and orchestration decisions while remaining efficient enough for low-latency production use across general chat, coding, and long-context multi-turn conversations.

This allows development teams to:

* **Scale multi-agent systems**: Route requests across multiple specialized agents without hardcoding routing logic in application code.
* **Improve performance**: Direct requests to the most appropriate agent based on intent, reducing unnecessary handoffs and improving response quality.
* **Enhance debuggability**: Centralized routing decisions are observable through Plano's tracing and logging, making it easier to understand why a particular agent was selected.

Inner Loop vs. Outer Loop
-------------------------

Plano distinguishes between the **inner loop** (agent implementation logic) and the **outer loop** (orchestration and routing):

Inner Loop (Agent Logic)
^^^^^^^^^^^^^^^^^^^^^^^^

The inner loop is where your agent lives—the business logic that decides which tools to call, how to interpret results, and when the task is complete. You implement this in any language or framework:

* **Python agents**: Using frameworks like LangChain, LlamaIndex, CrewAI, or custom Python code.
* **JavaScript/TypeScript agents**: Using frameworks like LangChain.js or custom Node.js implementations.
* **Any other AI framework**: Agents are just HTTP services that Plano can route to.

Your agent controls:

* Which tools or APIs to call in response to a prompt.
* How to interpret tool results and decide next steps.
* When to call the LLM for reasoning or summarization.
* When the task is complete and what response to return.

.. note::
   **Making LLM Calls from Agents**

   When your agent needs to call an LLM for reasoning, summarization, or completion, you should route those calls through Plano's Model Proxy rather than calling LLM providers directly. This gives you:

   * **Consistent responses**: Normalized response formats across all :ref:`LLM providers <llm_providers>`, whether you're using OpenAI, Anthropic, Azure OpenAI, or any OpenAI-compatible provider.
   * **Rich agentic signals**: Automatic capture of function calls, tool usage, reasoning steps, and model behavior—surfaced through traces and metrics without instrumenting your agent code.
   * **Smart model routing**: Leverage :ref:`model-based, alias-based, or preference-aligned routing <llm_providers>` to dynamically select the best model for each task based on cost, performance, or custom policies.

   By routing LLM calls through the Model Proxy, your agents remain decoupled from specific providers and can benefit from centralized policy enforcement, observability, and intelligent routing—all managed in the outer loop. For a step-by-step guide, see :ref:`llm_router` in the LLM Router guide.
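As a concrete illustration, the sketch below shows a minimal inner loop in Python that routes its LLM calls through Plano's Model Proxy using the OpenAI SDK. The proxy endpoint follows the examples elsewhere in these docs; the tool, its schema, and the model name are hypothetical placeholders for your own agent logic.

.. code-block:: python

    import json
    from openai import OpenAI

    # Point the SDK at Plano's Model Proxy instead of a provider directly.
    client = OpenAI(api_key="test-key", base_url="http://127.0.0.1:12000/v1")

    # Hypothetical tool definition; replace with your agent's real tools.
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    def get_weather(city: str) -> str:
        return f"72F and sunny in {city}"  # stub for illustration

    def run_agent(prompt: str) -> str:
        messages = [{"role": "user", "content": prompt}]
        while True:  # inner loop: call the model until the task is complete
            response = client.chat.completions.create(
                model="gpt-4o-mini", messages=messages, tools=TOOLS
            )
            message = response.choices[0].message
            if not message.tool_calls:
                return message.content  # task complete
            messages.append(message)
            for call in message.tool_calls:
                args = json.loads(call.function.arguments)
                result = get_weather(**args)  # dispatch to the matching tool
                messages.append({
                    "role": "tool",
                    "tool_call_id": call.id,
                    "content": result,
                })

    print(run_agent("What's the weather in Seattle?"))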
Outer Loop (Orchestration)
^^^^^^^^^^^^^^^^^^^^^^^^^^

The outer loop is Plano's orchestration layer—it manages the lifecycle of requests across agents and LLMs:

* **Intent analysis**: Plano-Orchestrator analyzes incoming prompts to determine user intent and conversation context.
* **Routing decisions**: Routes requests to the appropriate agent(s) or LLM(s) based on capabilities, context, and availability.
* **Sequencing**: Determines whether multiple agents need to collaborate and in what order.
* **Lifecycle management**: Handles retries, failover, circuit breaking, and load balancing across agent instances.

By managing the outer loop, Plano allows you to:

* Add new agents without changing routing logic in existing agents.
* Run multiple versions or variants of agents for A/B testing or canary deployments.
* Apply consistent :ref:`filter chains <filter_chain>` (guardrails, context enrichment) before requests reach agents.
* Monitor and debug multi-agent workflows through centralized observability.

Key Benefits
------------

* **Language and framework agnostic**: Write agents in any language; Plano orchestrates them via HTTP.
* **Reduced complexity**: Agents focus on task logic; Plano handles routing, retries, and cross-cutting concerns.
* **Better observability**: Centralized tracing shows which agents were called, in what sequence, and why.
* **Easier scaling**: Add more agent instances or new agent types without refactoring existing code.
74
docs/source/concepts/filter_chain.rst
Normal file

@ -0,0 +1,74 @@
.. _filter_chain:

Filter Chains
=============

Filter chains are Plano's way of capturing **reusable workflow steps** in the dataplane, without duplicating logic or coupling it into application code. A filter chain is an ordered list of **mutations** that a request flows through before reaching its final destination—such as an agent, an LLM, or a tool backend. Each filter is a network-addressable service/path that can:

1. Inspect the incoming prompt, metadata, and conversation state.
2. Mutate or enrich the request (for example, rewrite queries or build context).
3. Short-circuit the flow and return a response early (for example, block a request on a compliance failure).
4. Emit structured logs and traces so you can debug and continuously improve your agents.

In other words, filter chains provide a lightweight programming model over HTTP for building reusable steps in your agent architectures.

Typical Use Cases
-----------------

Without a dataplane programming model, teams tend to spread logic like query rewriting, compliance checks, context building, and routing decisions across many agents and frameworks. This quickly becomes hard to reason about and even harder to evolve.

Filter chains show up most often in patterns like:

* **Guardrails and Compliance**: Enforcing content policies, stripping or masking sensitive data, and blocking obviously unsafe or off-topic requests before they reach an agent.
* **Query rewriting, RAG, and Memory**: Rewriting user queries for retrieval, normalizing entities, and assembling RAG context envelopes while pulling in relevant memory (for example, conversation history, user profiles, or prior tool results) before calling a model or tool.
* **Cross-cutting Observability**: Injecting correlation IDs, sampling traces, or logging enriched request metadata at consistent points in the request path.

Because these behaviors live in the dataplane rather than inside individual agents, you define them once, attach them to many agents and prompt targets, and can add, remove, or reorder them without changing application code.

Configuration example
---------------------

The example below shows a configuration where an agent uses a filter chain with two filters: a query rewriter, and a context builder that prepares retrieval context before the agent runs.

.. literalinclude:: ../../source/resources/includes/plano_config_agents_filters.yaml
    :language: yaml
    :linenos:
    :emphasize-lines: 7-14, 37-39
    :caption: Example Configuration

In this setup:

* The ``filters`` section defines the reusable filters, each running as its own HTTP/MCP service.
* The ``listeners`` section wires the ``rag_agent`` behind an ``agent`` listener and attaches a ``filter_chain`` with ``query_rewriter`` followed by ``context_builder``.
* When a request arrives at ``agent_1``, Plano executes the filters in order before handing control to ``rag_agent`` (a condensed sketch of such a configuration follows below).
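Since the included file isn't reproduced here, the following is a hypothetical, condensed sketch of what such a configuration could look like. The filter ids, URLs, and agent name mirror the description above, but the exact schema is defined by the included example file, not by this sketch.

.. code-block:: yaml

    # Hypothetical sketch -- see the included example file for the real schema.
    filters:
      - id: query_rewriter
        url: http://query-rewriter.local:8100
      - id: context_builder
        url: http://context-builder.local:8101

    listeners:
      agent_1:
        type: agent
        filter_chain:
          - query_rewriter
          - context_builder
        target: rag_agent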
Filter Chain Programming Model (HTTP and MCP)
---------------------------------------------

Filters are implemented as simple RESTful endpoints reachable via HTTP. If you want to use the `Model Context Protocol (MCP) <https://modelcontextprotocol.io/>`_, you can configure that as well, which makes it easy to write filters in any language. However, you can also write a filter as a plain HTTP service.

When defining a filter in Plano configuration, the following fields are optional:

* ``type``: Controls the filter runtime. Use ``mcp`` for Model Context Protocol filters, or ``http`` for plain HTTP filters. Defaults to ``mcp``.
* ``transport``: Controls how Plano talks to the filter (defaults to ``streamable-http`` for efficient streaming interactions over HTTP). You can omit this for standard HTTP transport.
* ``tool``: Names the MCP tool Plano will invoke (by default, the filter ``id``). You can omit this if the tool name matches your filter id.

In practice, you typically only need to specify ``id`` and ``url`` to get started. Plano's sensible defaults mean a filter can be as simple as an HTTP endpoint. If you want to customize the runtime or protocol, those fields are there, but they're optional.

Filters communicate the outcome of their work via HTTP status codes:

* **HTTP 200 (Success)**: The filter successfully processed the request. If the filter mutated the request (e.g., rewrote a query or enriched context), those mutations are passed downstream.
* **HTTP 4xx (User Error)**: The request violates a filter's rules or constraints—for example, content moderation policies or compliance checks. The request is terminated, and the error is returned to the caller. This is *not* a fatal error; it represents expected user-facing policy enforcement.
* **HTTP 5xx (Fatal Error)**: An unexpected failure in the filter itself (for example, a crash or misconfiguration). Plano will surface the error back to the caller and record it in logs and traces.

These semantics allow filters to enforce guardrails and policies (4xx) without blocking the entire system, while still surfacing critical failures (5xx) for investigation.
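To make the contract concrete, here is a minimal sketch of a plain HTTP guardrail filter in Python (Flask). The JSON shape of the request body is an assumption for illustration; only the status-code semantics follow the contract above.

.. code-block:: python

    # Hypothetical guardrail filter: the request/response JSON shape is
    # illustrative, but the 200/4xx behavior matches the semantics above.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    BLOCKED_TERMS = {"ssn", "credit card"}

    @app.post("/filter")
    def guardrail():
        body = request.get_json(force=True)
        prompt = body.get("prompt", "")

        # 4xx: expected policy enforcement -- terminate the request early.
        if any(term in prompt.lower() for term in BLOCKED_TERMS):
            return jsonify(error="Request blocked by content policy"), 403

        # 200: pass the (optionally mutated) request downstream.
        body["prompt"] = prompt.strip()
        return jsonify(body), 200

    if __name__ == "__main__":
        app.run(port=8100)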
If any filter fails or decides to terminate the request early (for example, after a policy violation), Plano will surface that outcome back to the caller and record it in logs and traces. This makes filter chains a safe and powerful abstraction for evolving your agent workflows over time.
@ -1,27 +1,16 @@
-version: v0.1.0
+version: v0.2.0

 listeners:
   ingress_traffic:
     address: 0.0.0.0
     port: 10000
     message_format: openai
     timeout: 30s

 # Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
-llm_providers:
+model_providers:
   - access_key: $OPENAI_API_KEY
     model: openai/gpt-4o
     default: true

 # default system prompt used by all prompt targets
 system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

 prompt_guards:
   input_guards:
     jailbreak:
       on_exception:
         message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.

 prompt_targets:
   - name: information_extraction
     default: true
79
docs/source/concepts/listeners.rst
Normal file

@ -0,0 +1,79 @@
.. _plano_overview_listeners:

Listeners
---------

**Listeners** are a top-level primitive in Plano that bind network traffic to the dataplane. They simplify the configuration required to accept incoming connections from downstream clients (edge) and to expose a unified egress endpoint for calls from your applications to upstream LLMs.

Plano builds on Envoy's Listener subsystem to streamline connection management for developers. It hides most of Envoy's complexity behind sensible defaults and a focused configuration surface, so you can bind listeners without deep knowledge of Envoy's configuration model while still getting secure, reliable, and performant connections.

Listeners are modular building blocks: you can configure only inbound listeners (for edge proxying and guardrails), only outbound/model-proxy listeners (for LLM routing from your services), or both together. This lets you fit Plano cleanly into existing architectures, whether you need it at the edge, behind the firewall, or across the full request path.

Network Topology
^^^^^^^^^^^^^^^^

The diagram below shows how inbound and outbound traffic flow through Plano and how listeners relate to agents, prompt targets, and upstream LLMs:

.. image:: /_static/img/network-topology-ingress-egress.png
    :width: 100%
    :align: center

Inbound (Agent & Prompt Target)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Developers configure **inbound listeners** to accept connections from clients such as web frontends, backend services, or other gateways. An inbound listener acts as the primary entry point for prompt traffic, handling initial connection setup, TLS termination, guardrails, and forwarding incoming traffic to the appropriate prompt targets or agents.

There are two primary types of inbound connections exposed via listeners:

* **Agent Inbound (Edge)**: Clients (web/mobile apps or other services) connect to Plano, send prompts, and receive responses. This is typically your public/edge listener where Plano applies guardrails, routing, and orchestration before returning results to the caller.

* **Prompt Target Inbound (Edge)**: Your application server calls Plano's internal listener targeting :ref:`prompt targets <prompt_target>` that can invoke tools and LLMs directly on its behalf.

Inbound listeners are where you attach :ref:`Filter Chains <filter_chain>` so that safety and context-building happen consistently at the edge.

Outbound (Model Proxy & Egress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Plano also exposes an **egress listener** that your applications call when sending requests to upstream LLM providers or self-hosted models. From your application's perspective this looks like a single OpenAI-compatible HTTP endpoint (for example, ``http://127.0.0.1:12000/v1``), while Plano handles provider selection, retries, and failover behind the scenes.
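For example, a request to the egress listener looks like any other OpenAI-compatible call; the endpoint below matches the gateway endpoints listed in the client libraries guide, while the model name is illustrative:

.. code-block:: bash

    curl http://127.0.0.1:12000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'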
Under the hood, Plano opens outbound HTTP(S) connections to upstream LLM providers using its unified API surface and smart model routing. For more details on how Plano talks to models and how providers are configured, see :ref:`LLM providers <llm_providers>`.

Configure Listeners
^^^^^^^^^^^^^^^^^^^

Listeners are configured via the ``listeners`` block in your Plano configuration. You can define one or more inbound listeners (for example, ``type:edge``) or one or more outbound/model listeners (for example, ``type:model``), or both in the same deployment.
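As a rough sketch, a deployment that uses both might look like the following; the field names mirror the ingress/egress examples elsewhere in these docs, and your deployment's exact schema may differ:

.. code-block:: yaml

    # Illustrative sketch combining an inbound and an outbound listener.
    listeners:
      ingress_traffic:        # inbound: clients send prompts here
        address: 0.0.0.0
        port: 10000
        message_format: openai
        timeout: 30s
      egress_traffic:         # outbound: your app reaches LLMs via this endpoint
        address: 127.0.0.1
        port: 12000
        message_format: openai
        timeout: 30s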
To configure an inbound (edge) listener, add a ``listeners`` block to your configuration file and define at least one listener with address, port, and protocol details:

.. literalinclude:: ./includes/plano_config.yaml
    :language: yaml
    :linenos:
    :lines: 1-13
    :emphasize-lines: 3-7
    :caption: Example Configuration

When you start Plano, you specify a listener address/port that you want to bind downstream. Plano also exposes a predefined internal listener (``127.0.0.1:12000``) that you can use to proxy egress calls originating from your application to LLMs (API-based or hosted) via prompt targets.
@ -3,7 +3,7 @@
 Client Libraries
 ================

-Arch provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Arch's gateway endpoints.
+Plano provides a unified interface that works seamlessly with multiple client libraries and tools. You can use your preferred client library without changing your existing code - just point it to Plano's gateway endpoints.

 Supported Clients
 ------------------

@ -16,7 +16,7 @@ Supported Clients
 Gateway Endpoints
 -----------------

-Arch exposes two main endpoints:
+Plano exposes three main endpoints:

 .. list-table::
    :header-rows: 1
@ -26,13 +26,15 @@ Arch exposes two main endpoints:
      - Purpose
    * - ``http://127.0.0.1:12000/v1/chat/completions``
      - OpenAI-compatible chat completions (LLM Gateway)
+   * - ``http://127.0.0.1:12000/v1/responses``
+     - OpenAI Responses API with :ref:`conversational state management <managing_conversational_state>` (LLM Gateway)
    * - ``http://127.0.0.1:12000/v1/messages``
      - Anthropic-compatible messages (LLM Gateway)
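For quick smoke tests you can also exercise these endpoints directly with cURL. For example, against the Anthropic-compatible endpoint (the model name is illustrative; any configured provider works):

.. code-block:: bash

    curl http://127.0.0.1:12000/v1/messages \
      -H "Content-Type: application/json" \
      -d '{
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello!"}]
      }'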
 OpenAI (Python) SDK
 -------------------

-The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint.
+The OpenAI SDK works with any provider through Plano's OpenAI-compatible endpoint.

 **Installation:**

@ -46,7 +48,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint

     from openai import OpenAI

-    # Point to Arch's LLM Gateway
+    # Point to Plano's LLM Gateway
     client = OpenAI(
         api_key="test-key",  # Can be any value for local testing
         base_url="http://127.0.0.1:12000/v1"

@ -96,7 +98,7 @@ The OpenAI SDK works with any provider through Arch's OpenAI-compatible endpoint

 **Using with Non-OpenAI Models:**

-The OpenAI SDK can be used with any provider configured in Arch:
+The OpenAI SDK can be used with any provider configured in Plano:

 .. code-block:: python

@ -124,10 +126,92 @@ The OpenAI SDK can be used with any provider configured in Arch:
         ]
     )

+OpenAI Responses API (Conversational State)
+-------------------------------------------
+
+The OpenAI Responses API (``v1/responses``) enables multi-turn conversations with automatic state management. Plano handles conversation history for you, so you don't need to manually include previous messages in each request.
+
+See :ref:`managing_conversational_state` for detailed configuration and storage backend options.
+
+**Installation:**
+
+.. code-block:: bash
+
+    pip install openai
+
+**Basic Multi-Turn Conversation:**
+
+.. code-block:: python
+
+    from openai import OpenAI
+
+    # Point to Plano's LLM Gateway
+    client = OpenAI(
+        api_key="test-key",
+        base_url="http://127.0.0.1:12000/v1"
+    )
+
+    # First turn - creates a new conversation
+    response = client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[
+            {"role": "user", "content": "My name is Alice"}
+        ]
+    )
+
+    # Extract response_id for conversation continuity
+    response_id = response.id
+    print(f"Assistant: {response.choices[0].message.content}")
+
+    # Second turn - continues the conversation
+    # Plano automatically retrieves and merges previous context
+    response = client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[
+            {"role": "user", "content": "What's my name?"}
+        ],
+        metadata={"response_id": response_id}  # Reference previous conversation
+    )
+
+    print(f"Assistant: {response.choices[0].message.content}")
+    # Output: "Your name is Alice"
+
+**Using with Any Provider:**
+
+The Responses API works with any LLM provider configured in Plano:
+
+.. code-block:: python
+
+    # Multi-turn conversation with Claude
+    response = client.chat.completions.create(
+        model="claude-3-5-sonnet-20241022",
+        messages=[
+            {"role": "user", "content": "Let's discuss quantum physics"}
+        ]
+    )
+
+    response_id = response.id
+
+    # Continue conversation - Plano manages state regardless of provider
+    response = client.chat.completions.create(
+        model="claude-3-5-sonnet-20241022",
+        messages=[
+            {"role": "user", "content": "Tell me more about entanglement"}
+        ],
+        metadata={"response_id": response_id}
+    )
+
+**Key Benefits:**
+
+* **Reduced payload size**: No need to send full conversation history in each request
+* **Provider flexibility**: Use any configured LLM provider with state management
+* **Automatic context merging**: Plano handles conversation continuity behind the scenes
+* **Production-ready storage**: Configure :ref:`PostgreSQL or memory storage <managing_conversational_state>` based on your needs
+
 Anthropic (Python) SDK
 ----------------------

-The Anthropic SDK works with any provider through Arch's Anthropic-compatible endpoint.
+The Anthropic SDK works with any provider through Plano's Anthropic-compatible endpoint.

 **Installation:**

@ -141,7 +225,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en

     import anthropic

-    # Point to Arch's LLM Gateway
+    # Point to Plano's LLM Gateway
     client = anthropic.Anthropic(
         api_key="test-key",  # Can be any value for local testing
         base_url="http://127.0.0.1:12000"

@ -192,7 +276,7 @@ The Anthropic SDK works with any provider through Arch's Anthropic-compatible en

 **Using with Non-Anthropic Models:**

-The Anthropic SDK can be used with any provider configured in Arch:
+The Anthropic SDK can be used with any provider configured in Plano:

 .. code-block:: python

@ -284,7 +368,7 @@ For direct HTTP requests or integration with any programming language:
 Cross-Client Compatibility
 --------------------------

-One of Arch's key features is cross-client compatibility. You can:
+One of Plano's key features is cross-client compatibility. You can:

 **Use OpenAI SDK with Claude Models:**

@ -1,16 +1,16 @@
 .. _llm_providers:

-LLM Providers
-=============
-**LLM Providers** are a top-level primitive in Arch, helping developers centrally define, secure, observe,
-and manage the usage of their LLMs. Arch builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_
-to manage egress traffic to LLMs, which includes intelligent routing, retry and fail-over mechanisms,
-ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly
-switch between LLM providers or upgrade LLM versions, simplifying the integration and scaling of LLMs
-across applications.
+Model (LLM) Providers
+=====================
+**Model Providers** are a top-level primitive in Plano, helping developers centrally define, secure, observe,
+and manage the usage of their models. Plano builds on Envoy's reliable `cluster subsystem <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/upstream/cluster_manager>`_ to manage egress traffic to models, which includes intelligent routing, retry and fail-over mechanisms,
+ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly switch between model providers or upgrade model versions, simplifying the integration and scaling of models across applications.

-Today, we are enabling you to connect to 11+ different AI providers through a unified interface with advanced routing and management capabilities.
-Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Arch provides seamless integration with enterprise-grade features.
+Today, we enable you to connect to 15+ different AI providers through a unified interface with advanced routing and management capabilities.
+Whether you're using OpenAI, Anthropic, Azure OpenAI, local Ollama models, or any OpenAI-compatible provider, Plano provides seamless integration with enterprise-grade features.

 .. note::
    Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.

 Core Capabilities
 -----------------

@ -18,29 +18,29 @@ Core Capabilities
 **Multi-Provider Support**
 Connect to any combination of providers simultaneously (see :ref:`supported_providers` for full details):

-- **First-Class Providers**: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
-- **OpenAI-Compatible Providers**: Any provider implementing the OpenAI Chat Completions API standard
+- First-Class Providers: Native integrations with OpenAI, Anthropic, DeepSeek, Mistral, Groq, Google Gemini, Together AI, xAI, Azure OpenAI, and Ollama
+- OpenAI-Compatible Providers: Any provider implementing the OpenAI Chat Completions API standard

 **Intelligent Routing**
 Three powerful routing approaches to optimize model selection:

-- **Model-based Routing**: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
-- **Alias-based Routing**: Semantic routing using custom aliases (see :ref:`model_aliases`)
-- **Preference-aligned Routing**: Intelligent routing using the Arch-Router model (see :ref:`preference_aligned_routing`)
+- Model-based Routing: Direct routing to specific models using provider/model names (see :ref:`supported_providers`)
+- Alias-based Routing: Semantic routing using custom aliases (see :ref:`model_aliases`)
+- Preference-aligned Routing: Intelligent routing using the Plano-Router model (see :ref:`preference_aligned_routing`)

 **Unified Client Interface**
 Use your preferred client library without changing existing code (see :ref:`client_libraries` for details):

-- **OpenAI Python SDK**: Full compatibility with all providers
-- **Anthropic Python SDK**: Native support with cross-provider capabilities
-- **cURL & HTTP Clients**: Direct REST API access for any programming language
-- **Custom Integrations**: Standard HTTP interfaces for seamless integration
+- OpenAI Python SDK: Full compatibility with all providers
+- Anthropic Python SDK: Native support with cross-provider capabilities
+- cURL & HTTP Clients: Direct REST API access for any programming language
+- Custom Integrations: Standard HTTP interfaces for seamless integration

 Key Benefits
 ------------

 - **Provider Flexibility**: Switch between providers without changing client code
-- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Arch-Router-1.5B <https://huggingface.co/katanemo/Arch-Router-1.5B>`_) strategies
+- **Three Routing Methods**: Choose from model-based, alias-based, or preference-aligned routing (using `Plano-Router-1.5B <https://huggingface.co/katanemo/Plano-Router-1.5B>`_) strategies
 - **Cost Optimization**: Route requests to cost-effective models based on complexity
 - **Performance Optimization**: Use fast models for simple tasks, powerful models for complex reasoning
 - **Environment Management**: Configure different models for different environments
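For reference, preference-aligned routing is configured per provider. The brief sketch below mirrors the routing_preferences example in the Supported Providers guide; the preference names and descriptions are illustrative:

.. code-block:: yaml

    llm_providers:
      - model: openai/gpt-5.2
        access_key: $OPENAI_API_KEY
        routing_preferences:
          - name: complex_reasoning
            description: multi-step analysis and planning tasks
      - model: anthropic/claude-sonnet-4-5
        access_key: $ANTHROPIC_API_KEY
        routing_preferences:
          - name: creative_writing
            description: drafting and editing prose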
@ -3,27 +3,21 @@
 Supported Providers & Configuration
 ===================================

-Arch provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.
+Plano provides first-class support for multiple LLM providers through native integrations and OpenAI-compatible interfaces. This comprehensive guide covers all supported providers, their available chat models, and detailed configuration instructions.

 .. note::
-   **Model Support:** Arch supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.
+   **Model Support:** Plano supports all chat models from each provider, not just the examples shown in this guide. The configurations below demonstrate common models for reference, but you can use any chat model available from your chosen provider.

 Please refer to the quickstart guide :ref:`here <llm_routing_quickstart>` to configure and use LLM providers via common client libraries like OpenAI and Anthropic Python SDKs, or via direct HTTP/cURL requests.

 Configuration Structure
 -----------------------

-All providers are configured in the ``llm_providers`` section of your ``arch_config.yaml`` file:
+All providers are configured in the ``llm_providers`` section of your ``plano_config.yaml`` file:

 .. code-block:: yaml

    version: v0.1

    listeners:
      egress_traffic:
        address: 0.0.0.0
        port: 12000
        message_format: openai
        timeout: 30s

    llm_providers:
      # Provider configurations go here
      - model: provider/model-name
@ -50,7 +44,7 @@ Any provider that implements the OpenAI API interface can be configured using cu
 Supported API Endpoints
 ------------------------

-Arch supports the following standardized endpoints across providers:
+Plano supports the following standardized endpoints across providers:

 .. list-table::
    :header-rows: 1

@ -65,6 +59,9 @@ Arch supports the following standardized endpoints across providers:
    * - ``/v1/messages``
      - Anthropic-style messages
      - Anthropic SDK, cURL, custom clients
+   * - ``/v1/responses``
+     - Unified response endpoint for agentic apps
+     - All SDKs, cURL, custom clients

 First-Class Providers
 ---------------------
@ -78,7 +75,7 @@ OpenAI

 **Authentication:** API Key - Get your OpenAI API key from `OpenAI Platform <https://platform.openai.com/api-keys>`_.

-**Supported Chat Models:** All OpenAI chat models including GPT-5, GPT-4o, GPT-4, GPT-3.5-turbo, and all future releases.
+**Supported Chat Models:** All OpenAI chat models including GPT-5.2, GPT-5, GPT-4o, and all future releases.

 .. list-table::
    :header-rows: 1

@ -87,21 +84,18 @@ OpenAI
    * - Model Name
      - Model ID for Config
      - Description
+   * - GPT-5.2
+     - ``openai/gpt-5.2``
+     - Next-generation model (use any model name from OpenAI's API)
    * - GPT-5
      - ``openai/gpt-5``
      - Next-generation model (use any model name from OpenAI's API)
-   * - GPT-4o
-     - ``openai/gpt-4o``
-     - Latest multimodal model
-   * - GPT-4o mini
-     - ``openai/gpt-4o-mini``
-     - Fast, cost-effective model
-   * - GPT-4
-     - ``openai/gpt-4``
+   * - GPT-4o
+     - ``openai/gpt-4o``
+     - High-capability reasoning model
-   * - GPT-3.5 Turbo
-     - ``openai/gpt-3.5-turbo``
-     - Balanced performance and cost
-   * - o3-mini
-     - ``openai/o3-mini``
-     - Reasoning-focused model (preview)

@ -115,15 +109,15 @@ OpenAI

     llm_providers:
       # Latest models (examples - use any OpenAI chat model)
-      - model: openai/gpt-4o-mini
+      - model: openai/gpt-5.2
         access_key: $OPENAI_API_KEY
         default: true

-      - model: openai/gpt-4o
+      - model: openai/gpt-5
         access_key: $OPENAI_API_KEY

       # Use any model name from OpenAI's API
-      - model: openai/gpt-5
+      - model: openai/gpt-4o
         access_key: $OPENAI_API_KEY

 Anthropic
@ -135,7 +129,7 @@ Anthropic

 **Authentication:** API Key - Get your Anthropic API key from `Anthropic Console <https://console.anthropic.com/settings/keys>`_.

-**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, and all future releases.
+**Supported Chat Models:** All Anthropic Claude models including Claude Sonnet 4.5, Claude Opus 4.5, Claude Haiku 4.5, and all future releases.

 .. list-table::
    :header-rows: 1

@ -144,24 +138,18 @@ Anthropic
    * - Model Name
      - Model ID for Config
      - Description
-   * - Claude Sonnet 4
-     - ``anthropic/claude-sonnet-4``
-     - Next-generation model (use any model name from Anthropic's API)
-   * - Claude 3.5 Sonnet
-     - ``anthropic/claude-3-5-sonnet-20241022``
-     - Latest high-performance model
-   * - Claude 3.5 Haiku
-     - ``anthropic/claude-3-5-haiku-20241022``
-     - Fast and efficient model
-   * - Claude 3 Opus
-     - ``anthropic/claude-3-opus-20240229``
+   * - Claude Opus 4.5
+     - ``anthropic/claude-opus-4-5``
      - Most capable model for complex tasks
-   * - Claude 3 Sonnet
-     - ``anthropic/claude-3-sonnet-20240229``
+   * - Claude Sonnet 4.5
+     - ``anthropic/claude-sonnet-4-5``
      - Balanced performance model
-   * - Claude 3 Haiku
-     - ``anthropic/claude-3-haiku-20240307``
-     - Fastest model
+   * - Claude Haiku 4.5
+     - ``anthropic/claude-haiku-4-5``
+     - Fast and efficient model
+   * - Claude Sonnet 3.5
+     - ``anthropic/claude-sonnet-3-5``
+     - Complex agents and coding

 **Configuration Examples:**

@ -169,14 +157,14 @@ Anthropic

     llm_providers:
       # Latest models (examples - use any Anthropic chat model)
-      - model: anthropic/claude-3-5-sonnet-20241022
+      - model: anthropic/claude-opus-4-5
         access_key: $ANTHROPIC_API_KEY

-      - model: anthropic/claude-3-5-haiku-20241022
+      - model: anthropic/claude-sonnet-4-5
         access_key: $ANTHROPIC_API_KEY

       # Use any model name from Anthropic's API
-      - model: anthropic/claude-sonnet-4
+      - model: anthropic/claude-haiku-4-5
         access_key: $ANTHROPIC_API_KEY

 DeepSeek
@ -267,7 +255,7 @@ Groq

 **Authentication:** API Key - Get your Groq API key from `Groq Console <https://console.groq.com/keys>`_.

-**Supported Chat Models:** All Groq chat models including Llama 3, Mixtral, Gemma, and all future releases.
+**Supported Chat Models:** All Groq chat models including Llama 4, GPT OSS, Mixtral, Gemma, and all future releases.

 .. list-table::
    :header-rows: 1

@ -276,25 +264,28 @@ Groq
    * - Model Name
      - Model ID for Config
      - Description
-   * - Llama 3.1 8B
-     - ``groq/llama3-8b-8192``
+   * - Llama 4 Maverick 17B
+     - ``groq/llama-4-maverick-17b-128e-instruct``
      - Fast inference Llama model
-   * - Llama 3.1 70B
-     - ``groq/llama3-70b-8192``
-     - Larger Llama model
-   * - Mixtral 8x7B
-     - ``groq/mixtral-8x7b-32768``
-     - Mixture of experts model
+   * - Llama 4 Scout 8B
+     - ``groq/llama-4-scout-8b-128e-instruct``
+     - Smaller Llama model
+   * - GPT OSS 20B
+     - ``groq/gpt-oss-20b``
+     - Open source GPT model

 **Configuration Examples:**

 .. code-block:: yaml

     llm_providers:
-      - model: groq/llama3-8b-8192
+      - model: groq/llama-4-maverick-17b-128e-instruct
         access_key: $GROQ_API_KEY

-      - model: groq/mixtral-8x7b-32768
+      - model: groq/llama-4-scout-8b-128e-instruct
         access_key: $GROQ_API_KEY

+      - model: groq/gpt-oss-20b
+        access_key: $GROQ_API_KEY

 Google Gemini
@ -306,7 +297,7 @@ Google Gemini

 **Authentication:** API Key - Get your Google AI API key from `Google AI Studio <https://aistudio.google.com/app/apikey>`_.

-**Supported Chat Models:** All Google Gemini chat models including Gemini 1.5 Pro, Gemini 1.5 Flash, and all future releases.
+**Supported Chat Models:** All Google Gemini chat models including Gemini 3 Pro, Gemini 3 Flash, and all future releases.

 .. list-table::
    :header-rows: 1

@ -315,11 +306,11 @@ Google Gemini
    * - Model Name
      - Model ID for Config
      - Description
-   * - Gemini 1.5 Pro
-     - ``gemini/gemini-1.5-pro``
+   * - Gemini 3 Pro
+     - ``gemini/gemini-3-pro``
      - Advanced reasoning and creativity
-   * - Gemini 1.5 Flash
-     - ``gemini/gemini-1.5-flash``
+   * - Gemini 3 Flash
+     - ``gemini/gemini-3-flash``
      - Fast and efficient model

 **Configuration Examples:**

@ -327,10 +318,10 @@ Google Gemini
 .. code-block:: yaml

     llm_providers:
-      - model: gemini/gemini-1.5-pro
+      - model: gemini/gemini-3-pro
         access_key: $GOOGLE_API_KEY

-      - model: gemini/gemini-1.5-flash
+      - model: gemini/gemini-3-flash
         access_key: $GOOGLE_API_KEY

 Together AI
@ -524,7 +515,7 @@ Amazon Bedrock

 **Provider Prefix:** ``amazon_bedrock/``

-**API Endpoint:** Arch automatically constructs the endpoint as:
+**API Endpoint:** Plano automatically constructs the endpoint as:
 - Non-streaming: ``/model/{model-id}/converse``
 - Streaming: ``/model/{model-id}/converse-stream``

@ -723,7 +714,7 @@ Configure routing preferences for dynamic model selection:
 .. code-block:: yaml

     llm_providers:
-      - model: openai/gpt-4o
+      - model: openai/gpt-5.2
         access_key: $OPENAI_API_KEY
         routing_preferences:
           - name: complex_reasoning

@ -731,7 +722,7 @@ Configure routing preferences for dynamic model selection:
           - name: code_review
             description: reviewing and analyzing existing code for bugs and improvements

-      - model: anthropic/claude-3-5-sonnet-20241022
+      - model: anthropic/claude-sonnet-4-5
         access_key: $ANTHROPIC_API_KEY
         routing_preferences:
           - name: creative_writing

@ -741,15 +732,15 @@ Model Selection Guidelines
 --------------------------

 **For Production Applications:**
-- **High Performance**: OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet
-- **Cost-Effective**: OpenAI GPT-4o mini, Anthropic Claude 3.5 Haiku
+- **High Performance**: OpenAI GPT-5.2, Anthropic Claude Sonnet 4.5
+- **Cost-Effective**: OpenAI GPT-5, Anthropic Claude Haiku 4.5
 - **Code Tasks**: DeepSeek Coder, Together AI Code Llama
 - **Local Deployment**: Ollama with Llama 3.1 or Code Llama

 **For Development/Testing:**
 - **Fast Iteration**: Groq models (optimized inference)
 - **Local Testing**: Ollama models
-- **Cost Control**: Smaller models like GPT-4o mini or Mistral Small
+- **Cost Control**: Smaller models like GPT-4o or Mistral Small

 See Also
 --------
@ -1,15 +1,17 @@
 .. _prompt_target:

 Prompt Target
-==============
+=============
+A Prompt Target is a deterministic, task-specific backend function or API endpoint that your application calls via Plano.
+Unlike agents (which handle wide-ranging, open-ended tasks), prompt targets are designed for focused, specific workloads where Plano can add value through input clarification and validation.

-**Prompt Targets** are a core concept in Arch, empowering developers to clearly define how user prompts are interpreted, processed, and routed within their generative AI applications. Prompts can seamlessly be routed either to specialized AI agents capable of handling sophisticated, context-driven tasks or to targeted tools provided by your application, offering users a fast, precise, and personalized experience.
+Plano helps by:

-This section covers the essentials of prompt targets—what they are, how to configure them, their practical uses, and recommended best practices—to help you fully utilize this feature in your applications.
+* **Clarifying and validating input**: Plano enriches incoming prompts with metadata (e.g., detecting follow-ups or clarifying requests) and can extract structured parameters from natural language before passing them to your backend.
+* **Enabling high determinism**: Since the task is specific and well-defined, Plano can reliably extract the information your backend needs without ambiguity.
+* **Reducing backend work**: Your backend receives clean, validated, structured inputs—so you can focus on business logic instead of parsing and validation.

-What Are Prompt Targets?
-------------------------
-Prompt targets are endpoints within Arch that handle specific types of user prompts. They act as the bridge between user inputs and your backend agents or tools (APIs), enabling Arch to route, process, and manage prompts efficiently. Defining prompt targets helps you decouple your application's core logic from processing and handling complexities, leading to clearer code organization, better scalability, and easier maintenance.
+For example, a prompt target might be "schedule a meeting" (specific task, deterministic inputs like date, time, attendees) or "retrieve documents" (well-defined RAG query with clear intent). Prompt targets are typically called from your application code via Plano's internal listener.

 .. table::

@ -33,16 +35,11 @@ Below are the key features of prompt targets that empower developers to build ef
 - **Input Management**: Specify required and optional parameters for each target.
 - **Tools Integration**: Seamlessly connect prompts to backend APIs or functions.
 - **Error Handling**: Direct errors to designated handlers for streamlined troubleshooting.
 - **Metadata Enrichment**: Attach additional context to prompts for enhanced processing.
+- **Multi-Turn Support**: Manage follow-up prompts and clarifications in conversational flows.

-Configuring Prompt Targets
---------------------------
-Configuring prompt targets involves defining them in Arch's configuration file. Each Prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required.

 Basic Configuration
 ~~~~~~~~~~~~~~~~~~~

-A prompt target configuration includes the following elements:
+Configuring prompt targets involves defining them in Plano's configuration file. Each prompt target specifies how a particular type of prompt should be handled, including the endpoint to invoke and any parameters required. A prompt target configuration includes the following elements:

 .. vale Vale.Spelling = NO

@ -55,8 +52,8 @@ A prompt target configuration includes the following elements:

 Defining Parameters
 ~~~~~~~~~~~~~~~~~~~
-Parameters are the pieces of information that Arch needs to extract from the user's prompt to perform the desired action.
-Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Arch can support:
+Parameters are the pieces of information that Plano needs to extract from the user's prompt to perform the desired action.
+Each parameter can be marked as required or optional. Here is a full list of parameter attributes that Plano can support:

 .. table::
    :width: 100%
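To illustrate, a hypothetical prompt target with one required and one optional parameter might be declared like this. The endpoint name and path mirror the tools example on this page, while the parameter field names are illustrative; consult the attribute table above for the supported set:

.. code-block:: yaml

    prompt_targets:
      - name: get_weather
        description: returns current weather for a location
        parameters:
          - name: city
            type: str
            description: the city to look up
            required: true
          - name: units
            type: str
            description: metric or imperial
            required: false
        endpoint:
          name: api_server
          path: /weather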
@ -98,50 +95,92 @@ Example Configuration For Tools
        name: api_server
        path: /weather

-Example Configuration For Agents
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. code-block:: yaml
-   :caption: Agent Orchestration Configuration Example
-
-   overrides:
-     use_agent_orchestrator: true
-
-   prompt_targets:
-     - name: sales_agent
-       description: handles queries related to sales and purchases
-
-     - name: issues_and_repairs
-       description: handles issues, repairs, or refunds
-
-     - name: escalate_to_human
-       description: escalates to human agent
-
-.. note::
-   Today, you can use Arch to coordinate more specific agentic scenarios via tools and function calling, or use it for high-level agent routing and hand off scenarios. In the future, we plan to offer you the ability to combine these two approaches for more complex scenarios. Please see `github issues <https://github.com/katanemo/archgw/issues/442>`_ for more details.
-
-Routing Logic
--------------
-Prompt targets determine where and how user prompts are processed. Arch uses intelligent routing logic to ensure that prompts are directed to the appropriate targets based on their intent and context.
-
-Default Targets
-~~~~~~~~~~~~~~~
-For general-purpose prompts that do not match any specific prompt target, Arch routes them to a designated default target. This is useful for handling open-ended queries like document summarization or information extraction.
-
-Intent Matching
-~~~~~~~~~~~~~~~
-Arch analyzes the user's prompt to determine its intent and matches it with the most suitable prompt target based on the name and description defined in the configuration.
-
-For example:
-
-.. code-block:: bash
-
-   Prompt: "Can you reboot the router?"
-   Matching Target: reboot_device (based on description matching "reboot devices")
+.. _plano_multi_turn_guide:
+
+Multi-Turn
+~~~~~~~~~~
+Developers often `struggle <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ to efficiently handle ``follow-up`` or ``clarification`` questions. Specifically, when users ask for changes or additions to previous responses, it requires developers to re-write prompts using LLMs with precise prompt engineering techniques. This process is slow, manual, error-prone and adds latency and token cost for common scenarios that can be managed more efficiently.
+
+Plano is highly capable of accurately detecting and processing prompts in multi-turn scenarios so that you can build fast and accurate agents in minutes. Below are some conversational examples that you can build via Plano. Each example is enriched with annotations (via **[Plano]**) that illustrate how Plano processes conversational messages on your behalf.
+
+Example 1: Adjusting Retrieval
+
+.. code-block:: text
+
+   User: What are the benefits of renewable energy?
+   **[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
+   **[Plano]**: Found "get_info_for_energy_source" prompt_target in plano_config.yaml. Forward prompt to the endpoint configured in "get_info_for_energy_source"
+   ...
+   Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind.
+
+   User: Include cost considerations in the response.
+   **[Plano]**: Follow-up detected. Forward prompt history to the "get_info_for_energy_source" prompt_target and post the following parameters: consideration="cost"
+   ...
+   Assistant: Renewable energy reduces greenhouse gas emissions, lowers air pollution, and provides sustainable power sources like solar and wind. While the initial setup costs can be high, long-term savings from reduced fuel expenses and government incentives make it cost-effective.
+
+Example 2: Switching Intent
+---------------------------
+
+.. code-block:: text
+
+   User: What are the symptoms of diabetes?
+   **[Plano]**: Check if there is an available <prompt_target> that can handle this user query.
+   **[Plano]**: Found "diseases_symptoms" prompt_target in plano_config.yaml. Forward disease=diabetes to "diseases_symptoms" prompt target
+   ...
+   Assistant: Common symptoms include frequent urination, excessive thirst, fatigue, and blurry vision.
+
+   User: How is it diagnosed?
+   **[Plano]**: New intent detected.
+   **[Plano]**: Found "disease_diagnoses" prompt_target in plano_config.yaml. Forward disease=diabetes to "disease_diagnoses" prompt target
+   ...
+   Assistant: Diabetes is diagnosed through blood tests like fasting blood sugar, A1C, or an oral glucose tolerance test.
+
+Build Multi-Turn RAG Apps
+-------------------------
+The following section describes how you can easily add support for multi-turn scenarios via Plano. You process and manage multi-turn prompts just like you manage single-turn ones. Plano handles the complexity of detecting the correct intent based on the last user prompt and the conversational history, extracts relevant parameters needed by downstream APIs, and dispatches calls to any upstream LLMs to summarize the response from your APIs.
+
+.. _multi_turn_subsection_prompt_target:
+
+Step 1: Define Plano Config
+---------------------------
+
+.. literalinclude:: ../build_with_plano/includes/multi_turn/prompt_targets_multi_turn.yaml
+   :language: yaml
+   :caption: Plano Config
+   :linenos:
+
+Step 2: Process Request in Flask
+--------------------------------
+
+Once the prompt targets are configured as above, handle parameters across multi-turn as if it's a single-turn request:
+
+.. literalinclude:: ../build_with_plano/includes/multi_turn/multi_turn_rag.py
+   :language: python
+   :caption: Parameter handling with Flask
+   :linenos:
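Since the included file isn't reproduced here, the following is a minimal hypothetical sketch of what such a Flask handler could look like; the route, parameter names, and retrieval logic are placeholders for your own application:

.. code-block:: python

    # Hypothetical handler: Plano posts extracted parameters to your endpoint,
    # whether they came from a first turn or a follow-up.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.post("/energy_source_info")
    def energy_source_info():
        params = request.get_json(force=True)
        energy_source = params.get("energy_source", "renewable")
        consideration = params.get("consideration")  # e.g., "cost" on a follow-up

        # Replace with your real retrieval/RAG logic.
        docs = retrieve_documents(energy_source, consideration)
        return jsonify({"context": docs})

    def retrieve_documents(source: str, consideration: str | None) -> list[str]:
        query = source if consideration is None else f"{source} {consideration}"
        return [f"stub document about {query}"]

    if __name__ == "__main__":
        app.run(port=8080)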
+Demo App
+--------
+
+For your convenience, we've built a `demo app <https://github.com/katanemo/archgw/tree/main/demos/samples_python/multi_turn_rag_agent>`_
+that you can test and modify locally for multi-turn RAG scenarios.
+
+.. figure:: ../build_with_plano/includes/multi_turn/mutli-turn-example.png
+   :width: 100%
+   :align: center
+
+   Example multi-turn user conversation showing adjusting retrieval
+
 Summary
---------
-Prompt targets are essential for defining how user prompts are handled within your generative AI applications using Arch.
-
-By carefully configuring prompt targets, you can ensure that prompts are accurately routed, necessary parameters are extracted, and backend services are invoked seamlessly. This modular approach not only simplifies your application's architecture but also enhances scalability, maintainability, and overall user experience.
+~~~~~~~
+By carefully designing prompt targets as deterministic, task-specific entry points, you ensure that prompts are routed to the right workload, necessary parameters are cleanly extracted and validated, and backend services are invoked with structured inputs. This clear separation between prompt handling and business logic simplifies your architecture, makes behavior more predictable and testable, and improves the scalability and maintainability of your agentic applications.

@ -1,53 +0,0 @@
.. _error_target:

Error Target
=============

**Error targets** are designed to capture and manage specific issues or exceptions that occur during Arch's function or system's execution.

These endpoints receive errors forwarded from Arch when issues arise, such as improper function/API calls, guardrail violations, or other processing errors.
The errors are communicated to the application via headers like ``X-Arch-[ERROR-TYPE]``, enabling you to respond appropriately and handle errors gracefully.

Key Concepts
------------

- **Error Type**: Categorizes the nature of the error, such as "ValidationError" or "RuntimeError." These error types help in identifying what kind of issue occurred and provide context for troubleshooting.

- **Error Message**: A clear, human-readable message describing the error. This should provide enough detail to inform users or developers of the root cause or required action.

- **Parameter-Specific Errors**: Errors that arise due to invalid or missing parameters when invoking a function. These errors are critical for ensuring the correctness of inputs.

Error Header Example
--------------------

.. code-block:: bash
   :caption: Error Header Example

   HTTP/1.1 400 Bad Request
   X-Arch-Error-Type: FunctionValidationError
   X-Arch-Error-Message: Tools call parsing failure
   X-Arch-Target-Prompt: createUser
   Content-Type: application/json

   "messages": [
     {
       "role": "user",
       "content": "Please create a user with the following ID: 1234"
     },
     {
       "role": "system",
       "content": "Expected a string for 'user_id', but got an integer."
     }
   ]

Best Practices and Tips
-----------------------

- **Graceful Degradation**: If an error occurs, fail gracefully by providing fallback logic or alternative flows when possible.

- **Log Errors**: Always log errors on the server side for later analysis.

- **Client-Side Handling**: Make sure the client can interpret error responses and provide meaningful feedback to the user. Clients should not display raw error codes or stack traces but rather handle them gracefully.

@ -1,37 +0,0 @@
.. _arch_overview_listeners:
|
||||
|
||||
Listener
|
||||
---------
|
||||
**Listener** is a top level primitive in Arch, which simplifies the configuration required to bind incoming
|
||||
connections from downstream clients, and for egress connections to LLMs (hosted or API)
|
||||
|
||||
Arch builds on Envoy's Listener subsystem to streamline connection management for developers. Arch minimizes
|
||||
the complexity of Envoy's listener setup by using best-practices and exposing only essential settings,
|
||||
making it easier for developers to bind connections without deep knowledge of Envoy’s configuration model. This
|
||||
simplification ensures that connections are secure, reliable, and optimized for performance.
|
||||
|
||||
Downstream (Ingress)
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
Developers can configure Arch to accept connections from downstream clients. A downstream listener acts as the
|
||||
primary entry point for incoming traffic, handling initial connection setup, including network filtering, guardrails,
|
||||
and additional network security checks. For more details on prompt security and safety,
|
||||
see :ref:`here <arch_overview_prompt_handling>`.
|
||||
|
||||
Upstream (Egress)
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Arch automatically configures a listener to route requests from your application to upstream LLM API providers (or hosts).
|
||||
When you start Arch, it creates a listener for egress traffic based on the presence of the ``listener`` configuration
|
||||
section in the configuration file. Arch binds itself to a local address such as ``127.0.0.1:12000/v1`` or a DNS-based
|
||||
address like ``arch.local:12000/v1`` for outgoing traffic. For more details on LLM providers, read :ref:`here <llm_providers>`.
|
||||
|
||||
Configure Listener
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To configure a Downstream (Ingress) Listener, simply add the ``listener`` directive to your configuration file:
|
||||
|
||||
.. literalinclude:: ../includes/arch_config.yaml
|
||||
:language: yaml
|
||||
:linenos:
|
||||
:lines: 1-18
|
||||
:emphasize-lines: 3-7
|
||||
:caption: Example Configuration
|
||||
|
|
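For orientation, here is a minimal sketch of what a ``listener`` block can look like. The field values are illustrative assumptions; consult the :ref:`configuration reference <configuration_reference>` for the authoritative schema:

.. code-block:: yaml
   :caption: Listener sketch (illustrative)

   version: v0.1

   listener:
     address: 0.0.0.0       # interface to bind for downstream (ingress) traffic
     port: 10000            # port your clients send prompts to
     message_format: huggingface
     connect_timeout: 0.005s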
@ -1,45 +0,0 @@
.. _model_serving:

Model Serving
=============

Arch is a set of `two` self-contained processes that are designed to run alongside your application
servers (or on a separate host connected via a network). The first process is designated to manage low-level
networking and HTTP-related concerns, and the other process is for model serving, which helps Arch make
intelligent decisions about the incoming prompts. The model server is designed to call the purpose-built
LLMs in Arch.

.. image:: /_static/img/arch-system-architecture.jpg
   :align: center
   :width: 40%

Arch is designed to be deployed in your cloud VPC or on an on-premises host, and can work on devices that don't
have a GPU. Note that GPU devices are needed for fast and cost-efficient use, so that Arch (the model server, specifically)
can process prompts quickly and forward control back to the application host. There are three modes in which Arch
can be configured to run its **model server** subsystem:

Local Serving (CPU - Moderate)
------------------------------
The following bash command configures the model server subsystem in Arch to run locally on device
and only use CPU devices. This will be the slowest option but can be useful in dev/test scenarios where GPUs
might not be available.

.. code-block:: console

   $ archgw up --local-cpu

Cloud Serving (GPU - Blazing Fast)
----------------------------------
The command below instructs Arch to intelligently use GPUs locally for fast intent detection, but default to
cloud serving for function calling and guardrails scenarios to dramatically improve the speed and overall performance
of your applications.

.. code-block:: console

   $ archgw up

.. Note::
   Arch's model serving in the cloud is priced at $0.05 per 1M tokens (156x cheaper than GPT-4o) with an average latency
   of 200ms (10x faster than GPT-4o). Please refer to our :ref:`Get Started <quickstart>` guide to learn
   how to generate API keys for model serving.
@ -1,127 +0,0 @@
.. _arch_overview_prompt_handling:

Prompts
=======

Arch's primary design point is to securely accept, process and handle prompts. To do that effectively,
Arch relies on Envoy's HTTP `connection management <https://www.envoyproxy.io/docs/envoy/v1.31.2/intro/arch_overview/http/http_connection_management>`_
subsystem and its **prompt handler** subsystem, engineered with purpose-built LLMs to
implement critical functionality on behalf of developers so that you can stay focused on business logic.

Arch's **prompt handler** subsystem interacts with the **model subsystem** through Envoy's cluster manager system to ensure a robust, resilient and fault-tolerant experience in managing incoming prompts.

.. seealso::
   Read more about the :ref:`model subsystem <model_serving>` and how the LLMs are hosted in Arch.

Messages
--------

Arch accepts messages directly from the body of the HTTP request in a format that follows the `Hugging Face Messages API <https://huggingface.co/docs/text-generation-inference/en/messages_api>`_.
This design allows developers to pass a list of messages, where each message is represented as a dictionary
containing two key-value pairs:

- **Role**: Defines the role of the message sender, such as "user" or "assistant".
- **Content**: Contains the actual text of the message.
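As a quick sketch of this shape, the snippet below posts a short multi-turn conversation. The URL and path are illustrative placeholders for your Arch listener address:

.. code-block:: python
   :caption: Sending messages to an Arch listener (illustrative sketch)

   import requests

   # Each message is a dictionary with exactly two keys: "role" and "content".
   body = {
       "messages": [
           {"role": "user", "content": "What's the weather in Seattle?"},
           {"role": "assistant", "content": "Would you like that in °F or °C?"},
           {"role": "user", "content": "°F, please."},
       ]
   }

   # Hypothetical ingress listener address; adjust to your deployment.
   response = requests.post("http://127.0.0.1:10000/v1/chat/completions", json=body)
   print(response.json())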
Prompt Guard
-----------------

Arch is engineered with `Arch-Guard <https://huggingface.co/collections/katanemo/arch-guard-6702bdc08b889e4bce8f446d>`_, an industry-leading safety layer, powered by a
compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts -
ensuring that unauthorized or harmful behaviors are intercepted early in the process.

To add jailbreak guardrails, see the example below:

.. literalinclude:: ../includes/arch_config.yaml
   :language: yaml
   :linenos:
   :lines: 1-25
   :emphasize-lines: 21-25
   :caption: Example Configuration

.. Note::
   As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard,
   and add support for additional safety checks defined by developers and hazardous categories like violent crimes, privacy violations, hate speech,
   etc. To offer feedback on our roadmap, please visit our `GitHub page <https://github.com/orgs/katanemo/projects/1>`_.
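Since the included file isn't reproduced here, the highlighted ``prompt_guards`` block generally takes a shape like the sketch below. The exact keys and message are illustrative assumptions; see the :ref:`configuration reference <configuration_reference>` for the authoritative schema:

.. code-block:: yaml
   :caption: Jailbreak guardrail sketch (illustrative)

   prompt_guards:
     input_guards:
       jailbreak:
         on_exception:
           message: >-
             Looks like you're curious about my abilities, but I can only
             provide assistance within my designated scope.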
Prompt Targets
--------------

Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation
and identifies where to forward the conversation to via its ``prompt target`` primitive. Prompt targets are endpoints
that receive prompts that are processed by Arch. For example, Arch enriches incoming prompts with metadata like knowing
when a user's intent has changed so that you can build faster, more accurate RAG apps.

Configuring ``prompt_targets`` is simple. See the example below:

.. literalinclude:: ../includes/arch_config.yaml
   :language: yaml
   :linenos:
   :emphasize-lines: 39-53
   :caption: Example Configuration

.. seealso::

   Check :ref:`Prompt Target <prompt_target>` for more details!
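For orientation, a ``prompt_targets`` entry generally pairs a name and description (used for routing) with typed parameters and a backend endpoint, along the lines of the sketch below. The target, parameter, and endpoint names are illustrative assumptions:

.. code-block:: yaml
   :caption: Prompt target sketch (illustrative)

   prompt_targets:
     - name: get_weather
       description: Get current weather information for a location
       parameters:
         - name: location
           type: str
           description: The city to fetch the weather for
           required: true
       endpoint:
         name: api_server
         path: /weather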
Intent Matching
^^^^^^^^^^^^^^^

Arch uses fast text embedding and intent recognition approaches to first detect the intent of each incoming prompt.
This intent matching phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint.
Arch’s intent matching framework considers both the name and description of each prompt target, and uses a composite matching score between embedding similarity and intent classification scores to enhance accuracy in forwarding decisions (a sketch of such a composite score follows after this list).

- **Intent Recognition**: NLI techniques further refine the matching process by evaluating the semantic alignment between the prompt and potential targets.

- **Text Embedding**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match, ensuring that the prompt is handled by the correct downstream service.
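To make the idea concrete, here is a minimal sketch of a composite score that blends embedding similarity with an intent (NLI) classification score. The linear blend and the ``alpha`` weight are illustrative; Arch's actual scoring function is internal to its prompt handler:

.. code-block:: python
   :caption: Composite matching score (illustrative sketch)

   import numpy as np

   def composite_score(prompt_vec, target_vec, nli_score, alpha=0.6):
       # Cosine similarity between the prompt and target embeddings.
       cos_sim = float(
           np.dot(prompt_vec, target_vec)
           / (np.linalg.norm(prompt_vec) * np.linalg.norm(target_vec))
       )
       # Weighted blend of embedding similarity and intent classification score.
       return alpha * cos_sim + (1 - alpha) * nli_score

   # Routing then picks the prompt target with the highest composite score, e.g.:
   # best_target = max(targets, key=lambda t: composite_score(v, t.vec, t.nli_score))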
Agentic Apps via Prompt Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To support agentic apps - like scheduling travel plans or sharing comments on a document via prompts - Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of prompts) needed by a downstream backend API or function call before calling it directly.
For more details on how you can build agentic applications using Arch, see our full guide :ref:`here <arch_agent_guide>`.

.. Note::
   `Arch-Function <https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68>`_ is a collection of dedicated agentic models engineered in Arch to extract information from a (set of) prompts and execute the necessary backend API calls.
   This allows for efficient handling of agentic tasks, such as scheduling data retrieval, by dynamically interacting with backend services.
   Arch-Function achieves state-of-the-art performance, comparable with frontier models like Claude 3.5 Sonnet and GPT-4, while being 44x cheaper ($0.10 per 1M tokens, hosted) and 10x faster (p50 latencies of 200ms).

Prompting LLMs
--------------
Arch is a single piece of software that is designed to manage both ingress and egress prompt traffic, drawing its distributed proxy nature from the robust `Envoy <https://envoyproxy.io>`_.
This makes it extremely efficient and capable of handling upstream connections to LLMs.
If your application originates calls to an API-based LLM, simply use the OpenAI client and configure it with Arch.
By sending traffic through Arch, you can propagate traces, manage and monitor traffic, apply rate limits, and utilize a large set of traffic management capabilities in a centralized way.

.. Attention::
   When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the
   ``llm_providers`` configuration section in the ``arch_config.yml`` file. Arch binds itself to a local address such as
   ``127.0.0.1:12000``.

Example: Using OpenAI Client with Arch as an Egress Gateway
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   import openai

   # Point the (legacy, pre-1.0) OpenAI SDK at the Arch egress gateway endpoint
   # instead of api.openai.com.
   openai.api_base = "http://127.0.0.1:12000/v1"

   # The real provider key is configured in Arch's gateway, so a placeholder
   # value is enough to satisfy the client library.
   openai.api_key = "placeholder"

   # Use the OpenAI client as usual
   response = openai.Completion.create(
       model="text-davinci-003",
       prompt="What is the capital of France?"
   )

   print("OpenAI Response:", response.choices[0].text.strip())

In this example, the OpenAI client is used to send traffic through the Arch egress proxy to the LLM of your choice, such as OpenAI.
The OpenAI client is configured to route traffic via Arch by setting the API base to ``127.0.0.1:12000``, assuming Arch is running locally and bound to that address and port.
This setup allows you to take advantage of Arch's advanced traffic management features while interacting with LLM APIs like OpenAI.
@ -1,170 +0,0 @@
.. _lifecycle_of_a_request:

Request Lifecycle
=================

Below we describe the events in the lifecycle of a request passing through an Arch gateway instance. We first
describe how Arch fits into the request path and then the internal events that take place following
the arrival of a request at Arch from downstream clients. We follow the request until the corresponding
dispatch upstream and the response path.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

Terminology
-----------

We recommend that you get familiar with some of the :ref:`terminology <arch_terminology>` used in Arch
before reading this section.

Network topology
----------------

How a request flows through the components in a network (including Arch) depends on the network’s topology.
Arch can be used in a wide variety of networking topologies. We focus on the inner operation of Arch below,
but briefly we address how Arch relates to the rest of the network in this section.

- **Downstream (Ingress)** listeners take requests from downstream clients, such as a web UI or clients that forward
  prompts to your local application; responses from the application flow back through Arch to the downstream client.

- **Upstream (Egress)** listeners take requests from the application and forward them to LLMs.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

In practice, Arch can be deployed on the edge and as an internal load balancer between AI agents. A request path may
traverse multiple Arch gateways:

.. image:: /_static/img/network-topology-agent.jpg
   :width: 100%
   :align: center

High level architecture
-----------------------
Arch is a set of **two** self-contained processes that are designed to run alongside your application servers
(or on a separate server connected to your application servers via a network). The first process is designated
to manage HTTP-level networking and connection management concerns (protocol management, request id generation,
header sanitization, etc.), and the other process is for **model serving**, which helps Arch make intelligent
decisions about the incoming prompts. The model server hosts the purpose-built LLMs to
manage several critical, but undifferentiated, prompt related tasks on behalf of developers.

The request processing path in Arch has three main parts:

* :ref:`Listener subsystem <arch_overview_listeners>` which handles **downstream** and **upstream** request
  processing. It is responsible for managing the downstream (ingress) and the upstream (egress) request
  lifecycle. The downstream and upstream HTTP/2 codec lives here.
* :ref:`Prompt handler subsystem <arch_overview_prompt_handling>` which is responsible for selecting and
  forwarding prompts to ``prompt_targets`` and establishes the lifecycle of any **upstream** connection to a
  hosted endpoint that implements domain-specific business logic for incoming prompts. This is where knowledge
  of targets and endpoint health, load balancing and connection pooling exists.
* :ref:`Model serving subsystem <model_serving>` which helps Arch make intelligent decisions about the
  incoming prompts. The model server is designed to call the purpose-built LLMs in Arch.

The three subsystems are bridged with the HTTP router filter and the cluster manager subsystems of Envoy.

Also, Arch utilizes Envoy's `event-based thread model <https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310>`_.
A main thread is responsible for the server lifecycle, configuration processing, stats, etc. and some number of
:ref:`worker threads <arch_overview_threading>` process requests. All threads operate around an event loop (`libevent <https://libevent.org/>`_)
and any given downstream TCP connection will be handled by exactly one worker thread for its lifetime. Each worker
thread maintains its own pool of TCP connections to upstream endpoints.

Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
enables scaling to very high core count CPUs.

Configuration
-------------

Today, Arch only supports a static bootstrap configuration file for simplicity:

.. literalinclude:: ../includes/arch_config.yaml
   :language: yaml
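Because the included file isn't reproduced here, the sketch below shows the general shape such a bootstrap configuration can take, combining the pieces discussed in this overview. All names and values are illustrative assumptions; see the :ref:`configuration reference <configuration_reference>` for the authoritative schema:

.. code-block:: yaml
   :caption: Bootstrap configuration sketch (illustrative)

   version: v0.1

   listener:
     address: 0.0.0.0
     port: 10000              # downstream (ingress) traffic

   llm_providers:
     - name: OpenAI
       provider: openai
       access_key: $OPENAI_API_KEY
       model: gpt-4o
       default: true

   prompt_guards:
     input_guards:
       jailbreak:
         on_exception:
           message: I can only help within my designated scope.

   prompt_targets:
     - name: device_summary
       description: Retrieve statistics for a set of devices
       parameters:
         - name: device_ids
           type: list
           description: Device identifiers to retrieve statistics for
           required: true
       endpoint:
         name: app_server
         path: /agent/device_summary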
Request Flow (Ingress)
----------------------

A brief outline of the lifecycle of a request and response using the example configuration above:

1. **TCP Connection Establishment**:
   A TCP connection from downstream is accepted by an Arch listener running on a worker thread.
   The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS,
   decrypts incoming data for processing.

2. **Prompt Guardrails Check**:
   Arch first checks the incoming prompts for guardrails such as jailbreak attempts. This ensures
   that harmful or unwanted behaviors are detected early in the request processing pipeline.

3. **Intent Matching**:
   The decrypted data stream is de-framed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs
   intent matching via its **prompt-handler** subsystem using the name and description of the defined prompt targets,
   determining which endpoint should handle the prompt.

4. **Parameter Gathering with Arch-Function**:
   If a prompt target requires specific parameters, Arch engages Arch-Function to extract the necessary details
   from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.

5. **API Call Execution**:
   Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified,
   load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint.

6. **Default Summarization by Upstream LLM**:
   By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization.
   This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval Augmented Generation)
   and agentic applications.

7. **Error Handling and Forwarding**:
   Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to
   designated error targets. Error details are communicated through specific headers to the application:

   - ``X-Function-Error-Code``: Code indicating the type of function call error.
   - ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
   - Additional headers carry messages and timestamps to aid in debugging and logging.

8. **Response Handling**:
   The upstream endpoint’s TLS transport socket encrypts the response, which is then proxied back downstream.
   Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.

Request Flow (Egress)
---------------------

A brief outline of the lifecycle of a request and response in the context of egress traffic from an application to Large Language Models (LLMs) via Arch:

1. **HTTP Connection Establishment to LLM**:
   Arch initiates an HTTP connection to the upstream LLM service. This connection is handled by Arch’s egress listener
   running on a worker thread. The connection typically uses a secure transport protocol such as HTTPS, ensuring the
   prompt data is encrypted before being sent to the LLM service.

2. **Rate Limiting**:
   Before sending the request to the LLM, Arch applies rate-limiting policies to ensure that the upstream LLM service
   is not overwhelmed by excessive traffic. Rate limits are enforced per client or service, ensuring fair usage and
   preventing accidental or malicious overload. If the rate limit is exceeded, Arch may return an appropriate HTTP
   error (e.g., 429 Too Many Requests) without sending the prompt to the LLM (a client-side retry sketch follows after this list).

3. **Load Balancing to (hosted) LLM Endpoints**:
   After passing the rate-limiting checks, Arch routes the prompt to the appropriate LLM endpoint.
   If multiple LLM provider instances are available, load balancing is performed to distribute traffic evenly
   across the instances. Arch checks the health of the LLM endpoints using circuit breakers and health checks,
   ensuring that the prompt is only routed to a healthy, responsive instance.

4. **Response Reception and Forwarding**:
   Once the LLM processes the prompt, Arch receives the response from the LLM service. The response is typically
   generated text, a completion, or a summarization. Upon reception, Arch decrypts (if necessary) and handles the response,
   passing it through any egress processing pipeline defined by the application, such as logging or additional response filtering.
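As referenced in step 2 above, here is a minimal client-side sketch of handling a 429 from the egress listener. The backoff policy, retry budget, and endpoint are application choices and purely illustrative:

.. code-block:: python
   :caption: Retrying rate-limited egress calls (illustrative sketch)

   import time

   import requests

   def post_with_retry(url, payload, max_retries=3):
       """Retry when Arch's egress rate limiter responds with 429."""
       resp = None
       for attempt in range(max_retries):
           resp = requests.post(url, json=payload)
           if resp.status_code != 429:
               return resp
           # Honor Retry-After when the gateway provides it; otherwise
           # back off exponentially (1s, 2s, 4s, ...).
           delay = float(resp.headers.get("Retry-After", 2 ** attempt))
           time.sleep(delay)
       return resp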
Post-request processing
^^^^^^^^^^^^^^^^^^^^^^^^
Once a request completes, the stream is destroyed. The following also takes place:

* The post-request :ref:`monitoring <monitoring>` statistics are updated (e.g. timing, active requests, upgrades, health checks).
  Some statistics are updated earlier, however, during request processing. Stats are batched and written by the main
  thread periodically.
* :ref:`Access logs <arch_access_logging>` are written to the access log.
* :ref:`Trace <arch_overview_tracing>` spans are finalized. If our example request was traced, a
  trace span, describing the duration and details of the request, would be created by the HCM when
  processing request headers and then finalized by the HCM during post-request processing.
@ -1,15 +0,0 @@
.. _tech_overview:

Tech Overview
=============

.. toctree::
   :maxdepth: 2

   terminology
   threading_model
   listener
   prompt
   model_serving
   request_lifecycle
   error_target
@ -1,50 +0,0 @@
.. _arch_terminology:

Terminology
============

A few definitions before we dive into the main architecture documentation. Also note, Arch borrows from Envoy's terminology
to keep things consistent in logs and traces, and introduces and clarifies concepts as they relate to LLM applications.

**Agent**: An application that uses LLMs to handle wide-ranging tasks from users via prompts. This could be as simple
as retrieving or summarizing data from an API, or being able to trigger complex actions like adjusting ad campaigns, or
changing travel plans via prompts.

**Arch Config**: Arch operates based on a configuration that controls the behavior of a single instance of the Arch gateway.
This is where you enable capabilities like LLM routing, fast function calling (via prompt_targets), applying guardrails, and enabling critical
features like metrics and tracing. For the full configuration reference of `arch_config.yaml` see :ref:`here <configuration_reference>`.

**Downstream (Ingress)**: A downstream client (web application, etc.) connects to Arch, sends prompts, and receives responses.

**Upstream (Egress)**: An upstream host that receives connections and prompts from Arch, and returns context or responses for a prompt.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

**Listener**: A :ref:`listener <arch_overview_listeners>` is a named network location (e.g., port, address, path, etc.) that Arch
listens on to process prompts before forwarding them to your application server endpoints. Arch enables you to configure one listener
for downstream connections (like port 80, 443) and creates a separate internal listener for calls that initiate from your application
code to LLMs.

.. Note::

   When you start Arch, you specify a listener address/port that you want to bind downstream. But Arch uses a predefined port
   (``127.0.0.1:12000``) to proxy egress calls originating from your application to LLMs (API-based or hosted).
   For more details, check out :ref:`LLM providers <llm_providers>`.

**Prompt Target**: Arch offers a primitive called :ref:`prompt target <prompt_target>` to help separate business logic from
undifferentiated work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Arch.
For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt so that you
can build faster, more accurate retrieval (RAG) apps. To support agentic apps - like scheduling travel plans or sharing comments on a
document via prompts - Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of
prompts) needed by a downstream backend API or function call before calling it directly.

**Model Serving**: Arch is a set of `two` self-contained processes that are designed to run alongside your application servers
(or on a separate host connected via a network). The :ref:`model serving <model_serving>` process helps Arch make intelligent decisions
about the incoming prompts. The model server is designed to call the (fast) purpose-built LLMs in Arch.

**Error Target**: :ref:`Error targets <error_target>` are those endpoints that receive forwarded errors from Arch when issues arise,
such as failing to properly call a function/API, detecting violations of guardrails, or encountering other processing errors.
These errors are communicated to the application via headers ``X-Arch-[ERROR-TYPE]``, allowing it to handle the errors gracefully
and take appropriate actions.
@ -1,21 +0,0 @@
.. _arch_overview_threading:

Threading Model
===============

Arch builds on top of Envoy's single-process, multi-threaded architecture.

A single *primary* thread controls various sporadic coordination tasks while some number of *worker*
threads perform filtering and forwarding.

Once a connection is accepted, the connection spends the rest of its lifetime bound to a single worker
thread. All the functionality around prompt handling from a downstream client is handled in a separate worker thread.
This allows the majority of Arch to be largely single-threaded (embarrassingly parallel) with a small amount
of more complex code handling coordination between the worker threads.

Generally, Arch is written to be 100% non-blocking.

.. tip::

   For most workloads we recommend configuring the number of worker threads to be equal to the number of
   hardware threads on the machine.