mirror of https://github.com/katanemo/plano.git
synced 2026-04-25 00:36:34 +02:00

Update docs to Plano (#639)

This commit is contained in:
parent 15fbb6c3af
commit e224cba3e3

139 changed files with 4407 additions and 24735 deletions

.. _quickstart:

Quickstart
==========

Follow this guide to learn how to quickly set up Plano and integrate it into your generative AI applications. You can:

- :ref:`Build agents <quickstart_agents>` for multi-step workflows (e.g., travel assistants with flights and hotels).
- :ref:`Call deterministic APIs via prompt targets <quickstart_prompt_targets>` to turn instructions directly into function calls.
- :ref:`Use Plano as a model proxy (Gateway) <llm_routing_quickstart>` to standardize access to multiple LLM providers.

.. note::

   This quickstart assumes basic familiarity with agents and prompt targets from the Concepts section. For background, see :ref:`Agents <agents>` and :ref:`Prompt Target <prompt_target>`.

The full agent and backend API implementations used here are available in the `plano-quickstart repository <https://github.com/plano-ai/plano-quickstart>`_. This guide focuses on wiring and configuring Plano (orchestration, prompt targets, and the model proxy), not application code.

Prerequisites
-------------

Before you begin, ensure you have the following:

2. `Docker Compose <https://docs.docker.com/compose/install/>`_ (v2.29)
3. `Python <https://www.python.org/downloads/>`_ (v3.10+)

Plano's CLI allows you to manage and interact with Plano efficiently. To install the CLI, simply run the following command:

.. tip::

   We recommend that developers create a new Python virtual environment to isolate dependencies before installing Plano. This ensures that ``plano`` and its dependencies do not interfere with other packages on your system.

.. code-block:: console

   $ python -m venv venv
   $ source venv/bin/activate   # On Windows, use: venv\Scripts\activate
   $ pip install plano==0.4.0

Build Agentic Apps with Plano
-----------------------------

Plano helps you build agentic applications in two complementary ways:

* **Orchestrate agents**: Let Plano decide which agent or LLM should handle each request and in what sequence.
* **Call deterministic backends**: Use prompt targets to turn natural-language prompts into structured, validated API calls.

.. _quickstart_agents:

Building agents with Plano orchestration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Agents are where your business logic lives (the "inner loop"). Plano takes care of the "outer loop": routing, sequencing, and managing calls across agents and LLMs.

At a high level, building agents with Plano looks like this:

1. **Implement your agent** in your framework of choice (Python, JS/TS, etc.), exposing it as an HTTP service.
2. **Route LLM calls through Plano's Model Proxy**, so all models share a consistent interface and observability.
3. **Configure Plano to orchestrate**: define which agent(s) can handle which kinds of prompts, and let Plano decide when to call an agent vs. an LLM.

This quickstart uses a simplified version of the Travel Booking Assistant; for the full multi-agent walkthrough, see :ref:`Orchestration <agent_routing>`.

Step 1. Minimal orchestration config
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here is a minimal configuration that wires Plano-Orchestrator to two HTTP services: one for flights and one for hotels.

.. code-block:: yaml

   version: v0.1.0

   agents:
     - id: flight_agent
       url: http://host.docker.internal:10520   # your flights service
     - id: hotel_agent
       url: http://host.docker.internal:10530   # your hotels service

   model_providers:
     - model: openai/gpt-4o
       access_key: $OPENAI_API_KEY

   listeners:
     - type: agent
       name: travel_assistant
       port: 8001
       router: plano_orchestrator_v1
       agents:
         - id: flight_agent
           description: Search for flights and provide flight status.
         - id: hotel_agent
           description: Find hotels and check availability.

   tracing:
     random_sampling: 100
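Before starting the gateway, it can help to sanity-check the orchestration config: every agent a listener routes to should be declared at the top level. The sketch below mirrors the YAML above as a plain Python dict purely for illustration; it is not part of Plano's tooling, and the schema shown is only what this quickstart uses.

```python
# The orchestration config above, mirrored as a plain dict (assumption: this
# mirrors only the fields used in this quickstart, not Plano's full schema).
config = {
    "agents": [{"id": "flight_agent"}, {"id": "hotel_agent"}],
    "listeners": [
        {
            "type": "agent",
            "name": "travel_assistant",
            "port": 8001,
            "agents": [{"id": "flight_agent"}, {"id": "hotel_agent"}],
        }
    ],
}

def undeclared_agents(cfg):
    """Return listener agent ids missing from the top-level agents list."""
    declared = {a["id"] for a in cfg.get("agents", [])}
    referenced = {
        a["id"]
        for listener in cfg.get("listeners", [])
        for a in listener.get("agents", [])
    }
    return referenced - declared

print(undeclared_agents(config))  # an empty set means the config is consistent
```

A non-empty result points at a listener entry that has no matching top-level agent declaration.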

Step 2. Start your agents and Plano
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Run your ``flight_agent`` and ``hotel_agent`` services (see :ref:`Orchestration <agent_routing>` for a full Travel Booking example), then start Plano with the config above:

.. code-block:: console

   $ plano up plano_config.yaml

Plano will start the orchestrator and expose an agent listener on port ``8001``.
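The full agent implementations live in the quickstart repository. As a minimal sketch of the shape Plano talks to, here is a toy ``flight_agent`` as a plain HTTP service. The OpenAI-style response body and the port are assumptions taken from the config above, not Plano's documented agent contract; real business logic would replace ``answer``.

```python
# Hypothetical minimal flight_agent: an HTTP service that accepts an
# OpenAI-style chat payload and returns a canned assistant reply.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def answer(messages):
    """Toy business logic: echo a canned result for the last user turn."""
    last = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    return f"Found 3 flights matching: {last}"

class FlightAgent(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        reply = {
            "choices": [
                {"message": {"role": "assistant",
                             "content": answer(body["messages"])}}
            ]
        }
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To serve on the port from the config above (this call blocks):
# HTTPServer(("0.0.0.0", 10520), FlightAgent).serve_forever()
```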

Step 3. Send a prompt and let Plano route
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now send a request to Plano using the OpenAI-compatible chat completions API. The orchestrator will analyze the prompt and route it to the right agent based on intent:

.. code-block:: bash

   $ curl --header 'Content-Type: application/json' \
     --data '{"messages": [{"role": "user","content": "Find me flights from SFO to JFK tomorrow"}], "model": "openai/gpt-4o"}' \
     http://localhost:8001/v1/chat/completions

You can then ask a follow-up like "Also book me a hotel near JFK" and Plano-Orchestrator will route it to ``hotel_agent``; your agents stay focused on business logic while Plano handles routing.
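The same request can be made from Python. The sketch below builds the request separately from sending it so the payload shape is easy to inspect without a running gateway; the URL and model value are taken from the curl example above.

```python
# Build the chat completions request the curl example sends, using only the
# standard library. Sending it assumes Plano is listening on port 8001.
import json
import urllib.request

def chat_request(prompt, model="openai/gpt-4o", base="http://localhost:8001"):
    payload = {"messages": [{"role": "user", "content": prompt}], "model": model}
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Find me flights from SFO to JFK tomorrow")
# To send (requires Plano running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```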

.. _quickstart_prompt_targets:

Deterministic API calls with prompt targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next, we'll show Plano's deterministic API calling using a single prompt target. We'll build a currency exchange backend powered by `https://api.frankfurter.dev/`, assuming USD as the base currency.

Step 1. Create plano config file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Create a ``plano_config.yaml`` file with the following content:

.. code-block:: yaml

   version: v0.1.0

   listeners:
     ingress_traffic:
       ...
       message_format: openai
       timeout: 30s

   model_providers:
     - access_key: $OPENAI_API_KEY
       model: openai/gpt-4o

   system_prompt: |
     You are a helpful assistant.

   prompt_guards:
     input_guards:
       jailbreak:
         on_exception:
           message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.

   prompt_targets:
     - name: currency_exchange
       description: Get currency exchange rate from USD to other currencies
       ...
       endpoint: api.frankfurter.dev:443
       protocol: https
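Under the hood, the ``currency_exchange`` prompt target resolves to a GET against Frankfurter with USD as the base. A sketch of that request, for orientation: the ``/v1/latest`` path is an assumption based on Frankfurter's public API, since the config above only pins the host and protocol.

```python
# Build the Frankfurter URL the prompt target would ultimately hit.
# Path and query parameters are assumptions about Frankfurter's API.
def rate_url(to_currency, base_currency="USD"):
    return (
        "https://api.frankfurter.dev/v1/latest"
        f"?base={base_currency}&symbols={to_currency.upper()}"
    )

print(rate_url("gbp"))
# To fetch for real (network required):
# import json, urllib.request
# with urllib.request.urlopen(rate_url("gbp")) as r:
#     print(json.load(r)["rates"]["GBP"])
```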

Step 2. Start plano with currency conversion config
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: sh

   $ plano up plano_config.yaml
   2024-12-05 16:56:27,979 - cli.main - INFO - Starting plano cli version: 0.1.5
   ...
   2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
   2024-12-05 16:56:28,485 - cli.main - INFO - Starting plano model server and plano gateway
   ...
   2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!

Once the gateway is up, you can start interacting with it at port 10000 using the OpenAI-compatible chat completions API. Some sample queries you can ask include: ``what is currency rate for gbp?`` or ``show me list of currencies for conversion``.

Step 3. Interacting with gateway using curl command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here is a sample curl command you can use to interact:

And to get the list of supported currencies:

.. code-block:: console

   "Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish Złoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."

.. _llm_routing_quickstart:

Use Plano as a Model Proxy (Gateway)
------------------------------------

Plano operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, etc. Below is an example configuration that defines OpenAI and Mistral LLM providers.

Step 1. Create plano config file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create a ``plano_config.yaml`` file with the following content:

.. code-block:: yaml

   ...
       message_format: openai
       timeout: 30s

   model_providers:
     - access_key: $OPENAI_API_KEY
       model: openai/gpt-4o
       default: true
     - access_key: $MISTRAL_API_KEY
       model: mistral/ministral-3b-latest

Step 2. Start plano
~~~~~~~~~~~~~~~~~~~

Once the config file is created, ensure that you have environment variables set up for ``MISTRAL_API_KEY`` and ``OPENAI_API_KEY`` (or these are defined in a ``.env`` file).

Start Plano:

.. code-block:: console

   $ plano up plano_config.yaml
   2024-12-05 11:24:51,288 - cli.main - INFO - Starting plano cli version: 0.1.5
   2024-12-05 11:24:51,825 - cli.utils - INFO - Schema validation successful!
   2024-12-05 11:24:51,825 - cli.main - INFO - Starting plano
   ...
   2024-12-05 11:25:16,131 - cli.core - INFO - Container is healthy!

Step 3: Interact with LLM
~~~~~~~~~~~~~~~~~~~~~~~~~

Step 3.1: Using OpenAI Python client
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Make outbound calls via the Plano gateway:

.. code-block:: python

   from openai import OpenAI

   # Use the OpenAI client as usual
   client = OpenAI(
       # No need to set a specific openai.api_key since it's configured in Plano's gateway
       api_key='--',
       # Set the OpenAI API base URL to the Plano gateway endpoint
       base_url="http://127.0.0.1:12000/v1"
   )

   response = client.chat.completions.create(
       # we select the model from the plano_config file
       model="--",
       messages=[{"role": "user", "content": "What is the capital of France?"}],
   )

   print("OpenAI Response:", response.choices[0].message.content)

Step 3.2: Using curl command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: bash

   ...
     ],
   }

You can override model selection using the ``x-arch-llm-provider-hint`` header. For example, to use Mistral, use the following curl command:

.. code-block:: bash

   $ curl --header 'Content-Type: application/json' \
     --header 'x-arch-llm-provider-hint: ministral-3b' \
     --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
     http://localhost:12000/v1/chat/completions

   {
     ...
     "model": "ministral-3b-latest",
     "choices": [
       {
         "messages": {
           "role": "assistant",
           "content": "The capital of France is Paris. It is the most populous city in France and is known for its iconic landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. Paris is also a major global center for art, fashion, gastronomy, and culture.",
         },
         ...
       }
     ],
     ...
   }
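The same provider-hint override works from Python. A sketch using only the standard library, with the header name, port, and model value taken from the curl example above; sending the request assumes a running gateway.

```python
# Build a chat completions request that carries the provider-hint header.
import json
import urllib.request

def hinted_request(prompt, provider_hint):
    return urllib.request.Request(
        "http://localhost:12000/v1/chat/completions",
        data=json.dumps(
            {"messages": [{"role": "user", "content": prompt}], "model": "none"}
        ).encode(),
        headers={
            "Content-Type": "application/json",
            "x-arch-llm-provider-hint": provider_hint,
        },
    )

req = hinted_request("What is the capital of France?", "ministral-3b")
# To send (requires Plano running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["model"])
```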

Next Steps
==========

Congratulations! You've successfully set up Plano and made your first prompt-based request. To further enhance your GenAI applications, explore the following resources:

- :ref:`Full Documentation <overview>`: Comprehensive guides and references.
- `GitHub Repository <https://github.com/katanemo/plano>`_: Access the source code, contribute, and track updates.
- `Support <https://github.com/katanemo/plano#contact>`_: Get help and connect with the Plano community.

With Plano, building scalable, fast, and personalized GenAI applications has never been easier. Dive deeper into Plano's capabilities and start creating innovative AI-driven experiences today!