diff --git a/get_started/quickstart.html b/get_started/quickstart.html index 1a6b72d7..676f33db 100755 --- a/get_started/quickstart.html +++ b/get_started/quickstart.html @@ -160,9 +160,9 @@

Quickstart

Follow this guide to learn how to quickly set up Plano and integrate it into your generative AI applications. You can:

Note

@@ -196,6 +196,98 @@
+
+

Use Plano as a Model Proxy (Gateway)

+
+

Step 1. Create plano config file

+

Plano is driven by a configuration file in which you define LLM providers, prompt targets, guardrails, and more. Below is an example configuration that defines OpenAI and Anthropic LLM providers.

+

Create a plano_config.yaml file with the following content:

+
version: v0.3.0
+
+listeners:
+  - type: model
+    name: model_1
+    address: 0.0.0.0
+    port: 12000
+
+model_providers:
+
+  - access_key: $OPENAI_API_KEY
+    model: openai/gpt-4o
+    default: true
+
+  - access_key: $ANTHROPIC_API_KEY
+    model: anthropic/claude-sonnet-4-5
+
+
+
+
+

Step 2. Start Plano

+

Once the config file is created, make sure the ANTHROPIC_API_KEY and OPENAI_API_KEY environment variables are set (or defined in a .env file).
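Before starting, you can sanity-check that both keys are visible to your environment. A minimal sketch in Python (the variable names match the config above):

import os

# Fail fast if either provider key is missing from the environment.
for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    if not os.getenv(key):
        raise SystemExit(f"{key} is not set")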

+

Start Plano:

+
$ planoai up plano_config.yaml
+# Or run it with uv: uvx planoai up plano_config.yaml
+2024-12-05 11:24:51,288 - planoai.main - INFO - Starting plano cli version: 0.4.1
+2024-12-05 11:24:51,825 - planoai.utils - INFO - Schema validation successful!
+2024-12-05 11:24:51,825 - planoai.main - INFO - Starting plano
+...
+2024-12-05 11:25:16,131 - planoai.core - INFO - Container is healthy!
+
+
+
+
+

Step 3: Interact with the LLM

+
+

Step 3.1: Using curl

+
$ curl --header 'Content-Type: application/json' \
+  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
+  http://localhost:12000/v1/chat/completions
+
+{
+  ...
+  "model": "gpt-4o-2024-08-06",
+  "choices": [
+    {
+      ...
+      "message": {
+        "role": "assistant",
+        "content": "The capital of France is Paris."
+      }
+    }
+  ],
+}
+
+
+
+

Note

+

When the requested model is not found in the configuration, Plano falls back to the provider marked default: true. In this example we pass "model": "none", so Plano selects the default model openai/gpt-4o.

+
+
+
+

Step 3.2: Using the OpenAI Python client

+

Make outbound calls via the Plano gateway:

+
from openai import OpenAI
+
+# Use the OpenAI client as usual
+client = OpenAI(
+  # No real OpenAI key is needed here; provider keys are configured in Plano's gateway
+  api_key="--",
+  # Point the client at the Plano gateway endpoint
+  base_url="http://127.0.0.1:12000/v1"
+)
+
+response = client.chat.completions.create(
+    # The model is resolved from plano_config.yaml, so a placeholder is passed here
+    model="--",
+    messages=[{"role": "user", "content": "What is the capital of France?"}],
+)
+
+print("OpenAI Response:", response.choices[0].message.content)
+
+
+
+
+

Build Agentic Apps with Plano

Plano helps you build agentic applications in two complementary ways:

@@ -268,8 +360,8 @@

Deterministic API calls with prompt targets

Next, we’ll show Plano’s deterministic API calling using a single prompt target. We’ll build a currency exchange backend powered by https://api.frankfurter.dev/, assuming USD as the base currency.
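As a preview of the data involved, here is a minimal sketch of the kind of backend call a prompt target will wrap. The endpoint and parameter names follow Frankfurter's public API, and the requests library is assumed to be installed:

import requests

def get_exchange_rate(to_currency: str) -> float:
    """Fetch the latest USD -> to_currency rate from Frankfurter."""
    resp = requests.get(
        "https://api.frankfurter.dev/v1/latest",
        params={"base": "USD", "symbols": to_currency},
    )
    resp.raise_for_status()
    return resp.json()["rates"][to_currency]

print(get_exchange_rate("EUR"))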

-
-

Step 1. Create plano config file

+
+

Step 1. Create plano config file

Create plano_config.yaml file with the following content:

version: v0.1.0
 
@@ -351,94 +443,6 @@
 
-
-

Use Plano as a Model Proxy (Gateway)

-
-

Step 1. Create plano config file

-

Plano operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, etc. Below is an example configuration that defines OpenAI and Anthropic LLM providers.

-

Create plano_config.yaml file with the following content:

-
version: v0.3.0
-
-listeners:
-  - type: model
-    name: model_1
-    address: 0.0.0.0
-    port: 12000
-
-model_providers:
-
-  - access_key: $OPENAI_API_KEY
-    model: openai/gpt-4o
-    default: true
-
-  - access_key: $ANTHROPIC_API_KEY
-    model: anthropic/claude-sonnet-4-5
-
-
-
-
-

Step 2. Start plano

-

Once the config file is created, ensure that you have environment variables set up for ANTHROPIC_API_KEY and OPENAI_API_KEY (or these are defined in a .env file).

-

Start Plano:

-
$ planoai up plano_config.yaml
-# Or if installed with uv tool: uvx planoai up plano_config.yaml
-2024-12-05 11:24:51,288 - planoai.main - INFO - Starting plano cli version: 0.4.1
-2024-12-05 11:24:51,825 - planoai.utils - INFO - Schema validation successful!
-2024-12-05 11:24:51,825 - planoai.main - INFO - Starting plano
-...
-2024-12-05 11:25:16,131 - planoai.core - INFO - Container is healthy!
-
-
-
-
-

Step 3: Interact with LLM

-
-

Step 3.1: Using OpenAI Python client

-

Make outbound calls via the Plano gateway:

-
from openai import OpenAI
-
-# Use the OpenAI client as usual
-client = OpenAI(
-  # No need to set a specific openai.api_key since it's configured in Plano's gateway
-  api_key='--',
-  # Set the OpenAI API base URL to the Plano gateway endpoint
-  base_url="http://127.0.0.1:12000/v1"
-)
-
-response = client.chat.completions.create(
-    # we select model from plano_config file
-    model="--",
-    messages=[{"role": "user", "content": "What is the capital of France?"}],
-)
-
-print("OpenAI Response:", response.choices[0].message.content)
-
-
-
-
-

Step 3.2: Using curl command

-
$ curl --header 'Content-Type: application/json' \
-  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
-  http://localhost:12000/v1/chat/completions
-
-{
-  ...
-  "model": "gpt-4o-2024-08-06",
-  "choices": [
-    {
-      ...
-      "messages": {
-        "role": "assistant",
-        "content": "The capital of France is Paris.",
-      },
-    }
-  ],
-}
-
-
-
-
-

Next Steps

@@ -472,6 +476,16 @@