
Use Arch for (Model-based) LLM Routing

Step 1. Create arch config file

Create an arch_config.yaml file with the following content:

version: v0.1.0

listeners:
  egress_traffic:
    address: 0.0.0.0
    port: 12000
    message_format: openai
    timeout: 30s

llm_providers:
  - access_key: $OPENAI_API_KEY
    model: openai/gpt-4o
    default: true

  - access_key: $MISTRAL_API_KEY
    model: mistral/ministral-3b-latest

Step 2. Start arch gateway

Once the config file is created, ensure that the MISTRAL_API_KEY and OPENAI_API_KEY environment variables are set (or defined in a .env file).
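For example, the two variables can be set in your shell as follows (the key values below are placeholders; drop the `export` prefix if you put them in a .env file instead):

```shell
# Placeholder credentials -- replace with your real API keys.
export OPENAI_API_KEY="sk-your-openai-key"
export MISTRAL_API_KEY="your-mistral-key"
```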

Start the arch gateway:

$ archgw up arch_config.yaml
2024-12-05 11:24:51,288 - cli.main - INFO - Starting archgw cli version: 0.1.5
2024-12-05 11:24:51,825 - cli.utils - INFO - Schema validation successful!
2024-12-05 11:24:51,825 - cli.main - INFO - Starting arch model server and arch gateway
...
2024-12-05 11:25:16,131 - cli.core - INFO - Container is healthy!

Step 3: Interact with LLM

Step 3.1: Using OpenAI python client

Make outbound calls via Arch gateway

from openai import OpenAI

# Use the OpenAI client as usual
client = OpenAI(
  # No need to set a specific openai.api_key since it's configured in Arch's gateway
  api_key = '--',
  # Set the OpenAI API base URL to the Arch gateway endpoint
  base_url = "http://127.0.0.1:12000/v1"
)

response = client.chat.completions.create(
    # we select model from arch_config file
    model="None",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("OpenAI Response:", response.choices[0].message.content)

Step 3.2: Using curl command

$ curl --header 'Content-Type: application/json' \
  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
  http://localhost:12000/v1/chat/completions

{
  ...
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      ...
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      }
    }
  ],
  ...
}

You can override model selection using the x-arch-llm-provider-hint header. For example, to route the request to Mistral, use the following curl command:

$ curl --header 'Content-Type: application/json' \
  --header 'x-arch-llm-provider-hint: ministral-3b' \
  --data '{"messages": [{"role": "user","content": "What is the capital of France?"}], "model": "none"}' \
  http://localhost:12000/v1/chat/completions
{
  ...
  "model": "ministral-3b-latest",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris. It is the most populous city in France and is known for its iconic landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. Paris is also a major global center for art, fashion, gastronomy, and culture."
      },
      ...
    }
  ],
  ...
}
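The same per-request override works from the OpenAI Python client by passing the x-arch-llm-provider-hint header via the SDK's extra_headers option. A minimal sketch (the provider_hint helper name is ours, not part of Arch; the gateway address and header name come from the steps above):

```python
def provider_hint(model_name: str) -> dict:
    """Build the per-request header that overrides Arch's model selection."""
    return {"x-arch-llm-provider-hint": model_name}

if __name__ == "__main__":
    from openai import OpenAI

    # Point the client at the Arch gateway, as in Step 3.1.
    client = OpenAI(api_key="--", base_url="http://127.0.0.1:12000/v1")

    # Route this request to the Mistral provider defined in arch_config.yaml.
    response = client.chat.completions.create(
        model="none",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
        extra_headers=provider_hint("ministral-3b"),
    )
    print(response.choices[0].message.content)
```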