plano/demos/use_cases/llm_routing
Salman Paracha fb0581fd39
add support for v1/messages and transformations (#558)
* pushing draft PR

* transformations are working. Now need to add some tests next

* updated tests and added necessary response transformations for Anthropic's message response object

* fixed bugs for integration tests

* fixed doc tests

* fixed serialization issues with enums on response

* adding some debug logs to help

* fixed issues with non-streaming responses

* updated the stream_context to update response bytes

* the serialized bytes length must be set in the response side

* fixed the debug statement that was causing the integration tests for wasm to fail

* fixing json parsing errors

* intentionally removing the headers

* making sure that we convert the raw bytes to the correct provider type upstream

* fixing non-streaming responses to transform correctly

* /v1/messages works with transformations to and from /v1/chat/completions

* updating the CLI and demos to support anthropic vs. claude

* adding the anthropic key to the preference based routing tests

* fixed test cases and added more structured logs

* fixed integration tests and cleaned up logs

* added python client tests for anthropic and openai

* cleaned up logs and fixed issue with connectivity for llm gateway in weather forecast demo

* fixing the tests. python dependency order was broken

* updated the openAI client to fix demos

* removed the raw response debug statement

* fixed the dup cloning issue and cleaned up the ProviderRequestType enum and traits

* fixing logs

* moved away from string literals to consts

* fixed streaming from Anthropic Client to OpenAI

* removed debug statement that would likely trip up integration tests

* fixed integration tests for llm_gateway

* cleaned up test cases and removed unnecessary crates

* fixing comments from PR

* fixed bug whereby we were sending an OpenAIChatCompletions request object to llm_gateway even though the request may have been AnthropicMessages

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-4.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-9.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-10.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-41.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-136.local>
2025-09-10 07:40:30 -07:00
arch_config.yaml add support for v1/messages and transformations (#558) 2025-09-10 07:40:30 -07:00
docker-compose.yaml add support for openwebui (#487) 2025-05-28 19:08:00 -07:00
jaeger_tracing_llm_routing.png refactor demos (#398) 2025-02-07 18:45:42 -08:00
llm_routing_demo.png refactor demos (#398) 2025-02-07 18:45:42 -08:00
README.md better model names (#517) 2025-07-11 16:42:16 -07:00
run_demo.sh refactor demos (#398) 2025-02-07 18:45:42 -08:00

LLM Routing

This demo shows how you can use Arch gateway to manage API keys and route requests to upstream LLMs.

Starting the demo

  1. Please make sure the prerequisites are installed correctly
  2. Start Arch
    sh run_demo.sh
    
  3. Navigate to http://localhost:18080/

The following screenshot shows an example interaction with Arch gateway demonstrating dynamic routing. You can select between different LLMs using the "override model" option in the chat UI.

LLM Routing Demo

You can also pass a header to override the model when sending a prompt. The following example shows how you can use the x-arch-llm-provider-hint header to override model selection:


$ curl --header 'Content-Type: application/json' \
  --header 'x-arch-llm-provider-hint: mistral/ministral-3b' \
  --data '{"messages": [{"role": "user","content": "hello"}], "model": "none"}' \
  http://localhost:12000/v1/chat/completions 2> /dev/null | jq .
{
  "id": "xxx",
  "object": "chat.completion",
  "created": 1737760394,
  "model": "ministral-3b-latest",
  "choices": [
    {
      "index": 0,
      "messages": {
        "role": "assistant",
        "tool_calls": null,
        "content": "Hello! How can I assist you today? Let's chat about anything you'd like. 😊"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 4,
    "total_tokens": 25,
    "completion_tokens": 21
  }
}
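
The same override can be applied from code. Below is a minimal sketch using the OpenAI Python client pointed at the gateway; the endpoint and header name come from the curl example above, while the placeholder API key and the use of extra_headers are illustrative assumptions:

from openai import OpenAI

# Point the client at the Arch gateway instead of the provider's API.
# The api_key value is a placeholder; the gateway manages the real provider keys.
client = OpenAI(base_url="http://localhost:12000/v1", api_key="not-needed")

# Pass the provider hint header per request to override model selection.
response = client.chat.completions.create(
    model="none",
    messages=[{"role": "user", "content": "hello"}],
    extra_headers={"x-arch-llm-provider-hint": "mistral/ministral-3b"},
)

print(response.choices[0].message.content)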

Observability

Arch gateway publishes a stats endpoint at http://localhost:19901/stats. In this demo we use Prometheus to pull stats from Arch and Grafana to visualize them in a dashboard. To view the Grafana dashboard, follow the instructions below (a sketch for querying the stats endpoint directly is shown after the list):

  1. Navigate to http://localhost:3000/ to open the Grafana UI (use admin/grafana as credentials)
  2. From the Grafana left nav, click on Dashboards and select "Intelligent Gateway Overview" to view Arch gateway stats
  3. For tracing, head over to http://localhost:16686/ to view recent traces
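
As referenced above, you can also query the stats endpoint directly to spot-check metrics without Grafana. A minimal sketch, assuming the endpoint returns plain-text metrics; the "llm" substring filter is an assumption for illustration:

import urllib.request

STATS_URL = "http://localhost:19901/stats"

def fetch_stats(substring: str = "llm") -> list[str]:
    # Return stats lines whose name contains the given substring.
    with urllib.request.urlopen(STATS_URL) as resp:
        body = resp.read().decode("utf-8")
    return [line for line in body.splitlines() if substring in line]

if __name__ == "__main__":
    for line in fetch_stats():
        print(line)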

The following is a screenshot of the tracing UI showing a call received by Arch gateway and the upstream call it makes to the LLM:

Jaeger Tracing