
e2e tests

End-to-end tests for the Arch LLM gateway and the prompt gateway.

To run the e2e tests successfully, run_e2e_tests.sh prepares the environment as follows:

  1. build and start the weather_forecast demo (using docker compose)
  2. build, install, and start the model server in the background (using poetry)
  3. build and start the arch gateway (using docker compose)
  4. wait for the model server to be ready
  5. wait for the arch gateway to be ready
  6. run the e2e tests (using poetry)
    1. runs the llm gateway tests for llm routing
    2. runs the prompt gateway tests covering function calling, parameter gathering, and summarization
  7. clean up
    1. stops the arch gateway
    2. stops the model server
    3. stops the weather_forecast demo
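The orchestration above can be sketched as a small shell script. This is an illustration only: the compose file path, module name, and health endpoints below are assumptions, not the actual contents of run_e2e_tests.sh.

```shell
#!/bin/sh
# Dry-run sketch of the e2e orchestration. Paths, module names, and
# ports are assumptions for illustration. With DRY_RUN=1 (the default
# here) each step is printed instead of executed.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run docker compose -f weather_forecast/docker-compose.yaml up --build -d  # 1. demo
run poetry install                                                        # 2. model server deps
run poetry run python -m model_server                                     # 2. (backgrounded in the real script)
run docker compose up --build -d                                          # 3. arch gateway
run curl --retry 30 --retry-delay 1 http://localhost:51000/healthz       # 4. (port assumed)
run curl --retry 30 --retry-delay 1 http://localhost:10000/healthz       # 5. (port assumed)
run poetry run pytest                                                     # 6. e2e tests
run docker compose down                                                   # 7. cleanup
```

Setting DRY_RUN=0 would execute the steps for real; the actual script also handles cleanup on failure, which is omitted here for brevity.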

How to run

To run the tests locally, make sure the following requirements are met.

Requirements

  • Python 3.10
  • Poetry
  • Docker
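A quick way to verify these prerequisites are on your PATH (a convenience sketch, not part of the repo):

```shell
# Check that each required tool is installed before running the suite.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

for tool in python3 poetry docker; do
  check_tool "$tool"
done
```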

Running tests locally

sh run_e2e_tests.sh