Mirror of https://github.com/katanemo/plano.git (synced 2026-04-25 00:36:34 +02:00)
Latest commit:

* cleaning up plano cli commands
* adding support for wildcard model providers
* fixing compile errors
* fixing bugs related to default model provider, provider hint and duplicates in the model provider list
* fixed cargo fmt issues
* updating tests to always include the model id
* using default for the prompt_gateway path
* fixed the model name, as gpt-5-mini-2025-08-07 wasn't in the config
* making sure that all aliases and models match the config
* fixed the config generator to allow base_url provider LLMs to include wildcard models
* re-ran the models list utility and added a shell script to run it
* updating docs to mention wildcard model providers
* updated provider_models.json to yaml, added that file to our docs for reference
* updating the build docs to use the new root-based build

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
# e2e tests

End-to-end tests for the arch LLM gateway and the prompt gateway.
To run the e2e tests successfully, `run_e2e_tests.sh` prepares the environment as follows:
- build and start the weather_forecast demo (using docker compose)
- build, install, and start the model server asynchronously (using uv)
- build and start the arch gateway (using docker compose)
- wait for the model server to be ready
- wait for the arch gateway to be ready
- start the e2e tests (using uv)
  - run the llm gateway tests for llm routing
  - run the prompt gateway tests for function calling, parameter gathering, and summarization
- clean up
  - stop the arch gateway
  - stop the model server
  - stop the weather_forecast demo
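The two "wait for ... to be ready" steps above can be sketched as a small polling helper. This is a hedged sketch, not the real script: the health-check endpoints, ports, and timeout below are assumptions.

```shell
#!/bin/sh
# wait_for NAME CMD...: poll CMD until it succeeds, or give up after
# 30 one-second attempts.
wait_for() {
  name="$1"; shift
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
      echo "timed out waiting for $name" >&2
      return 1
    fi
    sleep 1
  done
  echo "$name is ready"
}

# Hypothetical usage -- the actual ports and paths may differ:
# wait_for "model server" curl -sf http://localhost:51000/healthz
# wait_for "arch gateway" curl -sf http://localhost:12000/healthz
```

Polling a health endpoint (rather than sleeping a fixed amount) keeps the suite fast when the services come up quickly and still tolerates slow cold starts.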
## How to run

To run the tests locally, make sure the following requirements are met.
### Requirements
- Python 3.10
- uv
- Docker
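A quick way to confirm the prerequisites are on `PATH` before starting (a sketch for convenience; this check is not part of the repo, and it does not verify the Python version):

```shell
#!/bin/sh
# Report whether each required tool is installed. python3 stands in
# for the Python 3.10 requirement.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: missing"
  fi
}

for tool in python3 uv docker; do
  check_tool "$tool"
done
```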
### Running tests locally

```sh
sh run_e2e_tests.sh
```