* cleaning up plano cli commands
* adding support for wildcard model providers (see the matching sketch after this list)
* fixing compile errors
* fixing bugs related to the default model provider, provider hints, and duplicates in the model provider list
* fixed cargo fmt issues
* updating tests to always include the model id
* using the default for the prompt_gateway path
* fixed the model name, as gpt-5-mini-2025-08-07 wasn't in the config
* making sure that all aliases and models match the config
* fixed the config generator to allow base_url provider LLMs to include wildcard models
* re-ran the models list utility and added a shell script to run it
* updating docs to mention wildcard model providers
* converted provider_models.json to YAML and added that file to our docs for reference
* updating the build docs to use the new root-based build
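For context on the wildcard support above, here is a minimal sketch of trailing-wildcard model matching. The `matches_wildcard` helper and the single-trailing-`*` convention are illustrative assumptions, not the exact rules implemented in this repo.

```rust
/// Returns true when `model_id` is covered by a provider's model entry,
/// which may be an exact id or a trailing-`*` wildcard like "gpt-4*".
fn matches_wildcard(pattern: &str, model_id: &str) -> bool {
    match pattern.strip_suffix('*') {
        Some(prefix) => model_id.starts_with(prefix),
        None => pattern == model_id, // exact match when no wildcard
    }
}

fn main() {
    assert!(matches_wildcard("gpt-4*", "gpt-4o-mini"));
    assert!(matches_wildcard("gpt-5-mini-2025-08-07", "gpt-5-mini-2025-08-07"));
    assert!(!matches_wildcard("gpt-4*", "o3-mini"));
}
```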
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* pushing draft PR
* transformations are working; need to add tests next
* updated tests and added the necessary response transformations for Anthropic's Messages response object
* fixed bugs in the integration tests
* fixed doc tests
* fixed serialization issues with enums on the response
* adding some debug logs to help with troubleshooting
* fixed issues with non-streaming responses
* updated the stream_context to update the response bytes
* the serialized byte length must be set on the response side (see the proxy-wasm sketch after this list)
* fixed the debug statement that was causing the integration tests for wasm to fail
* fixing json parsing errors
* intentionally removing the headers
* making sure that we convert the raw bytes to the correct provider type upstream
* fixing non-streaming responses to transform correctly
* /v1/messages works with transformations to and from /v1/chat/completions (see the transformation sketch after this list)
* updating the CLI and demos to support anthropic vs. claude
* adding the Anthropic key to the preference-based routing tests
* fixed test cases and added more structured logs
* fixed integration tests and cleaned up logs
* added Python client tests for Anthropic and OpenAI
* cleaned up logs and fixed a connectivity issue with the llm gateway in the weather forecast demo
* fixing the tests; the Python dependency order was broken
* updated the OpenAI client to fix the demos
* removed the raw response debug statement
* fixed the duplicate cloning issue and cleaned up the ProviderRequestType enum and traits
* fixing logs
* moved away from string literals to consts
* fixed streaming from the Anthropic client to OpenAI
* removed debug statement that would likely trip up integration tests
* fixed integration tests for llm_gateway
* cleaned up test cases and removed unnecessary crates
* addressing review comments from the PR
* fixed a bug where we sent an OpenAIChatCompletions request object to llm_gateway even when the incoming request was AnthropicMessages
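As referenced above, here is a minimal sketch of the /v1/messages to /v1/chat/completions request direction, assuming trimmed-down structs; the real types in this repo carry many more fields (tools, streaming options, content blocks). The field names follow the public wire formats of the two APIs.

```rust
use serde::{Deserialize, Serialize};

// Trimmed-down request shapes; illustrative only.
#[derive(Deserialize)]
struct AnthropicMessagesRequest {
    model: String,
    max_tokens: u32,
    system: Option<String>,
    messages: Vec<ChatMessage>,
}

#[derive(Serialize, Deserialize)]
struct ChatMessage {
    role: String,
    content: String,
}

#[derive(Serialize)]
struct ChatCompletionsRequest {
    model: String,
    max_tokens: u32,
    messages: Vec<ChatMessage>,
}

// The Anthropic top-level `system` prompt has no direct slot in the
// chat-completions shape, so it becomes a leading `system` message.
fn to_chat_completions(req: AnthropicMessagesRequest) -> ChatCompletionsRequest {
    let mut messages = Vec::with_capacity(req.messages.len() + 1);
    if let Some(system) = req.system {
        messages.push(ChatMessage { role: "system".into(), content: system });
    }
    messages.extend(req.messages);
    ChatCompletionsRequest {
        model: req.model,
        max_tokens: req.max_tokens,
        messages,
    }
}

fn main() {
    let raw = r#"{"model":"claude-3-opus","max_tokens":256,"system":"be brief","messages":[{"role":"user","content":"hi"}]}"#;
    let req: AnthropicMessagesRequest = serde_json::from_str(raw).unwrap();
    println!("{}", serde_json::to_string_pretty(&to_chat_completions(req)).unwrap());
}
```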
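And for the response-bytes note above, a minimal sketch of the proxy-wasm pattern involved: the replacement span passed to `set_http_response_body` must cover the original body length even when the re-serialized bytes differ in size. `transform_to_anthropic` is a hypothetical placeholder, not the actual transformation in this repo.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::Action;

struct StreamContext;

impl Context for StreamContext {}

impl HttpContext for StreamContext {
    fn on_http_response_body(&mut self, body_size: usize, end_of_stream: bool) -> Action {
        if !end_of_stream {
            // Buffer until the full body has arrived before transforming.
            return Action::Pause;
        }
        if let Some(bytes) = self.get_http_response_body(0, body_size) {
            let transformed = transform_to_anthropic(&bytes);
            // Replace the *original* `body_size` bytes; the new payload may
            // be longer or shorter, and using the wrong length here
            // truncates or corrupts non-streaming responses.
            self.set_http_response_body(0, body_size, &transformed);
        }
        Action::Continue
    }
}

// Hypothetical placeholder: real code deserializes the chat-completions
// response and re-serializes it as an Anthropic Messages response.
fn transform_to_anthropic(raw: &[u8]) -> Vec<u8> {
    raw.to_vec()
}
```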
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-4.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-9.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-10.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-41.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-136.local>
* stashing changes on my local branch
* updated the Java demo with debug points and Jaeger tracing
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>