* first commit with tests to enable state management via memory
* fixed logs to follow the conversational flow a bit better
* added support for supabase
* added the state_storage_v1_responses flag, and use that to store state appropriately
* cleaned up logs and fixed issue with connectivity for llm gateway in weather forecast demo
* fixed mixed inputs from openai v1/responses api (#632)
* removing tracing from model-alias-routing
* handling additional input types from the openai v1/responses api
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* resolving PR comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
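The state-management commits above gate state storage behind the `state_storage_v1_responses` flag. A minimal sketch of that pattern, assuming a simple in-memory store (the flag name is from the log; the `StateStore` type and its methods are illustrative, not the project's actual API):

```rust
use std::collections::HashMap;

// Flag name taken from the commit log; everything else here is hypothetical.
const STATE_STORAGE_V1_RESPONSES: &str = "state_storage_v1_responses";

/// Minimal in-memory conversational state store, gated behind a feature flag.
struct StateStore {
    flags: HashMap<String, bool>,
    memory: HashMap<String, String>,
}

impl StateStore {
    fn new(flags: HashMap<String, bool>) -> Self {
        Self { flags, memory: HashMap::new() }
    }

    fn flag_enabled(&self, name: &str) -> bool {
        self.flags.get(name).copied().unwrap_or(false)
    }

    /// Persist state for a conversation only when the flag is on.
    /// Returns whether the state was actually stored.
    fn save(&mut self, conversation_id: &str, state: &str) -> bool {
        if !self.flag_enabled(STATE_STORAGE_V1_RESPONSES) {
            return false;
        }
        self.memory.insert(conversation_id.to_string(), state.to_string());
        true
    }

    fn load(&self, conversation_id: &str) -> Option<&String> {
        self.memory.get(conversation_id)
    }
}
```

The same gate would sit in front of a Supabase-backed store; only the backing map changes.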
* adding function_calling functionality via rust
* fixed rendered YAML file
* removed model_server from envoy.template; traffic now forwards to bright_staff
* fixed bugs in function_calling.rs that were breaking tests. All good now
* updating e2e test to clean up disk usage
* stopped using Arch* models as the default when no model is specified
* if the user sets arch-function base_url we should honor it
* fixed demos by pinning huggingface_hub to a specific version; otherwise the chatbot ui wouldn't build
* adding a constant for Arch-Function model name
* fixing some edge cases with calls made to Arch-Function
* fixed JSON parsing issues in function_calling.rs
* fixed bug where the raw response from Arch-Function was re-encoded
* removed debug from supervisord.conf
* commenting out disk cleanup
* adding back disk space
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
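Several commits above fix JSON parsing edge cases in function_calling.rs. One common edge case of that kind is a model wrapping its tool-call JSON in markdown code fences; a sketch of a tolerant pre-parse step (illustrative only — the actual fixes are not spelled out in the log):

```rust
/// Strip optional markdown code fences (``` or ```json) around a model's
/// raw tool-call output before handing it to a JSON parser.
/// Hypothetical helper; not the project's actual function.
fn strip_code_fences(raw: &str) -> &str {
    let s = raw.trim();
    // Drop an opening fence, preferring the ```json form.
    let s = s
        .strip_prefix("```json")
        .or_else(|| s.strip_prefix("```"))
        .unwrap_or(s)
        .trim_start();
    // Drop a trailing fence, then any remaining whitespace.
    s.strip_suffix("```").unwrap_or(s).trim()
}
```

Passing already-clean JSON through unchanged also avoids the re-encoding bug noted above, where the raw response should be forwarded as-is rather than decoded and serialized again.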
* adding support for base_url
* updated docs
* fixed tests for config generator
* making fixes based on PR comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
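Honoring a user-set base_url with a fallback default, as the commits above describe, can be sketched as follows (the default URL and function name are placeholders, not the project's real values):

```rust
/// Placeholder default; the project's actual default endpoint is not in the log.
const DEFAULT_BASE_URL: &str = "https://example.invalid/v1";

/// If the user configures a base_url, honor it (normalizing a trailing slash);
/// otherwise fall back to the default.
fn resolve_base_url(user_base_url: Option<&str>) -> String {
    match user_base_url {
        Some(url) if !url.trim().is_empty() => url.trim().trim_end_matches('/').to_string(),
        _ => DEFAULT_BASE_URL.to_string(),
    }
}
```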
* first commit to get the Bedrock Converse API working; streaming and binary frame support comes in the next commit
* adding translation from BedrockBinaryFrameDecoder to AnthropicMessagesEvent
* Claude Code works with Amazon Bedrock
* added tests for openai streaming from bedrock
* PR comments fixed
* adding support for bedrock in docs as supported provider
* cargo fmt
* reverted to chatgpt models for claude code routing
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
Co-authored-by: Adil Hafeez <adil.hafeez@gmail.com>
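The translation step named above (BedrockBinaryFrameDecoder output to AnthropicMessagesEvent) is, at its core, a mapping between two streaming event shapes. A simplified sketch, assuming minimal variants for both sides (the type names come from the log; the fields and variants are assumptions):

```rust
/// Simplified Anthropic-style streaming event (fields are illustrative).
#[derive(Debug, PartialEq)]
enum AnthropicMessagesEvent {
    ContentBlockDelta { index: usize, text: String },
    MessageStop,
}

/// Simplified decoded Bedrock Converse stream chunk (fields are illustrative).
enum BedrockChunk {
    ContentBlockDelta { content_block_index: usize, delta_text: String },
    MessageStop,
}

/// Map each decoded Bedrock chunk onto its Anthropic-style counterpart.
fn translate(chunk: BedrockChunk) -> AnthropicMessagesEvent {
    match chunk {
        BedrockChunk::ContentBlockDelta { content_block_index, delta_text } => {
            AnthropicMessagesEvent::ContentBlockDelta {
                index: content_block_index,
                text: delta_text,
            }
        }
        BedrockChunk::MessageStop => AnthropicMessagesEvent::MessageStop,
    }
}
```

The real decoder additionally has to parse Bedrock's binary event-stream framing before any chunk like this exists; that part is omitted here.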
* adding support for Qwen models and fixed issue with passing PATH variable
* don't need to have qwen in the model alias routing example
* fixed base_url for qwen
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
* adding support for moonshot and z-ai
* Revert unwanted changes to arch_config.yaml
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
* fixed docs and added ollama as a first-class LLM provider
* matching the LLM routing section on the README.md to the docs
* updated the section on preference-based routing
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-167.local>
* updating the messaging to call ourselves the edge and AI gateway for agents
* updating README to tidy up some language
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-329.local>
* pushing docs updates
* Fixed README.md logo
* Fixed README.md spacing
* fixed tag line
* LLM router doc fixes
* minor logo and branding changes
* minor changes to the README
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-329.local>