* add pluggable session cache with Redis backend
* add Redis session affinity demos (Docker Compose and Kubernetes)
* address PR review feedback on session cache
* document Redis session cache backend for model affinity
* sync rendered config reference with session_cache addition
* add tenant-scoped Redis session cache keys and remove dead log_affinity_hit
- Add tenant_header to SessionCacheConfig; when set, cache keys are scoped
as plano:affinity:{tenant_id}:{session_id} for multi-tenant isolation
- Thread tenant_id through RouterService, routing_service, and llm handlers
- Use Cow<'_, str> in session_key to avoid allocation when no tenant is set
- Remove unused log_affinity_hit (logging was already inlined at call sites)
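The Cow-based key builder described above can be sketched roughly like this. This is a hypothetical illustration: the function names, signature, and exact key layout are assumptions, not the actual Plano code; only the `plano:affinity:{tenant_id}:{session_id}` shape comes from the commit message.

```rust
use std::borrow::Cow;

// Hypothetical sketch of the session_key helper; the real signature may differ.
fn session_key<'a>(session_id: &'a str, tenant_id: Option<&str>) -> Cow<'a, str> {
    match tenant_id {
        // Tenant set: allocate the tenant-scoped form.
        Some(tenant) => Cow::Owned(format!("{tenant}:{session_id}")),
        // No tenant: borrow the session id directly, avoiding an allocation.
        None => Cow::Borrowed(session_id),
    }
}

// The full Redis key then gains the fixed prefix, yielding
// plano:affinity:{tenant_id}:{session_id} when a tenant is present.
fn redis_key(session_id: &str, tenant_id: Option<&str>) -> String {
    format!("plano:affinity:{}", session_key(session_id, tenant_id))
}
```

Returning `Cow<'_, str>` lets the common single-tenant path hand back a borrowed `&str` instead of building an intermediate `String` on every lookup.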
* remove session_affinity_redis and session_affinity_redis_k8s demos
* fix: route Perplexity OpenAI paths without /v1
* add tests for Perplexity provider handling in LLM module
* refactor: use constant for Perplexity provider prefix in LLM module
* moving const to top of file
* support configurable orchestrator model via orchestration config section
* add self-hosting docs and demo for Plano-Orchestrator
* list all Plano-Orchestrator model variants in docs
* use overrides for custom routing and orchestration model
* update docs
* update orchestrator model name
* rename arch provider to plano, use llm_routing_model and agent_orchestration_model
* regenerate rendered config reference
* Add Codex CLI support; xAI response improvements
* Add native Plano running check and update CLI agent error handling
* adding PR suggestions for transformations and code quality
* fix message extraction logic in ResponsesAPIRequest
* xAI support for Responses API by routing to native endpoint + refactor code
* cleaning up plano cli commands
* adding support for wildcard model providers
* fixing compile errors
* fixing bugs related to default model provider, provider hint and duplicates in the model provider list
* fixed cargo fmt issues
* updating tests to always include the model id
* using default for the prompt_gateway path
* fixed the model name, as gpt-5-mini-2025-08-07 wasn't in the config
* making sure that all aliases and models match the config
* fixed the config generator to allow for base_url providers LLMs to include wildcard models
* re-ran the models list utility and added a shell script to run it
* updating docs to mention wildcard model providers
* updated provider_models.json to yaml, added that file to our docs for reference
* updating the build docs to use the new root-based build
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* adding support for signals
* reducing false positives for signals like positive interaction
* adding docs. Still need to fix the messages list, but waiting on PR #621
* Improve frustration detection: normalize contractions and refine punctuation
* Further refine test cases with longer messages
* minor doc changes
* fixing echo statement for build
* fixing the messages construction and using the trait for signals
* update signals docs
* fixed some minor doc changes
* added more tests and fixed documentation. PR 100% ready
* made fixes based on PR comments
* Optimize latency
1. replace sliding window approach with trigram containment check
2. add code to pre-compute ngrams for patterns
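The trigram containment idea in the two steps above can be sketched as follows. This is a simplified illustration under assumptions (function names, character-level trigrams), not the actual pattern matcher; per step 2, the real code pre-computes the pattern trigram sets once rather than per message.

```rust
use std::collections::HashSet;

// Build the set of character trigrams for a string.
fn trigrams(s: &str) -> HashSet<String> {
    let chars: Vec<char> = s.chars().collect();
    chars.windows(3).map(|w| w.iter().collect()).collect()
}

// Containment pre-check: a pattern can only occur in the message if every
// one of its trigrams occurs there too. This replaces an O(n*m)
// sliding-window scan with set lookups. It can false-positive (all trigrams
// present without the full pattern), so it serves as a cheap filter in
// front of an exact match, not a replacement for it.
fn may_contain(message: &HashSet<String>, pattern: &HashSet<String>) -> bool {
    pattern.is_subset(message)
}
```

Pre-computing `trigrams(pattern)` for each signal pattern at startup makes the per-message cost just one trigram pass over the message plus subset checks.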
* removed some debug statements to make tests easier to read
* address PR comments to make ObservableStreamProcessor accept an optional Vec&lt;Messages&gt;
* fixed PR comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
Co-authored-by: MeiyuZhong <mariazhong9612@gmail.com>
Co-authored-by: nehcgs <54548843+nehcgs@users.noreply.github.com>
* agents framework demo
* more changes
* add more changes
* pending changes
* fix tests
* fix more
* rebase with main and better handle error from mcp
* add trace for filters
* add test for client error, server error and for mcp error
* update schema validate code and rename kind => type in agent_filter
* fix agent description and pre-commit
* fix tests
* add provider specific request parsing in agents chat
* fix precommit and tests
* cleanup demo
* update readme
* fix pre-commit
* refactor tracing
* fix fmt
* fix: handle MessageContent enum in responses API conversion
- Update request.rs to handle new MessageContent enum structure from main
- MessageContent can now be Text(String) or Items(Vec<InputContent>)
- Handle new InputItem variants (ItemReference, FunctionCallOutput)
- Fixes compilation error after merging latest main (#632)
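The enum structure described in the bullets above presumably looks something like this. The variant payloads and the helper name are illustrative assumptions; only the `Text(String)` / `Items(Vec<InputContent>)` split comes from the commit body, and the real types carry more variants (including ItemReference and FunctionCallOutput).

```rust
// Illustrative shapes only; the real types live in the request/response
// modules and have more variants and fields.
enum InputContent {
    Text { text: String },
}

enum MessageContent {
    Text(String),
    Items(Vec<InputContent>),
}

// When converting for the Responses API, both forms collapse to plain text.
fn flatten_content(content: &MessageContent) -> String {
    match content {
        MessageContent::Text(s) => s.clone(),
        MessageContent::Items(items) => items
            .iter()
            .map(|item| match item {
                InputContent::Text { text } => text.as_str(),
            })
            .collect::<Vec<_>>()
            .join(" "),
    }
}
```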
* address pr feedback
* fix span
* fix build
* update openai version
* first commit with tests to enable state management via memory
* fixed logs to follow the conversational flow a bit better
* added support for supabase
* added the state_storage_v1_responses flag, and use that to store state appropriately
* cleaned up logs and fixed issue with connectivity for llm gateway in weather forecast demo
* fixed mixed inputs from openai v1/responses api (#632)
* fixed mixed inputs from openai v1/responses api
* removing tracing from model-alias-routing
* handling additional input types from openai
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* resolving PR comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* adding canonical tracing support via bright-staff
* improved formatting for tools in the traces
* removing anthropic from the currency exchange demo
* using Envoy to transport traces, not calling OTEL directly
* moving otel collector cluster outside tracing if/else
* minor fixes to not write to the OTEL collector if tracing is disabled
* fixed PR comments and added more trace attributes
* more fixes based on PR comments
* more clean up based on PR comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* making first commit. still need to work on streaming responses
* stream buffer implementation with tests
* adding grok API keys to workflow
* fixed changes based on code review
* adding support for bedrock models
* fixed issues with translation to claude code
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* adding function_calling functionality via rust
* fixed rendered YAML file
* removed model_server from envoy.template and forwarding traffic to bright_staff
* fixed bugs in function_calling.rs that were breaking tests. All good now
* updating e2e test to clean up disk usage
* stop using Arch* models as the default model when one is not specified
* if the user sets arch-function base_url we should honor it
* fixing demos: we needed to pin huggingface_hub to a particular version, otherwise the chatbot UI wouldn't build
* adding a constant for Arch-Function model name
* fixing some edge cases with calls made to Arch-Function
* fixed JSON parsing issues in function_calling.rs
* fixed bug where the raw response from Arch-Function was re-encoded
* removed debug from supervisord.conf
* commenting out disk cleanup
* adding back disk space
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-342.local>
* first commit to get Bedrock Converse API working. Next commit support for streaming and binary frames
* adding translation from BedrockBinaryFrameDecoder to AnthropicMessagesEvent
* Claude Code works with Amazon Bedrock
* added tests for openai streaming from bedrock
* PR comments fixed
* adding support for bedrock in docs as supported provider
* cargo fmt
* reverted to chatgpt models for claude code routing
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
Co-authored-by: Adil Hafeez <adil.hafeez@gmail.com>
* fixes for claude code routing. first commit
* removing redundant enum tags for cache_control
* making sure that claude code can run via the archgw cli
* fixing broken config
* adding a README.md and updated the cli to use more of our defined patterns for params
* fixed config.yaml
* minor fixes to make sure PR is clean. Ready to ship
* adding claude-sonnet-4-5 to the config
* fixes based on PR
* fixed alias for README
* fixed 400 error handling tests, now that we set temperature to 1.0 for GPT-5
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-257.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-288.local>
* adding support for model aliases in archgw
* fixed PR based on feedback
* removing README. Not relevant for PR
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-136.local>
* pushing draft PR
* transformations are working. Now need to add some tests next
* updated tests and added necessary response transformations for Anthropic's message response object
* fixed bugs for integration tests
* fixed doc tests
* fixed serialization issues with enums on response
* adding some debug logs to help
* fixed issues with non-streaming responses
* updated the stream_context to update response bytes
* the serialized byte length must be set on the response side
* fixed the debug statement that was causing the integration tests for wasm to fail
* fixing json parsing errors
* intentionally removing the headers
* making sure that we convert the raw bytes to the correct provider type upstream
* fixing non-streaming responses to transform correctly
* /v1/messages works with transformations to and from /v1/chat/completions
* updating the CLI and demos to support anthropic vs. claude
* adding the anthropic key to the preference based routing tests
* fixed test cases and added more structured logs
* fixed integration tests and cleaned up logs
* added python client tests for anthropic and openai
* cleaned up logs and fixed issue with connectivity for llm gateway in weather forecast demo
* fixing the tests. python dependency order was broken
* updated the openAI client to fix demos
* removed the raw response debug statement
* fixed the dup cloning issue and cleaned up the ProviderRequestType enum and traits
* fixing logs
* moved away from string literals to consts
* fixed streaming from Anthropic Client to OpenAI
* removed debug statement that would likely trip up integration tests
* fixed integration tests for llm_gateway
* cleaned up test cases and removed unnecessary crates
* fixing comments from PR
* fixed bug whereby we were sending an OpenAIChatCompletions request object to llm_gateway even though the request may have been AnthropicMessages
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-4.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-9.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-10.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-41.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-136.local>
* updating the implementation of /v1/chat/completions to use the generic provider interfaces
* saving changes, although we will need a small re-factor after this as well
* more refactoring changes, getting close
* more refactoring changes to avoid unnecessary redirection and duplication
* more clean up
* more refactoring
* more refactoring to clean code and make stream_context.rs work
* removing unnecessary trait implementations
* some more clean-up
* fixed bugs
* fixing test cases, and making sure all references to the ChatCompletions* objects point to the new types
* refactored changes to support enum dispatch
* replaced try_streaming_from_bytes with a TryFrom trait implementation
* updated readme based on new usage
* updated code based on code review comments
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-2.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-4.local>