updating the implementation of /v1/chat/completions to use the generi… (#548)

* updating the implementation of /v1/chat/completions to use the generic provider interfaces

* saving changes, although we will need a small refactor after this as well

* more refactoring changes, getting close

* more refactoring changes to avoid unnecessary redirection and duplication

* more clean up

* more refactoring

* more refactoring to clean code and make stream_context.rs work

* removing unnecessary trait implementations

* some more clean-up

* fixed bugs

* fixing test cases and making sure all references to the ChatCompletions* objects point to the new types

* refactored changes to support enum dispatch
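Enum dispatch here means routing provider-specific behavior through a `match` on an enum rather than through trait objects. A minimal hypothetical sketch (the `ProviderRequest` enum and its variants are illustrative names, not the actual hermesllm types):

```rust
// Illustrative enum dispatch: each provider request type is a variant,
// and shared behavior is a match on the variant rather than a dyn Trait.
enum ProviderRequest {
    ChatCompletions { model: String },
    Embeddings { model: String },
}

impl ProviderRequest {
    // Static dispatch via match: no vtable lookup, and the compiler
    // checks that every variant is handled.
    fn model(&self) -> &str {
        match self {
            ProviderRequest::ChatCompletions { model } => model,
            ProviderRequest::Embeddings { model } => model,
        }
    }
}
```

This trades the open-ended extensibility of trait objects for exhaustiveness checking and zero-cost dispatch, which fits a closed set of known providers.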

* replaced the try_streaming_from_bytes dependency with a TryFrom trait implementation
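Moving a bespoke `try_streaming_from_bytes` helper onto the standard `TryFrom` trait lets callers use `try_into()` and the usual `?` error plumbing. A hedged sketch of the shape of such a conversion (the struct, field, and validation logic are placeholders, not the real parser, which would deserialize the request body):

```rust
use std::convert::TryFrom;

// Placeholder request type; the real ChatCompletionsRequest has more fields.
#[derive(Debug)]
struct ChatCompletionsRequest {
    body: String,
}

impl TryFrom<&[u8]> for ChatCompletionsRequest {
    type Error = std::str::Utf8Error;

    // Fallible conversion from raw bytes; real code would parse JSON here,
    // this sketch only validates UTF-8 to keep the example self-contained.
    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        let body = std::str::from_utf8(bytes)?.to_string();
        Ok(ChatCompletionsRequest { body })
    }
}
```

Usage then becomes `ChatCompletionsRequest::try_from(buffer.as_slice())?`, composing with any error type that implements `From<Utf8Error>`.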

* updated readme based on new usage

* updated code based on code review comments

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-2.local>
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-4.local>
This commit is contained in:
Salman Paracha 2025-08-20 12:55:29 -07:00 committed by GitHub
parent 1fdde8181a
commit 89ab51697a
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
22 changed files with 1044 additions and 972 deletions


@@ -1,5 +1,5 @@
use common::configuration::ModelUsagePreference;
-use hermesllm::providers::openai::types::{ChatCompletionsRequest, Message};
+use hermesllm::apis::openai::{ChatCompletionsRequest, Message};
use thiserror::Error;
#[derive(Debug, Error)]