# hermesllm

A Rust library for handling LLM (Large Language Model) API requests and responses, with unified abstractions across multiple providers.

## Features
- Unified request/response types with provider-specific parsing
- Support for both streaming and non-streaming responses
- Type-safe provider identification
- OpenAI-compatible API structure with extensible provider support
## Supported Providers
- OpenAI
- Mistral
- Groq
- Deepseek
- Gemini
- Claude
- GitHub
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
hermesllm = { path = "../hermesllm" } # or the appropriate path in your workspace
```
## Usage

### Basic Request Parsing

```rust
use hermesllm::providers::{ProviderRequestType, ProviderRequest, ProviderId};

// Parse a request from JSON bytes
let request_bytes = r#"{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello!"}]}"#;

// Parse with provider context
let request = ProviderRequestType::try_from((request_bytes.as_bytes(), &ProviderId::OpenAI))?;

// Access request properties
println!("Model: {}", request.model());
println!("User message: {:?}", request.get_recent_user_message());
println!("Is streaming: {}", request.is_streaming());
```
### Working with Responses

```rust
use hermesllm::providers::{ProviderResponseType, ProviderResponse, ProviderId};

// Parse a response from the provider
let response_bytes = /* JSON response bytes from the LLM */;
let response = ProviderResponseType::try_from((response_bytes, ProviderId::OpenAI))?;

// Extract token usage
if let Some((prompt, completion, total)) = response.extract_usage_counts() {
    println!("Tokens used: {}/{}/{}", prompt, completion, total);
}
```
### Handling Streaming Responses

```rust
use hermesllm::providers::{ProviderStreamResponseIter, ProviderStreamResponse, ProviderId};

// Create a streaming iterator from SSE data
let sse_data = /* Server-Sent Events bytes from the provider */;
let stream = ProviderStreamResponseIter::try_from((sse_data, &ProviderId::OpenAI))?;

// Process streaming chunks
for chunk_result in stream {
    match chunk_result {
        Ok(chunk) => {
            if let Some(content) = chunk.content_delta() {
                print!("{}", content);
            }
            if chunk.is_final() {
                break;
            }
        }
        Err(e) => eprintln!("Stream error: {}", e),
    }
}
```
### Provider Compatibility

```rust
use hermesllm::providers::{ProviderId, has_compatible_api, supported_apis};

// Check API compatibility
let provider = ProviderId::Groq;
if has_compatible_api(&provider, "/v1/chat/completions") {
    println!("Provider supports chat completions");
}

// List supported APIs
let apis = supported_apis(&provider);
println!("Supported APIs: {:?}", apis);
```
## Core Types

### Provider Types

- `ProviderId` - Enum identifying supported providers (OpenAI, Mistral, Groq, etc.)
- `ProviderRequestType` - Enum wrapping provider-specific request types
- `ProviderResponseType` - Enum wrapping provider-specific response types
- `ProviderStreamResponseIter` - Iterator over streaming response chunks

### Traits

- `ProviderRequest` - Common interface for all request types
- `ProviderResponse` - Common interface for all response types
- `ProviderStreamResponse` - Interface for streaming response chunks
- `TokenUsage` - Interface for token usage information

### OpenAI API Types

- `ChatCompletionsRequest` - Chat completion request structure
- `ChatCompletionsResponse` - Chat completion response structure
- `Message`, `Role`, `MessageContent` - Message building blocks
## Architecture

The library uses a type-safe, enum-based approach that:

- **Provides type safety**: all provider operations are checked at compile time
- **Enables runtime provider selection**: the provider can be determined from request headers or configuration
- **Maintains clean abstractions**: common traits hide provider-specific details
- **Supports extensibility**: new providers can be added by extending the enums

All requests are parsed into a common `ProviderRequestType` enum that implements the `ProviderRequest` trait, allowing uniform access to request properties regardless of the underlying provider format (see the sketch below).
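To make the pattern concrete, here is a minimal, self-contained sketch of enum-based dispatch over a shared trait. The names here (`Request`, `OpenAiChat`, `AnthropicMessages`, `RequestType`) are hypothetical stand-ins for illustration, not hermesllm's actual definitions:

```rust
// Hypothetical stand-ins for illustration; hermesllm's real types differ.
trait Request {
    fn model(&self) -> &str;
}

struct OpenAiChat { model: String }
struct AnthropicMessages { model: String }

impl Request for OpenAiChat {
    fn model(&self) -> &str { &self.model }
}
impl Request for AnthropicMessages {
    fn model(&self) -> &str { &self.model }
}

// One enum wraps every provider-specific request type...
enum RequestType {
    OpenAiChat(OpenAiChat),
    AnthropicMessages(AnthropicMessages),
}

// ...and implements the shared trait by matching on the variant, so callers
// never need to know which provider format the request arrived in.
impl Request for RequestType {
    fn model(&self) -> &str {
        match self {
            RequestType::OpenAiChat(r) => r.model(),
            RequestType::AnthropicMessages(r) => r.model(),
        }
    }
}

fn main() {
    let requests = vec![
        RequestType::OpenAiChat(OpenAiChat { model: "gpt-4".into() }),
        RequestType::AnthropicMessages(AnthropicMessages { model: "claude-3".into() }),
    ];
    for req in &requests {
        println!("Model: {}", req.model()); // uniform access across providers
    }
}
```

Because the enum is closed, the compiler forces every match to handle every provider variant, which is what makes adding a new provider a compile-checked change rather than a runtime surprise.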
## Examples

See the tests in `src/lib.rs` for complete working examples of:

- Parsing requests with provider context
- Handling streaming responses
- Working with token usage information
## License

This project is licensed under the MIT License.