# hermesllm
A Rust library for handling LLM (Large Language Model) API requests and responses with unified abstractions across multiple providers.
## Features
- Unified request/response types with provider-specific parsing
- Support for both streaming and non-streaming responses
- Type-safe provider identification
- OpenAI-compatible API structure with extensible provider support
## Supported Providers
- OpenAI
- Mistral
- Groq
- DeepSeek
- Gemini
- Claude
- GitHub
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
hermesllm = { path = "../hermesllm" } # or appropriate path in workspace
```
## Usage
### Basic Request Parsing
```rust
use hermesllm::providers::{ProviderRequestType, ProviderRequest, ProviderId};
// Parse request from JSON bytes
let request_bytes = r#"{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello!"}]}"#;
// Parse with provider context
let request = ProviderRequestType::try_from((request_bytes.as_bytes(), &ProviderId::OpenAI))?;
// Access request properties
println!("Model: {}", request.model());
println!("User message: {:?}", request.get_recent_user_message());
println!("Is streaming: {}", request.is_streaming());
```
### Working with Responses
```rust
use hermesllm::providers::{ProviderResponseType, ProviderResponse};
// Parse response from provider
let response_bytes = /* JSON response from LLM */;
let response = ProviderResponseType::try_from((response_bytes, ProviderId::OpenAI))?;
// Extract token usage
if let Some((prompt, completion, total)) = response.extract_usage_counts() {
println!("Tokens used: {}/{}/{}", prompt, completion, total);
}
```
### Handling Streaming Responses
```rust
use hermesllm::providers::{ProviderStreamResponseIter, ProviderStreamResponse};
// Create streaming iterator from SSE data
let sse_data = /* Server-Sent Events data */;
let mut stream = ProviderStreamResponseIter::try_from((sse_data, &ProviderId::OpenAI))?;
// Process streaming chunks
for chunk_result in stream {
    match chunk_result {
        Ok(chunk) => {
            if let Some(content) = chunk.content_delta() {
                print!("{}", content);
            }
            if chunk.is_final() {
                break;
            }
        }
        Err(e) => eprintln!("Stream error: {}", e),
    }
}
```
### Provider Compatibility
```rust
use hermesllm::providers::{ProviderId, has_compatible_api, supported_apis};
// Check API compatibility
let provider = ProviderId::Groq;
if has_compatible_api(&provider, "/v1/chat/completions") {
println!("Provider supports chat completions");
}
// List supported APIs
let apis = supported_apis(&provider);
println!("Supported APIs: {:?}", apis);
```
## Core Types
### Provider Types
- `ProviderId` - Enum identifying supported providers (OpenAI, Mistral, Groq, etc.)
- `ProviderRequestType` - Enum wrapping provider-specific request types
- `ProviderResponseType` - Enum wrapping provider-specific response types
- `ProviderStreamResponseIter` - Iterator for streaming response chunks
### Traits
- `ProviderRequest` - Common interface for all request types
- `ProviderResponse` - Common interface for all response types
- `ProviderStreamResponse` - Interface for streaming response chunks
- `TokenUsage` - Interface for token usage information
### OpenAI API Types
- `ChatCompletionsRequest` - Chat completion request structure
- `ChatCompletionsResponse` - Chat completion response structure
- `Message`, `Role`, `MessageContent` - Message building blocks
## Architecture
The library uses a type-safe enum-based approach that:
- **Provides Type Safety**: All provider operations are checked at compile time
- **Enables Runtime Provider Selection**: Provider can be determined from request headers or config
- **Maintains Clean Abstractions**: Common traits hide provider-specific details
- **Supports Extensibility**: New providers can be added by extending the enums
All requests are parsed into a common `ProviderRequestType` enum which implements the `ProviderRequest` trait, allowing uniform access to request properties regardless of the underlying provider format.
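The enum-dispatch pattern described above can be sketched in a few lines of self-contained Rust. Note that the type and trait definitions below are simplified stand-ins to illustrate the pattern, not the crate's actual implementation:

```rust
// Illustrative sketch of enum dispatch over provider-specific request types.
// These names are hypothetical stand-ins, not hermesllm's real definitions.

trait ProviderRequest {
    fn model(&self) -> &str;
}

struct OpenAiRequest { model: String }
struct ClaudeRequest { model: String }

impl ProviderRequest for OpenAiRequest {
    fn model(&self) -> &str { &self.model }
}
impl ProviderRequest for ClaudeRequest {
    fn model(&self) -> &str { &self.model }
}

// One enum wraps every provider-specific request type...
enum RequestType {
    OpenAi(OpenAiRequest),
    Claude(ClaudeRequest),
}

// ...and forwards the trait methods, so callers never match on the provider.
impl ProviderRequest for RequestType {
    fn model(&self) -> &str {
        match self {
            RequestType::OpenAi(r) => r.model(),
            RequestType::Claude(r) => r.model(),
        }
    }
}

fn main() {
    let req = RequestType::OpenAi(OpenAiRequest { model: "gpt-4".into() });
    println!("{}", req.model()); // prints "gpt-4"
}
```

Because the enum itself implements the common trait, adding a provider means adding one variant and one match arm; downstream code that only uses the trait is unaffected.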
## Examples
See the `src/lib.rs` tests for complete working examples of:
- Parsing requests with provider context
- Handling streaming responses
- Working with token usage information
## License
This project is licensed under the MIT License.