updating the messaging to call ourselves the edge and AI gateway for … (#527)

* updating the messaging to call ourselves the edge and AI gateway for agents

* updating README to tidy up some language

* updating README to tidy up some language

* updating README to tidy up some language

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-329.local>
Salman Paracha 2025-07-12 03:25:09 -07:00 committed by GitHub
parent 14f19f02a7
commit 5e65572573
6 changed files with 25 additions and 33 deletions

@@ -35,28 +35,23 @@ make outbound LLM calls.
Adding a Custom LLM Provider
----------------------------
We support any OpenAI-compliant LLM (for example, Mistral, OpenAI, or Ollama), and we offer first-class support for OpenAI, Anthropic, DeepSeek, Mistral, Groq, and Ollama-based models.
You can easily configure an LLM that communicates over the OpenAI API interface by following the guide below.
For example, the following code block shows how to add an Ollama-supported LLM in the `arch_config.yaml` file.
.. code-block:: yaml

    llm_providers:
      - model: some_custom_llm_provider/llama3.2
        base_url: http://host.docker.internal:11434
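To make the shape of a provider entry concrete, here is a minimal sketch (not Arch's actual config loader) that mirrors the entry above as a Python dict and checks the two fields the example relies on; the `validate_provider` helper is hypothetical and exists only for illustration.

```python
# Hypothetical sketch of the provider entry above as plain Python data.
provider = {
    "model": "some_custom_llm_provider/llama3.2",
    "base_url": "http://host.docker.internal:11434",
}

def validate_provider(entry):
    """Illustrative check: an OpenAI-compatible provider entry
    should name a model and point at an HTTP base URL."""
    assert "model" in entry, "each provider entry needs a model"
    assert entry.get("base_url", "").startswith("http"), "base_url must be a URL"
    return True

validate_provider(provider)
```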
The following code block shows how to add a Mistral LLM provider in the `arch_config.yaml` file.
.. code-block:: yaml

    llm_providers:
      - name: mistral/ministral-3b-latest
        access_key: $MISTRAL_API_KEY
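The `$MISTRAL_API_KEY` reference suggests the key is read from the environment rather than hard-coded. As a hedged illustration of that substitution (using only Python's standard library, not Arch's own resolver), the following shows how such a reference resolves; the placeholder key value is invented for the example.

```python
import os
from string import Template

# Placeholder key for illustration only -- never commit a real key.
os.environ["MISTRAL_API_KEY"] = "sk-test-123"

# Resolve the $MISTRAL_API_KEY reference from the environment,
# the way a config loader typically would.
access_key = Template("$MISTRAL_API_KEY").substitute(os.environ)
print(access_key)
```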
Example: Using the OpenAI Python SDK
------------------------------------
@@ -65,15 +60,15 @@ Example: Using the OpenAI Python SDK
from openai import OpenAI

# Initialize the Arch client
client = OpenAI(base_url="http://127.0.0.1:2000/")

# Define your model and messages
model = "llama3.2"
messages = [{"role": "user", "content": "What is the capital of France?"}]

# Send the messages to the LLM through Arch
response = client.chat.completions.create(model=model, messages=messages)

# Print the response
print("LLM Response:", response.choices[0].message.content)