Update llm_provider.rst (#543)

This commit is contained in:
Musa 2025-07-27 09:26:12 -07:00 committed by GitHub
parent ac3fb4cb5b
commit d215724864


Adding custom LLM Provider
--------------------------
We support any OpenAI-compliant LLM (for example Mistral, OpenAI, or Ollama) and offer first-class support for OpenAI, Anthropic, DeepSeek, Mistral, Groq, and Ollama-based models.
You can easily configure an LLM that communicates over the OpenAI API interface by following the guide below.
For example, the following code block shows how to add an Ollama-supported LLM in the ``arch_config.yaml`` file.

.. code-block:: yaml

    llm_providers:
      - model: some_custom_llm_provider/llama3.2
        provider_interface: openai
        base_url: http://host.docker.internal:11434

The following code block shows how to add a Mistral LLM provider in the ``arch_config.yaml`` file.

.. code-block:: yaml

    llm_providers:
      - name: mistral/ministral-3b-latest
        access_key: $MISTRAL_API_KEY

Example: Using the OpenAI Python SDK
------------------------------------

.. code-block:: python

    from openai import OpenAI

    # Initialize the Arch client
    client = OpenAI(base_url="http://127.0.0.1:2000/")
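
Building on the client above, a minimal sketch of sending a chat completion through the gateway. It assumes Arch is listening locally on port 2000 and that the model name matches a provider declared in ``arch_config.yaml``; the ``api_key`` value is a placeholder, on the assumption that the gateway supplies the real provider credential configured earlier.

.. code-block:: python

    from openai import OpenAI

    # Point the SDK at the local Arch gateway instead of api.openai.com.
    # "unused" is a placeholder: the gateway is assumed to inject the
    # provider key from arch_config.yaml.
    client = OpenAI(base_url="http://127.0.0.1:2000/", api_key="unused")

    def ask(client: OpenAI, prompt: str) -> str:
        """Send a single-turn chat completion through the gateway."""
        # "mistral/ministral-3b-latest" mirrors the provider name in the
        # config above; use whichever model your arch_config.yaml declares.
        response = client.chat.completions.create(
            model="mistral/ministral-3b-latest",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # print(ask(client, "Hello!"))  # requires Arch running locally

Because the gateway speaks the OpenAI API, no provider-specific SDK is needed; switching providers is a config change, not a code change.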