From 21e57ab02833a95a277f536da6fd4dd43580aa4c Mon Sep 17 00:00:00 2001
From: adilhafeez
Date: Sat, 25 Jan 2025 01:14:59 +0000
Subject: [PATCH] deploy: 38f7691163253c90befa441f84ba7ac0e6d4c10e

---
 concepts/llm_provider.html | 23 +++++++++++++++++++++++
 searchindex.js             |  2 +-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/concepts/llm_provider.html b/concepts/llm_provider.html
index 0df37176..562746af 100755
--- a/concepts/llm_provider.html
+++ b/concepts/llm_provider.html
@@ -198,6 +198,28 @@ abstracts the complexities of integrating with different LLM providers, providing
calls, handling retries, managing rate limits, and ensuring seamless integration with cloud-based and on-premise LLMs. Simply configure the details of the LLMs your application will use, and Arch offers a unified interface to make outbound LLM calls.

+
+

Adding a custom LLM Provider

+

We support any OpenAI-compliant LLM, for example Mistral, OpenAI, or Ollama, and offer first-class support for OpenAI and Ollama. You can configure any LLM that communicates over the OpenAI API interface by following the guide below.

+

For example, the following code block shows how to add an Ollama-hosted LLM to the arch_config.yaml file.

+
+- name: local-llama
+  provider_interface: openai
+  model: llama3.2
+  endpoint: host.docker.internal:11434
+

Similarly, the following code block shows how to add the Mistral LLM provider to the arch_config.yaml file.

+
+- name: mistral-ai
+  provider_interface: openai
+  model: ministral-3b-latest
+  endpoint: api.mistral.ai:443
+  protocol: https
+
+
+
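For orientation, provider entries like the ones above typically appear together under a single provider list in arch_config.yaml. The sketch below combines both examples; the `llm_providers` key and surrounding layout are an assumption based on Arch's config conventions, so check them against your version of the gateway.

```yaml
llm_providers:
  - name: local-llama
    provider_interface: openai
    model: llama3.2
    endpoint: host.docker.internal:11434

  - name: mistral-ai
    provider_interface: openai
    model: ministral-3b-latest
    endpoint: api.mistral.ai:443
    protocol: https
```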

Example: Using the OpenAI Python SDK

from openai import OpenAI
@@ -238,6 +260,7 @@ make outbound LLM calls.