From dcde1e2f20f26c3465374034a0d214ff06363f27 Mon Sep 17 00:00:00 2001
From: salmanap
Date: Sat, 25 Jan 2025 04:36:27 +0000
Subject: [PATCH] deploy: a7feb6bffb3a5bf8fa363861038e64d8bef96b12

---
 concepts/llm_provider.html | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/concepts/llm_provider.html b/concepts/llm_provider.html
index 96596452..ad6ddd61 100755
--- a/concepts/llm_provider.html
+++ b/concepts/llm_provider.html
@@ -201,16 +201,13 @@ make outbound LLM calls.
 Adding custom LLM Provider
 We support any OpenAI compliant LLM for example mistral, openai, ollama etc. We offer first class support for openai and ollama. You can easily configure an LLM that communicates over the OpenAI API interface, by following the below guide.
-For example following code block shows you how to add an ollama-supported LLM in the arch_config.yaml file. .. code-block:: yaml
-
-
-    -
-  • name: local-llama
-    provider_interface: openai
-    model: llama3.2
-    endpoint: host.docker.internal:11434
-
-  • -
+For example following code block shows you how to add an ollama-supported LLM in the arch_config.yaml file.
+
+- name: local-llama
+  provider_interface: openai
+  model: llama3.2
+  endpoint: host.docker.internal:11434
+
 For example following code block shows you how to add mistral llm provider in the arch_config.yaml file.
 
 - name: mistral-ai
   provider_interface: openai
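
The hunk is truncated before the mistral entry is complete. For orientation only, below is a minimal sketch of what a combined provider section in arch_config.yaml could look like, assuming the llm_providers top-level key used elsewhere in the Arch docs; the mistral model value is an illustrative placeholder and is not taken from this patch.

    llm_providers:
      # Matches the local-llama entry added in the hunk above.
      - name: local-llama
        provider_interface: openai
        model: llama3.2
        endpoint: host.docker.internal:11434

      # The patch cuts off before this entry's model line, so the
      # model name here is an assumed placeholder, not from the patch.
      - name: mistral-ai
        provider_interface: openai
        model: mistral-large-latest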