diff --git a/concepts/llm_provider.html b/concepts/llm_provider.html
index 96596452..ad6ddd61 100755
--- a/concepts/llm_provider.html
+++ b/concepts/llm_provider.html
@@ -201,16 +201,13 @@ make outbound LLM calls.
We support any OpenAI-compliant LLM, for example mistral, openai, ollama, etc. We offer first-class support for openai and ollama. You can easily configure an LLM that communicates over the OpenAI API interface by following the guide below.
-For example following code block shows you how to add an ollama-supported LLM in the arch_config.yaml file.
-.. code-block:: yaml
-
-  - name: local-llama
-    provider_interface: openai
-    model: llama3.2
-    endpoint: host.docker.internal:11434
-
For example, the following code block shows you how to add an ollama-supported LLM in the arch_config.yaml file.
+- name: local-llama
+ provider_interface: openai
+ model: llama3.2
+ endpoint: host.docker.internal:11434
+For example, the following code block shows you how to add a mistral LLM provider in the arch_config.yaml file.
- name: mistral-ai
provider_interface: openai
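
Taken together, the two provider entries in this diff would sit side by side in the arch_config.yaml file. The block below is a minimal sketch of that combined configuration, not something taken from the diff itself: the top-level llm_providers key, the mistral-large-latest model name, and the access_key field are assumptions added for illustration, while the name, provider_interface, model, and endpoint keys and the local-llama / mistral-ai values come from the examples above.

    # Hypothetical combined provider list for arch_config.yaml.
    # llm_providers, mistral-large-latest, and access_key are assumptions;
    # the remaining keys and values are copied from the diff above.
    llm_providers:
      - name: local-llama
        provider_interface: openai            # ollama served over the OpenAI-compatible API
        model: llama3.2
        endpoint: host.docker.internal:11434  # local ollama reachable from the gateway container
      - name: mistral-ai
        provider_interface: openai            # hosted mistral, also addressed via the OpenAI wire format
        model: mistral-large-latest           # assumed model name
        access_key: $MISTRAL_API_KEY          # assumed credential field for a hosted provider

Because both entries declare provider_interface: openai, the gateway talks to them over the same OpenAI-compatible interface described above; only the model, endpoint, and credential details differ per provider.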