Mirror of https://github.com/katanemo/plano.git, synced 2026-04-30 03:16:28 +02:00
deploy: a7feb6bffb
This commit is contained in:
parent 81b5ec3aaf
commit dcde1e2f20
1 changed file with 7 additions and 10 deletions
@@ -201,16 +201,13 @@ make outbound LLM calls.</p>
<section id="adding-custom-llm-provider">
<h2>Adding custom LLM Provider<a @click.prevent="window.navigator.clipboard.writeText($el.href); $el.setAttribute('data-tooltip', 'Copied!'); setTimeout(() => $el.setAttribute('data-tooltip', 'Copy link to this element'), 2000)" aria-label="Copy link to this element" class="headerlink" data-tooltip="Copy link to this element" href="#adding-custom-llm-provider" x-intersect.margin.0%.0%.-70%.0%="activeSection = '#adding-custom-llm-provider'"><svg height="1em" viewbox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><path d="M3.9 12c0-1.71 1.39-3.1 3.1-3.1h4V7H7c-2.76 0-5 2.24-5 5s2.24 5 5 5h4v-1.9H7c-1.71 0-3.1-1.39-3.1-3.1zM8 13h8v-2H8v2zm9-6h-4v1.9h4c1.71 0 3.1 1.39 3.1 3.1s-1.39 3.1-3.1 3.1h-4V17h4c2.76 0 5-2.24 5-5s-2.24-5-5-5z"></path></svg></a></h2>
<p>We support any OpenAI-compliant LLM, for example Mistral, OpenAI, or Ollama, and we offer first-class support for OpenAI and Ollama. You can easily configure an LLM that communicates over the OpenAI API interface by following the guide below.</p>
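<p>For context, each provider is one entry in a list in <cite>arch_config.yaml</cite>. The sketch below shows one way the surrounding file might look; the top-level <cite>llm_providers</cite> key and the <cite>default</cite> flag are assumptions based on the project's example configs, not part of the snippets on this page.</p>
<div class="highlight-yaml notranslate"><div class="highlight"><pre><code># minimal sketch of arch_config.yaml; llm_providers and default are assumed names
llm_providers:
  - name: local-llama            # label you pick for this provider
    provider_interface: openai   # the provider speaks the OpenAI-compatible API
    model: llama3.2              # model name as the backend knows it
    endpoint: host.docker.internal:11434   # where the OpenAI-compatible server listens
    default: true                # assumed flag: use this provider when none is named
</code></pre></div></div>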
<p>For example, the following code block shows you how to add an Ollama-supported LLM in the <cite>arch_config.yaml</cite> file.</p>
<div class="highlight-yaml notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">local-llama</span>
</span><span id="line-2"><span class="w"> </span><span class="nt">provider_interface</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">openai</span>
</span><span id="line-3"><span class="w"> </span><span class="nt">model</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">llama3.2</span>
</span><span id="line-4"><span class="w"> </span><span class="nt">endpoint</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">host.docker.internal:11434</span>
</span></code></pre></div>
</div>
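<p>A note on <cite>endpoint</cite>: <cite>host.docker.internal:11434</cite> reaches an Ollama server running on the Docker host from inside a container, and 11434 is Ollama's default port. If the gateway can reach Ollama directly, a plain <cite>host:port</cite> should work as well; the address below is purely illustrative.</p>
<div class="highlight-yaml notranslate"><div class="highlight"><pre><code># hypothetical variant: Ollama reachable over the local network (address is illustrative)
- name: local-llama
  provider_interface: openai
  model: llama3.2
  endpoint: 192.168.1.50:11434   # any host:port where Ollama's OpenAI-compatible API listens
</code></pre></div></div>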
<p>For example, the following code block shows you how to add the Mistral LLM provider in the <cite>arch_config.yaml</cite> file.</p>
<div class="highlight-yaml notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">mistral-ai</span>
</span><span id="line-2"><span class="w"> </span><span class="nt">provider_interface</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">openai</span>
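<p>A hosted provider such as Mistral also needs a model name and credentials. The continuation below is a rough sketch only; the <cite>access_key</cite> field and the model name are assumptions for illustration and are not taken from this commit.</p>
<div class="highlight-yaml notranslate"><div class="highlight"><pre><code># hypothetical continuation of the mistral-ai entry; field names below are assumed
- name: mistral-ai
  provider_interface: openai
  model: mistral-large-latest    # illustrative model name
  access_key: $MISTRAL_API_KEY   # assumed field; keep the key in an environment variable
</code></pre></div></div>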