mirror of https://github.com/katanemo/plano.git
synced 2026-05-15 11:02:39 +02:00
deploy: 5e65572573
This commit is contained in:
parent d00a25ed3f
commit 0497b17898
7 changed files with 16 additions and 28 deletions
@@ -203,39 +203,27 @@ LLMs. Simply configure the details of the LLMs your application will use, and Arch
 make outbound LLM calls.
<section id="adding-custom-llm-provider">
|
||||
<h2>Adding custom LLM Provider<a @click.prevent="window.navigator.clipboard.writeText($el.href); $el.setAttribute('data-tooltip', 'Copied!'); setTimeout(() => $el.setAttribute('data-tooltip', 'Copy link to this element'), 2000)" aria-label="Copy link to this element" class="headerlink" data-tooltip="Copy link to this element" href="#adding-custom-llm-provider" x-intersect.margin.0%.0%.-70%.0%="activeSection = '#adding-custom-llm-provider'"><svg height="1em" viewbox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><path d="M3.9 12c0-1.71 1.39-3.1 3.1-3.1h4V7H7c-2.76 0-5 2.24-5 5s2.24 5 5 5h4v-1.9H7c-1.71 0-3.1-1.39-3.1-3.1zM8 13h8v-2H8v2zm9-6h-4v1.9h4c1.71 0 3.1 1.39 3.1 3.1s-1.39 3.1-3.1 3.1h-4V17h4c2.76 0 5-2.24 5-5s-2.24-5-5-5z"></path></svg></a></h2>
|
||||
<p>We support any OpenAI compliant LLM for example mistral, openai, ollama etc. We offer first class support for openai and ollama. You can easily configure an LLM that communicates over the OpenAI API interface, by following the below guide.</p>
|
||||
<p>We support any OpenAI compliant LLM for example mistral, openai, ollama etc. We also offer first class support for OpenAI, Anthropic, DeepSeek, Mistral, Groq, and Ollama based models.
|
||||
You can easily configure an LLM that communicates over the OpenAI API interface, by following the below guide.</p>
|
||||
 For example, the following code block shows how to add an Ollama-supported LLM in the arch_config.yaml file.

 - name: local-llama
   provider_interface: openai
   model: llama3.2
   endpoint: host.docker.internal:11434
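
To confirm the Ollama side of this config before starting Arch, here is a minimal sketch, assuming Ollama runs at its default port 11434 and using its standard /api/tags model listing:

# Sanity check: is Ollama reachable, and is llama3.2 pulled?
# From inside the Arch container this same service is host.docker.internal:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # adjust if Ollama runs elsewhere

with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("Available models:", models)
if not any(name.startswith("llama3.2") for name in models):
    print("Model missing; run: ollama pull llama3.2")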

+For example, the following code block shows how to add the Mistral LLM provider in the arch_config.yaml file.
+
 - name: mistral-ai
   provider_interface: openai
   model: ministral-3b-latest
   endpoint: api.mistral.ai:443
   protocol: https

-And in the following code block shows you how to add mistral llm provider in the arch_config.yaml file.
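
With both providers configured, one client can target either model through the same Arch endpoint. Here is a minimal sketch, assuming Arch routes on the model field and listens on 127.0.0.1:2000 as in the SDK example below; the api_key value is only a placeholder the SDK requires:

from openai import OpenAI

# One gateway, two upstream providers; the gateway picks the provider from `model`.
client = OpenAI(base_url="http://127.0.0.1:2000/", api_key="unused")  # placeholder key

for model in ("llama3.2", "ministral-3b-latest"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(f"{model}: {response.choices[0].message.content}")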

 Example: Using the OpenAI Python SDK

<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="kn">from</span><span class="w"> </span><span class="nn">openai</span><span class="w"> </span><span class="kn">import</span> <span class="n">OpenAI</span>
|
||||
</span><span id="line-2">
|
||||
</span><span id="line-3"><span class="c1"># Initialize the Arch client</span>
|
||||
</span><span id="line-4"><span class="n">client</span> <span class="o">=</span> <span class="n">OpenAI</span><span class="p">(</span><span class="n">base_url</span><span class="o">=</span><span class="s2">"http://127.0.0.12000/"</span><span class="p">)</span>
|
||||
</span><span id="line-3"> <span class="c1"># Initialize the Arch client</span>
|
||||
</span><span id="line-4"> <span class="n">client</span> <span class="o">=</span> <span class="n">OpenAI</span><span class="p">(</span><span class="n">base_url</span><span class="o">=</span><span class="s2">"http://127.0.0.1:2000/"</span><span class="p">)</span>
|
||||
</span><span id="line-5">
|
||||
</span><span id="line-6"><span class="c1"># Define your LLM provider and prompt</span>
|
||||
</span><span id="line-7"><span class="n">llm_provider</span> <span class="o">=</span> <span class="s2">"openai"</span>
|
||||
</span><span id="line-8"><span class="n">prompt</span> <span class="o">=</span> <span class="s2">"What is the capital of France?"</span>
|
||||
</span><span id="line-6"> <span class="c1"># Define your model and messages</span>
|
||||
</span><span id="line-7"> <span class="n">model</span> <span class="o">=</span> <span class="s2">"llama3.2"</span>
|
||||
</span><span id="line-8"> <span class="n">messages</span> <span class="o">=</span> <span class="p">[{</span><span class="s2">"role"</span><span class="p">:</span> <span class="s2">"user"</span><span class="p">,</span> <span class="s2">"content"</span><span class="p">:</span> <span class="s2">"What is the capital of France?"</span><span class="p">}]</span>
|
||||
</span><span id="line-9">
|
||||
</span><span id="line-10"><span class="c1"># Send the prompt to the LLM through Arch</span>
|
||||
</span><span id="line-11"><span class="n">response</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="n">completions</span><span class="o">.</span><span class="n">create</span><span class="p">(</span><span class="n">llm_provider</span><span class="o">=</span><span class="n">llm_provider</span><span class="p">,</span> <span class="n">prompt</span><span class="o">=</span><span class="n">prompt</span><span class="p">)</span>
|
||||
</span><span id="line-10"> <span class="c1"># Send the messages to the LLM through Arch</span>
|
||||
</span><span id="line-11"> <span class="n">response</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="n">chat</span><span class="o">.</span><span class="n">completions</span><span class="o">.</span><span class="n">create</span><span class="p">(</span><span class="n">model</span><span class="o">=</span><span class="n">model</span><span class="p">,</span> <span class="n">messages</span><span class="o">=</span><span class="n">messages</span><span class="p">)</span>
|
||||
</span><span id="line-12">
|
||||
</span><span id="line-13"><span class="c1"># Print the response</span>
|
||||
</span><span id="line-14"><span class="nb">print</span><span class="p">(</span><span class="s2">"LLM Response:"</span><span class="p">,</span> <span class="n">response</span><span class="p">)</span>
|
||||
</span><span id="line-13"> <span class="c1"># Print the response</span>
|
||||
</span><span id="line-14"> <span class="nb">print</span><span class="p">(</span><span class="s2">"LLM Response:"</span><span class="p">,</span> <span class="n">response</span><span class="o">.</span><span class="n">choices</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">message</span><span class="o">.</span><span class="n">content</span><span class="p">)</span>
|
||||
</span></code></pre></div>
|
||||
</div>
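
The same client can also stream tokens. This is the stock OpenAI SDK streaming interface; it assumes Arch forwards streamed chunks like any OpenAI-compatible endpoint:

from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:2000/")  # as in the example above

stream = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:  # some backends send a final usage-only chunk
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()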