Mirror of https://github.com/katanemo/plano.git (synced 2026-04-25 16:56:24 +02:00)
This demo shows how you can use Ollama as the upstream LLM.
Before you start the demo, please make sure you have Ollama up and running. You can use the command `ollama run llama3.2` to start the Llama 3.2 (3B) model locally; Ollama serves it on port `11434` by default.
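Once the model is running, you can sanity-check that the Ollama server is reachable before wiring it up as the upstream. A minimal sketch in Python (the endpoint and payload shape follow Ollama's `/api/generate` API; the prompt text is just an illustration):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request for Ollama's /api/generate endpoint on the default port."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3.2", "Say hello in one word.")
# With Ollama running locally, send it with:
#   resp = json.load(urllib.request.urlopen(req))
#   print(resp["response"])
```

If the request succeeds, the model is up and the demo can use `http://localhost:11434` as its upstream endpoint.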