This demo shows how to use Ollama as the upstream LLM.

Before starting the demo, make sure Ollama is up and running. You can use the command `ollama run llama3.2` to start the Llama 3.2 (3B) model locally on port 11434.
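The prerequisite above can be checked from the command line. This is a sketch assuming a default local Ollama install; the `/api/tags` endpoint is Ollama's standard API for listing locally available models:

```shell
# Start the Llama 3.2 (3B) model; Ollama listens on port 11434 by default
ollama run llama3.2

# In another terminal, verify the Ollama API is reachable
# and that the model has been pulled
curl http://localhost:11434/api/tags
```

If the `curl` command returns a JSON list that includes `llama3.2`, Ollama is ready and you can proceed with the demo.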