
This demo shows how you can use Ollama as an upstream LLM.

Before starting the demo, please make sure Ollama is up and running. You can use the command `ollama run llama3.2` to start the Llama 3.2 (3B) model locally on port 11434.
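As a quick sanity check before running the demo, you can hit Ollama's HTTP API directly. A minimal sketch, assuming Ollama is listening on its default port 11434 and the `llama3.2` model has been pulled:

```shell
# List the models Ollama currently has available
curl -s http://localhost:11434/api/tags

# Send a one-off, non-streaming test prompt to llama3.2
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello",
  "stream": false
}'
```

If both calls return JSON rather than a connection error, Ollama is ready to serve as the upstream LLM for the demo.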