
Function calling

This demo shows how you can use an intelligent prompt gateway as a network copilot that can surface correlations between packet loss and device reboots, link downs, or maintenance events. The demo assumes you are running ollama natively. If you want to run ollama inside Docker, update the ollama endpoint in the docker-compose file.

Starting the demo

  1. Create a .env file and set your OpenAI key via the OPENAI_API_KEY environment variable
  2. Start the services:
    docker compose up
    
  3. Download the Bolt-FC model. This demo assumes you have downloaded Bolt-Function-Calling-1B:Q4_K_M to a local folder.
  4. If you are running ollama natively, start it with:
    ollama serve
    
  5. Create the model in ollama from the provided model file:
    ollama create Bolt-Function-Calling-1B:Q4_K_M -f Bolt-FC-1B-Q4_K_M.model_file
    
  6. Navigate to http://localhost:18080/
  7. You can type in queries like "show me any packet drops due to interface failure in the past 3 days"
    • You can also ask follow-up questions like "show me just the ones with maximum 200 in errors"
  8. To see metrics, navigate to http://localhost:3000/ (log in with admin/grafana)
    • Open the dashboard named "Intelligent Gateway Overview"
    • On this dashboard you can see request latency and the number of requests
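For step 1, the .env file is read by docker compose and only needs the one variable. A minimal sketch (the key value below is a placeholder, not a real key):

```
# .env — consumed by docker compose; replace with your actual key
OPENAI_API_KEY=sk-your-key-here
```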
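Beyond the browser UI in step 6, you can also send queries to the gateway programmatically. The sketch below assumes archgw exposes an OpenAI-compatible chat completions endpoint on port 18080 — the endpoint path and payload shape are assumptions, not confirmed by this README:

```python
import json
import urllib.request


def build_chat_request(prompt, base_url="http://localhost:18080"):
    """Build an OpenAI-style chat completion request for the gateway.

    The /v1/chat/completions path and the payload shape are assumptions;
    check your archgw configuration for the actual interface.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


# Example usage (requires the demo stack to be running):
# resp = urllib.request.urlopen(build_chat_request(
#     "show me any packet drops due to interface failure in the past 3 days"))
# print(json.load(resp))
```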