From 09824c723644acf28465e580eb33ab8fb67649f3 Mon Sep 17 00:00:00 2001
From: Adil Hafeez
Date: Tue, 30 Jul 2024 16:30:19 -0700
Subject: [PATCH] Update README.md

---
 demos/weather-forecast/README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/demos/weather-forecast/README.md b/demos/weather-forecast/README.md
index 88576b6b..a1db72cf 100644
--- a/demos/weather-forecast/README.md
+++ b/demos/weather-forecast/README.md
@@ -8,12 +8,12 @@ This demo shows how you can use intelligent prompt gateway to provide realtime w
 ```
 1. Create `.env` file and set OpenAI key using env var `OPENAI_API_KEY`
 1. Start services
-```sh
-$ docker compose up
-```
+   ```sh
+   $ docker compose up
+   ```
 1. Navigate to http://localhost:18080/
 1. You can type in queries like "how is the weather in Seattle"
 1. You can also ask follow up questions like "show me sunny days"
-2. To see metrics navigate to "http://localhost:3000/" (use admin/grafana for login)
+1. To see metrics navigate to "http://localhost:3000/" (use admin/grafana for login)
 1. Open up dahsboard named "Intelligent Gateway Overview"
 2. On this dashboard you can see reuqest latency and number of requests