updated all demo READMes and minor doc changes (#154)

* updated all demo READMes and minor doc changes

* minor typo fixes

* updated main Readme

* fixed README and docs

* fixed README and docs

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Salman Paracha 2024-10-08 23:58:55 -07:00 committed by GitHub
parent b63a01fe82
commit 42d4a28e13
22 changed files with 324 additions and 1455 deletions


@@ -108,7 +108,7 @@ traffic, apply rate limits, and utilize a large set of traffic management capabi
.. Attention::
When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the
``llm_providers`` configuration section in the ``prompt_config.yml`` file. Arch binds itself to a local address such as
-127.0.0.1:51001/v1.
+127.0.0.1:12000/v1.
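The ``llm_providers`` section that drives this egress listener might look roughly like the sketch below. The field names and provider entries here are illustrative assumptions, not Arch's exact schema:

```yaml
# Hypothetical prompt_config.yml fragment -- field names are assumptions.
llm_providers:
  - name: openai
    access_key: $OPENAI_API_KEY   # key injected by Arch on egress calls
    model: gpt-4o-mini
    default: true
```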
Example: Using OpenAI Client with Arch as an Egress Gateway
@@ -119,7 +119,7 @@ Example: Using OpenAI Client with Arch as an Egress Gateway
import openai
# Set the OpenAI API base URL to the Arch gateway endpoint
-openai.api_base = "http://127.0.0.1:51001/v1"
+openai.api_base = "http://127.0.0.1:12000/v1"
# No need to set openai.api_key since it's configured in Arch's gateway
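This hunk repoints the client snippet at Arch's egress port. As a minimal, dependency-free sketch of the same idea (the helper function and model name here are illustrative, not part of Arch or the docs), an OpenAI-style request routed through the gateway could be built like this:

```python
import json
from urllib import request

# Assumed default egress listener from the docs above.
ARCH_EGRESS_BASE = "http://127.0.0.1:12000/v1"

def build_chat_request(model, messages):
    """Build (but do not send) an OpenAI-style chat request routed via Arch."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        url=f"{ARCH_EGRESS_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", [{"role": "user", "content": "hi"}])
print(req.full_url)  # http://127.0.0.1:12000/v1/chat/completions
```

No API key is attached client-side, matching the snippet's comment: Arch is expected to inject provider credentials on the upstream leg.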


@@ -21,7 +21,7 @@ before forwarding them to your application server endpoints. Arch enables you to
.. Note::
When you start Arch, you specify a listener address/port that you want to bind downstream. But Arch also uses a predefined port
-that you can use (``127.0.0.1:10000``) to proxy egress calls originating from your application to LLMs (API-based or hosted).
+that you can use (``127.0.0.1:12000``) to proxy egress calls originating from your application to LLMs (API-based or hosted).
For more details, check out :ref:`LLM provider <llm_provider>`.
**Instance**: An instance of the Arch gateway. When you start Arch it creates at most two processes. One to handle Layer 7