diff --git a/docs/source/_config/getting-started.yml b/docs/source/_config/getting-started.yml
index e058db5f..06f7b98c 100644
--- a/docs/source/_config/getting-started.yml
+++ b/docs/source/_config/getting-started.yml
@@ -1,5 +1,5 @@
 version: "0.1-beta"
-listen:
+listener:
   address: 127.0.0.1 | 0.0.0.0
   port_value: 8080 #If you configure port 443, you'll need to update the listener with tls_certificates
 messages: tuple | hugging-face-messages-api
diff --git a/docs/source/_config/rag-prompt-targets.yml b/docs/source/_config/rag-prompt-targets.yml
index 2bb3a299..827b87bb 100644
--- a/docs/source/_config/rag-prompt-targets.yml
+++ b/docs/source/_config/rag-prompt-targets.yml
@@ -1,5 +1,5 @@
 version: "0.1-beta"
-listen:
+listener:
   address: 127.0.0.1 | 0.0.0.0
   port_value: 8080 #If you configure port 443, you'll need to update the listener with tls_certificates
diff --git a/docs/source/_static/img/arch-nav-logo.png b/docs/source/_static/img/arch-nav-logo.png
new file mode 100644
index 00000000..5a1a7776
Binary files /dev/null and b/docs/source/_static/img/arch-nav-logo.png differ
diff --git a/docs/source/_static/img/arch-system-architecture.jpg b/docs/source/_static/img/arch-system-architecture.jpg
new file mode 100644
index 00000000..3c8839a7
Binary files /dev/null and b/docs/source/_static/img/arch-system-architecture.jpg differ
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 3d68affe..a6141679 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -38,7 +38,7 @@ html_favicon = '_static/favicon.ico'
 html_theme = 'sphinx_book_theme'  # You can change the theme to 'sphinx_rtd_theme' or another of your choice.
 
 # Specify the path to the logo image file (make sure the logo is in the _static directory)
-html_logo = '_static/img/arch-logo.png'
+html_logo = '_static/img/arch-nav-logo.png'
 
 html_theme_options = {
     'navigation_depth': 4,
diff --git a/docs/source/configuration_reference.rst b/docs/source/configuration_reference.rst
index 23cd94f6..35425659 100644
--- a/docs/source/configuration_reference.rst
+++ b/docs/source/configuration_reference.rst
@@ -1,11 +1,11 @@
 Configuration Reference
 ============================
 
-The following is a complete reference of the prompt-conifg.yml that controls the behavior of an Arch gateway.
-We've kept things simple (less than 80 lines) and held off on exposing additional functionality (for e.g. suppporting
-push observability stats, managing prompt-endpoints as virtual cluster, exposing more load balancing options, etc). Our
-belief that the simple things, should be simple. So we offert good defaults for developers, so that they can spend more
-of their time in building features unique to their AI experience.
+The following is a complete reference of the ``prompt_config.yml`` that controls the behavior of a single instance of
+the Arch gateway. We've kept things simple (less than 80 lines) and held off on exposing additional functionality
+(e.g., supporting push observability stats, managing prompt endpoints as a virtual cluster, exposing more load
+balancing options, etc.). We believe that simple things should be simple, so we offer good defaults that let
+developers spend more of their time building features unique to their AI experience.
 
 .. literalinclude:: /_config/prompt-config-full-reference.yml
    :language: yaml
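+
+If you hand-edit this file, a quick syntactic sanity check before restarting the gateway can save a
+round-trip. Below is a hedged sketch — the PyYAML dependency and the spot-checked keys are illustrative
+assumptions drawn from the getting-started example, not a validation tool shipped with Arch:
+
+.. code-block:: python
+
+    import yaml
+
+    # Parse the config and spot-check two of the primitives documented above.
+    with open("prompt_config.yml") as f:
+        config = yaml.safe_load(f)
+
+    print("version:", config["version"])
+    print("listener port:", config["listener"]["port_value"])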
diff --git a/docs/source/getting_started/getting_started.rst b/docs/source/getting_started/getting_started.rst
index aca639a2..6153176b 100644
--- a/docs/source/getting_started/getting_started.rst
+++ b/docs/source/getting_started/getting_started.rst
@@ -1,8 +1,8 @@
+.. _getting_started:
+
 Getting Started
 ================
 
-This section gets you started with a very simple configuration and provides some example configurations.
-
 .. sidebar:: Pre-requisites
 
    In order for you to get started, please make sure that `Docker `_
@@ -15,6 +15,34 @@
    - ARM 64
 
+This section gets you started with a very simple configuration and provides some example configurations.
+
 The fastest way to get started using Arch is installing `pre-built binaries `_. You can also build it from source.
 
+Step 1: Install the Arch CLI
+----------------------------
+
+Arch's CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply
+run the following command:
+
+.. code-block:: bash
+
+    pip install archgw
+
+This will install the ``archgw`` command-line tool globally on your system.
+
+Step 2: Start Arch Gateway
+--------------------------
+
+.. code-block:: bash
+
+    archgw up --quick-start
+
+Configuration
+-------------
+
+Today, Arch supports only a static bootstrap configuration file, for simplicity:
+
+.. literalinclude:: /_config/getting-started.yml
+   :language: yaml
diff --git a/docs/source/intro/architecture/architecture.rst b/docs/source/intro/architecture/architecture.rst
index 9b6d663c..d278cb5d 100644
--- a/docs/source/intro/architecture/architecture.rst
+++ b/docs/source/intro/architecture/architecture.rst
@@ -8,3 +8,5 @@ Technical Architecture
    intro/threading_model
    listeners/listeners
    prompt_processing/prompt_processing
+   listeners/llm_provider
+   model_serving/model_serving
diff --git a/docs/source/intro/architecture/intro/terminology.rst b/docs/source/intro/architecture/intro/terminology.rst
index d2358aab..7f880b57 100644
--- a/docs/source/intro/architecture/intro/terminology.rst
+++ b/docs/source/intro/architecture/intro/terminology.rst
@@ -1,12 +1,14 @@
+.. _arch_terminology:
+
 Terminology
 ============
 
 A few definitions before we dive into the main architecture documentation. Arch borrows from Envoy's terminology
 to keep things consistent in logs, traces and in code.
 
-**Downstream**: An downstream client (web application, etc.) connects to Arch, sends requests, and receives responses.
+**Downstream (Ingress)**: A downstream client (web application, etc.) connects to Arch, sends prompts, and receives responses.
 
-**Upstream**: An upstream host receives connections and prompts from Arch, and returns context or responses for a prompt
+**Upstream (Egress)**: An upstream host that receives connections and prompts from Arch, and returns context or responses for a prompt.
 
 .. image:: /_static/img/network-topology-ingress-egress.jpg
    :width: 100%
@@ -18,27 +20,27 @@ before forwarding them to your application server endpoints. Arch enables you to
 
 .. Note::
 
-   When you start Arch, you specify a listener address/port that you want to bind downstream (. But Arch uses are predefined port that you
-   can use for outbound calls to LLMs and other services 127.0.0.1:10000
+   When you start Arch, you specify a listener address/port that you want to bind downstream. But Arch also uses a predefined
+   port that you can use (``127.0.0.1:10000``) to proxy egress calls originating from your application to LLMs (API-based or hosted).
+   For more details, check out :ref:`LLM providers <llm_providers>`.
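+
+For example — a hedged sketch of an application making an egress LLM call through that predefined port. It
+assumes the OpenAI-compatible interface shown in the egress gateway example later in these docs; the model id
+and API-key handling are illustrative:
+
+.. code-block:: python
+
+    from openai import OpenAI
+
+    # Point an ordinary OpenAI client at Arch's predefined egress address
+    # instead of the provider's public endpoint.
+    client = OpenAI(base_url="http://127.0.0.1:10000/v1", api_key="not-used-directly")
+
+    response = client.completions.create(
+        model="gpt-3.5-turbo-instruct",  # illustrative model id
+        prompt="What is the capital of France?",
+    )
+    print(response.choices[0].text.strip())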
 
 **Instance**: An instance of the Arch gateway. When you start Arch it creates at most two processes. One to handle
 Layer 7 networking operations (auth, tls, observability, etc) and the second process to serve models that enable it to
 make smart decisions on how to accept, handle and forward prompts. The second process is optional, as the model serving
 service could be hosted on a different network (an API call). But these two processes are considered a single instance of Arch.
 
-**System Prompt**: An initial text or message that is provided by the developer that Arch can use to call an downstream LLM
-in order to generate a response from the LLM model. The system prompt can be thought of as the input or query that the model
-uses to generate its response. The quality and specificity of the system prompt can have a significant impact on the relevance
-and accuracy of the model's response. Therefore, it is important to provide a clear and concise system prompt that accurately
-conveys the user's intended message or question.
-
-**Prompt Targets**: Arch offers a primitive called “prompt targets” to help separate business logic from undifferentiated
-work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Bolt.
-For example, Bolt enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt
-so that you can build faster, more accurate RAG apps. To support agentic apps, like scheduling travel plans or sharing comments
-on a document - via prompts, Bolt uses its function calling abilities to extract critical information from the incoming prompt
-(or a set of prompts) needed by a downstream backend API or function call before calling it directly.
+**Prompt Targets**: Arch offers a primitive called ``prompt_targets`` to help separate business logic from undifferentiated
+work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Arch.
+For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt
+so that you can build faster, more accurate retrieval (RAG) apps. To support agentic apps, like scheduling travel plans or
+sharing comments on a document - via prompts, Arch uses its function calling abilities to extract critical information from
+the incoming prompt (or a set of prompts) needed by a downstream backend API or function call before calling it directly.
 
 **Error Targets**: Error targets are those endpoints that receive forwarded errors from Arch when issues arise, such as
 failing to properly call a function/API, detecting violations of guardrails, or encountering other processing errors.
-These errors are communicated to the application via headers (X-Arch-[ERROR-TYPE]), allowing it to handle the errors gracefully and take appropriate actions.
\ No newline at end of file
+These errors are communicated to the application via headers (X-Arch-[ERROR-TYPE]), allowing it to handle the errors gracefully
+and take appropriate actions.
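+
+For instance — a hedged sketch of an error target that inspects these headers; the header names are the ones
+documented in the request-flow section, while the Flask app and handler shape are illustrative assumptions:
+
+.. code-block:: python
+
+    from flask import Flask, request
+
+    app = Flask(__name__)
+
+    @app.route("/error_target", methods=["POST"])
+    def handle_arch_error():
+        # Headers forwarded by Arch when a function call or a guardrail check fails.
+        function_error = request.headers.get("X-Function-Error-Code")
+        guard_error = request.headers.get("X-Prompt-Guard-Error-Code")
+        if guard_error:
+            return {"message": "Your prompt was rejected by a guardrail."}, 400
+        if function_error:
+            return {"message": "We couldn't complete that action. Please try again."}, 502
+        return {"message": "ok"}, 200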
+
+**Model Serving**: Arch is a set of **two** self-contained processes that are designed to run alongside your application servers
+(or on a separate host connected via a network). The **model serving** process helps Arch make intelligent decisions about the
+incoming prompts. The model server is designed to call the (fast) purpose-built :ref:`LLMs ` in Arch.
diff --git a/docs/source/intro/architecture/listeners/listeners.rst b/docs/source/intro/architecture/listeners/listeners.rst
index 39bf0447..e9165962 100644
--- a/docs/source/intro/architecture/listeners/listeners.rst
+++ b/docs/source/intro/architecture/listeners/listeners.rst
@@ -1,27 +1,37 @@
 .. _arch_overview_listeners:
 
 Listener
-========
-Arch leverages Envoy’s Listener subsystem to streamline connection management for developers.
-By building on Envoy’s robust architecture, Arch simplifies the configuration required to bind incoming
-connections from downstream clients and efficiently manages internal listeners for outgoing connections
-to LLM hosts and APIs.
+--------
+Listener is a top-level primitive in Arch, which simplifies the configuration required to bind incoming
+connections from downstream clients, and for egress connections to LLMs (hosted or API-based).
 
-**Listener Subsystem Overview**
+Arch builds on Envoy's Listener subsystem to streamline connection management for developers. Arch minimizes
+the complexity of Envoy's listener setup by using best practices and exposing only essential settings,
+making it easier for developers to bind connections without deep knowledge of Envoy’s configuration model. This
+simplification ensures that connections are secure, reliable, and optimized for performance.
 
-- **Downstream Connections**: Arch uses Envoy's Listener subsystem to accept connections from downstream clients.
-  A listener acts as the primary entry point for incoming traffic, handling initial connection setup, including network
-  filtering and security checks, such as SNI and TLS termination. For more details on the listener subsystem, refer to the
-  `Envoy Listener Configuration `_.
+Downstream (Ingress)
+^^^^^^^^^^^^^^^^^^^^
+Developers can configure Arch to accept connections from downstream clients. A downstream listener acts as the
+primary entry point for incoming traffic, handling initial connection setup, including network filtering, guardrails,
+and additional network security checks. For more details on prompt security and safety,
+see :ref:`here `.
 
-- **Internal Listeners for Outgoing Connections**: Arch automatically configures internal listeners to route requests
-  from prompts origination from your application services to appropriate upstream targets, including LLM hosts and backend APIs.
-  This configuration abstracts away complex networking setups, allowing developers to focus on business logic rather than the
-  intricacies of connection management and multiple SDKs to work with different LLM providers.
+Upstream (Egress)
+^^^^^^^^^^^^^^^^^
+Arch automatically configures a listener to route requests from your application to upstream LLM API providers (or hosts).
+When you start Arch, it creates a listener for egress traffic based on the presence of the ``llm_providers`` configuration
+section in the ``prompt_config.yml`` file. Arch binds itself to a local address such as ``127.0.0.1:9000/v1`` or a DNS-based
+address like ``arch.local:9000/v1`` for outgoing traffic. For more details on LLM providers, read :ref:`here <llm_providers>`.
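+
+A quick way to verify that the egress listener is up once Arch has started — a hedged sketch using only the
+Python standard library; the address is the local bind described above:
+
+.. code-block:: python
+
+    import socket
+
+    # Attempt a TCP connection to the local address Arch binds for egress LLM traffic.
+    with socket.create_connection(("127.0.0.1", 9000), timeout=2):
+        print("Arch egress listener is accepting connections on 127.0.0.1:9000")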
+
+Configure Listener
+^^^^^^^^^^^^^^^^^^
+
+To configure a Downstream (Ingress) Listener, simply add the ``listener`` directive to your ``prompt_config.yml`` file:
 
-Arch’s dependency on Envoy’s Listener subsystem provides a powerful, developer-friendly interface for managing connections,
-enhancing the overall efficiency of handling prompts and routing them to the correct endpoints within a generative AI application.
\ No newline at end of file
+.. literalinclude:: /_config/getting-started.yml
+   :language: yaml
+   :linenos:
+   :lines: 1-18
+   :emphasize-lines: 2-5
+   :caption: :download:`arch-getting-started.yml `
\ No newline at end of file
diff --git a/docs/source/intro/architecture/listeners/llm_provider.rst b/docs/source/intro/architecture/listeners/llm_provider.rst
new file mode 100644
index 00000000..fa813dfb
--- /dev/null
+++ b/docs/source/intro/architecture/listeners/llm_provider.rst
@@ -0,0 +1,52 @@
+.. _llm_providers:
+
+LLM Provider
+------------
+
+``llm_provider`` is a top-level primitive in Arch, helping developers centrally define, secure, observe,
+and manage the usage of their LLMs. Arch builds on Envoy's reliable `cluster subsystem `_
+to manage egress traffic to LLMs, which includes intelligent routing, retry and fail-over mechanisms,
+ensuring high availability and fault tolerance. This abstraction also enables developers to seamlessly
+switch between LLM providers or upgrade LLM versions, simplifying the integration and scaling of LLMs
+across applications.
+
+Below is an example of how you can configure ``llm_providers`` with an instance of an Arch gateway.
+
+.. literalinclude:: /_config/getting-started.yml
+   :language: yaml
+   :linenos:
+   :lines: 1-20
+   :emphasize-lines: 11-18
+   :caption: :download:`arch-getting-started.yml `
+
+.. Note::
+   When you start Arch, it creates a listener port for egress traffic based on the presence of the ``llm_providers``
+   configuration section in the ``prompt_config.yml`` file. Arch binds itself to a local address such as
+   ``127.0.0.1:9000/v1`` or a DNS-based address like ``arch.local:9000/v1`` for egress traffic.
+
+Arch also offers vendor-agnostic SDKs and libraries to make LLM calls to API-based LLM providers (like OpenAI,
+Anthropic, Mistral, Cohere, etc.) and supports calls to OSS LLMs that are hosted on your infrastructure. Arch
+abstracts the complexities of integrating with different LLM providers, providing a unified interface for making
+calls, handling retries, managing rate limits, and ensuring seamless integration with cloud-based and on-premise
+LLMs. Simply configure the details of the LLMs your application will use, and Arch offers a unified interface to
+make outbound LLM calls.
+
+Example: Using the Arch Python SDK
+----------------------------------
+
+.. code-block:: python
+
+    from arch_client import ArchClient
+
+    # Initialize the Arch client
+    client = ArchClient(base_url="http://127.0.0.1:9000/v1")
+
+    # Define your LLM provider and prompt
+    llm_provider = "openai"
+    prompt = "What is the capital of France?"
+
+    # Send the prompt to the LLM through Arch
+    response = client.completions.create(llm_provider=llm_provider, prompt=prompt)
+
+    # Print the response
+    print("LLM Response:", response)
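+
+Because the provider is just a parameter on the call, switching LLMs becomes a one-line change. A hedged
+sketch continuing from the client above — it assumes a second provider (here ``mistral``) is also listed in
+your ``llm_providers`` configuration:
+
+.. code-block:: python
+
+    # Re-issue the same prompt against a different configured provider.
+    # Assumes "mistral" appears under llm_providers in prompt_config.yml.
+    fallback_response = client.completions.create(llm_provider="mistral", prompt=prompt)
+    print("Fallback LLM Response:", fallback_response)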
\ No newline at end of file
diff --git a/docs/source/intro/architecture/model_serving/model_serving.rst b/docs/source/intro/architecture/model_serving/model_serving.rst
new file mode 100644
index 00000000..151babcd
--- /dev/null
+++ b/docs/source/intro/architecture/model_serving/model_serving.rst
@@ -0,0 +1,56 @@
+.. _arch_model_serving:
+
+Model Serving
+-------------
+
+Arch is a set of **two** self-contained processes that are designed to run alongside your application
+servers (or on a separate host connected via a network). The first process is designated to manage low-level
+networking and HTTP-related concerns, and the other process is for **model serving**, which helps Arch make
+intelligent decisions about the incoming prompts. The model server is designed to call the purpose-built
+:ref:`LLMs ` in Arch.
+
+.. image:: /_static/img/arch-system-architecture.jpg
+   :align: center
+   :width: 50%
+
+_____________________________________________________________________________________________________________
+
+Arch is designed to be deployed in your cloud VPC or on an on-premises host, and can work on devices that don't
+have a GPU. Note that GPU devices are needed for fast and cost-efficient use, so that Arch (the model server,
+specifically) can process prompts quickly and forward control back to the application host. There are three modes
+in which Arch can be configured to run its **model server** subsystem:
+
+Local Serving (CPU - Moderate)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following bash command configures the model server subsystem in Arch to run locally on device and use
+only CPU. This is the slowest option but can be useful in dev/test scenarios where GPUs might not be available.
+
+.. code-block:: bash
+
+    archgw up --local -cpu
+
+Local Serving (GPU - Fast)
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following bash command configures the model server subsystem in Arch to run locally on the machine and
+utilize the available GPU for fast inference across all model use cases, including function calling,
+guardrails, etc.
+
+.. code-block:: bash
+
+    archgw up --local
+
+Cloud Serving (GPU - Blazing Fast)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The command below instructs Arch to intelligently use GPUs locally for fast intent detection, but default to
+cloud serving for function calling and guardrail scenarios to dramatically improve the speed and overall performance
+of your applications.
+
+.. code-block:: bash
+
+    archgw up
+
+.. Note::
+   Arch's model serving in the cloud is priced at $0.05 per 1M tokens (156x cheaper than GPT-4o) with average latency
+   of 200ms (10x faster than GPT-4o). Please refer to our :ref:`getting started guide <getting_started>` to learn
+   how to generate API keys for model serving.
\ No newline at end of file
diff --git a/docs/source/intro/architecture/prompt_processing/prompt_processing.rst b/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
index 0a2866c6..91c91ad7 100644
--- a/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
+++ b/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
@@ -1,23 +1,37 @@
 .. _arch_overview_prompt_handling:
 
 Prompts
-=======
+-------
 Arch's primary design point is to securely accept, process and handle prompts.
 To do that effectively, Arch relies on Envoy's HTTP `connection management `_
-subsystem and its prompt-handler subsystem engineered with purpose-built :ref:`LLMs ` to implement
-critical functionality on behalf of developers so that you can stay focused on business logic.
+subsystem and its **prompt handler** subsystem engineered with purpose-built :ref:`LLMs ` to
+implement critical functionality on behalf of developers so that you can stay focused on business logic.
+
+.. Note::
+   Arch's **prompt handler** subsystem interacts with the **model** subsystem through Envoy's cluster manager
+   system to ensure a robust, resilient and fault-tolerant experience in managing incoming prompts. Read more
+   about the :ref:`model subsystem <arch_model_serving>` and how the LLMs are hosted in Arch.
+
+Messages
+--------
+
+Arch accepts messages directly from the body of the HTTP request in a format that follows the `Hugging Face Messages API `_.
+This design allows developers to pass a list of messages, where each message is represented as a dictionary
+containing two key-value pairs:
+
+ - **Role**: Defines the role of the message sender, such as "user" or "assistant".
+ - **Content**: Contains the actual text of the message.
 
 Prompt Guardrails
 -----------------
 Arch is engineered with :ref:`Arch-Guard `, an industry leading safety layer, powered by a
-compact and high-performimg LLM that monitors incoming prompts to detect and reject jailbreak attempts and
-several safety related concerns, ensuring that unauthorized or harmful behaviors are intercepted early in
-the process. Arch-Guard is a composite model combining work from the industry leading Meta LLama models and
-purposely-tuned models that offer exceptional overall performance.
+compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts -
+ensuring that unauthorized or harmful behaviors are intercepted early in the process.
 
-To add prompt guardrails, see example below:
+To add jailbreak guardrails, see the example below:
 
 .. literalinclude:: /_config/getting-started.yml
    :language: yaml
@@ -26,9 +40,9 @@ To add prompt guardrails, see example below:
    :caption: :download:`arch-getting-started.yml `
 
 .. Note::
-   As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard-v2,
-   which would enforce instructions defined by the application developer to control conversational flow. To
-   offer feedback on our roadmap, please visit our `github page `_
+   As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard-v2,
+   and add support for additional developer-defined safety checks and hazardous categories like violent crimes,
+   privacy violations, hate speech, etc. To offer feedback on our roadmap, please visit our `github page `_
 
 
 Prompt Targets
@@ -132,7 +146,6 @@ Example: Using OpenAI Client with Arch as an Egress Gateway
     print("OpenAI Response:", response.choices[0].text.strip())
 
-
 In these examples:
 
 The ArchClient is used to send traffic directly through the Arch egress proxy to the LLM of your choice, such as OpenAI.
diff --git a/docs/source/intro/life_of_a_request.rst b/docs/source/intro/life_of_a_request.rst
index 356713fe..d25da64a 100644
--- a/docs/source/intro/life_of_a_request.rst
+++ b/docs/source/intro/life_of_a_request.rst
@@ -15,32 +15,20 @@ dispatch upstream and the response path.
 Terminology
 -----------
-Arch uses the following terms through its' codebase and documentation:
-
-* *Listeners*: The Arch primitive responsible for binding to an IP/port, accepting new HTTP connections and orchestrating
-  the downstream facing aspects of prompt processing. Arch relies almostly exclusively on `Envoy's Listener subsystem `_.
-* *Downstream*: an entity connecting to Arch. This may be another AI agent (side car or networked) or a remote client.
-* *LLM Providers*: a set of upstream LLMs (API-based or network nodes) that Arch routes/forwards user and application-specific prompts to.
-  Arch offers a simply abstract to call different LLMs via model-id, add LLM specific retry, failover and routing capabilities.
-  Arch build's on top of Envoy's `Cluster substem `
-* *Upstream*: A set of hosts that can recieve traffic from an instance of the Arch gateway.
-* *Prompt Targets*: A core primitive offered in Arch. Prompt targets are endpoints that receive prompts that are processed by Arch.
-  For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt so that you can
-  build faster, more accurate RAG apps. To support agentic apps, like scheduling travel plans or sharing comments on a document - via prompts,
+We recommend that you get familiar with some of the :ref:`terminology <arch_terminology>` used in Arch
+before reading this section.
 
 Network topology
 ----------------
 
 How a request flows through the components in a network (including Arch) depends on the network’s
 topology. Arch can be used in a wide variety of networking topologies. We focus on the inner operation of Arch below,
-but briefly we address how Arch relates to the rest of the network in
-this section.
+but briefly we address how Arch relates to the rest of the network in this section.
 
-* Ingress listeners take requests from upstream clients like a web UI or clients that forward prompts to you local application
-  Responses from the local application flow back through Arch to the downstream.
+- **Downstream (Ingress)** listeners take requests from downstream clients like a web UI or clients that forward
+  prompts to your local application. Responses from the application flow back through Arch to the downstream.
 
-* Egress listeners take requests from the local application and forward them to LLMs. These receiving nodes
-  will also be typically running Arch and accepting the request via their ingress listeners.
+- **Upstream (Egress)** listeners take requests from the application and forward them to LLMs.
 
 .. image:: /_static/img/network-topology-ingress-egress.jpg
    :width: 100%
@@ -53,6 +41,40 @@ traverse multiple Arch gateways:
    :width: 100%
    :align: center
 
+
+High level architecture
+-----------------------
+Arch is a set of **two** self-contained processes that are designed to run alongside your application servers
+(or on a separate server connected to your application servers via a network). The first process is designated
+to manage HTTP-level networking and connection management concerns (protocol management, request id generation,
+header sanitization, etc.), and the other process is for **model serving**, which helps Arch make intelligent
+decisions about the incoming prompts. The model server hosts the purpose-built :ref:`LLMs ` to
+manage several critical, but undifferentiated, prompt-related tasks on behalf of developers.
+
+The request processing path in Arch has three main parts:
+
+* :ref:`Listener subsystem <arch_overview_listeners>` which handles **downstream** and **upstream** request
+  processing. It is responsible for managing the downstream (ingress) and the upstream (egress) request
+  lifecycle. The downstream and upstream HTTP/2 codecs live here.
+* :ref:`Prompt handler subsystem <arch_overview_prompt_handling>` which is responsible for selecting and
+  forwarding prompts to ``prompt_targets`` and establishes the lifecycle of any **upstream** connection to a
+  hosted endpoint that implements domain-specific business logic for incoming prompts. This is where knowledge
+  of targets and endpoint health, load balancing and connection pooling exists.
+* :ref:`Model serving subsystem <arch_model_serving>` which helps Arch make intelligent decisions about the
+  incoming prompts. The model server is designed to call the purpose-built :ref:`LLMs ` in Arch.
+
+The three subsystems are bridged with the HTTP router filter and the cluster manager subsystems of Envoy.
+
+Also, Arch utilizes `Envoy's event-based thread model `_.
+A main thread is responsible for the server lifecycle, configuration processing, stats, etc. and some number of
+:ref:`worker threads ` process requests. All threads operate around an event loop (`libevent `_)
+and any given downstream TCP connection will be handled by exactly one worker thread for its lifetime. Each worker
+thread maintains its own pool of TCP connections to upstream endpoints.
+
+Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
+enables scaling to very high core count CPUs.
+
 Configuration
 -------------
 
@@ -62,63 +84,93 @@ Today, only support a static bootstrap configuration file for simplicity today:
    :language: yaml
 
-High level architecture
------------------------
-
-The request processing path in Arch has two main parts:
-
-* :ref:`Listener subsystem ` which handles **downstream** request
-  processing. It is also responsible for managing the downstream request lifecycle and for the
-  response path to the client. The downstream HTTP/2 codec lives here.
-* :ref:`Prompt subsystem ` which is responsible for selecting and
-  processing the **upstream** connection to an endpoint. This is where knowledge of targets and
-  endpoint health, load balancing and connection pooling exists. The upstream HTTP/2 codec lives
-  here.
-
-The two subsystems are bridged with the HTTP router filter, which forwards the HTTP request from
-downstream to upstream.
-
-Arch utilizes `Envoy event-based thread model `_.
-A main thread is responsible forthe server lifecycle, configuration processing, stats, etc. and some number
-of :ref:`worker threads ` process requests. All threads operate around an event
-loop (`libevent `_) and any given downstream TCP connection will be handled by exactly
-one worker thread for its lifetime. Each worker thread maintains its own pool of TCP connections to upstream
-endpoints. Today, Arch implemenents its core functionality around prompt handling in worker threads.
-
-Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
-enables scaling to very high core count CPUs.
-
-Request Flow
-------------
+Request Flow (Ingress)
+----------------------
 
 Overview
 ^^^^^^^^
 
 A brief outline of the life cycle of a request and response using the example configuration above:
 
 1. **TCP Connection Establishment**:
-   A TCP connection from downstream is accepted by an Arch listener running on a worker thread. The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS, decrypts incoming data for processing.
+   A TCP connection from downstream is accepted by an Arch listener running on a worker thread.
+   The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS,
+   decrypts incoming data for processing.
 
 2. **Prompt Guardrails Check**:
-   Arch first checks the incoming prompts for guardrails such as jailbreak attempts and toxicity. This ensures that harmful or unwanted behaviors are detected early in the request processing pipeline.
+   Arch first checks the incoming prompts for guardrails such as jailbreak attempts. This ensures
+   that harmful or unwanted behaviors are detected early in the request processing pipeline.
 
 3. **Intent Matching**:
-   The decrypted data stream is deframed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs intent matching using the name and description of the defined prompt targets, determining which endpoint should handle the prompt.
+   The decrypted data stream is deframed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs
+   intent matching via its **prompt-handler** subsystem using the name and description of the defined prompt targets,
+   determining which endpoint should handle the prompt.
 
-4. **Parameter Gathering with Arch-FC1B**:
-   If a prompt target requires specific parameters, Arch engages Arch-FC1B to extract the necessary details from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.
+4. **Parameter Gathering with Arch-FC**:
+   If a prompt target requires specific parameters, Arch engages Arch-FC to extract the necessary details
+   from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.
 
 5. **API Call Execution**:
-   Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified, load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint. For more details on routing and load balancing, refer to the [Envoy routing documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/intro/arch_overview).
+   Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified,
+   load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint.
 
 6. **Default Summarization by Upstream LLM**:
-   By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization. This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval-Augmented Generation) and agentic applications.
+   By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization.
+   This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval-Augmented Generation)
+   and agentic applications.
 
 7. **Error Handling and Forwarding**:
-   Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to designated error targets. Error details are communicated through specific headers to the application:
+   Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to
+   designated error targets.
+   Error details are communicated through specific headers to the application:
 
    - ``X-Function-Error-Code``: Code indicating the type of function call error.
    - ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
    - Additional headers carry messages and timestamps to aid in debugging and logging.
 
 8. **Response Handling**:
-   The upstream endpoint’s TLS transport socket encrypts the response, which is then proxied back downstream. Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
+   The upstream endpoint’s TLS transport socket encrypts the response, which is then proxied back downstream.
+   Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
+
+
+Request Flow (Egress)
+---------------------
+
+Overview
+^^^^^^^^
+
+A brief outline of the life cycle of a request and response in the context of egress traffic from an application
+to Large Language Models (LLMs) via Arch:
+
+1. **HTTP Connection Establishment to LLM**:
+   Arch initiates an HTTP connection to the upstream LLM service. This connection is handled by Arch’s egress listener
+   running on a worker thread. The connection typically uses a secure transport protocol such as HTTPS, ensuring the
+   prompt data is encrypted before being sent to the LLM service.
+
+2. **Rate Limiting**:
+   Before sending the request to the LLM, Arch applies rate-limiting policies to ensure that the upstream LLM service
+   is not overwhelmed by excessive traffic. Rate limits are enforced per client or service, ensuring fair usage and
+   preventing accidental or malicious overload. If the rate limit is exceeded, Arch may return an appropriate HTTP
+   error (e.g., 429 Too Many Requests) without sending the prompt to the LLM.
+
+3. **Load Balancing to (hosted) LLM Endpoints**:
+   After passing the rate-limiting checks, Arch routes the prompt to the appropriate LLM endpoint.
+   If multiple LLM provider instances are available, load balancing is performed to distribute traffic evenly
+   across the instances. Arch checks the health of the LLM endpoints using circuit breakers and health checks,
+   ensuring that the prompt is only routed to a healthy, responsive instance.
+
+4. **Response Reception and Forwarding**:
+   Once the LLM processes the prompt, Arch receives the response from the LLM service. The response is typically
+   generated text, a completion, or a summarization. Upon reception, Arch decrypts (if necessary) and handles the response,
+   passing it through any egress processing pipeline defined by the application, such as logging or additional response filtering.
+
+
+Post-request processing
+^^^^^^^^^^^^^^^^^^^^^^^
+Once a request completes, the stream is destroyed. The following also takes place:
+
+* The post-request :ref:`monitoring <monitoring>` stats are updated (e.g. timing, active requests, upgrades, health checks).
+  Some statistics, however, are updated earlier, during request processing. Stats are batched and written by the main
+  thread periodically.
+* :ref:`Access logs <arch_access_logging>` are written to the access log.
+* :ref:`Trace ` spans are finalized. If our example request was traced, a trace span describing the duration
+  and details of the request would be created by the HCM when processing request headers and then finalized by
+  the HCM during post-request processing.
\ No newline at end of file
diff --git a/docs/source/intro/what_is_arch.rst b/docs/source/intro/what_is_arch.rst
index ede19a08..36b02d89 100644
--- a/docs/source/intro/what_is_arch.rst
+++ b/docs/source/intro/what_is_arch.rst
@@ -9,12 +9,17 @@ attempts, intelligently calling “backend” APIs to fulfill the user's request
 and offering disaster recovery between upstream LLMs, and managing the
 observability of prompts and LLM interactions in a centralized way.
 
+.. image:: /_static/img/arch-logo.png
+   :width: 100%
+   :align: center
+
 **The project was born out of the belief that:**
 
   *Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP
  requests including secure handling, intelligent routing, robust observability, and integration with backend
  (API) systems for personalization - all outside business logic.*
 
+
 In practice, achieving the above goal is incredibly difficult. Arch attempts to do so by providing the
 following high level features:
diff --git a/docs/source/observability/access_logs.rst b/docs/source/observability/access_logs.rst
new file mode 100644
index 00000000..6a2df20a
--- /dev/null
+++ b/docs/source/observability/access_logs.rst
@@ -0,0 +1,23 @@
+.. _arch_access_logging:
+
+Access Logging
+==============
+
+Access logging in Arch refers to the logging of detailed information about each request and response that flows through Arch.
+It provides visibility into the traffic passing through Arch, which is crucial for monitoring, debugging, and analyzing the
+behavior of AI applications and their interactions.
+
+Key Features of Access Logging in Arch:
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+* **Per-Request Logging**:
+  Each request that passes through Arch is logged. This includes important metadata such as the HTTP method,
+  path, response status code, request duration, upstream host, and more.
+* **Integration with Monitoring Tools**:
+  Access logs can be exported to centralized logging systems (e.g., the ELK stack or Fluentd) or used to feed monitoring and alerting systems.
+* **Structured Logging**:
+  Each request is logged as a structured object, making it easier to parse and analyze using tools like Elasticsearch and Kibana.
+
+.. code-block:: text
+
+    [2024-09-27T14:52:01.123Z] "ARCH REQUEST" GET /path/to/resource HTTP/1.1 200 512 1024 56 upstream_service.com D
+    X-Arch-Upstream-Service-Time: 25
+    X-Arch-Attempt-Count: 1
\ No newline at end of file
diff --git a/docs/source/observability/observability.rst b/docs/source/observability/observability.rst
index 20eb975d..5d66f6c7 100644
--- a/docs/source/observability/observability.rst
+++ b/docs/source/observability/observability.rst
@@ -7,4 +7,5 @@ Observability
    :maxdepth: 2
 
    tracing
-   stats
\ No newline at end of file
+   stats
+   access_logs
\ No newline at end of file
diff --git a/docs/source/observability/stats.rst b/docs/source/observability/stats.rst
index 3313520f..a84ce911 100644
--- a/docs/source/observability/stats.rst
+++ b/docs/source/observability/stats.rst
@@ -1,3 +1,5 @@
+.. _monitoring:
+
 Monitoring
 ==========
 
diff --git a/docs/source/root.rst b/docs/source/root.rst
index 52eff178..efd13b08 100644
--- a/docs/source/root.rst
+++ b/docs/source/root.rst
@@ -1,9 +1,19 @@
-Arch Documentation
-==================
+Documentation
+=============
+
+.. image:: /_static/img/arch-logo.png
+   :width: 100%
+   :align: center
+
+**Arch is built on (and by the core contributors of) Envoy proxy with the belief that:**
+
+  *Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests
+  including secure handling, intelligent routing, robust observability, and integration with backend (API)
+  systems for personalization - all outside business logic.*
 
 .. toctree::
-   :maxdepth: 2
-
+   :maxdepth: 1
+
    intro/intro
    getting_started/getting_started
    getting_started/use_cases