diff --git a/README.md b/README.md
index ba271dd3..8a416d51 100644
--- a/README.md
+++ b/README.md
@@ -32,14 +32,14 @@ Past the thrill of an AI demo, have you found yourself hitting these walls? You
 - You're bogged down with prompt engineering just to **clarify user intent and validate inputs** effectively?
 - You're wasting cycles choosing and integrating code for **observability** instead of it happening transparently?

-And you think to youself, can't I move faster by focusing on higher-level objectives in a language/framework agnostic way? Well, you can! **Arch Gateway** was built by the contributors of [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:
+And you think to yourself, can't I move faster by focusing on higher-level objectives in a language/framework agnostic way? Well, you can! **Arch Gateway** was built by the contributors of [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:

 >Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems to improve speed and accuracy for common agentic scenarios – all outside core application logic.*

 **Core Features**:

 - `🚦 Routing`. Engineered with purpose-built [LLMs](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) for fast (<100ms) agent routing and hand-off scenarios
- - `⚡ Tools Use`: For common agentic scenarios let Arch instantly clarfiy and convert prompts to tools/API calls
+ - `⚡ Tool Use`: For common agentic scenarios, let Arch instantly clarify and convert prompts to tools/API calls
 - `⛨ Guardrails`: Centrally configure and prevent harmful outcomes and ensure safe user interactions
 - `🔗 Access to LLMs`: Centralize access and traffic to LLMs with smart retries for continuous availability
 - `🕵 Observability`: W3C compatible request tracing and LLM metrics that instantly plugin with popular tools
@@ -151,7 +151,7 @@ $ archgw up arch_config.yaml
 2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.1.5
 ...
 2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
-2024-12-05 16:56:28,485 - cli.main - INFO - Starging arch model server and arch gateway
+2024-12-05 16:56:28,485 - cli.main - INFO - Starting arch model server and arch gateway
 ...
 2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!
diff --git a/demos/samples_python/network_switch_operator_agent/README.md b/demos/samples_python/network_switch_operator_agent/README.md
index e7038732..848f2410 100644
--- a/demos/samples_python/network_switch_operator_agent/README.md
+++ b/demos/samples_python/network_switch_operator_agent/README.md
@@ -28,7 +28,7 @@ The assistant can perform several key operations, including rebooting devices, a
 4. Tell me what can you do for me?"

 # Observability
-Arch gateway publishes stats endpoint at http://localhost:19901/stats. In this demo we are using prometheus to pull stats from arch and we are using grafana to visalize the stats in dashboard. To see grafana dashboard follow instructions below,
+Arch gateway publishes a stats endpoint at http://localhost:19901/stats. In this demo we use Prometheus to pull stats from Arch and Grafana to visualize them in a dashboard. To see the Grafana dashboard, follow the instructions below:

 1. Start grafana and prometheus using following command

 ```yaml
diff --git a/demos/samples_python/weather_forecast/README.md b/demos/samples_python/weather_forecast/README.md
index 2dfd6a8f..26eea157 100644
--- a/demos/samples_python/weather_forecast/README.md
+++ b/demos/samples_python/weather_forecast/README.md
@@ -1,6 +1,6 @@
 # Function calling

-This demo shows how you can use Arch's core function calling capabilites.
+This demo shows how you can use Arch's core function calling capabilities.

 # Starting the demo
diff --git a/docs/source/concepts/prompt_target.rst b/docs/source/concepts/prompt_target.rst
index 8e5994c7..c708a9e7 100644
--- a/docs/source/concepts/prompt_target.rst
+++ b/docs/source/concepts/prompt_target.rst
@@ -9,7 +9,7 @@ This section covers the essentials of prompt targets—what they are, how to con
 What Are Prompt Targets?
 ------------------------

-Prompt targets are endpoints within Arch that handle specific types of user prompts. They act as the bridge between user inputs and your backend agemts or tools (APIs), enabling Arch to route, process, and manage prompts efficiently. Defining prompt targets helps you decouple your application's core logic from processing and handling complexities, leading to clearer code organization, better scalability, and easier maintenance.
+Prompt targets are endpoints within Arch that handle specific types of user prompts. They act as the bridge between user inputs and your backend agents or tools (APIs), enabling Arch to route, process, and manage prompts efficiently. Defining prompt targets helps you decouple your application's core logic from processing and handling complexities, leading to clearer code organization, better scalability, and easier maintenance.

 .. table::
@@ -71,7 +71,7 @@ Each parameter can be marked as required or optional. Here is a full list of par
 ``default`` Specifies a default value for the parameter if not provided by the user.
 ``format`` Specifies a format for the parameter value. For example: `2019-12-31` for a date value.
 ``enum`` Lists of allowable values for the parameter with data type matching the ``type`` attribute. **Usage Example**: ``enum: ["celsius`", "fahrenheit"]``
- ``items`` Specifies the attribute of the elements when type euqals **list**, **set**, **dict**, **tuple**. **Usage Example**: ``items: {"type": "str"}``
+ ``items`` Specifies the attribute of the elements when type equals **list**, **set**, **dict**, **tuple**. **Usage Example**: ``items: {"type": "str"}``
 ``required`` Indicates whether the parameter is mandatory or optional. Valid values: **true** or **false**
 ======================== ============================================================================
diff --git a/docs/source/concepts/tech_overview/listener.rst b/docs/source/concepts/tech_overview/listener.rst
index 632d5315..b6795ce6 100644
--- a/docs/source/concepts/tech_overview/listener.rst
+++ b/docs/source/concepts/tech_overview/listener.rst
@@ -5,7 +5,7 @@ Listener
 **Listener** is a top level primitive in Arch, which simplifies the configuration required to bind incoming
 connections from downstream clients, and for egress connections to LLMs (hosted or API)

-Arch builds on Envoy's Listener subsystem to streamline connection managemet for developers. Arch minimizes
+Arch builds on Envoy's Listener subsystem to streamline connection management for developers. Arch minimizes
 the complexity of Envoy's listener setup by using best-practices and exposing only essential settings,
 making it easier for developers to bind connections without deep knowledge of Envoy’s configuration model. This
 simplification ensures that connections are secure, reliable, and optimized for performance.
@@ -13,7 +13,7 @@ simplification ensures that connections are secure, reliable, and optimized for
 Downstream (Ingress)
 ^^^^^^^^^^^^^^^^^^^^^^
 Developers can configure Arch to accept connections from downstream clients. A downstream listener acts as the
-primary entry point for incoming traffic, handling initial connection setup, including network filtering, gurdrails,
+primary entry point for incoming traffic, handling initial connection setup, including network filtering, guardrails,
 and additional network security checks. For more details on prompt security and safety, see :ref:`here `.
@@ -27,7 +27,7 @@ address like ``arch.local:12000/v1`` for outgoing traffic. For more details on L
 Configure Listener
 ^^^^^^^^^^^^^^^^^^

-To configure a Downstream (Ingress) Listner, simply add the ``listener`` directive to your configuration file:
+To configure a Downstream (Ingress) Listener, simply add the ``listener`` directive to your configuration file:

 .. literalinclude:: ../includes/arch_config.yaml
    :language: yaml
diff --git a/docs/source/concepts/tech_overview/model_serving.rst b/docs/source/concepts/tech_overview/model_serving.rst
index da4b3341..53a4e377 100644
--- a/docs/source/concepts/tech_overview/model_serving.rst
+++ b/docs/source/concepts/tech_overview/model_serving.rst
@@ -5,7 +5,7 @@ Model Serving
 Arch is a set of `two` self-contained processes that are designed to run alongside your application servers
 (or on a separate host connected via a network). The first process is designated to manage low-level
-networking and HTTP related comcerns, and the other process is for model serving, which helps Arch make
+networking and HTTP related concerns, and the other process is for model serving, which helps Arch make
 intelligent decisions about the incoming prompts. The model server is designed to call the purpose-built
 LLMs in Arch.
@@ -16,7 +16,7 @@ LLMs in Arch.
 Arch' is designed to be deployed in your cloud VPC, on a on-premises host, and can work on devices that
 don't have a GPU. Note, GPU devices are need for fast and cost-efficient use, so that Arch (model server, specifically)
-can process prompts quickly and forward control back to the applicaton host. There are three modes in which Arch
+can process prompts quickly and forward control back to the application host. There are three modes in which Arch
 can be configured to run its **model server** subsystem:

 Local Serving (CPU - Moderate)
@@ -32,7 +32,7 @@ might not be available.
 Cloud Serving (GPU - Blazing Fast)
 ----------------------------------
 The command below instructs Arch to intelligently use GPUs locally for fast intent detection, but default to
-cloud serving for function calling and guardails scenarios to dramatically improve the speed and overall performance
+cloud serving for function calling and guardrails scenarios to dramatically improve the speed and overall performance
 of your applications.

 .. code-block:: console
@@ -40,6 +40,6 @@ of your applications.
    $ archgw up

 .. Note::
-   Arch's model serving in the cloud is priced at $0.05M/token (156x cheaper than GPT-4o) with averlage latency
+   Arch's model serving in the cloud is priced at $0.05 per 1M tokens (156x cheaper than GPT-4o) with average latency
    of 200ms (10x faster than GPT-4o). Please refer to our :ref:`Get Started ` to know how to generate API keys
    for model serving
diff --git a/docs/source/concepts/tech_overview/prompt.rst b/docs/source/concepts/tech_overview/prompt.rst
index d386ffd6..0ba7b316 100644
--- a/docs/source/concepts/tech_overview/prompt.rst
+++ b/docs/source/concepts/tech_overview/prompt.rst
@@ -8,7 +8,7 @@ Arch relies on Envoy's HTTP `connection management ` and how the LLMs are hosted in Arch.
@@ -28,7 +28,7 @@ Prompt Guard
 -----------------

 Arch is engineered with `Arch-Guard `_, an industry leading safety layer, powered by a
-compact and high-performimg LLM that monitors incoming prompts to detect and reject jailbreak attempts -
+compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts -
 ensuring that unauthorized or harmful behaviors are intercepted early in the process. To add jailbreak
 guardrails, see example below:
@@ -50,7 +50,7 @@ Prompt Targets
 --------------

 Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation
-and identifies where to forwad the conversation to via its ``prompt target`` primitve. Prompt targets are endpoints
+and identifies where to forward the conversation to via its ``prompt target`` primitive. Prompt targets are endpoints
 that receive prompts that are processed by Arch. For example, Arch enriches incoming prompts with metadata like
 knowing when a user's intent has changed so that you can build faster, more accurate RAG apps.
@@ -72,7 +72,7 @@ Intent Matching
 Arch uses fast text embedding and intent recognition approaches to first detect the intent of each incoming prompt.
 This intent matching phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring
 that each prompt is forwarded to the most appropriate endpoint.
-Arch’s intent matching framework considers both the name and description of each prompt target, and uses a composite matching score between embedding similarity and intent classification scores to enchance accuracy in forwarding decisions.
+Arch’s intent matching framework considers both the name and description of each prompt target, and uses a composite matching score between embedding similarity and intent classification scores to enhance accuracy in forwarding decisions.

 - **Intent Recognition**: NLI techniques further refine the matching process by evaluating the semantic alignment between the prompt and potential targets.
diff --git a/docs/source/concepts/tech_overview/request_lifecycle.rst b/docs/source/concepts/tech_overview/request_lifecycle.rst
index dd3bcc8f..160eb85e 100644
--- a/docs/source/concepts/tech_overview/request_lifecycle.rst
+++ b/docs/source/concepts/tech_overview/request_lifecycle.rst
@@ -5,7 +5,7 @@ Request Lifecycle
 Below we describe the events in the lifecycle of a request passing through an Arch gateway instance.
 We first describe how Arch fits into the request path and then the internal events that take place following
-the arrival of a request at Arch from downtream clients. We follow the request until the corresponding
+the arrival of a request at Arch from downstream clients. We follow the request until the corresponding
 dispatch upstream and the response path.

 .. image:: /_static/img/network-topology-ingress-egress.jpg
@@ -59,7 +59,7 @@ The request processing path in Arch has three main parts:
   lifecycle. The downstream and upstream HTTP/2 codec lives here.
 * :ref:`Prompt handler subsystem ` which is responsible for selecting and forwarding prompts
   ``prompt_targets`` and establishes the lifecycle of any **upstream** connection to a
-  hosted endpoint that implements domain-specific business logic for incoming promots. This is where knowledge
+  hosted endpoint that implements domain-specific business logic for incoming prompts. This is where knowledge
   of targets and endpoint health, load balancing and connection pooling exists.
 * :ref:`Model serving subsystem ` which helps Arch make intelligent decisions about the incoming
   prompts. The model server is designed to call the purpose-built LLMs in Arch.
@@ -67,7 +67,7 @@ The request processing path in Arch has three main parts:
 The three subsystems are bridged with either the HTTP router filter, and the cluster manager subsystems of Envoy.
 Also, Arch utilizes `Envoy event-based thread model `_.
-A main thread is responsible forthe server lifecycle, configuration processing, stats, etc. and some number of
+A main thread is responsible for the server lifecycle, configuration processing, stats, etc. and some number of
 :ref:`worker threads ` process requests.
 All threads operate around an event loop (`libevent `_) and any given downstream TCP connection will be handled
 by exactly one worker thread for its lifetime. Each worker thread maintains its own pool of TCP connections to
 upstream endpoints.
@@ -99,7 +99,7 @@ A brief outline of the lifecycle of a request and response using the example con
    that harmful or unwanted behaviors are detected early in the request processing pipeline.

 3. **Intent Matching**:
-   The decrypted data stream is deframed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs
+   The decrypted data stream is de-framed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs
    intent matching via is **prompt-handler** subsystem using the name and description of the defined prompt
    targets, determining which endpoint should handle the prompt.
@@ -162,7 +162,7 @@ Post-request processing
 Once a request completes, the stream is destroyed. The following also takes places:

 * The post-request :ref:`monitoring ` are updated (e.g. timing, active requests, upgrades, health checks).
-  Some statistics are updated earlier however, during request processing. Stats are batchedand written by the main
+  Some statistics are updated earlier however, during request processing. Stats are batched and written by the main
   thread periodically.
 * :ref:`Access logs ` are written to the access log
 * :ref:`Trace ` spans are finalized. If our example request was traced, a
diff --git a/docs/source/concepts/tech_overview/terminology.rst b/docs/source/concepts/tech_overview/terminology.rst
index dff32957..0184ed40 100644
--- a/docs/source/concepts/tech_overview/terminology.rst
+++ b/docs/source/concepts/tech_overview/terminology.rst
@@ -7,12 +7,12 @@ A few definitions before we dive into the main architecture documentation. Also
 to keep things consistent in logs and traces, and introduces and clarifies concepts are is relates to LLM applications.

 **Agent**: An application that uses LLMs to handle wide-ranging tasks from users via prompts. This could be as simple
-as retrieving or summarizing data from an API, or being able to trigger compleix actions like adjusting ad campaigns, or
+as retrieving or summarizing data from an API, or being able to trigger complex actions like adjusting ad campaigns, or
 changing travel plans via prompts.

 **Arch Config**: Arch operates based on a configuration that controls the behavior of a single instance of the Arch gateway.
 This where you enable capabilities like LLM routing, fast function calling (via prompt_targets), applying guardrails, and enabling critical
-features like metrics and tracing. For the full configuration reference of `arch_config.yaml` see :ref:`here `.
+features like metrics and tracing. For the full configuration reference of `arch_config.yaml` see :ref:`here `.

 **Downstream(Ingress)**: An downstream client (web application, etc.) connects to Arch, sends prompts, and receives responses.
@@ -37,11 +37,11 @@ code to LLMs.
 undifferentiated work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by
 Arch. For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt
 so that you can build faster, more accurate retrieval (RAG) apps. To support agentic apps, like scheduling travel plans or sharing comments on a
-document - via prompts, Arch uses its function calling abilities to extract critical information fromthe incoming prompt (or a set of
+document - via prompts, Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of
 prompts) needed by a downstream backend API or function call before calling it directly.

 **Model Serving**: Arch is a set of `two` self-contained processes that are designed to run alongside your application servers
-(or on a separate hostconnected via a network).The :ref:`model serving ` process helps Arch make intelligent decisions
+(or on a separate host connected via a network). The :ref:`model serving ` process helps Arch make intelligent decisions
 about the incoming prompts. The model server is designed to call the (fast) purpose-built LLMs in Arch.

 **Error Target**: :ref:`Error targets ` are those endpoints that receive forwarded errors from Arch when issues arise,
diff --git a/docs/source/get_started/intro_to_arch.rst b/docs/source/get_started/intro_to_arch.rst
index 14a84bab..de00c974 100644
--- a/docs/source/get_started/intro_to_arch.rst
+++ b/docs/source/get_started/intro_to_arch.rst
@@ -13,7 +13,7 @@ Past the thrill of an AI demo, have you found yourself hitting these walls? You
 - You're **trapped in tedious prompting work** to clarify inputs and user intents?
 - You're **wasting cycles** choosing and integrating **code for observability** instead of it just happening transparently?

-And you think to youself, can't I move faster by focusing on higher-level objectives in a language and framework agnostic way? Well, you can!
+And you think to yourself, can't I move faster by focusing on higher-level objectives in a language and framework agnostic way? Well, you can!

 .. figure:: /_static/img/arch_network_diagram_high_level.png
    :width: 100%
@@ -35,7 +35,7 @@ Arch takes a dependency on Envoy and is a self-contained process that is designe
 Arch uses Envoy's HTTP connection management subsystem, HTTP L7 filtering and telemetry capabilities to extend the
 functionality exclusively for prompts and LLMs. This gives Arch several advantages:

-* Arch builds on Envoy's proven success. Envoy is used at masssive scale by the leading technology companies of our time including `AirBnB `_, `Dropbox `_, `Google `_, `Reddit `_, `Stripe `_, etc. Its battle tested and scales linearly with usage and enables developers to focus on what really matters: application features and business logic.
+* Arch builds on Envoy's proven success. Envoy is used at massive scale by the leading technology companies of our time including `AirBnB `_, `Dropbox `_, `Google `_, `Reddit `_, `Stripe `_, etc. It's battle-tested, scales linearly with usage, and enables developers to focus on what really matters: application features and business logic.

 * Arch works with any application language. A single Arch deployment can act as gateway for AI applications written in Python, Java, C++, Go, Php, etc.
@@ -54,7 +54,7 @@ These LLMs are designed to be best-in-class for critical prompt-related tasks li
 With prompt guardrails you can prevent ``jailbreak attempts`` present in user's prompts without having to write a single
 line of code. To learn more about how to configure guardrails available in Arch, read :ref:`Prompt Guard `.

-**Traffic Management:** Arch offers several capabilities for LLM calls originating from your applications, including smart retries on errors from upstream LLMs, and automatic cutover to other LLMs configured in Arch for continuous availability and disaster recovery scenarios.
+**Traffic Management:** Arch offers several capabilities for LLM calls originating from your applications, including smart retries on errors from upstream LLMs, and automatic cut-over to other LLMs configured in Arch for continuous availability and disaster recovery scenarios.
 Arch extends Envoy's `cluster subsystem `_ to manage upstream connections to LLMs so that you can build resilient AI applications.

 **Front/edge Gateway:** There is substantial benefit in using the same software at the edge (observability, traffic shaping algorithms, applying guardrails, etc.) as for outbound LLM inference use cases.
diff --git a/docs/source/guides/agent_routing.rst b/docs/source/guides/agent_routing.rst
index 418d674d..5effad69 100644
--- a/docs/source/guides/agent_routing.rst
+++ b/docs/source/guides/agent_routing.rst
@@ -81,7 +81,7 @@ the workflow, configuration, and implementation of Agent routing and hand off in
         return agent.handle(req)

 .. note::
-   The above example demonstrates a simple implementation of Agent Routing and Hand Off using FastAPI. For the full implemenation of this example
+   The above example demonstrates a simple implementation of Agent Routing and Hand Off using FastAPI. For the full implementation of this example
    please see our `GitHub demo `_.

 Example Use Cases
@@ -96,10 +96,10 @@ Best Practices and Tips
 ------------------------
 When implementing Agent Routing and Hand Off in your applications, consider these best practices:

-- Clearly Define Agent Responsibilities: Ensure each agent or human endpoint has a clear, specific description of the prompts they handle, reducing misrouting.
-- Monitor and Optimize Routes: Regularly review how prompts are routed to adjust and optimize agent definitions and configurations.
+- Clearly define agent responsibilities: Ensure each agent or human endpoint has a clear, specific description of the prompts they handle, reducing mis-routing.
+- Monitor and optimize routes: Regularly review how prompts are routed to adjust and optimize agent definitions and configurations.

 .. note::
-   To observe traffic to and from agents, please read more about :ref:`observabiliuty ` in Arch.
+   To observe traffic to and from agents, please read more about :ref:`observability ` in Arch.

 By carefully configuring and managing your Agent routing and hand off, you can significantly improve your application's responsiveness, performance, and overall user satisfaction.
diff --git a/docs/source/resources/configuration_reference.rst b/docs/source/resources/configuration_reference.rst
index ef7baceb..af60a642 100644
--- a/docs/source/resources/configuration_reference.rst
+++ b/docs/source/resources/configuration_reference.rst
@@ -1,9 +1,9 @@
-.. _configuration_refernce:
+.. _configuration_reference:

 Configuration Reference
 =======================

-The following is a complete reference of the ``arch_conifg.yml`` that controls the behavior of a single instance of
+The following is a complete reference of the ``arch_config.yml`` that controls the behavior of a single instance of
 the Arch gateway. This where you enable capabilities like routing to upstream LLm providers, defining prompt_targets
 where prompts get routed to, apply guardrails, and enable critical agent observability features.
diff --git a/docs/source/resources/includes/arch_config_full_reference.yaml b/docs/source/resources/includes/arch_config_full_reference.yaml
index 90bbef56..5ef2639c 100644
--- a/docs/source/resources/includes/arch_config_full_reference.yaml
+++ b/docs/source/resources/includes/arch_config_full_reference.yaml
@@ -48,8 +48,8 @@ llm_providers:
 # provides a way to override default settings for the arch system
 overrides:
-  # By default Arch uses an NLI + embedding approach to match an incomming prompt to a prompt target.
-  # The intent matching threshold is kept at 0.80, you can overide this behavior if you would like
+  # By default Arch uses an NLI + embedding approach to match an incoming prompt to a prompt target.
+  # The intent matching threshold is kept at 0.80, you can override this behavior if you would like
   prompt_target_intent_matching_threshold: 0.60

   # default system prompt used by all prompt targets