diff --git a/README.md b/README.md
index c8717dd5..8a0f35b3 100644
--- a/README.md
+++ b/README.md
@@ -2,11 +2,11 @@
-Build fast, robust, and personalized GenAI applications.
+Build fast, robust, and personalized GenAI applications (agents, assistants, etc.)
Arch is an intelligent [Layer 7](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) gateway designed for generative AI apps, AI agents, and co-pilots that work with prompts. Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting [jailbreak](https://github.com/verazuo/jailbreak_llms) attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
- Arch is built on and by the core contributors of the popular [Envoy Distributed Proxy](https://www.envoyproxy.io/) with the belief that:
+ Arch is built on (and by the core contributors of) the wildly popular and robust [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:
*Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – all outside business logic.*
diff --git a/docs/source/_static/css/arch.css b/docs/source/_static/css/arch.css
index 08f9e5c4..ca14dfda 100644
--- a/docs/source/_static/css/arch.css
+++ b/docs/source/_static/css/arch.css
@@ -1,4 +1,5 @@
-.bd-article {
- padding-left: 3rem;
- padding-right: 3rem;
-}
+@import url("theme.css");
+
+body {
+ font-size: 1em;
+}
\ No newline at end of file
diff --git a/docs/source/_static/img/arch-logo-clear.png b/docs/source/_static/img/arch-logo-clear.png
deleted file mode 100644
index 68108a08..00000000
Binary files a/docs/source/_static/img/arch-logo-clear.png and /dev/null differ
diff --git a/docs/source/_static/img/network-topology-app-server.jpg b/docs/source/_static/img/network-topology-app-server.jpg
deleted file mode 100644
index 87d7dc15..00000000
Binary files a/docs/source/_static/img/network-topology-app-server.jpg and /dev/null differ
diff --git a/docs/source/configuration_reference.rst b/docs/source/configuration_reference.rst
index cec47656..23cd94f6 100644
--- a/docs/source/configuration_reference.rst
+++ b/docs/source/configuration_reference.rst
@@ -2,7 +2,7 @@ Configuration Reference
============================
The following is a complete reference of the prompt_config.yml file that controls the behavior of an Arch gateway.
-We've kept things simple (less than 100 lines) and held off on exposing additional functionality (for e.g. suppporting
+We've kept things simple (less than 80 lines) and held off on exposing additional functionality (e.g. supporting
push observability stats, managing prompt-endpoints as virtual clusters, exposing more load balancing options, etc). We
believe that simple things should be simple, so we offer good defaults for developers, letting them spend more
of their time building features unique to their AI experience.
diff --git a/docs/source/getting_started/getting_started.rst b/docs/source/getting_started/getting_started.rst
index e661276d..aca639a2 100644
--- a/docs/source/getting_started/getting_started.rst
+++ b/docs/source/getting_started/getting_started.rst
@@ -3,5 +3,18 @@ Getting Started
This section gets you started with a very simple configuration and provides some example configurations.
+.. sidebar:: Pre-requisites
+
+ In order for you to get started, please make sure that `Docker `_
+ and `Python `_ are installed locally.
+
+ As the examples use the pre-built `Arch Docker images `_,
+ they should work on the following architectures:
+
+ - x86_64
+ - ARM 64
+
+
The fastest way to get started using Arch is installing `pre-built binaries `_.
You can also build it from source.
+
diff --git a/docs/source/getting_started/use_cases/function_calling.rst b/docs/source/getting_started/use_cases/function_calling.rst
index 78e3d65a..20f4e17d 100644
--- a/docs/source/getting_started/use_cases/function_calling.rst
+++ b/docs/source/getting_started/use_cases/function_calling.rst
@@ -1,17 +1,25 @@
+.. _arch_function_calling_agentic_guide:
+
Agentic (Text-to-Action) Apps
==============================
-Arch helps you easily personalize your applications by enabling calls to application-specific (API) operations
+Arch helps you easily personalize your applications by calling application-specific (API) functions
via user prompts. This involves any predefined functions or APIs you want to expose to users to perform tasks,
-gather information, or manipulate data. With function calling, you have flexibility to support “agentic” apps
-tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.
+gather information, or manipulate data. This capability is generally referred to as **function calling**, where
+you have the flexibility to support “agentic” apps tailored to specific use cases - from updating insurance
+claims to creating ad campaigns - via prompts.
Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation with
the user to gather any missing parameters and makes API calls so that you can focus on writing business logic.
-Arch does this via its purpose-built Arch-FC1B LLM - the fastest (200ms p90 - 10x faser than GPT-4o) and cheapest
-(100x than GPT-40) function-calling LLM that matches performance with frontier models.
+Arch does this via its purpose-built :ref:`Arch-FC LLM ` - the fastest (200ms p90 - 10x faster than GPT-4o)
+and cheapest (100x cheaper than GPT-4o) function-calling LLM that matches performance with frontier models.
______________________________________________________________________________________________
+.. image:: /_static/img/function-calling-network-flow.jpg
+ :width: 100%
+ :align: center
+
+
Single Function Call
--------------------
In the most common scenario, users will request a single action via prompts, and Arch efficiently processes the
@@ -54,4 +62,10 @@ When enabling multiple function calling, define the prompt targets in a way that
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
the user's intent.
-Example of Multiple Prompt Targets in YAML:
\ No newline at end of file
+Example of Multiple Prompt Targets in YAML:
+
+.. literalinclude:: /_config/function-calling-network-agent.yml
+ :language: yaml
+ :linenos:
+ :emphasize-lines: 16-37
+ :caption: Define prompt targets that enable users to engage with the API and backend functions of an app
\ No newline at end of file
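The parallel-versus-sequential triggering described above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the Arch SDK: it assumes an upstream function-calling step has already resolved each prompt target to a local Python callable plus its arguments.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(calls, parallel=False):
    """Run resolved prompt-target calls sequentially or in parallel.

    `calls` is a list of (callable, kwargs) pairs (hypothetical shape) that
    an upstream function-calling step has already extracted from the prompt.
    """
    if parallel:
        with ThreadPoolExecutor() as pool:
            # Submit everything first, then collect results in call order.
            futures = [pool.submit(fn, **kw) for fn, kw in calls]
            return [f.result() for f in futures]
    # Sequential mode: each call completes before the next one starts.
    return [fn(**kw) for fn, kw in calls]
```

Whether to run targets in parallel or sequentially would depend on whether one call's output feeds the next.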
diff --git a/docs/source/getting_started/use_cases/rag.rst b/docs/source/getting_started/use_cases/rag.rst
index 4c6f4b94..23b84453 100644
--- a/docs/source/getting_started/use_cases/rag.rst
+++ b/docs/source/getting_started/use_cases/rag.rst
@@ -1,10 +1,12 @@
+.. _arch_rag_guide:
+
Retrieval-Augmented (RAG)
-====================================
+=========================
The following section describes how Arch can help you build faster, smarter and more accurate
Retrieval-Augmented Generation (RAG) applications.
-Intent-drift detection
+Intent-drift Detection
----------------------
Developers struggle to handle `follow-up `_
@@ -65,8 +67,8 @@ You can used the last set of messages that match to an intent to prompt an LLM,
improved retrieval, etc. With Arch and a few lines of code, you can improve retrieval accuracy, lower overall
token cost and dramatically improve the speed of responses back to users.
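One way to apply intent-drift detection is to prompt the LLM with only the trailing messages that share the current intent. The sketch below is illustrative Python, assuming a hypothetical ``intent`` label on each message (however your integration surfaces Arch's intent metadata):

```python
def messages_for_current_intent(messages):
    """Keep only the trailing run of messages sharing the latest intent.

    Each message is assumed to look like {"role": ..., "content": ...,
    "intent": ...}; the 'intent' field is a hypothetical stand-in for
    whatever metadata the gateway attaches on intent drift.
    """
    if not messages:
        return []
    current = messages[-1]["intent"]
    kept = []
    # Walk backwards until the intent changes, then stop.
    for msg in reversed(messages):
        if msg["intent"] != current:
            break
        kept.append(msg)
    return list(reversed(kept))
```

Prompting with this pruned window is what lowers token cost and speeds up responses after a topic change.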
-Smarter retrival with parameter extraction
-------------------------------------------
+Parameter Extraction for RAG
+----------------------------
To build RAG (Retrieval-Augmented Generation) applications, you can configure prompt targets with parameters,
enabling Arch to retrieve critical information in a structured way for processing. This approach improves the
diff --git a/docs/source/intro/architecture/prompt_processing/prompt_processing.rst b/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
index 967a82f1..0a2866c6 100644
--- a/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
+++ b/docs/source/intro/architecture/prompt_processing/prompt_processing.rst
@@ -1,20 +1,23 @@
.. _arch_overview_prompt_handling:
-Prompt Processing
-=================
+Prompts
+=======
-Arch's model serving process is designed to securely handle incoming prompts by detecting jailbreak attempts,
-processing the prompts, and routing them to appropriate functions or prompt targets based on intent detection.
-The serving workflow integrates several key components, each playing a crucial role in managing generative
-AI interactions:
+Arch's primary design point is to securely accept, process and handle prompts. To do that effectively,
+Arch relies on Envoy's HTTP `connection management `_
+subsystem, and on a prompt-handler subsystem engineered with purpose-built :ref:`LLMs ` to implement
+critical functionality on behalf of developers so that you can stay focused on business logic.
-Jailbreak and Toxicity Guardrails
----------------------------------
+Prompt Guardrails
+-----------------
-Arch employs Arch-Guard, a security layer powered by a compact and high-performimg LLM that monitors incoming prompts to detect
-and reject jailbreak attempts, ensuring that unauthorized or harmful behaviors are intercepted early in the process. Arch-Guard
-is the leading model in the industry for jailbreak and toxicity detection. Configuring guardrails is super simple. See example
-below.
+Arch is engineered with :ref:`Arch-Guard `, an industry-leading safety layer, powered by a
+compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts and
+several other safety-related concerns, ensuring that unauthorized or harmful behaviors are intercepted early in
+the process. Arch-Guard is a composite model combining work from the industry-leading Meta Llama models and
+purpose-tuned models that offer exceptional overall performance.
+
+To add prompt guardrails, see example below:
.. literalinclude:: /_config/getting-started.yml
:language: yaml
@@ -22,36 +25,118 @@ below.
:emphasize-lines: 24-27
:caption: :download:`arch-getting-started.yml `
+.. Note::
+ As a roadmap item, Arch will expose the ability for developers to define custom guardrails via Arch-Guard-v2,
+ which would enforce instructions defined by the application developer to control conversational flow. To
+ offer feedback on our roadmap, please visit our `GitHub page `_.
+
Prompt Targets
----------------
+--------------
-Once a prompt passes the security checks, Arch processes the content and identifies if any specific functions need to be called.
-Arch-FC1B, a dedicated function calling module, extracts critical information from the prompt and executes the necessary
-backend API calls or internal functions. This capability allows for efficient handling of agentic tasks, such as scheduling or
-data retrieval, by dynamically interacting with backend services.
+Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation
+and identifies where to forward the conversation via its essential ``prompt_targets`` primitive. Prompt targets
+are endpoints that receive the prompts processed by Arch. For example, Arch enriches incoming prompts with
+metadata, such as when a user's intent has changed, so that you can build faster, more accurate RAG apps.
+
+Configuring ``prompt_targets`` is simple. See example below:
+
+.. literalinclude:: /_config/getting-started.yml
+ :language: yaml
+ :linenos:
+ :emphasize-lines: 29-38
+ :caption: :download:`arch-getting-started.yml `
-.. image:: /_static/img/function-calling-network-flow.jpg
- :width: 100%
- :align: center
Intent Detection and Prompt Matching:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Arch uses Natural Language Inference (NLI) and embedding-based approaches to detect the intent of each incoming prompt.
-This intent detection phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt
-is forwarded to the most appropriate endpoint. Arch’s intent detection framework considers both the name and description of each prompt target,
-enhancing accuracy in forwarding decisions.
+Arch uses fast Natural Language Inference (NLI) and embedding approaches to first detect the intent of each
+incoming prompt. This intent detection phase analyzes the prompt's content and matches it against predefined
+prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint. Arch’s intent
+detection framework considers both the name and description of each prompt target, and uses a matching score
+composed of an NLI score and cosine similarity to enhance accuracy in forwarding decisions.
-- **Embedding Approaches**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match,
- ensuring that the prompt is handled by the correct downstream service.
+- **Embeddings**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies
+ the closest match, ensuring that the prompt is handled by the correct downstream service.
-- **NLI Integration**: Natural Language Inference techniques further refine the matching process by evaluating the semantic alignment
- between the prompt and potential targets.
+- **NLI**: NLI techniques further refine the matching process by evaluating the semantic alignment between the
+ prompt and potential targets.
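A composite NLI-plus-cosine-similarity match can be illustrated with a small Python sketch. The equal weighting and the per-target score dictionaries below are illustrative assumptions, not Arch's actual internals:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_target(prompt_vec, nli_scores, target_vecs, weight=0.5):
    """Pick the prompt target with the best composite score.

    `target_vecs` maps target name -> description embedding, and
    `nli_scores` maps target name -> an entailment score in [0, 1];
    both are hypothetical inputs for illustration.
    """
    best, best_score = None, -1.0
    for name, vec in target_vecs.items():
        score = weight * cosine(prompt_vec, vec) + (1 - weight) * nli_scores[name]
        if score > best_score:
            best, best_score = name, score
    return best, best_score
```

Combining the two signals guards against cases where embeddings alone rank a superficially similar but semantically wrong target highest.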
-Forwarding Prompts to Downstream Targets:
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-After determining the correct target, Arch forwards the prompt to the designated endpoint, such as an LLM host or API service.
-This seamless routing mechanism integrates with Arch's broader ecosystem, enabling efficient communication and response generation tailored to the user's intent.
+Agentic Apps via Prompt Targets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Arch's model serving process combines robust security measures with advanced intent detection and function calling capabilities, creating a reliable and adaptable environment for managing generative AI workflows. This approach not only enhances the accuracy and relevance of responses but also safeguards against malicious usage patterns, aligning with best practices in AI governance.
+To support agentic apps - like scheduling travel plans or sharing comments on a document - via prompts, Arch uses
+its function-calling abilities to extract the critical information from the incoming prompt (or a set of prompts)
+needed by a downstream backend API or function before calling it directly. For more details on how you can
+build agentic applications using Arch, see our full guide :ref:`here `.
+
+.. Note::
+ :ref:`Arch-FC ` is the dedicated agentic model engineered into Arch to extract information from
+ a (set of) prompts and execute the necessary backend API calls. This allows for efficient handling of agentic tasks,
+ such as scheduling or data retrieval, by dynamically interacting with backend services. Arch-FC is a flagship 1.3
+ billion parameter model that matches the performance of frontier models like Claude 3.5 Sonnet and GPT-4, while
+ being 100x cheaper ($0.05/M tokens hosted) and 10x faster (p50 latencies of 200ms).
+
+Prompting LLMs
+--------------
+Arch is a single piece of software designed to manage both ingress and egress prompt traffic, drawing its
+distributed proxy nature from the robust `Envoy `_. This makes it extremely efficient and capable
+of handling upstream connections to LLMs. If your application originates calls to an API-based LLM, simply use
+Arch's Python or JavaScript client SDK to send traffic to the desired LLM. By sending traffic through Arch,
+you can propagate traces, manage and monitor traffic, apply rate limits, and utilize a large set of traffic management
+capabilities in a central place.
+
+.. Attention::
+ When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the
+ ``llm_providers`` configuration section in the ``prompt_config.yml`` file. Arch binds itself to a local address such as
+ 127.0.0.1:9000/v1 or a DNS-based address like arch.local:9000/v1 for outgoing traffic.
+
+Example: Using the Arch Python SDK
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: python
+
+ from arch_client import ArchClient
+
+ # Initialize the Arch client
+ client = ArchClient(base_url="http://127.0.0.1:9000/v1")
+
+ # Define your LLM provider and prompt
+ llm_provider = "openai"
+ prompt = "What is the capital of France?"
+
+ # Send the prompt to the LLM through Arch
+ response = client.completions.create(llm_provider=llm_provider, prompt=prompt)
+
+ # Print the response
+ print("LLM Response:", response)
+
+Example: Using OpenAI Client with Arch as an Egress Gateway
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: python
+
+ import openai
+
+ # Set the OpenAI API base URL to the Arch gateway endpoint
+ openai.api_base = "http://127.0.0.1:9000/v1"
+
+ # The real API key is configured in Arch's gateway; the OpenAI client just needs a placeholder value
+ openai.api_key = "placeholder"
+
+ # Use the OpenAI client as usual
+ response = openai.Completion.create(
+ model="text-davinci-003",
+ prompt="What is the capital of France?"
+ )
+
+ print("OpenAI Response:", response.choices[0].text.strip())
+
+
+In these examples:
+
+- The ArchClient is used to send traffic directly through the Arch egress proxy to the LLM of your choice, such as OpenAI.
+- The OpenAI client is configured to route traffic via Arch by setting its API base to ``http://127.0.0.1:9000/v1``,
+  assuming Arch is running locally and bound to that address and port.
+
+This setup allows you to take advantage of Arch's advanced traffic management features while interacting with LLM APIs like OpenAI.
\ No newline at end of file
diff --git a/docs/source/intro/what_is_arch.rst b/docs/source/intro/what_is_arch.rst
index 280feb1e..ede19a08 100644
--- a/docs/source/intro/what_is_arch.rst
+++ b/docs/source/intro/what_is_arch.rst
@@ -3,7 +3,7 @@ What is Arch
Arch is an intelligent `(Layer 7) `_ gateway
designed for generative AI apps, AI agents, and Co-pilots that work with prompts. Engineered with purpose-built
-:ref:`LLMs `, Arch handles the critical but undifferentiated tasks related to the handling and
+:ref:`LLMs `, Arch handles all the critical but undifferentiated tasks related to the handling and
processing of prompts, including detecting and rejecting `jailbreak `_
attempts, intelligently calling “backend” APIs to fulfill the user's request represented in a prompt, routing to
and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions
@@ -34,11 +34,11 @@ functionality exclusively for prompts and LLMs. This gives Arch several advantag
* Arch works with any application language. A single Arch deployment can act as a gateway for AI applications
  written in Python, Java, C++, Go, PHP, etc.
-* Arch can be deployed and upgraded quickly across your infrastructure transparently without horrid pain of
- deploying library upgrades in your applications.
+* Arch can be deployed and upgraded quickly across your infrastructure transparently without the horrid pain
+ of deploying library upgrades in your applications.
-**Engineered with Fast LLMs:** Arch is engineered with specialized (sub-billion) LLMs that are desgined for fast,
-cost-effective and acurrate handling of prompts. These :ref:`LLMs ` are designed to be
+**Engineered with Fast LLMs:** Arch is engineered with specialized (sub-billion parameter) LLMs that are designed
+for fast, cost-effective and accurate handling of prompts. These :ref:`LLMs ` are built to be
best-in-class for critical prompt-related tasks like:
* **Function/API Calling:** Arch helps you easily personalize your applications by enabling calls to
@@ -46,8 +46,8 @@ best-in-class for critcal prompt-related tasks like:
you want to expose to users to perform tasks, gather information, or manipulate data. With function calling,
you have flexibility to support "agentic" experiences tailored to specific use cases - from updating insurance
claims to creating ad campaigns - via prompts. Arch analyzes prompts, extracts critical information from
- prompts, engages in lightweight conversation with the user to gather any missing parameters and makes API
- calls so that you can focus on writing business logic. For more details, read :ref:`prompt processing `.
+ prompts, engages in lightweight conversation to gather any missing parameters and makes API calls so that you can
+ focus on writing business logic. For more details, read :ref:`prompt processing `.
* **Prompt Guardrails:** Arch helps you improve the safety of your application by applying prompt guardrails in
a centralized way for better governance hygiene. With prompt guardrails you can prevent `jailbreak `_
diff --git a/docs/source/llms/llms.rst b/docs/source/llms/llms.rst
index 947b13dd..e2925f48 100644
--- a/docs/source/llms/llms.rst
+++ b/docs/source/llms/llms.rst
@@ -2,19 +2,21 @@
LLMs
====
-Arch utilizes purpose-built, industry leading, LLMs to handle the crufty and undifferentiated
-work around accepting, handling and processing prompts. The following
-Arch-Guard
-----------
-LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s
-intended behavior of the LLM.Arch-Guard is a classifier model trained on a large corpus of attacks, capable of detecting explicitly
-malicious prompts (and toxicity).
+Arch utilizes purpose-built, industry-leading LLMs to handle the crufty and undifferentiated work around
+accepting, handling and processing prompts. The following sections talk about some of the core models that
+are built into Arch.
-The model is useful as a starting point for identifying and guardrailing against the most risky realistic inputs to
-LLM-powered applications. Our goal in embedding Arch-Guard in the Arch gateway is to enable developers to focus on their business logic
-and factor out security and safety outside application logic. Wth Arch-Guard= developers can take to significantly reduce prompt attack
-risk while maintaining control over the user experience.
+Arch-Guard-v1
+-------------
+LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to
+subvert the developer’s intended behavior of the LLM. Arch-Guard-v1 is a classifier model trained on a large
+corpus of attacks, capable of detecting explicitly malicious prompts (and toxicity).
+
+The model is useful as a starting point for identifying and guardrailing against the riskiest realistic
+inputs to LLM-powered applications. Our goal in embedding Arch-Guard in the Arch gateway is to enable developers
+to focus on their business logic and factor security and safety out of application logic. With Arch-Guard-v1,
+developers can significantly reduce prompt attack risk while maintaining control over the user experience.
Below are the test results showing the strength of our model compared to Prompt-Guard from `Meta Llama `_.
@@ -135,5 +137,27 @@ Below is our test results of the strength of our model as compared to Prompt-Gua
-Arch-FC1B
----------
\ No newline at end of file
+Arch-FC
+-------
+Arch-FC is a lean, powerful and cost-effective agentic model designed for function calling scenarios.
+You can run Arch-FC locally, or use the cloud-hosted version for as little as $0.05/M tokens (100x cheaper
+than GPT-4o), with a p50 latency of 200ms (5x faster than GPT-4o), while matching frontier model performance.
+
+.. Note::
+ Function calling helps you personalize the GenAI experience by calling application-specific operations via
+ prompts. This involves any predefined functions or APIs you want to expose to perform tasks, gather
+ information, or manipulate data - via prompts.
+
+ You can get started with function calling simply by configuring a prompt target with a name, description
+ and the set of parameters needed by a specific backend function or hosted API. The name and description help
+ Arch-FC match a user prompt to a function or API that can process it.
+
+By using Arch-FC, Arch enables you to easily build agentic workflows tailored to domain-specific use cases -
+from updating insurance claims to creating ad campaigns. Arch-FC analyzes prompts, extracts critical information
+from prompts, engages in lightweight conversations with the user to gather any missing parameters, and then
+hands control back to Arch to make the API call to your hosted backend. Arch-FC handles the muck of information
+extraction so that you can focus on the business logic of your application.
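The gather-missing-parameters loop can be sketched as follows. This is an illustrative Python sketch, assuming a hypothetical parameter schema on each prompt target rather than Arch-FC's actual interface:

```python
def missing_parameters(target_params, extracted):
    """Return required parameter names not yet extracted from the prompts.

    `target_params` is a hypothetical schema: a list of dicts like
    {"name": ..., "required": True/False}, mirroring the parameters a
    prompt target declares; `extracted` maps name -> extracted value.
    """
    return [p["name"] for p in target_params
            if p.get("required") and p["name"] not in extracted]

def follow_up_question(missing):
    """Compose a lightweight clarification prompt for missing parameters."""
    if not missing:
        return None  # everything present; the API call can proceed
    return f"Could you provide the following: {', '.join(missing)}?"
```

Once nothing is missing, control returns to the gateway to make the backend API call.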
+
diff --git a/docs/source/observability/stats.rst b/docs/source/observability/stats.rst
index fcfe4806..3313520f 100644
--- a/docs/source/observability/stats.rst
+++ b/docs/source/observability/stats.rst
@@ -1,3 +1,7 @@
-Metrics and Statistics
-======================
+Monitoring
+==========
+Arch offers several monitoring metrics that help you understand three critical aspects of your application:
+latency, token usage, and error rates for each upstream LLM provider. Latency measures how quickly your
+application responds to users, and includes metrics like time to first token (TTFT), time per output
+token (TOT), and the total latency as perceived by users.
\ No newline at end of file
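As an illustration of how these latency metrics relate, the sketch below derives time to first token, mean time per output token, and total latency from raw token timestamps (a hypothetical helper, not an Arch API):

```python
import statistics

def latency_metrics(request_start, token_times):
    """Derive TTFT, mean per-output-token time, and total latency.

    `request_start` is the time the request was sent; `token_times` are
    the arrival times of each streamed output token (same clock).
    """
    ttft = token_times[0] - request_start          # time to first token
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    tot = statistics.mean(gaps) if gaps else 0.0   # time per output token
    total = token_times[-1] - request_start        # user-perceived latency
    return {"ttft": ttft, "tot": tot, "total": total}
```

Tracking these per upstream provider makes regressions in streaming speed easy to attribute.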
diff --git a/docs/source/observability/tracing.rst b/docs/source/observability/tracing.rst
index acea1dcf..d60421ce 100644
--- a/docs/source/observability/tracing.rst
+++ b/docs/source/observability/tracing.rst
@@ -175,33 +175,33 @@ specialized services and external systems.
Trace Breakdown:
****************
-- **Customer Interaction**:
+- Customer Interaction:
- Span 1: Customer initiates a request via the AI-powered chatbot for billing support (e.g., asking for payment details).
-- **AI Agent 1 (Main - Arch)**:
+- AI Agent 1 (Main - Arch):
- Span 2: AI Agent 1 (Main) processes the request and identifies it as related to billing, forwarding the request
to an external payment service.
- Span 3: AI Agent 1 determines that additional technical support is needed for processing and forwards the request
to AI Agent 2.
-- **External Payment Service**:
+- External Payment Service:
- Span 4: The external payment service processes the payment-related request (e.g., verifying payment status) and sends
the response back to AI Agent 1.
-- **AI Agent 2 (Tech - Arch)**:
+- AI Agent 2 (Tech - Arch):
- Span 5: AI Agent 2, responsible for technical queries, processes a request forwarded from AI Agent 1 (e.g., checking for
any account issues).
- Span 6: AI Agent 2 forwards the query to Internal Tech Support for further investigation.
-- **Internal Tech Support**:
+- Internal Tech Support:
- Span 7: Internal Tech Support processes the request (e.g., resolving account access issues) and responds to AI Agent 2.
-- **AI Agent 3 (Orders - Arch)**:
+- AI Agent 3 (Orders - Arch):
- Span 8: AI Agent 3 handles order-related queries. AI Agent 1 forwards the request to AI Agent 3 after payment verification
is completed.
- Span 9: AI Agent 3 forwards a request to the Inventory Management system to confirm product availability for a pending order.
-- **Inventory Management**:
+- Inventory Management:
- Span 10: The Inventory Management system checks stock and availability and returns the information to AI Agent 3.
Integrating with Tracing Tools