Arch helps you easily personalize your applications by calling application-specific (API) functions
via user prompts. This involves any predefined functions or APIs you want to expose to users to perform tasks,
-gather information, or manipulate data. This capability is generally referred to as function calling, where
+gather information, or manipulate data. This capability is generally referred to as function calling, where
you have the flexibility to support “agentic” apps tailored to specific use cases - from updating insurance
claims to creating ad campaigns - via prompts.
Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation with
the user to gather any missing parameters and makes API calls so that you can focus on writing business logic.
-Arch does this via its purpose-built Arch-Function - the fastest (200ms p90 - 10x faser than GPT-4o)
-and cheapest (100x than GPT-40) function-calling LLM that matches performance with frontier models.
+Arch does this via its purpose-built Arch-Function - the fastest (200ms p90 - 10x faster than GPT-4o)
+and cheapest (100x cheaper than GPT-4o) function-calling LLM that matches performance with frontier models.
@@ -191,9 +191,9 @@ is how you would go about enabling this scenario with Arch:
system_prompt: |
  You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

-prompt_targets:
-  - name: network_qa
-    endpoint:
+prompt_targets:
+  - name: network_qa
+    endpoint:
      name: app_server
      path: /agent/network_summary
    description: Handle general Q/A related to networking.
@@ -207,22 +207,22 @@ is how you would go about enabling this scenario with Arch:
      - name: device_ids
        type: list
        description: A list of device identifiers (IDs) to reboot.
-        required: true
-  - name: device_summary
-    description: Retrieve statistics for specific devices within a time range
-    endpoint:
-      name: app_server
-      path: /agent/device_summary
-    parameters:
-      - name: device_ids
-        type: list
-        description: A list of device identifiers (IDs) to retrieve statistics for.
-        required: true  # device_ids are required to get device statistics
-      - name: time_range
-        type: int
-        description: Time range in days for which to gather device statistics. Defaults to 7.
-        default: "7"
-
+        required: true
+  - name: device_summary
+    description: Retrieve statistics for specific devices within a time range
+    endpoint:
+      name: app_server
+      path: /agent/device_summary
+    parameters:
+      - name: device_ids
+        type: list
+        description: A list of device identifiers (IDs) to retrieve statistics for.
+        required: true  # device_ids are required to get device statistics
+      - name: time_range
+        type: int
+        description: Time range in days for which to gather device statistics. Defaults to 7.
+        default: 7
+
# Arch creates round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
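The required/default semantics in the parameters block above can be modeled in a few lines of Python. This is an illustrative sketch of the config semantics only; the function and variable names are hypothetical, not part of Arch:

```python
# Illustrative model of a prompt target's parameter spec, as in the
# device_summary target above: required parameters with no extracted value are
# reported as missing, optional ones fall back to their declared defaults.
# This mirrors the config semantics; it is not Arch's implementation.

def resolve_params(spec, extracted):
    """Return (resolved, missing) for a parameter spec and extracted values."""
    resolved, missing = {}, []
    for p in spec:
        name = p["name"]
        if name in extracted:
            resolved[name] = extracted[name]
        elif p.get("required"):
            missing.append(name)  # Arch would ask the user for these
        elif "default" in p:
            resolved[name] = p["default"]
    return resolved, missing

device_summary_params = [
    {"name": "device_ids", "type": "list", "required": True},
    {"name": "time_range", "type": "int", "default": 7},
]

resolved, missing = resolve_params(device_summary_params, {"device_ids": ["d1", "d2"]})
print(resolved)  # {'device_ids': ['d1', 'd2'], 'time_range': 7}
print(missing)   # []
```

When a required parameter is missing, Arch engages the user in lightweight conversation to gather it; the sketch only reports what would need to be gathered.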
@@ -324,9 +324,9 @@ the user’s intent.
system_prompt: |
  You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

-prompt_targets:
-  - name: network_qa
-    endpoint:
+prompt_targets:
+  - name: network_qa
+    endpoint:
      name: app_server
      path: /agent/network_summary
    description: Handle general Q/A related to networking.
@@ -340,22 +340,22 @@ the user’s intent.
      - name: device_ids
        type: list
        description: A list of device identifiers (IDs) to reboot.
-        required: true
-  - name: device_summary
-    description: Retrieve statistics for specific devices within a time range
-    endpoint:
-      name: app_server
-      path: /agent/device_summary
-    parameters:
-      - name: device_ids
-        type: list
-        description: A list of device identifiers (IDs) to retrieve statistics for.
-        required: true  # device_ids are required to get device statistics
-      - name: time_range
-        type: int
-        description: Time range in days for which to gather device statistics. Defaults to 7.
-        default: "7"
-
+        required: true
+  - name: device_summary
+    description: Retrieve statistics for specific devices within a time range
+    endpoint:
+      name: app_server
+      path: /agent/device_summary
+    parameters:
+      - name: device_ids
+        type: list
+        description: A list of device identifiers (IDs) to retrieve statistics for.
+        required: true  # device_ids are required to get device statistics
+      - name: time_range
+        type: int
+        description: Time range in days for which to gather device statistics. Defaults to 7.
+        default: 7
+
# Arch creates round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
diff --git a/build_with_arch/rag.html b/build_with_arch/rag.html
index fafdf3c5..580b9b97 100755
--- a/build_with_arch/rag.html
+++ b/build_with_arch/rag.html
@@ -157,7 +157,7 @@
Retrieval-Augmented Generation (RAG) applications.
Parameter Extraction for RAG
-To build RAG (Retrieval-Augmented Generation) applications, you can configure prompt targets with parameters,
+To build RAG (Retrieval Augmented Generation) applications, you can configure prompt targets with parameters,
enabling Arch to retrieve critical information in a structured way for processing. This approach improves the
retrieval quality and speed of your application. By extracting parameters from the conversation, you can pull
the appropriate chunks from a vector database or SQL-like data store to enhance accuracy. With Arch, you can
@@ -243,11 +243,11 @@ streamline data retrieval and processing to build more efficient and precise RAG
Developers struggle to efficiently handle follow-up or clarification questions. Specifically, when users ask for
changes or additions to previous responses their AI applications often generate entirely new responses instead of adjusting
-previous ones.Arch offers intent tracking as a feature so that developers can know when the user has shifted away from a
+previous ones. Arch offers intent tracking as a feature so that developers can know when the user has shifted away from a
previous intent so that they can dramatically improve retrieval accuracy, lower overall token cost and improve the speed of
their responses back to users.
Arch uses its built-in lightweight NLI and embedding models to know if the user has steered away from an active intent.
-Arch’s intent-drift detection mechanism is based on its’ prompt_targets primtive. Arch tries to match an incoming
+Arch’s intent-drift detection mechanism is based on its prompt target primitive. Arch tries to match an incoming
prompt to one of the prompt_targets configured in the gateway. Once it detects that the user has moved away from an
active intent, Arch adds the x-arch-intent-marker headers to the request before sending it to your application servers.
@@ -276,8 +276,8 @@ active intent, Arch adds the
        return (
            jsonify({"error": "Invalid value for x-arch-prompt-intent-change header"}),
            400,
-        )
-
+        )
+
    # Update user conversation based on intent change
    memory = update_user_conversation(user_id, client_messages, intent_changed)
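Before the snippet above runs, the application server has to interpret the x-arch-prompt-intent-change header value itself. A minimal sketch of that validation, assuming the header carries "true"/"false" (the helper name is hypothetical):

```python
# Hypothetical helper for validating the x-arch-prompt-intent-change header
# shown in the snippet above. The header name comes from the docs; this
# parsing logic is an illustrative sketch, not part of Arch.

def parse_intent_change(header_value):
    """Map the header value to a bool; return None for invalid values."""
    if header_value is None:
        return False  # assume no intent change when the header is absent
    return {"true": True, "false": False}.get(header_value.strip().lower())

print(parse_intent_change("true"))   # True
print(parse_intent_change("maybe"))  # None -> respond with HTTP 400 as above
```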
@@ -315,8 +315,8 @@ active intent, Arch adds the
Note
Arch is (mostly) stateless so that it can scale in an embarrassingly parallel fashion. So, while Arch offers
-intent-drift detetction, you still have to maintain converational state with intent drift as meta-data. The
-following code snippets show how easily you can build and enrich conversational history with Langchain (in python),
+intent-drift detection, you still have to maintain conversational state with intent drift as metadata. The
+following code snippets show how easily you can build and enrich conversational history with LangChain (in Python),
so that you can use the most relevant prompts for your retrieval and for prompting upstream LLMs.
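One way to keep intent drift as metadata on conversational state, per the note above, is to tag each message with an intent id that advances whenever Arch signals a change. A framework-agnostic sketch (the same bookkeeping applies if you keep history in LangChain message objects; all names here are hypothetical):

```python
# Framework-agnostic sketch of storing intent drift as metadata on
# conversational history. Names are hypothetical, not Arch or LangChain APIs.

def update_user_conversation(history, new_messages, intent_changed):
    """Append messages, bumping the intent id when Arch signals drift."""
    current_intent = history[-1]["intent_id"] if history else 0
    if intent_changed:
        current_intent += 1
    for msg in new_messages:
        history.append({**msg, "intent_id": current_intent})
    return history

def active_intent_messages(history):
    """Only messages from the latest intent - the ones worth retrieving over."""
    if not history:
        return []
    latest = history[-1]["intent_id"]
    return [m for m in history if m["intent_id"] == latest]

history = []
update_user_conversation(history, [{"role": "user", "content": "summarize my network"}], False)
update_user_conversation(history, [{"role": "user", "content": "now reboot device d1"}], True)
print(len(active_intent_messages(history)))  # 1
```

Scoping retrieval to the active intent is what lets you lower token cost and improve response speed once drift is detected.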
diff --git a/concepts/llm_provider.html b/concepts/llm_provider.html
index 4d757dea..acbe7c1d 100755
--- a/concepts/llm_provider.html
+++ b/concepts/llm_provider.html
@@ -188,7 +188,7 @@ across applications.
Note
When you start Arch, it creates a listener port for egress traffic based on the presence of the llm_providers
configuration section in the arch_config.yml file. Arch binds itself to a local address such as
-127.0.0.1:51001/v1.
+127.0.0.1:12000.
Arch also offers vendor-agnostic SDKs and libraries to make LLM calls to API-based LLM providers (like OpenAI,
Anthropic, Mistral, Cohere, etc.) and supports calls to OSS LLMs that are hosted on your infrastructure. Arch
@@ -201,7 +201,7 @@ make outbound LLM calls.
from openai import OpenAI

# Initialize the Arch client
-client = OpenAI(base_url="http://127.0.0.1:51001/v1")
+client = OpenAI(base_url="http://127.0.0.1:12000/")

# Define your LLM provider and prompt
llm_provider = "openai"
diff --git a/concepts/prompt_target.html b/concepts/prompt_target.html
index ca84ef9d..0da40072 100755
--- a/concepts/prompt_target.html
+++ b/concepts/prompt_target.html
@@ -254,22 +254,22 @@ Here is a full list of parameter attributes that Arch can support:
prompt_targets:
+  - name: get_weather
+    description: Get the current weather for a location
+    parameters:
+      - name: location
+        description: The city and state, e.g. San Francisco, New York
+        type: str
+        required: true
+      - name: unit
+        description: The unit of temperature
+        type: str
+        default: fahrenheit
+        enum: [celsius, fahrenheit]
+    endpoint:
+      name: api_server
+      path: /weather
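For illustration, the endpoint behind the get_weather target above could look like the stub below. Arch extracts location and unit from the prompt and calls /weather with structured parameters; the handler body and its return values are hypothetical:

```python
# Hypothetical application-server handler for the /weather prompt target
# configured above. Arch delivers structured parameters; the body is an
# illustrative stub (no real weather lookup).

def get_weather(location, unit="fahrenheit"):
    if unit not in ("celsius", "fahrenheit"):  # mirrors the enum in the config
        raise ValueError(f"unsupported unit: {unit}")
    # A real handler would query a weather service here.
    return {"location": location, "unit": unit,
            "temperature": 72 if unit == "fahrenheit" else 22}

print(get_weather("San Francisco"))
# {'location': 'San Francisco', 'unit': 'fahrenheit', 'temperature': 72}
```

Note how the config's default (fahrenheit) and enum constraint map directly onto the handler's default argument and validation.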
diff --git a/concepts/tech_overview/error_target.html b/concepts/tech_overview/error_target.html
index 9030d0a5..f87a3ff2 100755
--- a/concepts/tech_overview/error_target.html
+++ b/concepts/tech_overview/error_target.html
@@ -162,7 +162,6 @@ The errors are communicated to the application via headers like
Error Type: Categorizes the nature of the error, such as “ValidationError” or “RuntimeError.” These error types help in identifying what kind of issue occurred and provide context for troubleshooting.
Error Message: A clear, human-readable message describing the error. This should provide enough detail to inform users or developers of the root cause or required action.
-
Target Prompt: The specific prompt or operation where the error occurred. Understanding where the error happened helps with debugging and pinpointing the source of the problem.
Parameter-Specific Errors: Errors that arise due to invalid or missing parameters when invoking a function. These errors are critical for ensuring the correctness of inputs.
diff --git a/concepts/tech_overview/prompt.html b/concepts/tech_overview/prompt.html
index 58d6464a..ff98230a 100755
--- a/concepts/tech_overview/prompt.html
+++ b/concepts/tech_overview/prompt.html
@@ -177,7 +177,7 @@ containing two key-value pairs:
Prompt Guard
-Arch is engineered with Arch-Guard, an industry leading safety layer, powered by a
+Arch is engineered with Arch-Guard, an industry-leading safety layer, powered by a
compact and high-performing LLM that monitors incoming prompts to detect and reject jailbreak attempts -
ensuring that unauthorized or harmful behaviors are intercepted early in the process.
To add jailbreak guardrails, see example below:
@@ -221,7 +221,7 @@ etc. To offer feedback on our roadmap, please visit our
Once a prompt passes any configured guardrail checks, Arch processes the contents of the incoming conversation
-and identifies where to forwad the conversation to via its prompt_targets primitve. Prompt targets are endpoints
+and identifies where to forward the conversation to via its prompt target primitive. Prompt targets are endpoints
that receive prompts that are processed by Arch. For example, Arch enriches incoming prompts with metadata like knowing
when a user’s intent has changed so that you can build faster, more accurate RAG apps.
Configuring prompt_targets is simple. See example below:
@@ -302,55 +302,46 @@ when a user’s intent has changed so that you can build faster, more accurate R
Arch uses fast Natural Language Inference (NLI) and embedding approaches to first detect the intent of each
-incoming prompt. This intent detection phase analyzes the prompt’s content and matches it against predefined
-prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint. Arch’s intent
-detection framework considers both the name and description of each prompt target, and uses a composite matching
-score between an NLI and cosine similarity to enchance accuracy in forwarding decisions.
+
+
Intent Matching
+
Arch uses fast text embedding and intent recognition approaches to first detect the intent of each incoming prompt.
+This intent matching phase analyzes the prompt’s content and matches it against predefined prompt targets, ensuring that each prompt is forwarded to the most appropriate endpoint.
+Arch’s intent matching framework considers both the name and description of each prompt target, and uses a composite matching score between embedding similarity and intent classification scores to enhance accuracy in forwarding decisions.
-
Embeddings: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies
-the closest match, ensuring that the prompt is handled by the correct downstream service.
-
NLI: NLI techniques further refine the matching process by evaluating the semantic alignment between the
-prompt and potential targets.
+
Intent Recognition: NLI techniques further refine the matching process by evaluating the semantic alignment between the prompt and potential targets.
+
Text Embedding: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match, ensuring that the prompt is handled by the correct downstream service.
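The composite matching score described above can be sketched as a weighted blend of embedding similarity and an intent-classification score. The equal weighting and toy vectors below are assumptions for illustration, not Arch's actual model:

```python
# Illustrative sketch of a composite matching score: cosine similarity of
# embeddings blended with an intent-classification score. The 0.5/0.5 weights
# and the toy vectors are assumptions, not Arch's actual model or weights.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def composite_score(prompt_vec, target_vec, intent_score, w_embed=0.5):
    return w_embed * cosine(prompt_vec, target_vec) + (1 - w_embed) * intent_score

targets = {
    "network_qa": ([1.0, 0.0], 0.9),       # (target embedding, classifier score)
    "device_summary": ([0.0, 1.0], 0.2),
}
prompt_vec = [0.9, 0.1]
best = max(targets, key=lambda t: composite_score(prompt_vec, targets[t][0], targets[t][1]))
print(best)  # network_qa
```

Blending two signals makes the forwarding decision more robust than either embedding distance or classification alone when target descriptions overlap.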
Agentic Apps via Prompt Targets
-
To support agentic apps, like scheduling travel plans or sharing comments on a document - via prompts, Arch uses
-its function calling abilities to extract critical information from the incoming prompt (or a set of prompts)
-needed by a downstream backend API or function call before calling it directly. For more details on how you can
-build agentic applications using Arch, see our full guide here:
+
To support agentic apps, like scheduling travel plans or sharing comments on a document - via prompts, Arch uses its function calling abilities to extract critical information from the incoming prompt (or a set of prompts) needed by a downstream backend API or function call before calling it directly.
+For more details on how you can build agentic applications using Arch, see our full guide here:
Note
-
Arch Arch-Function is the dedicated agentic model engineered in Arch to extract information from
-a (set of) prompts and executes necessary backend API calls. This allows for efficient handling of agentic tasks,
-such as scheduling data retrieval, by dynamically interacting with backend services. Arch-Function is a flagship 1.3
-billion parameter model that matches performance with frontier models like Claude Sonnet 3.5 ang GPT-4, while
-being 100x cheaper ($0.05M/token hosted) and 10x faster (p50 latencies of 200ms).
+
+Arch-Function is a collection of dedicated agentic models engineered in Arch to extract information from a (set of) prompts and execute the necessary backend API calls.
+This allows for efficient handling of agentic tasks, such as scheduling data retrieval, by dynamically interacting with backend services.
+Arch-Function achieves state-of-the-art performance, comparable with frontier models like Claude Sonnet 3.5 and GPT-4, while being 100x cheaper ($0.05 per 1M tokens, hosted) and 10x faster (p50 latencies of 200ms).
Prompting LLMs
-
Arch is a single piece of software that is designed to manage both ingress and egress prompt traffic, drawing its
-distributed proxy nature from the robust Envoy. This makes it extremely efficient and capable
-of handling upstream connections to LLMs. If your application is originating code to an API-based LLM, simply use
-the OpenAI client and configure it with Arch. By sending traffic through Arch, you can propagate traces, manage and monitor
-traffic, apply rate limits, and utilize a large set of traffic management capabilities in a centralized way.
+
Arch is a single piece of software that is designed to manage both ingress and egress prompt traffic, drawing its distributed proxy nature from the robust Envoy.
+This makes it extremely efficient and capable of handling upstream connections to LLMs.
+If your application is originating calls to an API-based LLM, simply use the OpenAI client and configure it with Arch.
+By sending traffic through Arch, you can propagate traces, manage and monitor traffic, apply rate limits, and utilize a large set of traffic management capabilities in a centralized way.
Attention
When you start Arch, it automatically creates a listener port for egress calls to upstream LLMs. This is based on the
llm_providers configuration section in the arch_config.yml file. Arch binds itself to a local address such as
-127.0.0.1:12000/v1.
+127.0.0.1:12000.
Example: Using OpenAI Client with Arch as an Egress Gateway
import openai

# Set the OpenAI API base URL to the Arch gateway endpoint
-openai.api_base = "http://127.0.0.1:12000/v1"
+openai.api_base = "http://127.0.0.1:12000"

# No need to set openai.api_key since it's configured in Arch's gateway
@@ -364,7 +355,7 @@ traffic, apply rate limits, and utilize a large set of traffic management capabi
In these examples, the OpenAI client is used to send traffic directly through the Arch egress proxy to the LLM of your choice, such as OpenAI.
-The OpenAI client is configured to route traffic via Arch by setting the proxy to 127.0.0.1:51001, assuming Arch is running locally and bound to that address and port.
+The OpenAI client is configured to route traffic via Arch by setting the proxy to 127.0.0.1:12000, assuming Arch is running locally and bound to that address and port.
This setup allows you to take advantage of Arch’s advanced traffic management features while interacting with LLM APIs like OpenAI.
@@ -392,7 +383,7 @@ This setup allows you to take advantage of Arch’s advanced traffic management
diff --git a/concepts/tech_overview/request_lifecycle.html b/concepts/tech_overview/request_lifecycle.html
index 81ae1d5b..250e2403 100755
--- a/concepts/tech_overview/request_lifecycle.html
+++ b/concepts/tech_overview/request_lifecycle.html
@@ -287,8 +287,6 @@ enables scaling to very high core count CPUs.
Request Flow (Ingress)
-
-
Overview
A brief outline of the lifecycle of a request and response using the example configuration above:
TCP Connection Establishment:
@@ -302,7 +300,7 @@ that harmful or unwanted behaviors are detected early in the request processing
The decrypted data stream is deframed by the HTTP/2 codec in Arch’s HTTP connection manager. Arch performs
intent matching via its prompt-handler subsystem using the name and description of the defined prompt targets,
determining which endpoint should handle the prompt.
-Parameter Gathering with Arch-FC:
+Parameter Gathering with Arch-Function:
If a prompt target requires specific parameters, Arch engages Arch-Function to extract the necessary details
from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.
API Call Execution:
@@ -310,7 +308,7 @@ Arch routes the prompt to the appropriate backend API or function call. If an en
load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint.
Default Summarization by Upstream LLM:
By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization.
-This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval-Augmented Generation)
+This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval Augmented Generation)
and agentic applications.
Error Handling and Forwarding:
Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to
@@ -326,14 +324,9 @@ The upstream endpoint’s TLS transport socket encrypts the response, which is t
Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
-
Request Flow (Egress)
-
-
-
Overview
-
A brief outline of the lifecycle of a request and response in the context of egress traffic from an application
-to Large Language Models (LLMs) via Arch:
+
A brief outline of the lifecycle of a request and response in the context of egress traffic from an application to Large Language Models (LLMs) via Arch:
HTTP Connection Establishment to LLM:
Arch initiates an HTTP connection to the upstream LLM service. This connection is handled by Arch’s egress listener
@@ -393,12 +386,8 @@ processing request headers and then finalized by the HCM during post-request pro
diff --git a/concepts/tech_overview/terminology.html b/concepts/tech_overview/terminology.html
index 79203c6c..8fd24eb1 100755
--- a/concepts/tech_overview/terminology.html
+++ b/concepts/tech_overview/terminology.html
@@ -173,7 +173,7 @@ For more details, check out
-Prompt Target: Arch offers a primitive called prompt_target to help separate business logic from undifferentiated
+
Prompt Target: Arch offers a primitive called prompt target to help separate business logic from undifferentiated
work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Arch.
For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt
so that you can build faster, more accurate retrieval (RAG) apps. To support agentic apps, like scheduling travel plans or
diff --git a/get_started/intro_to_arch.html b/get_started/intro_to_arch.html
index 6abbb3f1..6da0149a 100755
--- a/get_started/intro_to_arch.html
+++ b/get_started/intro_to_arch.html
@@ -153,13 +153,8 @@
Intro to Arch
-
Arch is an intelligent (Layer 7) gateway
-designed for generative AI apps, AI agents, and Co-pilots that work with prompts. Engineered with purpose-built
-large language models (LLMs), Arch handles all the critical but undifferentiated tasks related to the handling and
-processing of prompts, including detecting and rejecting jailbreak
-attempts, intelligently calling “backend” APIs to fulfill the user’s request represented in a prompt, routing to
-and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions
-in a centralized way.
+
Arch is an intelligent (Layer 7) gateway designed for generative AI apps, AI agents, and AI copilots that work with prompts.
+Engineered with purpose-built large language models (LLMs), Arch handles all the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting jailbreak attempts, intelligently calling “backend” APIs to fulfill the user’s request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
The project was born out of the belief that:
@@ -168,63 +163,46 @@ in a centralized way.
including secure handling, intelligent routing, robust observability, and integration with backend (API)
systems for personalization - all outside business logic.
-
In practice, achieving the above goal is incredibly difficult. Arch attempts to do so by providing the
-following high level features:
-
-
Out-of-process architecture, built onEnvoy: Arch is takes a dependency on
-Envoy and is a self-contained process that is designed to run alongside your application servers. Arch uses
-Envoy’s HTTP connection management subsystem, HTTP L7 filtering and telemetry capabilities to extend the
-functionality exclusively for prompts and LLMs. This gives Arch several advantages:
+
In practice, achieving the above goal is incredibly difficult.
+Arch attempts to do so by providing the following high level features:
+
Out-of-process architecture, built on Envoy:
+Arch takes a dependency on Envoy and is a self-contained process that is designed to run alongside your application servers.
+Arch uses Envoy’s HTTP connection management subsystem, HTTP L7 filtering and telemetry capabilities to extend the functionality exclusively for prompts and LLMs.
+This gives Arch several advantages:
-
Arch builds on Envoy’s proven success. Envoy is used at masssive sacle by the leading technology companies of
-our time including AirBnB, Dropbox,
-Google, Reddit, Stripe,
-etc. Its battle tested and scales linearly with usage and enables developers to focus on what really matters:
-application features and business logic.
-
Arch works with any application language. A single Arch deployment can act as gateway for AI applications
-written in Python, Java, C++, Go, Php, etc.
-
Arch can be deployed and upgraded quickly across your infrastructure transparently without the horrid pain
-of deploying library upgrades in your applications.
+
Arch builds on Envoy’s proven success. Envoy is used at massive scale by the leading technology companies of our time including Airbnb, Dropbox, Google, Reddit, Stripe, etc. It’s battle-tested, scales linearly with usage, and enables developers to focus on what really matters: application features and business logic.
+
Arch works with any application language. A single Arch deployment can act as a gateway for AI applications written in Python, Java, C++, Go, PHP, etc.
+
Arch can be deployed and upgraded quickly across your infrastructure transparently without the horrid pain of deploying library upgrades in your applications.
-
Engineered with Fast LLMs: Arch is engineered with specialized (sub-billion) LLMs that are desgined for
-fast, cost-effective and acurrate handling of prompts. These LLMs are designed to be
-best-in-class for critcal prompt-related tasks like:
+
Engineered with Fast LLMs: Arch is engineered with specialized tiny LLMs that are designed for fast, cost-effective and accurate handling of prompts.
+These LLMs are designed to be best-in-class for critical prompt-related tasks like:
-
Function/API Calling: Arch helps you easily personalize your applications by enabling calls to
-application-specific (API) operations via user prompts. This involves any predefined functions or APIs
-you want to expose to users to perform tasks, gather information, or manipulate data. With function calling,
-you have flexibility to support “agentic” experiences tailored to specific use cases - from updating insurance
-claims to creating ad campaigns - via prompts. Arch analyzes prompts, extracts critical information from
-prompts, engages in lightweight conversation to gather any missing parameters and makes API calls so that you can
-focus on writing business logic. For more details, read prompt processing.
-
Prompt Guardrails: Arch helps you improve the safety of your application by applying prompt guardrails in
-a centralized way for better governance hygiene. With prompt guardrails you can prevent jailbreak
-attempts or toxicity present in user’s prompts without having to write a single line of code. To learn more
-about how to configure guardrails available in Arch, read prompt processing.
-
[Coming Soon] Intent-Markers: Developers struggle to handle follow-up,
-or clarifying
-questions. Specifically, when users ask for modifications or additions to previous responses their AI applications
-often generate entirely new responses instead of adjusting the previous ones. Arch offers intent-markers as a
-feature so that developers know when the user has shifted away from the previous intent so that they can improve
-their retrieval, lower overall token cost and dramatically improve the speed and accuracy of their responses back
-to users. For more details intent markers
+
Function Calling: Arch helps you easily personalize your applications by enabling calls to application-specific (API) operations via user prompts.
+This involves any predefined functions or APIs you want to expose to users to perform tasks, gather information, or manipulate data.
+With function calling, you have flexibility to support “agentic” experiences tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.
+Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation to gather any missing parameters and makes API calls so that you can focus on writing business logic.
+For more details, read Function Calling.
+
Prompt Guard: Arch helps you improve the safety of your application by applying prompt guardrails in a centralized way for better governance hygiene.
+With prompt guardrails you can prevent jailbreak attempts present in users’ prompts without having to write a single line of code.
+To learn more about how to configure guardrails available in Arch, read Prompt Guard.
+
[Coming Soon] Intent-Markers: Developers struggle to handle follow-up or clarifying questions.
+Specifically, when users ask for modifications or additions to previous responses their AI applications often generate entirely new responses instead of adjusting the previous ones.
+Arch offers intent-markers as a feature so that developers know when the user has shifted away from the previous intent so that they can improve their retrieval, lower overall token cost and dramatically improve the speed and accuracy of their responses back to users.
+For more details, see intent markers.
-
Traffic Management: Arch offers several capabilities for LLM calls originating from your applications, including smart
-retries on errors from upstream LLMs, and automatic cutover to other LLMs configured in Arch for continuous availability
-and disaster recovery scenarios. Arch extends Envoy’s cluster subsystem
-to manage upstream connections to LLMs so that you can build resilient AI applications.
-
Front/edge Gateway: There is substantial benefit in using the same software at the edge (observability,
-traffic shaping alogirithms, applying guardrails, etc.) as for outbound LLM inference use cases. Arch has the feature set
-that makes it exceptionally well suited as an edge gateway for AI applications. This includes TLS termination, applying
-guardrail early in the pricess, intelligent parameter gathering from prompts, and prompt-based routing to backend APIs.
+
Traffic Management: Arch offers several capabilities for LLM calls originating from your applications, including smart retries on errors from upstream LLMs, and automatic cutover to other LLMs configured in Arch for continuous availability and disaster recovery scenarios.
+Arch extends Envoy’s cluster subsystem to manage upstream connections to LLMs so that you can build resilient AI applications.
+
Front/edge Gateway: There is substantial benefit in using the same software at the edge (observability, traffic shaping algorithms, applying guardrails, etc.) as for outbound LLM inference use cases.
+Arch has the feature set that makes it exceptionally well suited as an edge gateway for AI applications.
+This includes TLS termination, applying guardrails early in the process, intelligent parameter gathering from prompts, and prompt-based routing to backend APIs.
Best-in-Class Monitoring: Arch offers several monitoring metrics that help you understand three critical aspects of
your application: latency, token usage, and error rates by an upstream LLM provider. Latency measures the speed at which
your application is responding to users, which includes metrics like time to first token (TFT), time per output token (TOT),
and the total latency as perceived by users.
-
End-to-End Tracing: Arch propagates trace context using the W3C Trace Context standard, specifically through the
-traceparent header. This allows each component in the system to record its part of the request flow, enabling end-to-end tracing
-across the entire application. By using OpenTelemetry, Arch ensures that developers can capture this trace data consistently and
-in a format compatible with various observability tools. For more details, read tracing.
+
End-to-End Tracing: Arch propagates trace context using the W3C Trace Context standard, specifically through the traceparent header.
+This allows each component in the system to record its part of the request flow, enabling end-to-end tracing across the entire application.
+By using OpenTelemetry, Arch ensures that developers can capture this trace data consistently and in a format compatible with various observability tools.
+For more details, read Tracing.
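The traceparent header mentioned above has the fixed W3C Trace Context shape version-traceid-spanid-flags. A stdlib-only sketch of generating and checking it (in practice an OpenTelemetry SDK handles this for you):

```python
# Sketch of the W3C Trace Context traceparent header: version-traceid-spanid-flags,
# e.g. "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".
# Stdlib only; an OpenTelemetry SDK would normally generate and propagate this.
import re
import secrets

def make_traceparent(sampled=True):
    trace_id = secrets.token_hex(16)  # 32 lowercase hex chars
    span_id = secrets.token_hex(8)    # 16 lowercase hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def parse_traceparent(header):
    m = TRACEPARENT_RE.match(header)
    if not m:
        return None
    trace_id, span_id, flags = m.groups()
    return {"trace_id": trace_id, "span_id": span_id, "sampled": flags == "01"}

header = make_traceparent()
print(parse_traceparent(header)["sampled"])  # True
```

Each component in the request path keeps the trace_id and creates its own span_id, which is what enables end-to-end correlation across Arch and your application servers.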
API keys for LLM providers (if using external LLMs)
-
The fastest way to get started using Arch is to use katanemo/arch pre-built binaries.
+
The fastest way to get started using Arch is to use katanemo/archgw pre-built binaries.
You can also build it from source.
diff --git a/guides/function_calling.html b/guides/function_calling.html
index 7f26b7af..6d80ebbe 100755
--- a/guides/function_calling.html
+++ b/guides/function_calling.html
@@ -187,7 +187,7 @@ This feature bridges the gap between generative AI systems and functional busine
Arch-Function
-The Arch-Function collection of large language models (LLMs) is a collection state-of-the-art (SOTA) LLMs specifically designed for function calling tasks.
+The Arch-Function collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for function calling tasks.
The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts.
Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution is crucial.
In summary, the Arch-Function collection demonstrates:
diff --git a/guides/prompt_guard.html b/guides/prompt_guard.html
index 545a577d..53fbe053 100755
--- a/guides/prompt_guard.html
+++ b/guides/prompt_guard.html
@@ -206,7 +206,7 @@ These attacks involve malicious prompts crafted to manipulate the intended behav
Arch-Guard is designed to address this challenge.
What Is Arch-Guard
-Arch-Guard is a robust classifier model specifically trained on a diverse corpus of prompt attacks.
+Arch-Guard is a robust classifier model specifically trained on a diverse corpus of prompt attacks.
It excels at detecting explicitly malicious prompts, providing an essential layer of security for LLM applications.
By embedding Arch-Guard within the Arch architecture, we empower developers to build robust, LLM-powered applications while prioritizing security and safety. With Arch-Guard, you can navigate the complexities of prompt management with confidence, knowing you have a reliable defense against malicious input.