Docs branch - v1 of our tech docs (#69)

* added the first set of docs for our technical docs

* more documentation changes

* added support for prompt processing and updated life of a request

* updated docs to include getting-help sections and updated life of a request

* committing local changes for getting started guide, sample applications, and full reference spec for prompt-config

* updated configuration reference, added sample app skeleton, updated favicon

* fixed the configuration reference file, and made minor changes to the intent detection. commit v1 for now

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Co-authored-by: Adil Hafeez <adil@katanemo.com>
This commit is contained in:
Salman Paracha 2024-09-20 17:08:42 -07:00 committed by GitHub
parent 233976a568
commit 80c554ce1a
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
34 changed files with 1040 additions and 0 deletions

1
.gitignore vendored
@@ -13,4 +13,5 @@ generated
venv
demos/function_calling/ollama/models/
demos/function_calling/ollama/id_ed*
docs/build/
open-webui/

20
docs/Makefile Normal file
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

35
docs/make.bat Normal file
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd

24
docs/requirements.txt Normal file
@@ -0,0 +1,24 @@
alabaster==0.7.16
babel==2.16.0
certifi==2024.8.30
charset-normalizer==3.3.2
docutils==0.20.1
idna==3.10
imagesize==1.4.1
Jinja2==3.1.4
MarkupSafe==2.1.5
packaging==24.1
Pygments==2.18.0
requests==2.32.3
snowballstemmer==2.2.0
Sphinx==7.4.7
sphinx-copybutton==0.5.2
sphinx-rtd-theme==2.0.0
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jquery==4.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
urllib3==2.2.3

@@ -0,0 +1,39 @@
version: "0.1-beta"

listen:
  address: 127.0.0.1 | 0.0.0.0
  port_value: 8080 # If you configure port 443, you'll need to update the listener with your tls_certificates
  messages: tuple | hugging-face-messages-api

system_prompts:
  - name: network_assistant
    content: you are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions

llm_providers:
  - name: "OpenAI"
    access_key: $OPEN_AI_KEY
    model: gpt-4
    default: true
  - name: "Mistral"
    access_key: $MISTRAL_KEY
    model: "mixtral8-7B"

prompt_endpoints:
  - "http://127.0.0.2"
  - "http://127.0.0.1"

prompt_guards:
  input-guard:
    - name: #jailbreak
      on-exception-message: Looks like you are curious about my abilities. But I can only

prompt_targets:
  - name: information_extraction
    type: RAG
    description: this prompt handles all information extraction scenarios
    path: /agent/summary
  - name: reboot_network_device
    path: /agent/action
    description: used to help network operators perform device operations like rebooting a device.
    parameters:

error_target: # handle errors from Arch or upstream LLMs
  name: "error_handler"
  path: /errors

@@ -0,0 +1,78 @@
version: "0.1-beta"

listener:
  address: 0.0.0.0 # or 127.0.0.1
  port_value: 8080
  messages: "hugging-face-messages-json" # Defines how Arch should parse the content from application/json or text/plain Content-Type in the HTTP request
  common_tls_context: # If you configure port 443, you'll need to update the listener with your TLS certificates
    tls_certificates:
      - certificate_chain:
          filename: "/etc/arch/certs/cert.pem"
        private_key:
          filename: "/etc/arch/certs/key.pem"

system_prompts:
  - name: "network_assistant"
    content: |
      You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

# Centralized way to manage the LLM providers that the application has access to. Manage keys, retry logic, failover, and limits in a central way
llm_providers:
  - name: "OpenAI"
    access_key: $OPENAI_API_KEY
    model: "gpt-4o"
    default: true
    stream: true
    rate_limit:
      selector: # optional headers, to add rate limiting based on HTTP headers like JWT tokens or API keys
        http-header:
          name: "Authorization"
          value: "" # Empty value means each separate value has a separate limit
      limit:
        tokens: 100000 # Tokens per unit
        unit: "minute"
  - name: "Mistral"
    access_key: $MISTRAL_API_KEY
    model: "mistral-7B"

prompt_endpoints: # Arch load-balances round-robin between different endpoints, managed via the cluster subsystem.
  - "http://127.0.0.2" # assumes port 8000, unless a port is specified with :5000
  - "http://127.0.0.1:5000"

prompt_guards:
  input_guard:
    - name: "jailbreak"
      on_exception:
        forward_to_error_target: true
    # Additional guard configurations can be added here
    - name: "toxicity"
      on_exception:
        message: "Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters."

prompt_targets:
  - name: "information_extraction"
    type: "default"
    description: "This prompt handles all scenarios that are question-and-answer in nature, like summarization, information extraction, etc."
    path: "/agent/summary"
    auto-llm-dispatch-on-response: true # Arch uses the default LLM and treats the response from the endpoint as the prompt to send to the LLM
  - name: "reboot_network_device"
    path: "/agent/action"
    description: "Helps network operators perform device operations like rebooting a device."
    parameters:
      - name: "device_id"
        type: "string" # additional type options include: integer | float | list | dictionary | set
        description: "Identifier of the network device to reboot."
        default_value: ""
        required: true
      - name: "confirmation"
        type: "integer" # additional type options include: integer | float | list | dictionary | set
        description: "Confirmation flag to proceed with reboot."
        required: true

error_target:
  name: "error_handler"
  path: "/errors"

intent-detection-threshold-override: 0.60 # By default Arch uses an NLI + embedding approach to match an incoming prompt to a prompt
                                          # target. The intent matching threshold is 0.80; you can override this behavior if you would like.

@@ -0,0 +1,119 @@
from flask import Flask, request, jsonify
from datetime import datetime
import uuid

app = Flask(__name__)

# Global dictionary to keep track of user conversations
user_conversations = {}


def get_user_conversation(user_id):
    """
    Retrieve the user's conversation history.
    If the user does not exist, initialize their conversation data.
    """
    if user_id not in user_conversations:
        user_conversations[user_id] = {
            'messages': []
        }
    return user_conversations[user_id]


def update_user_conversation(user_id, client_messages, intent_changed):
    """
    Update the user's conversation history with new messages.
    Each message is augmented with a UUID, timestamp, and intent change marker.
    Only new messages are added to avoid duplication.
    """
    user_data = get_user_conversation(user_id)

    # Existing messages in the user's conversation
    stored_messages = user_data['messages']

    # Determine the number of stored messages
    num_stored_messages = len(stored_messages)

    # Check for out-of-sync messages
    if num_stored_messages > len(client_messages):
        return jsonify({'error': 'Client messages are out of sync with server'}), 400

    # Determine new messages by slicing the client messages
    new_messages = client_messages[num_stored_messages:]

    # Process each new message
    for index, message in enumerate(new_messages):
        message_entry = {
            'uuid': str(uuid.uuid4()),
            'timestamp': datetime.utcnow().isoformat(),
            'role': message.get('role'),
            'content': message.get('content'),
            'intent_changed': False  # Default value
        }
        # Mark the intent change on the last message if detected
        if intent_changed and index == len(new_messages) - 1:
            message_entry['intent_changed'] = True
        user_data['messages'].append(message_entry)

    return user_data


def get_messages_since_last_intent(messages):
    """
    Retrieve messages from the last intent change onwards.
    """
    messages_since_intent = []
    for message in reversed(messages):
        messages_since_intent.insert(0, message)
        if message.get('intent_changed'):
            break
    return messages_since_intent


def forward_to_llm(messages):
    """
    Simulate forwarding messages to an upstream LLM.
    Replace this with the actual API call to the LLM.
    """
    # For demonstration purposes, we'll return a placeholder response
    return "LLM response based on provided messages."


@app.route('/process_rag', methods=['POST'])
def process_rag():
    # Extract JSON data from the request
    data = request.get_json()

    user_id = data.get('user_id')
    if not user_id:
        return jsonify({'error': 'User ID is required'}), 400

    client_messages = data.get('messages')
    if not client_messages or not isinstance(client_messages, list):
        return jsonify({'error': 'Messages array is required'}), 400

    # Extract the intent change marker from Arch's headers if present for the current prompt
    intent_changed_header = request.headers.get('x-arch-intent-marker', '').lower()
    if intent_changed_header in ['', 'false']:
        intent_changed = False
    elif intent_changed_header == 'true':
        intent_changed = True
    else:
        # Invalid value provided
        return jsonify({'error': 'Invalid value for x-arch-intent-marker header'}), 400

    # Update user conversation based on intent change
    user_data = update_user_conversation(user_id, client_messages, intent_changed)
    # Propagate the out-of-sync error response, if any
    if isinstance(user_data, tuple):
        return user_data

    # Retrieve messages since last intent change for the LLM
    messages_for_llm = get_messages_since_last_intent(user_data['messages'])

    # Forward messages to upstream LLM
    llm_response = forward_to_llm(messages_for_llm)

    # Prepare the response
    response = {
        'user_id': user_id,
        'messages': user_data['messages'],
        'llm_response': llm_response
    }
    return jsonify(response), 200


if __name__ == '__main__':
    app.run(debug=True)
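For reference, a request to this endpoint might be assembled as follows. This is a hedged sketch: the `x-arch-intent-marker` header mirrors the sample above, but the host/port and any transport details are assumptions about your deployment.

```python
import json

def build_process_rag_request(user_id, messages, intent_changed):
    """Assemble the JSON body and headers expected by the /process_rag route above."""
    body = {"user_id": user_id, "messages": messages}
    headers = {
        "Content-Type": "application/json",
        # Arch sets this header; 'true' marks a detected intent change.
        "x-arch-intent-marker": "true" if intent_changed else "false",
    }
    return json.dumps(body), headers

body, headers = build_process_rag_request(
    "user123", [{"role": "user", "content": "Tell me a joke."}], intent_changed=False
)
# POST body/headers with your HTTP client of choice to the app's /process_rag route.
```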

@@ -0,0 +1,6 @@
{
"user_id": "user123",
"messages": [
{"role": "user", "content": "Tell me a joke."}
]
}

@@ -0,0 +1,13 @@
{
"user_id": "user123",
"messages": [
{
"uuid": "550e8400-e29b-41d4-a716-446655440000",
"timestamp": "2023-10-05T12:34:56.789123",
"role": "user",
"content": "Tell me a joke.",
"intent_changed": true
}
]
}

@@ -0,0 +1,55 @@
@import url("theme.css");

/* Split long line descriptions in tables into multiple lines */
.wy-table-responsive table td, .wy-table-responsive table th {
  white-space: normal !important;
}

/* align multi line csv table columns */
table.docutils div.line-block {
  margin-left: 0;
}

/* Break long words */
.wy-nav-content {
  overflow-wrap: break-word;
  max-width: 1000px;
}

/* To style the API version label of a search result item */
.xds-version-label {
  border-radius: 20%;
  background-color: #aaa;
  color: #ffffff;
  margin-left: 4px;
  padding: 4px;
}

/* make inline sidebars flow down the right of page */
.rst-content .sidebar {
  clear: right;
}

/* make code literals more muted - don't use red! */
.rst-content code.literal {
  color: #555;
  background-color: rgba(27, 31, 35, 0.05);
  padding: 2px 2px;
  border: solid #eee 1px;
}

/* restore margin bottom on aligned images */
.rst-content img.align-center {
  margin-bottom: 24px;
}

/* suppress errs on pseudo-json code highlights */
.highlight-json .highlight .err {
  border: inherit;
  box-sizing: inherit;
}

/* tame the search highlight colours */
.rst-content .highlighted {
  background: #f6f5db;
  box-shadow: 0 0 0 2px #e7e6b6;
}

Seven binary image files added (not shown); sizes: 15 KiB, 722 KiB, 446 KiB, 298 KiB, 309 KiB, 282 KiB, 324 KiB.

55
docs/source/conf.py Normal file
@@ -0,0 +1,55 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

project = 'Arch'
copyright = '2024, Katanemo Labs, Inc'
author = 'Katanemo Labs, Inc'
release = '0.1-beta'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

root_doc = 'root'

extensions = [
    'sphinx.ext.autodoc',    # For generating documentation from docstrings
    'sphinx.ext.napoleon',   # For Google style and NumPy style docstrings
    'sphinx_copybutton',
    'sphinx.ext.viewcode',
]

# Paths that contain templates, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and directories
# to ignore when looking for source files.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

html_favicon = '_static/favicon.ico'

# -- Options for HTML output -------------------------------------------------

html_theme = 'sphinx_rtd_theme'  # You can swap this for another theme of your choice.

# Specify the path to the logo image file (make sure the logo is in the _static directory)
html_logo = '_static/img/arch-logo.png'

html_theme_options = {
    'logo_only': True,
    'includehidden': False,
    'navigation_depth': 4,
    'collapse_navigation': False,
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_style = 'css/arch.css'

@@ -0,0 +1,13 @@
Configuration Reference
============================

The following is a complete reference of the prompt-config.yml file that controls the behavior of an Arch gateway.
We've kept things simple (less than 100 lines) and held off on exposing additional functionality (e.g., supporting
push observability stats, managing prompt endpoints as a virtual cluster, exposing more load-balancing options for
endpoints, etc.). Our focus has been to choose the best defaults for developers, so that they can spend more of their
time building features unique to their AI experience.

.. literalinclude:: /_config/prompt-config-full-reference.yml
   :language: yaml
   :linenos:
   :caption: :download:`prompt-config-full-reference-beta-1-0.yml </_config/prompt-config-full-reference.yml>`

@@ -0,0 +1,8 @@
Getting Started
================

This section gets you started with a very simple configuration and provides some example configurations.

The fastest way to get started using Arch is to install the `pre-built binaries <https://hub.docker.com/r/katanemo/arch>`_.
You can also build it from source.

@@ -0,0 +1,6 @@
.. toctree::
   :maxdepth: 2
   :caption: Sample Applications

   sample_apps/rag
   sample_apps/function_calling

@@ -0,0 +1,6 @@
Function Calling (Agentic) Apps
===============================

Building something more than a summary/Q&A experience requires giving users access to your data and APIs via prompts.
Arch enables that use case by offering a capability called "Function Calling". Arch extracts critical information
from a prompt and can match the intent of the user to an API or business function hosted in your application.

@@ -0,0 +1,27 @@
Retrieval-Augmented Generation (RAG)
====================================

The following section describes how Arch can help you build faster, smarter Retrieval-Augmented Generation (RAG) applications.

Intent Markers (Multi-Turn Chat)
----------------------------------

Developers struggle to handle follow-up or clarifying questions from users in their AI applications. Specifically, when
users ask for modifications or additions to previous responses, their AI applications often generate entirely new responses
instead of adjusting the previous ones. Developers face challenges in maintaining context across interactions, despite using
tools like ConversationBufferMemory and chat_history from LangChain.

There are several documented cases of this issue, `here <https://www.reddit.com/r/ChatGPTPromptGenius/comments/17dzmpy/how_to_use_rag_with_conversation_history_for/?>`_,
`and here <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ and `again here <https://www.reddit.com/r/LangChain/comments/1bajhg8/chat_with_rag_further_questions/>`_.

Arch helps developers with intent tracking. Arch uses its lightweight NLI and embedding-based intent detection models to know
whether the user's last prompt represents a new intent or not. This way developers can easily build an intent tracker and only
process a subset of prompts from the history, improving both the retrieval quality and the speed of their applications.

.. literalinclude:: /_include/intent_detection.py
   :language: python
   :linenos:
   :lines: 77-
   :emphasize-lines: 15-22
   :caption: :download:`intent-detection-python-example.py </_include/intent_detection.py>`
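To illustrate the subset selection described above, here is a minimal, self-contained sketch. The message dicts and the `intent_changed` flag are hypothetical, mirroring the marker the sample application records; Arch's actual models are not involved here.

```python
def messages_since_last_intent(messages):
    """Walk backwards through history and keep everything from the last intent change onward."""
    subset = []
    for message in reversed(messages):
        subset.insert(0, message)
        if message.get("intent_changed"):
            break
    return subset

history = [
    {"role": "user", "content": "Summarize the outage report.", "intent_changed": True},
    {"role": "assistant", "content": "Here is a summary ..."},
    {"role": "user", "content": "Now reboot device sw-01.", "intent_changed": True},
    {"role": "assistant", "content": "Rebooting sw-01 ..."},
]
# Only the last two messages (the new intent and its reply) are forwarded.
print(len(messages_since_last_intent(history)))  # 2
```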

@@ -0,0 +1,10 @@
Technical Architecture
======================

.. toctree::
   :maxdepth: 2

   intro/terminology
   intro/threading_model
   listeners/listeners
   prompt_processing/prompt_processing

@@ -0,0 +1,44 @@
Terminology
============

A few definitions before we dive into the main architecture documentation. Arch borrows from Envoy's terminology
to keep things consistent in logs, traces and in code.

**Downstream**: A downstream client (web application, etc.) connects to Arch, sends requests, and receives responses.

**Upstream**: An upstream host receives connections and prompts from Arch, and returns context or responses for a prompt.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

**Listener**: A listener is a named network location (e.g., port, address, path, etc.) that Arch listens on to process prompts
before forwarding them to your application server endpoints. Arch enables you to configure one listener for downstream connections
(like port 80 or 443) and creates a separate internal listener for calls that originate from your application code to LLMs.

.. Note::
   When you start Arch, you specify the listener address/port that you want to bind for downstream connections. Arch also exposes a
   predefined address that you can use for outbound calls to LLMs and other services: 127.0.0.1:10000.

**Instance**: An instance of the Arch gateway. When you start Arch it creates at most two processes: one to handle Layer 7
networking operations (auth, TLS, observability, etc.) and a second to serve the models that enable it to make smart
decisions on how to accept, handle and forward prompts. The second process is optional, as the model serving service could be
hosted on a different network (an API call). But these two processes are considered a single instance of Arch.

**System Prompt**: An initial text or message that is provided by the developer that Arch can use when calling an upstream LLM
in order to generate a response from the LLM model. The system prompt can be thought of as the input or query that the model
uses to generate its response. The quality and specificity of the system prompt can have a significant impact on the relevance
and accuracy of the model's response. Therefore, it is important to provide a clear and concise system prompt that accurately
conveys the user's intended message or question.

**Prompt Targets**: Arch offers a primitive called "prompt targets" to help separate business logic from undifferentiated
work in building generative AI apps. Prompt targets are endpoints that receive prompts that are processed by Arch.
For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt
so that you can build faster, more accurate RAG apps. To support agentic apps, like scheduling travel plans or sharing comments
on a document via prompts, Arch uses its function calling abilities to extract critical information from the incoming prompt
(or a set of prompts) needed by a downstream backend API or function call before calling it directly.

**Error Targets**: Error targets are those endpoints that receive forwarded errors from Arch when issues arise,
such as failing to properly call a function/API, detecting violations of guardrails, or encountering other processing errors.
These errors are communicated to the application via headers (X-Arch-[ERROR-TYPE]), allowing it to handle the errors gracefully and take appropriate actions.

@@ -0,0 +1,21 @@
.. _arch_overview_threading:

Threading model
===============

Arch builds on top of Envoy's single-process, multi-threaded architecture.
A single *primary* thread controls various sporadic coordination tasks while some number of *worker*
threads perform filtering and forwarding.

Once a connection is accepted, the connection spends the rest of its lifetime bound to a single worker
thread. All the functionality around prompt handling from a downstream client is handled in a separate worker thread.
This allows the majority of Arch to be largely single threaded (embarrassingly parallel) with a small amount
of more complex code handling coordination between the worker threads.
Generally Arch is written to be 100% non-blocking.

.. tip::
   For most workloads we recommend configuring the number of worker threads to be equal to the number of
   hardware threads on the machine.
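The connection-to-worker binding described above can be sketched as follows. This is purely illustrative: Arch/Envoy implement this in C++ on top of libevent, not in Python, and the hashing scheme here is an assumption for demonstration only.

```python
def assign_worker(connection_id: str, num_workers: int) -> int:
    """Bind a connection id to one of num_workers workers, consistently for its lifetime."""
    # Within a single process, hash(connection_id) is stable, so the same
    # connection always maps to the same worker thread.
    return hash(connection_id) % num_workers

# The same connection id always lands on the same worker within a process.
worker = assign_worker("conn-42", 4)
```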

@@ -0,0 +1,27 @@
.. _arch_overview_listeners:

Listener
========

Arch leverages Envoy's Listener subsystem to streamline connection management for developers.
By building on Envoy's robust architecture, Arch simplifies the configuration required to bind incoming
connections from downstream clients and efficiently manages internal listeners for outgoing connections
to LLM hosts and APIs.

**Listener Subsystem Overview**

- **Downstream Connections**: Arch uses Envoy's Listener subsystem to accept connections from downstream clients.
  A listener acts as the primary entry point for incoming traffic, handling initial connection setup, including network
  filtering and security checks, such as SNI and TLS termination. For more details on the listener subsystem, refer to the
  `Envoy Listener Configuration <https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/listeners>`_.
- **Internal Listeners for Outgoing Connections**: Arch automatically configures internal listeners to route requests
  for prompts originating from your application services to appropriate upstream targets, including LLM hosts and backend APIs.
  This configuration abstracts away complex networking setups, allowing developers to focus on business logic rather than the
  intricacies of connection management and multiple SDKs to work with different LLM providers.
- **Simplified Configuration**: Arch minimizes the complexity of traditional Envoy setups by pre-defining essential
  listener settings, making it easier for developers to bind connections without deep knowledge of Envoy's configuration model.
  This simplification ensures that connections are secure, reliable, and optimized for performance.

Arch's dependency on Envoy's Listener subsystem provides a powerful, developer-friendly interface for managing connections,
enhancing the overall efficiency of handling prompts and routing them to the correct endpoints within a generative AI application.

@@ -0,0 +1,60 @@
.. _arch_overview_prompt_handling:

Prompt Processing
=================

.. contents::
   :local:
   :depth: 2

Arch's model serving process is designed to securely handle incoming prompts by detecting jailbreak attempts,
processing the prompts, and routing them to appropriate functions or prompt targets based on intent detection.
The serving workflow integrates several key components, each playing a crucial role in managing generative AI interactions:

Jailbreak and Toxicity Guardrails
---------------------------------

Arch employs Arch-Guard, a security layer powered by a compact, high-performing LLM that monitors incoming prompts to detect
and reject jailbreak attempts, ensuring that unauthorized or harmful behaviors are intercepted early in the process. Arch-Guard
is the leading model in the industry for jailbreak and toxicity detection. Configuring guardrails is simple; see the example
below.

.. literalinclude:: /_config/getting-started.yml
   :language: yaml
   :linenos:
   :emphasize-lines: 18-21
   :caption: :download:`arch-getting-started.yml </_config/getting-started.yml>`

Prompt Targets
---------------

Once a prompt passes the security checks, Arch processes the content and identifies whether any specific functions need to be called.
Arch-FC1B, a dedicated function calling module, extracts critical information from the prompt and executes the necessary
backend API calls or internal functions. This capability allows for efficient handling of agentic tasks, such as scheduling or
data retrieval, by dynamically interacting with backend services.

.. image:: /_static/img/function-calling-network-flow.jpg
   :width: 100%
   :align: center

Intent Detection and Prompt Matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Arch uses Natural Language Inference (NLI) and embedding-based approaches to detect the intent of each incoming prompt.
This intent detection phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt
is forwarded to the most appropriate endpoint. Arch's intent detection framework considers both the name and description of each prompt target,
enhancing accuracy in forwarding decisions.

- **Embedding Approaches**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match,
  ensuring that the prompt is handled by the correct downstream service.
- **NLI Integration**: Natural Language Inference techniques further refine the matching process by evaluating the semantic alignment
  between the prompt and potential targets.

Forwarding Prompts to Downstream Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After determining the correct target, Arch forwards the prompt to the designated endpoint, such as an LLM host or API service.
This seamless routing mechanism integrates with Arch's broader ecosystem, enabling efficient communication and response generation tailored to the user's intent.

Arch's model serving process combines robust security measures with advanced intent detection and function calling capabilities, creating a reliable and adaptable environment for managing generative AI workflows. This approach not only enhances the accuracy and relevance of responses but also safeguards against malicious usage patterns, aligning with best practices in AI governance.
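The embedding comparison described above can be illustrated with a small, self-contained sketch. This is purely illustrative: the toy vectors and target names are made up, a real deployment would use the gateway's embedding models, and the 0.80 threshold mirrors the default mentioned in the configuration reference.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_prompt_target(prompt_vec, targets, threshold=0.80):
    """Return the best-matching target name, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, target_vec in targets.items():
        score = cosine_similarity(prompt_vec, target_vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy embeddings: in practice these come from an embedding model.
targets = {
    "information_extraction": [0.9, 0.1, 0.0],
    "reboot_network_device": [0.1, 0.9, 0.2],
}
print(match_prompt_target([0.85, 0.15, 0.05], targets))  # information_extraction
```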

@@ -0,0 +1,15 @@
.. _getting_help:

Getting help
============

We are very interested in building a community around Arch. Please reach out to us if you are
interested in using it and need help, or if you want to contribute.
Please see `contact info <https://github.com/katanemo/arch#contact>`_.

Reporting security vulnerabilities
----------------------------------

Please see `security contact info
<https://github.com/katanemo/arch#reporting-security-vulnerabilities>`_.

@@ -0,0 +1,12 @@
.. _intro:

Introduction
============

.. toctree::
   :maxdepth: 3

   what_is_arch
   architecture/architecture
   life_of_a_request
   getting_help

@@ -0,0 +1,124 @@
.. _life_of_a_request:

Life of a Request
=================

Below we describe the events in the life of a request passing through an Arch gateway instance. We first
describe how Arch fits into the request path and then the internal events that take place following
the arrival of a request at Arch from downstream clients. We follow the request until the corresponding
dispatch upstream and the response path.

.. image:: /_static/img/network-topology-app-server.jpg
   :width: 100%
   :align: center

Terminology
-----------

Arch uses the following terms throughout its codebase and documentation:

* *Listeners*: The Arch primitive responsible for binding to an IP/port, accepting new HTTP connections and orchestrating
  the downstream-facing aspects of prompt processing. Arch relies almost exclusively on `Envoy's Listener subsystem <arch_overview_listeners>`_.
* *Downstream*: An entity connecting to Arch. This may be another AI agent (sidecar or networked) or a remote client.
* *LLM Providers*: A set of upstream LLMs (API-based or network nodes) that Arch routes/forwards user and application-specific prompts to.
  Arch offers a simple abstraction to call different LLMs via model id, and adds LLM-specific retry, failover and routing capabilities.
  Arch builds on top of Envoy's `Cluster subsystem <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager#arch-overview-cluster-manager>`_.
* *Upstream*: A set of hosts that can receive traffic from an instance of the Arch gateway.
* *Prompt Targets*: A core primitive offered in Arch. Prompt targets are endpoints that receive prompts that are processed by Arch.
  For example, Arch enriches incoming prompts with metadata like knowing when a request is a follow-up or clarifying prompt so that you can
  build faster, more accurate RAG apps. To support agentic apps, like scheduling travel plans or sharing comments on a document via prompts,
  Arch uses its function calling abilities to extract the critical information needed by a downstream API before calling it directly.
Network topology
----------------

How a request flows through the components in a network (including Arch) depends on the network's topology.
Arch can be used in a wide variety of networking topologies. We focus on the inner operation of Arch below,
but briefly address how Arch relates to the rest of the network in this section.

* Ingress listeners take requests from downstream clients, like a web UI or clients that forward prompts to your local application.
  Responses from the local application flow back through Arch to the downstream.
* Egress listeners take requests from the local application and forward them to LLMs. These receiving nodes
  will also typically be running Arch and accepting the request via their ingress listeners.

.. image:: /_static/img/network-topology-app-server.jpg
   :width: 100%
   :align: center

In practice, Arch can be deployed on the edge and as an internal load balancer between AI agents. A request path may
traverse multiple Arch gateways:

.. image:: /_static/img/network-topology-agent.jpg
   :width: 100%
   :align: center

Configuration
-------------

Today, Arch only supports a static bootstrap configuration file, for simplicity:

.. literalinclude:: /_config/getting-started.yml
   :language: yaml
High level architecture
-----------------------
The request processing path in Arch has two main parts:
* :ref:`Listener subsystem <arch_overview_listeners>` which handles **downstream** request
processing. It is also responsible for managing the downstream request lifecycle and for the
response path to the client. The downstream HTTP/2 codec lives here.
* :ref:`Prompt subsystem <arch_overview_prompt_handling>` which is responsible for selecting and
processing the **upstream** connection to an endpoint. This is where knowledge of targets and
endpoint health, load balancing and connection pooling exists. The upstream HTTP/2 codec lives
here.
The two subsystems are bridged with the HTTP router filter, which forwards the HTTP request from
downstream to upstream.
Arch utilizes the `Envoy event-based thread model <https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310>`_.
A main thread is responsible for the server lifecycle, configuration processing, stats, etc., and some number
of :ref:`worker threads <arch_overview_threading>` process requests. All threads operate around an event
loop (`libevent <https://libevent.org/>`_) and any given downstream TCP connection will be handled by exactly
one worker thread for its lifetime. Each worker thread maintains its own pool of TCP connections to upstream
endpoints. Today, Arch implements its core functionality around prompt handling in worker threads.
Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
enables scaling to very high core count CPUs.
Request Flow
------------
Overview
^^^^^^^^
A brief outline of the life cycle of a request and response using the example configuration above:
1. **TCP Connection Establishment**:
A TCP connection from downstream is accepted by an Arch listener running on a worker thread. The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS, decrypts incoming data for processing.
2. **Prompt Guardrails Check**:
Arch first checks the incoming prompts for guardrails such as jailbreak attempts and toxicity. This ensures that harmful or unwanted behaviors are detected early in the request processing pipeline.
3. **Intent Matching**:
The decrypted data stream is deframed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs intent matching using the name and description of the defined prompt targets, determining which endpoint should handle the prompt.
4. **Parameter Gathering with Arch-FC1B**:
If a prompt target requires specific parameters, Arch engages Arch-FC1B to extract the necessary details from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.
5. **API Call Execution**:
Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified, load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint. For more details on routing and load balancing, refer to the `Envoy routing documentation <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/intro/arch_overview>`_.
6. **Default Summarization by Upstream LLM**:
By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization. This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval-Augmented Generation) and agentic applications.
7. **Error Handling and Forwarding**:
Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to designated error targets. Error details are communicated through specific headers to the application:
- ``X-Function-Error-Code``: Code indicating the type of function call error.
- ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
- Additional headers carry messages and timestamps to aid in debugging and logging.
8. **Response Handling**:
The upstream endpoint's TLS transport socket encrypts the response, which is then proxied back downstream. Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.
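The error headers described in step 7 can be consumed by the application when a response comes back. The helper below is a minimal illustrative sketch; only the header names come from this document, while the function and its categorization logic are hypothetical.

```python
# Hypothetical helper for interpreting Arch's error headers on a response.
# Only the header names are from the docs; the logic is illustrative.

def classify_arch_error(headers: dict) -> str:
    """Return a coarse error category based on Arch's error headers."""
    if "X-Prompt-Guard-Error-Code" in headers:
        # A guardrail (e.g. jailbreak/toxicity) violation was detected.
        return f"guardrail:{headers['X-Prompt-Guard-Error-Code']}"
    if "X-Function-Error-Code" in headers:
        # A backend function/API call failed during prompt processing.
        return f"function:{headers['X-Function-Error-Code']}"
    return "ok"

print(classify_arch_error({"X-Function-Error-Code": "422"}))  # → function:422
```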
What is Arch
============
Arch is an intelligent Layer 7 gateway designed for generative AI apps, agents, and Co-pilots that work
with prompts. Written in `Rust <https://www.rust-lang.org/>`_, and engineered with purpose-built
:ref:`LLMs <llms_in_arch>`, Arch handles all the critical but undifferentiated tasks related to handling and
processing prompts, including rejecting `jailbreak <https://github.com/verazuo/jailbreak_llms>`_ attempts,
intelligently calling “backend” APIs to fulfill a user's request represented in a prompt, routing/disaster
recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
The project was born out of the belief that:
*prompts are nuanced and opaque user requests that need the same capabilities as network requests
in modern (cloud-native) applications, including secure handling, intelligent routing, robust observability,
and integration with backend (API) systems for personalization.*
In practice, achieving the above goal is incredibly difficult. Arch attempts to do so by providing the
following high level features:
**Out of process architecture, built on Envoy:** Arch takes a dependency on `Envoy <http://envoyproxy.io/>`_
and is a self-contained process that is designed to run alongside your application servers. Arch uses
Envoy's HTTP connection management subsystem and HTTP L7 filtering capabilities to extend its proxying
functionality. This gives Arch several advantages:
* Arch builds on Envoy's success. Envoy is used at massive scale by the leading technology companies of
our time, including `AirBnB <https://www.airbnb.com>`_, `Dropbox <https://www.dropbox.com>`_,
`Google <https://www.google.com>`_, `Reddit <https://www.reddit.com>`_, `Stripe <https://www.stripe.com>`_,
etc. It's battle-tested, scales linearly with usage, and enables developers to focus on what really matters:
application and business logic.
* Arch works with any application language. A single Arch deployment can act as gateway for AI applications
written in Python, Java, C++, Go, PHP, etc.
* As anyone that has worked with a modern application architecture knows, deploying library upgrades
can be incredibly painful. Arch can be deployed and upgraded quickly across your infrastructure
transparently.
**Engineered with LLMs:** Arch is engineered with specialized LLMs that are designed for fast, cost-effective
and accurate handling of prompts. These (sub-billion parameter) :ref:`LLMs <llms_in_arch>` are designed to be
best-in-class for critical but undifferentiated prompt-related tasks like 1) applying guardrails for jailbreak
attempts 2) extracting critical information from prompts (like follow-on, clarifying questions, etc.) so that
you can improve the speed and accuracy of retrieval, and be able to convert prompts into API semantics when necessary
to build text-to-action (or agentic) applications. The focus for Arch is to make prompt processing indistinguishable
from the processing of a traditional HTTP request before forwarding it to an application server. With our focus on
speed and cost, Arch uses purpose-built LLMs and will continue to invest in those to lower latency (and cost) while
maintaining exceptional baseline performance with frontier LLMs like `OpenAI <https://openai.com>`_, and
`Anthropic <https://www.anthropic.com>`_.
**Prompt Guardrails:** Arch helps you apply prompt guardrails in a centralized way for better governance
hygiene. With prompt guardrails you can prevent `jailbreak <https://github.com/verazuo/jailbreak_llms>`_
attempts or toxicity present in user's prompts without having to write a single line of code. To learn more about
how to configure guardrails available in Arch, read :ref:`more <llms_in_arch>`.
**Function Calling:** Arch helps you personalize GenAI apps by enabling calls to application-specific (API)
operations using prompts. This involves any predefined functions or APIs you want to expose to users to
perform tasks, gather information, or manipulate data. With function calling, you have the flexibility to support
agentic workflows tailored to specific use cases - from updating insurance claims to creating ad campaigns.
Arch analyzes prompts, extracts critical information from prompts, engages in lightweight conversation with the
user to gather any missing parameters and makes API calls so that you can focus on writing business logic.
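As a sketch of the division of labor this enables, the handler below contains only business logic; parameter extraction and gathering would have already happened in Arch before the call is made. The function name, parameters, and endpoint are hypothetical, not part of Arch's API.

```python
# Hypothetical application-side handler backing an "update insurance claim"
# prompt target. Arch would extract claim_id/status from the user's prompt
# and call the API backing this function; the signature is illustrative.

def update_insurance_claim(claim_id: str, status: str, note: str = "") -> dict:
    """Pure business logic -- no prompt parsing happens here."""
    return {"claim_id": claim_id, "status": status, "note": note}

result = update_insurance_claim("C-1042", "approved", note="reviewed by agent")
print(result["status"])  # → approved
```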
**Best-in-Class Monitoring & Traffic Management:** Arch offers several monitoring metrics that help you
understand three critical aspects of your application: latency, token usage, and error rates by LLM provider.
Latency measures the speed at which your application responds to users, including metrics like time
to first token (TFT), time per output token (TOT), and the total latency as perceived by users. In
addition, Arch offers several capabilities for calls originating from your applications to upstream LLMs,
including a vendor-agnostic SDK to make LLM calls, smart retries on errors from upstream LLMs, and automatic
cutover to other LLMs configured for continuous availability and disaster recovery scenarios.
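The retry-and-cutover behavior happens inside the gateway; as a rough application-level sketch of the idea (the provider names, call signature, and `fake_send` helper here are all hypothetical):

```python
# Illustrative sketch of smart retries with cutover across LLM providers.
# Arch performs this inside the gateway; this is not its actual API.

def call_with_failover(providers, send):
    """Try each provider in order; return the first successful response."""
    last_err = None
    for provider in providers:
        try:
            return send(provider)
        except RuntimeError as err:
            last_err = err  # record the failure and cut over to the next provider
    raise last_err

def fake_send(provider):
    # Simulate the primary provider returning an upstream error.
    if provider == "primary-llm":
        raise RuntimeError("upstream 503")
    return f"answer from {provider}"

print(call_with_failover(["primary-llm", "fallback-llm"], fake_send))
# → answer from fallback-llm
```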
**Front/edge proxy support:** There is substantial benefit in using the same software at the edge (observability,
prompt management, load balancing algorithms, etc.) as within the rest of your infrastructure. Arch has a feature
set that makes it well suited as an edge proxy for most modern web application use cases. This includes TLS
termination, HTTP/1.1, HTTP/2, and HTTP/3 support, and prompt-based routing.
.. _llms_in_arch:
LLMs
====
Arch utilizes purpose-built, industry-leading LLMs to handle the crufty and undifferentiated
work around accepting, handling, and processing prompts. The following sections describe these models.
Arch-Guard
----------
LLM-powered applications are susceptible to prompt attacks: prompts intentionally designed to subvert the developer's
intended behavior of the LLM. Arch-Guard is a classifier model trained on a large corpus of attacks, capable of detecting explicitly
malicious prompts (and toxicity).
The model is useful as a starting point for identifying and guardrailing against the most risky, realistic inputs to
LLM-powered applications. Our goal in embedding Arch-Guard in the Arch gateway is to enable developers to focus on their business logic
and factor security and safety out of application logic. With Arch-Guard, developers can significantly reduce prompt attack
risk while maintaining control over the user experience.
Below are our test results comparing the strength of our model to Prompt-Guard from `Meta Llama <https://huggingface.co/meta-llama/Prompt-Guard-86M>`_.
.. list-table::
:header-rows: 1
:widths: 15 15 10 15 15
* - Dataset
- Jailbreak (Yes/No)
- Samples
- Prompt-Guard Accuracy
- Arch-Guard Accuracy
* - casual_conversation
- 0
- 3725
- 1.00
- 1.00
* - commonqa
- 0
- 9741
- 1.00
- 1.00
* - financeqa
- 0
- 1585
- 1.00
- 1.00
* - instruction
- 0
- 5000
- 1.00
- 1.00
* - jailbreak_behavior_benign
- 0
- 100
- 0.10
- 0.20
* - jailbreak_behavior_harmful
- 1
- 100
- 0.30
- 0.52
* - jailbreak_judge
- 1
- 300
- 0.33
- 0.49
* - jailbreak_prompts
- 1
- 79
- 0.99
- 1.00
* - jailbreak_tweet
- 1
- 1282
- 0.16
- 0.35
* - jailbreak_v
- 1
- 20000
- 0.90
- 0.93
* - jailbreak_vigil
- 1
- 104
- 1.00
- 1.00
* - mental_health
- 0
- 3512
- 1.00
- 1.00
* - telecom
- 0
- 4000
- 1.00
- 1.00
* - truthqa
- 0
- 817
- 1.00
- 0.98
* - weather
- 0
- 3121
- 1.00
- 1.00
.. list-table::
:header-rows: 1
:widths: 15 20
* - Statistics
- Overall performance
* - Overall Accuracy
- 0.93568 (Prompt-Guard), 0.95267 (Arch-Guard)
* - True positives rate (TPR)
- 0.8468 (Prompt-Guard), 0.8887 (Arch-Guard)
* - True negative rate (TNR)
- 0.9972 (Prompt-Guard), 0.9970 (Arch-Guard)
* - False positive rate (FPR)
- 0.0028 (Prompt-Guard), 0.0030 (Arch-Guard)
* - False negative rate (FNR)
- 0.1532 (Prompt-Guard), 0.1113 (Arch-Guard)
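The FNR and FPR rows follow directly from the TPR and TNR rows (FNR = 1 - TPR, FPR = 1 - TNR). A quick sanity check against the reported numbers:

```python
# Sanity check: FNR = 1 - TPR and FPR = 1 - TNR for the rates in the
# table above.

def complement(rate: float) -> float:
    return round(1.0 - rate, 4)

print(complement(0.8468))  # Prompt-Guard FNR → 0.1532
print(complement(0.8887))  # Arch-Guard FNR  → 0.1113
print(complement(0.9972))  # Prompt-Guard FPR → 0.0028
print(complement(0.9970))  # Arch-Guard FPR  (matches the table's 0.0030)
```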
.. list-table::
:header-rows: 1
:widths: 15 20
* - Metrics
- Values
* - AUC
- 0.857 (Prompt-Guard), 0.880 (Arch-Guard)
* - Precision
- 0.715 (Prompt-Guard), 0.761 (Arch-Guard)
* - Recall
- 0.999 (Prompt-Guard), 0.999 (Arch-Guard)
Arch-FC1B
---------
Arch Documentation
==================
.. toctree::
:maxdepth: 3
intro/intro
llms/llms
configuration_reference
getting_started/getting_started
getting_started/sample_apps