Adil/fix salman docs (#75)
* added the first set of docs for our technical docs
* more documentation changes
* added support for prompt processing and updated life of a request
* updated docs to include getting-help sections and updated life of a request
* committing local changes for getting started guide, sample applications, and full reference spec for prompt-config
* updated configuration reference, added sample app skeleton, updated favicon
* fixed the configuration reference file, and made minor changes to the intent detection. commit v1 for now
* updated docs with use cases and example code, updated what is arch, and made minor changes throughout
* fixed images and minor doc fixes
* add sphinx_book_theme
* updated README, and made some minor fixes to documentation
* fixed README.md
* fixed image width

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Co-authored-by: Adil Hafeez <adil@katanemo.com>
.gitignore (vendored)

@@ -14,4 +14,4 @@ venv
 demos/function_calling/ollama/models/
 demos/function_calling/ollama/id_ed*
 docs/build/
-open-webui/
+demos/function_calling/open-webui/
README.md

@@ -1,4 +1,15 @@
-A open source project for developers to build and secure faster, more personalized generative AI apps. Katanemo is a high performance gateway designed with state of the art (SOTA) fast LLMs to process, route and evaluate prompts.
+<p>
+  <img src="docs/source/_static/img/arch-logo.png" alt="Arch Gateway Logo" title="Arch Gateway Logo">
+</p>
+
+<h2>Build fast, robust, and personalized GenAI applications.</h2>
+
+Arch is an intelligent [Layer 7](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) gateway designed for generative AI apps, AI agents, and co-pilots that work with prompts. Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting [jailbreak](https://github.com/verazuo/jailbreak_llms) attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
+
+Arch is built on and by the core contributors of the popular [Envoy Distributed Proxy](https://www.envoyproxy.io/) with the belief that:
+
+*Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – all outside business logic.*
+
+# Demos
+## Complete
@@ -22,3 +22,4 @@ sphinxcontrib-jsmath==1.0.1
 sphinxcontrib-qthelp==2.0.0
 sphinxcontrib-serializinghtml==2.0.0
 urllib3==2.2.3
+sphinx_book_theme
docs/source/_config/function-calling-network-agent.yml (new file, 41 lines)

version: "0.1-beta"
listen:
  address: 127.0.0.1 | 0.0.0.0
  port_value: 8080 # If you configure port 443, you'll need to update the listener with tls_certificates

system_prompts:
  - name: network_assistant
    content: You are a network assistant that just offers facts about the operational health of the network

llm_providers:
  - name: "OpenAI"
    access_key: $OPEN_AI_KEY
    model: gpt-4o
    default: true

prompt_targets:
  - name: reboot_devices
    description: >
      This prompt target handles user requests to reboot devices.
      It ensures that when users request to reboot specific devices or device groups, the system processes the reboot commands accurately.

      **Examples of user prompts:**

      - "Please reboot device 12345."
      - "Restart all devices in tenant group tenant-XYZ."
      - "I need to reboot devices A, B, and C."

    path: /agent/device_reboot
    parameters:
      - name: "device_ids"
        type: list # Options: integer | float | list | dictionary | set
        description: "A list of device identifiers (IDs) to reboot."
        required: false
      - name: "device_group"
        type: string # Options: string | integer | float | list | dictionary | set
        description: "The name of the device group to reboot."
        required: false

prompt_endpoints:
  - "http://127.0.0.2"
  - "http://127.0.0.1"
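Since both parameters in the reboot_devices target above are optional, the downstream handler has to enforce that at least one is present. As a minimal sketch (the payload shape is an assumption inferred from the parameter names above, not Arch's documented wire format), the JSON body POSTed to /agent/device_reboot could be assembled like this:

```python
def build_reboot_payload(device_ids=None, device_group=None):
    """Assemble the request body for /agent/device_reboot from parameters
    extracted out of a prompt. Both are optional in the prompt-target
    definition, so 'at least one present' is enforced here."""
    payload = {}
    if device_ids is not None:
        payload["device_ids"] = list(device_ids)
    if device_group is not None:
        payload["device_group"] = device_group
    if not payload:
        raise ValueError("at least one of device_ids or device_group is required")
    return payload

# e.g. for the prompt "Restart all devices in tenant group tenant-XYZ."
payload = build_reboot_payload(device_group="tenant-XYZ")
```

This mirrors the `required: false` flags in the YAML: neither field alone is mandatory, but an empty request is rejected before it reaches the reboot API.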
@@ -1,21 +1,22 @@
 version: "0.1-beta"
 listen:
   address: 127.0.0.1 | 0.0.0.0
-  port_value: 8080 # If you configure port 443, you'll need to update the listener to with your tls_certificates
+  port_value: 8080 # If you configure port 443, you'll need to update the listener with tls_certificates
   messages: tuple | hugging-face-messages-api

 system_prompts:
   - name: network_assistant
-    content: you are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions
+    content: You are a network assistant that just offers facts about the operational health of the network

 llm_providers:
   - name: "OpenAI"
     access_key: $OPEN_AI_KEY
-    model: gpt-4
+    model: gpt-4o
     default: true
   - name: "Mistral"
     access_key: $MISTRAL_KEY
-    model: "mixtral8-7B"
+    model: mixtral8-7B

 prompt_endpoints:
   - "http://127.0.0.2"
   - "http://127.0.0.1"
@@ -23,17 +24,18 @@ prompt_endpoints:

 prompt_guards:
   input-guard:
     - name: # jailbreak
       on-exception-message: Looks like you are curious about my abilities. But I can only

 prompt_targets:
   - name: information_extraction
     type: RAG
     description: this prompt handles all information extraction scenarios
     path: /agent/summary

   - name: reboot_network_device
     path: /agent/action
     description: used to help network operators perform device operations like rebooting a device.
     parameters:

 error_target: # handle errors from Bolt or upstream LLMs
   name: "error_handler"
   path: /errors
@@ -2,7 +2,7 @@ version: "0.1-beta"

 listener:
   address: 0.0.0.0 # or 127.0.0.1
   port_value: 8080
   messages: "hugging-face-messages-json" # Defines how Arch should parse the content from application/json or text/plain Content-Type in the HTTP request
   common_tls_context: # If you configure port 443, you'll need to update the listener with your TLS certificates
     tls_certificates:
@@ -16,18 +16,17 @@ system_prompts:
     content: |
       You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

-# Centralized way to manage LLM providers that the application has access to. Manage keys, retry logic, failover, and limits in a central way
-llm_providers:
+llm_providers: # Centralized way to manage LLMs, manage keys, retry logic, failover and limits in a central way
   - name: "OpenAI"
     access_key: $OPENAI_API_KEY
-    model: "gpt-40"
+    model: gpt-4o
     default: true
     stream: true
     rate_limit:
       selector: # optional headers, to add rate limiting based on HTTP headers like JWT tokens or API keys
         http-header:
           name: "Authorization"
           value: "" # Empty value means each separate value has a separate limit
       limit:
         tokens: 100000 # Tokens per unit
         unit: "minute"
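The rate_limit block above is declarative; to make its semantics concrete, here is an illustrative sketch (not Arch's implementation) of a per-header-value token budget over a fixed one-minute window, matching the "empty value means each separate value has a separate limit" comment:

```python
import time
from collections import defaultdict

class TokenRateLimiter:
    """Per-key token budget over a fixed one-minute window, keyed by the
    value of the selected HTTP header (e.g. Authorization). Each distinct
    header value gets its own budget, modeled here as a per-key dict."""

    def __init__(self, tokens_per_minute=100_000):
        self.limit = tokens_per_minute
        # key -> [window_start_seconds, tokens_used_in_window]
        self.windows = defaultdict(lambda: [0.0, 0])

    def allow(self, header_value, tokens, now=None):
        """Return True and record usage if `tokens` fits in this key's
        remaining budget for the current window, else False."""
        now = time.monotonic() if now is None else now
        window = self.windows[header_value]
        if now - window[0] >= 60:           # window expired: reset it
            window[0], window[1] = now, 0
        if window[1] + tokens > self.limit:
            return False                     # over budget: reject
        window[1] += tokens
        return True
```

A production limiter would more likely use a sliding window or token bucket; the fixed window keeps the sketch short.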
@@ -44,7 +43,6 @@ prompt_guards:
   - name: "jailbreak"
     on_exception:
       forward_to_error_target: true
-  # Additional guard configurations can be added here
   - name: "toxicity"
     on_exception:
       message: "Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters."
|
|||
name: "error_handler"
|
||||
path: "/errors"
|
||||
|
||||
intent-detection-threshold-override: 0.60 # By default Arch uses an NLI + embedding approach to match an incomming prompt to a prompt target.
|
||||
# The intent matching threshold is kept at 0.80, you can overide this behavior if you would like
|
||||
tracing: 100 #sampling rate. Note by default Arch works on OpenTelemetry compatible tracing.
|
||||
|
||||
intent-detection-threshold-override: 0.60 # By default Arch uses an NLI + embedding approach to match an incomming prompt to a prompt target.
|
||||
# The intent matching threshold is kept at 0.80, you can overide this behavior if you would like
|
||||
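To illustrate what a similarity threshold like 0.80 means here, a toy sketch of the embedding side of target matching (cosine similarity only; Arch's actual matcher also uses an NLI model, so this is purely illustrative and the vectors are made up):

```python
import math

def best_intent(prompt_vec, target_vecs, threshold=0.80):
    """Score the prompt embedding against each prompt-target embedding with
    cosine similarity; return the best target name, or None when nothing
    clears the threshold (the prompt falls through as 'no matching intent')."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    name, score = max(
        ((name, cosine(prompt_vec, vec)) for name, vec in target_vecs.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None
```

Lowering the override to 0.60, as in the config above, makes borderline prompts match a target instead of falling through.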
docs/source/_config/rag-prompt-targets.yml (new file, 40 lines)

version: "0.1-beta"
listen:
  address: 127.0.0.1 | 0.0.0.0
  port_value: 8080 # If you configure port 443, you'll need to update the listener with tls_certificates

system_prompts:
  - name: network_assistant
    content: You are a network assistant that just offers facts about the operational health of the network

llm_providers:
  - name: "OpenAI"
    access_key: $OPEN_AI_KEY
    model: gpt-4o
    default: true

prompt_targets:
  - name: get_device_statistics
    description: >
      This prompt target ensures that when users request device-related statistics, the system accurately retrieves and presents the relevant data
      based on the specified devices and time range. Examples of user prompts include:

      - "Show me the performance stats for device 12345 over the past week."
      - "What are the error rates for my devices in the last 24 hours?"
      - "I need statistics on device 789 over the last 10 days."

    path: /agent/device_summary
    parameters:
      - name: "device_ids"
        type: list # Options: integer | float | list | dictionary | set
        description: "A list of device identifiers (IDs) for which the statistics are requested."
        required: true
      - name: "time_range"
        type: integer # Options: integer | float | list | dictionary | set
        description: "The number of days in the past over which to retrieve device statistics. Defaults to 7 days if not specified."
        required: false
        default: 7

prompt_endpoints:
  - "http://127.0.0.2"
  - "http://127.0.0.1"
docs/source/_include/function_calling_flask.py (new file, 72 lines)

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/agent/device_reboot', methods=['POST'])
def reboot_devices():
    """
    Endpoint to reboot devices based on device IDs or a device group.
    """
    data = request.get_json()

    # Extract parameters based on the prompt targets definition
    device_ids = data.get('device_ids')
    device_group = data.get('device_group')

    # Validate that at least one parameter is provided
    if not device_ids and not device_group:
        return jsonify({'error': "At least one of 'device_ids' or 'device_group' must be provided."}), 400

    devices_to_reboot = []

    # Process 'device_ids' if provided
    if device_ids:
        if not isinstance(device_ids, list):
            return jsonify({'error': "'device_ids' must be a list."}), 400
        devices_to_reboot.extend(device_ids)

    # Process 'device_group' if provided
    if device_group:
        if not isinstance(device_group, str):
            return jsonify({'error': "'device_group' must be a string."}), 400
        # Simulate retrieving device IDs from the device group
        # In a real application, replace this with actual data retrieval
        group_devices = get_devices_by_group(device_group)
        if not group_devices:
            return jsonify({'error': f"No devices found in group '{device_group}'."}), 404
        devices_to_reboot.extend(group_devices)

    # Remove duplicates in case of overlap between device_ids and device_group
    devices_to_reboot = list(set(devices_to_reboot))

    # Simulate rebooting devices
    reboot_results = []
    for device_id in devices_to_reboot:
        # Placeholder for actual reboot logic
        result = {
            'device_id': device_id,
            'status': 'Reboot initiated'
        }
        reboot_results.append(result)

    response = {
        'reboot_results': reboot_results
    }

    return jsonify(response), 200

def get_devices_by_group(group_name):
    """
    Simulate retrieving device IDs based on a device group name.
    In a real application, this would query a database or external service.
    """
    # Placeholder data for demonstration purposes
    device_groups = {
        'Sales': ['1001', '1002', '1003'],
        'Engineering': ['2001', '2002', '2003'],
        'Data Center': ['3001', '3002', '3003']
    }
    return device_groups.get(group_name, [])

if __name__ == '__main__':
    app.run(debug=True)
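One small caveat in the handler above: `list(set(devices_to_reboot))` deduplicates but returns the IDs in arbitrary order. If reboot ordering matters (e.g. to reboot explicitly listed devices before group members), an order-preserving dedup is a one-liner:

```python
def dedupe_preserving_order(device_ids):
    """Remove duplicate device IDs while keeping first-seen order,
    unlike list(set(...)) which returns an arbitrary order.
    dict.fromkeys preserves insertion order (Python 3.7+)."""
    return list(dict.fromkeys(device_ids))
```

This is a generic Python idiom, not something the PR itself changes.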
@@ -1,78 +1,96 @@
 from flask import Flask, request, jsonify
 from datetime import datetime
 import uuid
+from langchain.memory import ConversationBufferMemory
+from langchain.schema import AIMessage, HumanMessage
+from langchain import OpenAI

 app = Flask(__name__)

-# Global dictionary to keep track of user conversations
-user_conversations = {}
+# Global dictionary to keep track of user memories
+user_memories = {}

 def get_user_conversation(user_id):
     """
-    Retrieve the user's conversation history.
-    If the user does not exist, initialize their conversation data.
+    Retrieve the user's conversation memory using LangChain.
+    If the user does not exist, initialize their conversation memory.
     """
-    if user_id not in user_conversations:
-        user_conversations[user_id] = {
-            'messages': []
-        }
-    return user_conversations[user_id]
+    if user_id not in user_memories:
+        user_memories[user_id] = ConversationBufferMemory(return_messages=True)
+    return user_memories[user_id]

 def update_user_conversation(user_id, client_messages, intent_changed):
     """
-    Update the user's conversation history with new messages.
+    Update the user's conversation memory with new messages using LangChain.
     Each message is augmented with a UUID, timestamp, and intent change marker.
     Only new messages are added to avoid duplication.
     """
-    user_data = get_user_conversation(user_id)
-
-    # Existing messages in the user's conversation
-    stored_messages = user_data['messages']
+    memory = get_user_conversation(user_id)
+    stored_messages = memory.chat_memory.messages

     # Determine the number of stored messages
     num_stored_messages = len(stored_messages)

     # Check for out-of-sync messages
     if num_stored_messages > len(client_messages):
         return jsonify({'error': 'Client messages are out of sync with server'}), 400

     # Determine new messages by slicing the client messages
     new_messages = client_messages[num_stored_messages:]

     # Process each new message
     for index, message in enumerate(new_messages):
-        message_entry = {
+        role = message.get('role')
+        content = message.get('content')
+        metadata = {
             'uuid': str(uuid.uuid4()),
             'timestamp': datetime.utcnow().isoformat(),
-            'role': message.get('role'),
-            'content': message.get('content'),
             'intent_changed': False  # Default value
         }

         # Mark the intent change on the last message if detected
         if intent_changed and index == len(new_messages) - 1:
-            message_entry['intent_changed'] = True
-        user_data['messages'].append(message_entry)
+            metadata['intent_changed'] = True

-    return user_data
+        # Create a new message with metadata
+        if role == 'user':
+            memory.chat_memory.add_message(
+                HumanMessage(content=content, additional_kwargs={'metadata': metadata})
+            )
+        elif role == 'assistant':
+            memory.chat_memory.add_message(
+                AIMessage(content=content, additional_kwargs={'metadata': metadata})
+            )
+        else:
+            # Handle other roles if necessary
+            pass
+
+    return memory

 def get_messages_since_last_intent(messages):
     """
-    Retrieve messages from the last intent change onwards.
+    Retrieve messages from the last intent change onwards using LangChain.
     """
     messages_since_intent = []
     for message in reversed(messages):
         # Insert message at the beginning to maintain correct order
         messages_since_intent.insert(0, message)
-        if message.get('intent_changed'):
+        metadata = message.additional_kwargs.get('metadata', {})
+        # Break if intent_changed is True
+        if metadata.get('intent_changed', False):
             break
     return messages_since_intent

 def forward_to_llm(messages):
     """
-    Simulate forwarding messages to an upstream LLM.
-    Replace this with the actual API call to the LLM.
+    Forward messages to an upstream LLM using LangChain.
     """
-    # For demonstration purposes, we'll return a placeholder response
-    return "LLM response based on provided messages."
+    # Convert messages to a conversation string
+    conversation = ""
+    for message in messages:
+        role = 'User' if isinstance(message, HumanMessage) else 'Assistant'
+        content = message.content
+        conversation += f"{role}: {content}\n"
+    # Use LangChain's LLM to get a response. This call is proxied through Arch for end-to-end observability and traffic management
+    llm = OpenAI()
+    # Create a prompt that includes the conversation
+    prompt = f"{conversation}Assistant:"
+    response = llm(prompt)
+    return response

 @app.route('/process_rag', methods=['POST'])
 def process_rag():
@@ -98,22 +116,37 @@ def process_rag():
         return jsonify({'error': 'Invalid value for x-arch-prompt-intent-change header'}), 400

     # Update user conversation based on intent change
-    user_data = update_user_conversation(user_id, client_messages, intent_changed)
+    memory = update_user_conversation(user_id, client_messages, intent_changed)

     # Retrieve messages since last intent change for LLM
-    messages_for_llm = get_messages_since_last_intent(user_data['messages'])
+    messages_for_llm = get_messages_since_last_intent(memory.chat_memory.messages)

     # Forward messages to upstream LLM
     llm_response = forward_to_llm(messages_for_llm)

+    # Prepare the messages to return
+    messages_to_return = []
+    for message in memory.chat_memory.messages:
+        role = 'user' if isinstance(message, HumanMessage) else 'assistant'
+        content = message.content
+        metadata = message.additional_kwargs.get('metadata', {})
+        message_entry = {
+            'uuid': metadata.get('uuid'),
+            'timestamp': metadata.get('timestamp'),
+            'role': role,
+            'content': content,
+            'intent_changed': metadata.get('intent_changed', False)
+        }
+        messages_to_return.append(message_entry)

     # Prepare the response
     response = {
         'user_id': user_id,
-        'messages': user_data['messages'],
+        'messages': messages_to_return,
         'llm_response': llm_response
     }

     return jsonify(response), 200

 if __name__ == '__main__':
     app.run(debug=True)
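The intent-marker bookkeeping in the diff above can be illustrated without the LangChain dependency. A self-contained sketch (plain dicts stand in for the message objects) of truncating history at the last intent change:

```python
def messages_since_last_intent(messages):
    """Walk the history backwards and keep everything from the most recent
    intent change onward, mirroring get_messages_since_last_intent above."""
    recent = []
    for message in reversed(messages):
        recent.insert(0, message)
        if message.get("intent_changed"):
            break
    return recent

history = [
    {"role": "user", "content": "Summarize my network health.", "intent_changed": False},
    {"role": "assistant", "content": "All devices are healthy.", "intent_changed": False},
    {"role": "user", "content": "Now reboot device 12345.", "intent_changed": True},
]
# Only the messages from the new intent onward are forwarded to the LLM.
recent = messages_since_last_intent(history)
```

The payoff is that retrieval and the upstream LLM see only the active topic, not the full transcript, which is the speed/accuracy benefit the docs describe.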
docs/source/_include/parameter_handling_flask.py (new file, 41 lines)

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/agent/device_summary', methods=['POST'])
def get_device_summary():
    """
    Endpoint to retrieve device statistics based on device IDs and an optional time range.
    """
    data = request.get_json()

    # Validate 'device_ids' parameter
    device_ids = data.get('device_ids')
    if not device_ids or not isinstance(device_ids, list):
        return jsonify({'error': "'device_ids' parameter is required and must be a list"}), 400

    # Validate 'time_range' parameter (optional, defaults to 7)
    time_range = data.get('time_range', 7)
    if not isinstance(time_range, int):
        return jsonify({'error': "'time_range' must be an integer"}), 400

    # Simulate retrieving statistics for the given device IDs and time range
    # In a real application, you would query your database or external service here
    statistics = []
    for device_id in device_ids:
        # Placeholder for actual data retrieval
        stats = {
            'device_id': device_id,
            'time_range': f'Last {time_range} days',
            'data': f'Statistics data for device {device_id} over the last {time_range} days.'
        }
        statistics.append(stats)

    response = {
        'statistics': statistics
    }

    return jsonify(response), 200

if __name__ == '__main__':
    app.run(debug=True)
@@ -1,55 +1,4 @@
 @import url("theme.css");

-/* Splits long line descriptions in tables into multiple lines */
-.wy-table-responsive table td, .wy-table-responsive table th {
-    white-space: normal !important;
-}
+.bd-article {
+    padding-left: 3rem;
+    padding-right: 3rem;
+}

-/* align multi line csv table columns */
-table.docutils div.line-block {
-    margin-left: 0;
-}
-
-/* Breaking long words */
-.wy-nav-content {
-    overflow-wrap: break-word;
-    max-width: 1000px;
-}
-
-/* To style the API version label of a search result item */
-.xds-version-label {
-    border-radius: 20%;
-    background-color: #aaa;
-    color: #ffffff;
-    margin-left: 4px;
-    padding: 4px;
-}
-
-/* make inline sidebars flow down the right of page */
-.rst-content .sidebar {
-    clear: right;
-}
-
-/* make code.literals more muted - dont use red! */
-.rst-content code.literal {
-    color: #555;
-    background-color: rgba(27, 31, 35, 0.05);
-    padding: 2px 2px;
-    border: solid #eee 1px;
-}
-
-/* restore margin bottom on aligned images */
-.rst-content img.align-center {
-    margin-bottom: 24px
-}
-
-/* suppress errs on pseudo-json code highlights */
-.highlight-json .highlight .err {
-    border: inherit;
-    box-sizing: inherit;
-}
-
-/* tame the search highlight colours */
-.rst-content .highlighted {
-    background: #f6f5db;
-    box-shadow: 0 0 0 2px #e7e6b6;
-}
Binary image files changed (before → after size):
15 KiB → 15 KiB
446 KiB → 311 KiB
298 KiB → 297 KiB
309 KiB → 264 KiB
324 KiB → 281 KiB
@@ -35,14 +35,12 @@ exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
 html_favicon = '_static/favicon.ico'

 # -- Options for HTML output -------------------------------------------------
-html_theme = 'sphinx_rtd_theme'  # You can change the theme to 'sphinx_rtd_theme' or another of your choice.
+html_theme = 'sphinx_book_theme'  # You can change the theme to 'sphinx_rtd_theme' or another of your choice.

 # Specify the path to the logo image file (make sure the logo is in the _static directory)
 html_logo = '_static/img/arch-logo.png'

 html_theme_options = {
-    'logo_only': True,
-    'includehidden': False,
     'navigation_depth': 4,
     'collapse_navigation': False,
 }
@@ -52,4 +50,4 @@ html_theme_options = {
 # so a file named "default.css" will overwrite the builtin "default.css".
 html_static_path = ['_static']

-html_style = 'css/arch.css'
+#html_style = 'css/arch.css'
@@ -1,13 +1,13 @@
 Configuration Reference
 ============================

-The following is a complete reference of the prompt-conifg.yml that controls the behavior of an Arch gateway.
-We've kept things simple (less than 100 lines) and held off on exposing additional functionality (for e.g. suppporting
-push observability stats, managing prompt-endpoints as virtual cluster, expose more load balancing options to endpoints,
-etc). Our focus has been to choose the best defaults for developers, so that they can spend more of their time in building
-features unique to their AI experience.
+The following is a complete reference of the prompt-config.yml that controls the behavior of an Arch gateway.
+We've kept things simple (less than 100 lines) and held off on exposing additional functionality (e.g. supporting
+push observability stats, managing prompt-endpoints as a virtual cluster, exposing more load balancing options, etc.).
+Our belief is that simple things should be simple, so we offer good defaults for developers so that they can spend more
+of their time building features unique to their AI experience.

 .. literalinclude:: /_config/prompt-config-full-reference.yml
     :language: yaml
     :linenos:
     :caption: :download:`prompt-config-full-reference-beta-1-0.yml </_config/prompt-config-full-reference.yml>`
@@ -3,6 +3,5 @@ Getting Started

 This section gets you started with a very simple configuration and provides some example configurations.

 The fastest way to get started using Arch is installing `pre-built binaries <https://hub.docker.com/r/katanemo/arch>`_.
 You can also build it from source.
@@ -1,6 +0,0 @@
-.. toctree::
-   :maxdepth: 2
-   :caption: Sample Applications
-
-   sample_apps/rag
-   sample_apps/function_calling
@@ -1,6 +0,0 @@
-Function Calling (Agentic) Apps
-===============================
-
-Building something more than a summary/QA experience requires giving users access to your data and APIs - via prompts.
-Arch enables that use case by offering a capability called "Function Calling". Arch extracts critical information
-from a prompt and can match the intent of the user to an API or business function hosted in your application.
@@ -1,27 +0,0 @@
-Retrieval-Augmented Generation (RAG)
-====================================
-
-The following section describes how Arch can help you build faster, smarter Retrieval-Augmented Generation (RAG) applications.
-
-Intent Markers (Multi-Turn Chat)
-----------------------------------
-
-Developers struggle to handle follow-up questions, or clarifying questions from users in their AI applications. Specifically, when
-users ask for modifications or additions to previous responses, their AI applications often generate entirely new responses instead
-of adjusting the previous ones. Developers face challenges in maintaining context across interactions, despite using tools like
-ConversationBufferMemory and chat_history from LangChain.
-
-There are several documented cases of this issue, `here <https://www.reddit.com/r/ChatGPTPromptGenius/comments/17dzmpy/how_to_use_rag_with_conversation_history_for/?>`_,
-`and here <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_ and `again here <https://www.reddit.com/r/LangChain/comments/1bajhg8/chat_with_rag_further_questions/>`_.
-
-Arch helps developers with intent detection tracking. Arch uses its lightweight NLI and embedding-based intent detection models to know
-whether the user's last prompt represents a new intent or not. This way developers can easily build an intent tracker and only use a subset of prompts
-from the history to improve the retrieval and speed of their applications.
-
-.. literalinclude:: /_include/intent_detection.py
-    :language: python
-    :linenos:
-    :lines: 77-
-    :emphasize-lines: 15-22
-    :caption: :download:`intent-detection-python-example.py </_include/intent_detection.py>`
docs/source/getting_started/use_cases.rst (new file, 6 lines)

.. toctree::
   :maxdepth: 2
   :caption: Use Cases

   use_cases/rag
   use_cases/function_calling
docs/source/getting_started/use_cases/function_calling.rst (new file, 57 lines)

Agentic (Text-to-Action) Apps
==============================

Arch helps you easily personalize your applications by enabling calls to application-specific (API) operations
via user prompts. This involves any predefined functions or APIs you want to expose to users to perform tasks,
gather information, or manipulate data. With function calling, you have the flexibility to support "agentic" apps
tailored to specific use cases - from updating insurance claims to creating ad campaigns - via prompts.

Arch analyzes prompts, extracts critical information from them, engages in lightweight conversation with
the user to gather any missing parameters, and makes API calls so that you can focus on writing business logic.
Arch does this via its purpose-built Arch-FC1B LLM - the fastest (200ms p90 - 10x faster than GPT-4o) and cheapest
(100x cheaper than GPT-4o) function-calling LLM that matches the performance of frontier models.

______________________________________________________________________________________________
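The gather-missing-parameters step described above can be sketched in a few lines (the function and parameter names here are hypothetical illustrations, not part of Arch's API):

```python
def missing_parameters(required, extracted):
    """Compare what a prompt target requires against what was extracted
    from the prompt, so the gateway knows what to ask the user for next."""
    return [name for name in required if extracted.get(name) in (None, "", [])]

# A hypothetical reboot target that requires device_ids, where the prompt
# only mentioned a device group:
missing = missing_parameters(["device_ids"], {"device_group": "tenant-XYZ"})
# missing == ["device_ids"], so the gateway would ask which devices to reboot
```

Once the list is empty, the extracted parameters can be forwarded to the prompt target's API endpoint.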
||||
|
||||
Single Function Call
|
||||
--------------------
|
||||
In the most common scenario, users will request a single action via prompts, and Arch efficiently processes the
|
||||
request by extracting relevant parameters, validating the input, and calling the designated function or API. Here
|
||||
is how you would go about enabling this scenario with Arch:
|
||||
|
||||
Step 1: Define prompt targets with functions
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. literalinclude:: /_config/function-calling-network-agent.yml
|
||||
:language: yaml
|
||||
:linenos:
|
||||
:emphasize-lines: 16-37
|
||||
:caption: Define prompt targets that can enable users to engage with API and backened functions of an app
|
||||
|
||||
Step 2: Process request parameters in Flask
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once the prompt targets are configured as above, handling those parameters in your application is straightforward:

.. literalinclude:: /_include/parameter_handling_flask.py
   :language: python
   :linenos:
   :caption: Flask API example for parameter extraction via HTTP request parameters

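As a minimal sketch in the spirit of the included file (which is not shown in this diff): the route and parameter names (``device_id``, ``time_range``) are assumptions for illustration. Arch forwards extracted parameters as ordinary HTTP request parameters, so a plain Flask view can read them.

```python
# Hypothetical sketch, not the repo's /_include/parameter_handling_flask.py.
# Route and parameter names are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/agent/device_summary", methods=["GET", "POST"])
def device_summary():
    # Parameters extracted by Arch from the user's prompt arrive as
    # ordinary query/form parameters.
    device_id = request.values.get("device_id")
    time_range = request.values.get("time_range", "7d")
    if not device_id:
        # Arch gathers missing required parameters from the user, but
        # defensive validation in the app is still good practice.
        return jsonify(error="device_id is required"), 400
    # ... business logic (query metrics, build a summary, etc.) ...
    return jsonify(device_id=device_id, time_range=time_range)
```

Because the parameters are plain request values, the same handler works whether the call originates from Arch or from a regular HTTP client.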
Parallel/Multiple Function Calling
----------------------------------

In more complex use cases, users may request multiple actions or need multiple APIs/functions to be called
simultaneously or sequentially. With Arch, you can handle these scenarios efficiently using parallel or multiple
function calling. This allows your application to engage in a broader range of interactions, such as updating
different datasets, triggering events across systems, or collecting results from multiple services in one prompt.

Arch-FC1B is built to manage these parallel tasks efficiently, ensuring low latency and high throughput, even
when multiple functions are invoked. It provides two mechanisms to handle these cases:

Step 1: Define Multiple Function Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When enabling multiple function calling, define the prompt targets in a way that supports multiple functions or
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
the user's intent.

Example of Multiple Prompt Targets in YAML:
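The diff truncates before the YAML example itself. As a purely illustrative sketch (every field name here - ``prompt_targets``, ``parameters``, ``endpoint`` - is an assumption for illustration, not the project's canonical configuration schema), multiple targets might look like:

```yaml
# Hypothetical sketch only - field names are illustrative, not Arch's
# canonical configuration schema.
prompt_targets:
  - name: update_claim
    description: Update the status of an insurance claim
    parameters:
      - name: policy_id
        type: string
        required: true
    endpoint:
      name: app_server
      path: /agent/update_claim
  - name: create_ad_campaign
    description: Create a new advertising campaign
    parameters:
      - name: campaign_name
        type: string
        required: true
    endpoint:
      name: app_server
      path: /agent/create_campaign
```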
94 docs/source/getting_started/use_cases/rag.rst Normal file
@@ -0,0 +1,94 @@
Retrieval-Augmented (RAG)
=========================

The following section describes how Arch can help you build faster, smarter, and more accurate
Retrieval-Augmented Generation (RAG) applications.

Intent-drift detection
----------------------

Developers struggle to handle `follow-up <https://www.reddit.com/r/ChatGPTPromptGenius/comments/17dzmpy/how_to_use_rag_with_conversation_history_for/?>`_
or `clarifying <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_
questions. Specifically, when users ask for changes or additions to previous responses, their AI applications often
generate entirely new responses instead of adjusting previous ones. Arch offers *intent-drift* tracking as a feature so
that developers can know when the user has shifted away from a previous intent, allowing them to dramatically improve
retrieval accuracy, lower overall token cost, and improve the speed of responses back to users.

Arch uses its built-in lightweight NLI and embedding models to know if the user has steered away from an active intent.
Arch's intent-drift detection mechanism is based on its *prompt_targets* primitive. Arch tries to match an incoming
prompt to one of the *prompt_targets* configured in the gateway. Once it detects that the user has moved away from an
active intent, Arch adds the ``x-arch-intent-drift`` header to the request before sending it to your application servers.

.. literalinclude:: /_include/intent_detection.py
   :language: python
   :linenos:
   :lines: 95-125
   :emphasize-lines: 14-22
   :caption: :download:`Intent drift detection in python </_include/intent_detection.py>`

_____________________________________________________________________________________________________________________

.. Note::

   Arch is (mostly) stateless so that it can scale in an embarrassingly parallel fashion. So, while Arch offers
   intent-drift detection, you still have to maintain conversational state with intent drift as metadata. The
   following code snippets show how easily you can build and enrich conversational history with Langchain (in Python),
   so that you can use the most relevant prompts for your retrieval and for prompting upstream LLMs.

Step 1: define ConversationBufferMemory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. literalinclude:: /_include/intent_detection.py
   :language: python
   :linenos:
   :lines: 1-21

Step 2: update ConversationBufferMemory w/ intent
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. literalinclude:: /_include/intent_detection.py
   :language: python
   :linenos:
   :lines: 22-62

Step 3: get Messages based on latest drift
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. literalinclude:: /_include/intent_detection.py
   :language: python
   :linenos:
   :lines: 64-76

You can use the last set of messages that match an intent to prompt an LLM, use them with a vector DB for
improved retrieval, etc. With Arch and a few lines of code, you can improve retrieval accuracy, lower overall
token cost, and dramatically improve the speed of responses back to users.

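The idea behind the LangChain snippets above can be illustrated with a stdlib-only sketch: keep conversation history annotated with an intent id, start a new intent when Arch signals drift via the ``x-arch-intent-drift`` header, and retrieve only the messages for the latest intent. The class and method names, and the header's ``"true"`` value, are assumptions for illustration.

```python
# Stdlib-only sketch of intent-annotated conversation history. Only the
# x-arch-intent-drift header name comes from the docs above; everything
# else (class/method names, header value semantics) is illustrative.
from dataclasses import dataclass, field


@dataclass
class IntentAwareHistory:
    messages: list = field(default_factory=list)  # (intent_id, role, text)
    intent_id: int = 0

    def add(self, role: str, text: str, headers: dict) -> None:
        # Arch sets x-arch-intent-drift when the user moves off the
        # active intent; start a new intent bucket in that case.
        if headers.get("x-arch-intent-drift") == "true":
            self.intent_id += 1
        self.messages.append((self.intent_id, role, text))

    def latest_intent_messages(self) -> list:
        # Only the current intent's messages are worth sending to
        # retrieval or to the upstream LLM.
        return [(r, t) for (i, r, t) in self.messages if i == self.intent_id]


history = IntentAwareHistory()
history.add("user", "Show me open claims", {})
history.add("assistant", "You have 2 open claims.", {})
history.add("user", "What's the weather like?", {"x-arch-intent-drift": "true"})
print(history.latest_intent_messages())
# -> [('user', "What's the weather like?")]
```

With LangChain you would store the intent id as message metadata instead of a tuple field, but the bookkeeping is the same.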
Smarter retrieval with parameter extraction
-------------------------------------------

To build RAG (Retrieval-Augmented Generation) applications, you can configure prompt targets with parameters,
enabling Arch to retrieve critical information in a structured way for processing. This approach improves the
retrieval quality and speed of your application. By extracting parameters from the conversation, you can pull
the appropriate chunks from a vector database or SQL-like data store to enhance accuracy. With Arch, you can
streamline data retrieval and processing to build more efficient and precise RAG applications.

Step 1: Define prompt targets with parameter definitions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. literalinclude:: /_config/rag-prompt-targets.yml
   :language: yaml
   :linenos:
   :emphasize-lines: 16-36
   :caption: prompt-config.yaml for parameter extraction for RAG scenarios

Step 2: Process request parameters in Flask
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once the prompt targets are configured as above, handling those parameters in your application is straightforward:

.. literalinclude:: /_include/parameter_handling_flask.py
   :language: python
   :linenos:
   :caption: Flask API example for parameter extraction via HTTP request parameters

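To sketch how extracted parameters tighten retrieval: once Arch hands the application structured parameters, the app can filter candidate chunks before ranking instead of searching everything. The parameter names (``region``, ``year``) and the in-memory store below are assumptions for illustration; a real app would query a vector DB or SQL-like store.

```python
# Hypothetical sketch: the document list is a stand-in for a vector DB or
# SQL store; parameter names are illustrative, not a fixed Arch schema.
DOCS = [
    {"region": "us-east", "year": 2023, "text": "Q3 2023 us-east incident review"},
    {"region": "us-east", "year": 2024, "text": "2024 us-east capacity plan"},
    {"region": "eu-west", "year": 2024, "text": "2024 eu-west capacity plan"},
]


def retrieve(params: dict) -> list:
    """Return only the chunks matching the structured parameters."""
    hits = DOCS
    if "region" in params:
        hits = [d for d in hits if d["region"] == params["region"]]
    if "year" in params:
        # Parameters arrive as strings via HTTP request parameters.
        hits = [d for d in hits if d["year"] == int(params["year"])]
    return [d["text"] for d in hits]


# Parameters as they might arrive from Arch via HTTP request parameters.
print(retrieve({"region": "us-east", "year": "2024"}))
# -> ['2024 us-east capacity plan']
```

Scoping the candidate set this way is what yields the accuracy and token-cost gains described above: fewer, more relevant chunks reach the prompt sent to the upstream LLM.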
@@ -3,36 +3,33 @@
Prompt Processing
=================

.. contents::
   :local:
   :depth: 2

Arch's model serving process is designed to securely handle incoming prompts by detecting jailbreak attempts,
processing the prompts, and routing them to appropriate functions or prompt targets based on intent detection.
The serving workflow integrates several key components, each playing a crucial role in managing generative
AI interactions:

Jailbreak and Toxicity Guardrails
---------------------------------

Arch employs Arch-Guard, a security layer powered by a compact, high-performing LLM that monitors incoming prompts to detect
and reject jailbreak attempts, ensuring that unauthorized or harmful behaviors are intercepted early in the process. Arch-Guard
is the leading model in the industry for jailbreak and toxicity detection. Configuring guardrails is simple; see the example
below.

.. literalinclude:: /_config/getting-started.yml
   :language: yaml
   :linenos:
   :emphasize-lines: 24-27
   :caption: :download:`arch-getting-started.yml </_config/getting-started.yml>`

Prompt Targets
---------------

Once a prompt passes the security checks, Arch processes the content and identifies if any specific functions need to be called.
Arch-FC1B, a dedicated function-calling module, extracts critical information from the prompt and executes the necessary
backend API calls or internal functions. This capability allows for efficient handling of agentic tasks, such as scheduling or
data retrieval, by dynamically interacting with backend services.

.. image:: /_static/img/function-calling-network-flow.jpg
   :width: 100%

@@ -41,20 +38,20 @@ Prompt Targets
Intent Detection and Prompt Matching:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Arch uses Natural Language Inference (NLI) and embedding-based approaches to detect the intent of each incoming prompt.
This intent detection phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt
is forwarded to the most appropriate endpoint. Arch's intent detection framework considers both the name and description of each prompt target,
enhancing accuracy in forwarding decisions.

- **Embedding Approaches**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match,
  ensuring that the prompt is handled by the correct downstream service.

- **NLI Integration**: Natural Language Inference techniques further refine the matching process by evaluating the semantic alignment
  between the prompt and potential targets.

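The embedding-matching step can be illustrated with a toy sketch. The vectors and target names here are made up (real model embeddings have hundreds of dimensions, and Arch's actual models and scoring are internal); only the shape of the computation - score the prompt embedding against each target's embedding and pick the closest - reflects the description above.

```python
# Toy illustration of embedding-based intent matching. Hand-made 3-d
# vectors stand in for real model embeddings; target names are hypothetical.
import math


def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


TARGET_EMBEDDINGS = {
    "update_claim": [0.9, 0.1, 0.0],
    "get_weather": [0.0, 0.2, 0.95],
}


def match_target(prompt_embedding):
    # Return the prompt target whose embedding is most similar.
    return max(
        TARGET_EMBEDDINGS,
        key=lambda t: cosine(prompt_embedding, TARGET_EMBEDDINGS[t]),
    )


print(match_target([0.85, 0.2, 0.05]))
# -> update_claim
```

In practice a similarity threshold would also be applied, so that prompts close to no target fall through to the default (summarization) path.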
Forwarding Prompts to Downstream Targets:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After determining the correct target, Arch forwards the prompt to the designated endpoint, such as an LLM host or API service.
This seamless routing mechanism integrates with Arch's broader ecosystem, enabling efficient communication and response generation tailored to the user's intent.

Arch's model serving process combines robust security measures with advanced intent detection and function calling capabilities, creating a reliable and adaptable environment for managing generative AI workflows. This approach not only enhances the accuracy and relevance of responses but also safeguards against malicious usage patterns, aligning with best practices in AI governance.

@@ -4,9 +4,9 @@ Introduction
============

.. toctree::
   :maxdepth: 2

   what_is_arch
   architecture/architecture
   life_of_a_request
   getting_help

|
|
|||
|
|
@@ -4,11 +4,11 @@ Life of a Request
=================

Below we describe the events in the life of a request passing through an Arch gateway instance. We first
describe how Arch fits into the request path and then the internal events that take place following
the arrival of a request at Arch from downstream clients. We follow the request until the corresponding
dispatch upstream and the response path.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

@@ -17,36 +17,36 @@ Terminology

Arch uses the following terms throughout its codebase and documentation:

* *Listeners*: The Arch primitive responsible for binding to an IP/port, accepting new HTTP connections, and orchestrating
  the downstream-facing aspects of prompt processing. Arch relies almost exclusively on `Envoy's Listener subsystem <arch_overview_listeners>`_.
* *Downstream*: an entity connecting to Arch. This may be another AI agent (sidecar or networked) or a remote client.
* *LLM Providers*: a set of upstream LLMs (API-based or network nodes) that Arch routes/forwards user and application-specific prompts to.
  Arch offers a simple abstraction to call different LLMs via model-id, and adds LLM-specific retry, failover, and routing capabilities.
  Arch builds on top of Envoy's `Cluster subsystem <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager#arch-overview-cluster-manager>`_.
* *Upstream*: A set of hosts that can receive traffic from an instance of the Arch gateway.
* *Prompt Targets*: A core primitive offered in Arch. Prompt targets are endpoints that receive prompts that are processed by Arch.
  For example, Arch enriches incoming prompts with metadata, like knowing when a request is a follow-up or clarifying prompt, so that you can
  build faster, more accurate RAG apps and support agentic apps - like scheduling travel plans or sharing comments on a document - via prompts.

Network topology
----------------

How a request flows through the components in a network (including Arch) depends on the network’s topology.
Arch can be used in a wide variety of networking topologies. We focus on the inner operation of Arch below,
but briefly we address how Arch relates to the rest of the network in this section.

* Ingress listeners take requests from downstream clients, like a web UI or clients that forward prompts to your local application.
  Responses from the local application flow back through Arch to the downstream.

* Egress listeners take requests from the local application and forward them to LLMs. These receiving nodes
  will also typically be running Arch and accepting the request via their ingress listeners.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

In practice, Arch can be deployed on the edge and as an internal load balancer between AI agents. A request path may
traverse multiple Arch gateways:

.. image:: /_static/img/network-topology-agent.jpg

@@ -78,12 +78,12 @@ The request processing path in Arch has two main parts:
The two subsystems are bridged with the HTTP router filter, which forwards the HTTP request from
downstream to upstream.

Arch utilizes Envoy's `event-based thread model <https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310>`_.
A main thread is responsible for the server lifecycle, configuration processing, stats, etc., and some number
of :ref:`worker threads <arch_overview_threading>` process requests. All threads operate around an event
loop (`libevent <https://libevent.org/>`_) and any given downstream TCP connection will be handled by exactly
one worker thread for its lifetime. Each worker thread maintains its own pool of TCP connections to upstream
endpoints. Today, Arch implements its core functionality around prompt handling in worker threads.

Worker threads rarely share state and operate in a trivially parallel fashion. This threading model
enables scaling to very high core count CPUs.

@@ -95,30 +95,30 @@ Overview
^^^^^^^^

A brief outline of the life cycle of a request and response using the example configuration above:

1. **TCP Connection Establishment**:
   A TCP connection from downstream is accepted by an Arch listener running on a worker thread. The listener filter chain provides SNI and other pre-TLS information. The transport socket, typically TLS, decrypts incoming data for processing.

2. **Prompt Guardrails Check**:
   Arch first checks the incoming prompts for guardrails such as jailbreak attempts and toxicity. This ensures that harmful or unwanted behaviors are detected early in the request processing pipeline.

3. **Intent Matching**:
   The decrypted data stream is deframed by the HTTP/2 codec in Arch's HTTP connection manager. Arch performs intent matching using the name and description of the defined prompt targets, determining which endpoint should handle the prompt.

4. **Parameter Gathering with Arch-FC1B**:
   If a prompt target requires specific parameters, Arch engages Arch-FC1B to extract the necessary details from the incoming prompt(s). This process gathers the critical information needed for downstream API calls.

5. **API Call Execution**:
   Arch routes the prompt to the appropriate backend API or function call. If an endpoint cluster is identified, load balancing is performed, circuit breakers are checked, and the request is proxied to the upstream endpoint. For more details on routing and load balancing, refer to the `Envoy routing documentation <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/intro/arch_overview>`_.

6. **Default Summarization by Upstream LLM**:
   By default, if no specific endpoint processing is needed, the prompt is sent to an upstream LLM for summarization. This ensures that responses are concise and relevant, enhancing user experience in RAG (Retrieval-Augmented Generation) and agentic applications.

7. **Error Handling and Forwarding**:
   Errors encountered during processing, such as failed function calls or guardrail detections, are forwarded to designated error targets. Error details are communicated through specific headers to the application:

   - ``X-Function-Error-Code``: Code indicating the type of function call error.
   - ``X-Prompt-Guard-Error-Code``: Code specifying violations detected by prompt guardrails.
   - Additional headers carry messages and timestamps to aid in debugging and logging.

8. **Response Handling**:
   The upstream endpoint’s TLS transport socket encrypts the response, which is then proxied back downstream. Responses pass through HTTP filters in reverse order, ensuring any necessary processing or modification before final delivery.

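The error headers described in step 7 can be consumed in application code along these lines. Only the two header names come from the docs; the function shape, the fallback messages, and the code-to-message mapping are assumptions for illustration.

```python
# Hedged sketch of consuming Arch's error headers in an application.
# Header names are from the docs above; the guard codes and messages
# are hypothetical.
GUARD_MESSAGES = {
    "jailbreak": "Your request was flagged by prompt guardrails.",
    "toxicity": "Your request contained disallowed content.",
}


def classify_arch_error(headers: dict) -> str:
    """Map Arch error headers to a user-facing message."""
    guard_code = headers.get("X-Prompt-Guard-Error-Code")
    if guard_code:
        # Fall back to a generic message for unknown guard codes.
        return GUARD_MESSAGES.get(guard_code, "Request rejected by guardrails.")
    if headers.get("X-Function-Error-Code"):
        return "We couldn't complete that action; please try again."
    return "ok"


print(classify_arch_error({"X-Prompt-Guard-Error-Code": "jailbreak"}))
# -> Your request was flagged by prompt guardrails.
```

Because the errors surface as plain HTTP headers, this logic can live in any middleware layer, independent of which upstream LLM or function target failed.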
|
|||
|
|
@@ -1,72 +1,85 @@
What is Arch
============

Arch is an intelligent `(Layer 7) <https://www.cloudflare.com/learning/ddos/what-is-layer-7/>`_ gateway
designed for generative AI apps, AI agents, and co-pilots that work with prompts. Engineered with purpose-built
:ref:`LLMs <llms_in_arch>`, Arch handles the critical but undifferentiated tasks related to the handling and
processing of prompts, including detecting and rejecting `jailbreak <https://github.com/verazuo/jailbreak_llms>`_
attempts, intelligently calling “backend” APIs to fulfill the user's request represented in a prompt, routing to
and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions
in a centralized way.

**The project was born out of the belief that:**

*Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests,
including secure handling, intelligent routing, robust observability, and integration with backend (API)
systems for personalization - all outside business logic.*

In practice, achieving the above goal is incredibly difficult. Arch attempts to do so by providing the
following high-level features:

**Out-of-process architecture, built on** `Envoy <http://envoyproxy.io/>`_: Arch takes a dependency on
Envoy and is a self-contained process that is designed to run alongside your application servers. Arch uses
Envoy's HTTP connection management subsystem, HTTP L7 filtering, and telemetry capabilities to extend the
functionality exclusively for prompts and LLMs. This gives Arch several advantages:

* Arch works with any application language. A single Arch deployment can act as a gateway for AI applications
  written in Python, Java, C++, Go, PHP, etc.

* Arch builds on Envoy's proven success. Envoy is used at massive scale by the leading technology companies of
  our time, including `AirBnB <https://www.airbnb.com>`_, `Dropbox <https://www.dropbox.com>`_,
  `Google <https://www.google.com>`_, `Reddit <https://www.reddit.com>`_, `Stripe <https://www.stripe.com>`_,
  etc. It's battle-tested, scales linearly with usage, and enables developers to focus on what really matters:
  application features and business logic.

* Arch can be deployed and upgraded quickly across your infrastructure transparently, without the pain of
  deploying library upgrades in your applications.

**Engineered with Fast LLMs:** Arch is engineered with specialized (sub-billion parameter) LLMs that are designed
for fast, cost-effective, and accurate handling of prompts. These :ref:`LLMs <llms_in_arch>` are designed to be
best-in-class for critical prompt-related tasks like:

* **Function/API Calling:** Arch helps you easily personalize your applications by enabling calls to
  application-specific (API) operations via user prompts. This involves any predefined functions or APIs
  you want to expose to users to perform tasks, gather information, or manipulate data. With function calling,
  you have flexibility to support "agentic" experiences tailored to specific use cases - from updating insurance
  claims to creating ad campaigns - via prompts. Arch analyzes prompts, extracts critical information from
  prompts, engages in lightweight conversation with the user to gather any missing parameters, and makes API
  calls so that you can focus on writing business logic. For more details, read :ref:`prompt processing <arch_overview_prompt_handling>`.

**Best-In Class Monitoring & Traffic Management:** Arch offers several monitoring metrics that help you
understand three critical aspects of your application: latency, token usage, and error rates by LLM provider.
Latency measures the speed at which your application is responding to users, which includes metrics like time
to first token (TFT), time per output token (TOT), and the total latency as perceived by users. In
addition, Arch offers several capabilities for calls originating from your applications to upstream LLMs,
including a vendor-agnostic SDK to make LLM calls, smart retries on errors from upstream LLMs, and automatic
cutover to other LLMs configured for continuous availability and disaster recovery scenarios.

* **Prompt Guardrails:** Arch helps you improve the safety of your application by applying prompt guardrails in
  a centralized way for better governance hygiene. With prompt guardrails you can prevent `jailbreak <https://github.com/verazuo/jailbreak_llms>`_
  attempts or toxicity present in users' prompts without having to write a single line of code. To learn more
  about how to configure the guardrails available in Arch, read :ref:`prompt processing <arch_overview_prompt_handling>`.

* **Intent-Drift Detection:** Developers struggle to handle `follow-up <https://www.reddit.com/r/ChatGPTPromptGenius/comments/17dzmpy/how_to_use_rag_with_conversation_history_for/?>`_
  or `clarifying <https://www.reddit.com/r/LocalLLaMA/comments/18mqwg6/best_practice_for_rag_with_followup_chat/>`_
  questions. Specifically, when users ask for modifications or additions to previous responses, their AI applications
  often generate entirely new responses instead of adjusting the previous ones. Arch offers intent-drift detection
  so that developers know when the user has shifted away from the previous intent, enabling them to improve
  retrieval, lower overall token cost, and dramatically improve the speed and accuracy of responses back to users.

* **Traffic Management:** Arch offers several capabilities for LLM calls originating from your applications, including a
  vendor-agnostic SDK to make LLM calls, smart retries on errors from upstream LLMs, and automatic cutover to other LLMs
  configured in Arch for continuous availability and disaster recovery scenarios. Arch extends Envoy's `cluster subsystem
  <https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/cluster_manager>`_ to manage upstream connections
  to LLMs so that you can build resilient AI applications.

* **Front/edge Gateway:** There is substantial benefit in using the same software at the edge (observability,
  traffic shaping algorithms, applying guardrails, etc.) as for outbound LLM inference use cases. Arch has the feature set
  that makes it exceptionally well suited as an edge gateway for AI applications, including TLS termination, rate limiting,
  and prompt-based routing.

* **Best-In-Class Monitoring:** Arch offers several monitoring metrics that help you understand three
  critical aspects of your application: latency, token usage, and error rates by upstream LLM provider. Latency
  measures the speed at which your application responds to users, and includes metrics like time to first
  token (TFT), time per output token (TOT), and the total latency as perceived by users.
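
To make the relationship between these latency metrics concrete, here is a small sketch deriving them from request timestamps (the function name and example timestamps are illustrative assumptions, not Arch's API):

```python
def latency_metrics(start: float, first_token: float, last_token: float, n_tokens: int) -> dict:
    """Derive TFT, TOT, and total latency (in seconds) from request timestamps."""
    tft = first_token - start        # time to first token
    total = last_token - start       # total latency as perceived by the user
    # time per output token, averaged over the tokens streamed after the first
    tot = (last_token - first_token) / max(n_tokens - 1, 1)
    return {"tft": tft, "tot": tot, "total": total}

# e.g., a response that starts streaming at t=0.5s and finishes 21 tokens at t=2.5s
m = latency_metrics(0.0, 0.5, 2.5, 21)
```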

* **End-to-End Tracing:** Arch propagates trace context using the W3C Trace Context standard, specifically through
  the ``traceparent`` header. This allows each component in the system to record its part of the request flow,
  enabling end-to-end tracing across the entire application. By using OpenTelemetry, Arch ensures that
  developers can capture this trace data consistently and in a format compatible with various observability tools.
  For more details, read :ref:`tracing <arch_overview_tracing>`.

docs/source/observability/observability.rst

.. _observability:

Observability
=============

.. toctree::
   :maxdepth: 2

   tracing
   stats

docs/source/observability/stats.rst

Metrics and Statistics
======================

docs/source/observability/tracing.rst

.. _arch_overview_tracing:

Tracing
=======

Overview
--------

`OpenTelemetry <https://opentelemetry.io/>`_ is an open-source observability framework providing APIs
and instrumentation for generating, collecting, processing, and exporting telemetry data, such as traces,
metrics, and logs. Its flexible design supports a wide range of backends and seamlessly integrates with
modern application tools. A key feature of OpenTelemetry is its commitment to standards like the
`W3C Trace Context <https://www.w3.org/TR/trace-context/>`_ specification.

**Tracing** is a critical tool that allows developers to visualize and understand the flow of
requests in an AI application. With tracing, you can capture a detailed view of how requests propagate
through various services and components, which is crucial for **debugging**, **performance optimization**,
and understanding complex AI agent architectures like co-pilots.

**Arch** propagates trace context using the W3C Trace Context standard, specifically through the
``traceparent`` header. This allows each component in the system to record its part of the request
flow, enabling **end-to-end tracing** across the entire application. By using OpenTelemetry, Arch ensures
that developers can capture this trace data consistently and in a format compatible with various observability
tools.

Benefits of using ``traceparent`` headers
-----------------------------------------

- **Standardization**: The W3C Trace Context standard ensures compatibility across ecosystem tools, allowing
  traces to be propagated uniformly through different layers of the system.
- **Ease of Integration**: OpenTelemetry's design allows developers to easily integrate tracing with minimal
  changes to their codebase, enabling quick adoption of end-to-end observability.
- **Interoperability**: Works seamlessly with popular tracing tools like AWS X-Ray, Datadog, Jaeger, and many others,
  making it easy to visualize traces in the tools you're already using.

How to initiate a trace
-----------------------

1. **Enable Tracing Configuration**: Simply add the ``tracing: 100`` flag in the :ref:`listener <arch_overview_listeners>` config.

2. **Trace Context Propagation**: Arch automatically propagates the ``traceparent`` header. When a request is received, Arch will:

   - Generate a new ``traceparent`` header if one is not present.
   - Extract the trace context from the ``traceparent`` header if it exists.
   - Start a new span representing its processing of the request.
   - Forward the ``traceparent`` header to downstream services.
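
A rough sketch of that first propagation step (illustrative only, not Arch's actual code): when no ``traceparent`` arrives, a fresh one can be minted from random identifiers.

```python
import secrets

def ensure_traceparent(headers: dict) -> dict:
    """Add a traceparent header when the incoming request lacks one."""
    if "traceparent" not in headers:
        trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
        parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
        headers["traceparent"] = f"00-{trace_id}-{parent_id}-01"
    return headers

hdrs = ensure_traceparent({})
```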

3. **Sampling Policy**: The ``100`` in ``tracing: 100`` means that all requests are sampled for tracing.
   You can adjust this value from 0-100.
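
Conceptually, that percentage is a per-request coin flip; a minimal sketch of such a policy (illustrative, not Arch's implementation):

```python
import random

def should_sample(rate: int) -> bool:
    """Decide whether to trace a request, given a sampling rate from 0 to 100."""
    if not 0 <= rate <= 100:
        raise ValueError("sampling rate must be between 0 and 100")
    # random.random() is in [0, 1), so rate=100 always samples and rate=0 never does
    return random.random() * 100 < rate
```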

Trace Propagation
-----------------

Arch uses the W3C Trace Context standard for trace propagation, which relies on the ``traceparent`` header.
This header carries tracing information in a standardized format, enabling interoperability between different
tracing systems.

Header Format
~~~~~~~~~~~~~

The ``traceparent`` header has the following format::

    traceparent: {version}-{trace-id}-{parent-id}-{trace-flags}

- ``{version}``: The version of the Trace Context specification (e.g., ``00``).
- ``{trace-id}``: A 16-byte (32-character hexadecimal) unique identifier for the trace.
- ``{parent-id}``: An 8-byte (16-character hexadecimal) identifier for the parent span.
- ``{trace-flags}``: Flags indicating trace options (e.g., sampling).
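
For instance, splitting a ``traceparent`` value into those four fields (a hypothetical helper for illustration):

```python
def parse_traceparent(header: str) -> dict:
    """Split a W3C traceparent value into its four dash-separated fields."""
    version, trace_id, parent_id, trace_flags = header.split("-")
    # Per the spec, trace-id is 32 hex chars and parent-id is 16 hex chars.
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent header")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "trace_flags": trace_flags}

# The example value from the W3C specification:
fields = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```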

Instrumentation
~~~~~~~~~~~~~~~

To integrate tracing, your application needs to follow a few simple steps. The steps
below are common practice, and not unique to Arch, when reading tracing headers and exporting
`spans <https://docs.lightstep.com/docs/understand-distributed-tracing>`_ for distributed tracing:

- Read the ``traceparent`` header from incoming requests.
- Start new spans as children of the extracted context.
- Include the ``traceparent`` header in outbound requests to propagate trace context.
- Send tracing data to a collector or tracing backend to export spans.

Example with OpenTelemetry in Python
************************************

Install OpenTelemetry packages:

.. code-block:: bash

   pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
   pip install opentelemetry-instrumentation-requests

Set up the tracer and exporter:

.. code-block:: python

   from opentelemetry import trace
   from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
   from opentelemetry.instrumentation.requests import RequestsInstrumentor
   from opentelemetry.sdk.resources import Resource
   from opentelemetry.sdk.trace import TracerProvider
   from opentelemetry.sdk.trace.export import BatchSpanProcessor

   # Define the service name
   resource = Resource(attributes={
       "service.name": "customer-support-agent"
   })

   # Set up the tracer provider and exporter
   tracer_provider = TracerProvider(resource=resource)
   otlp_exporter = OTLPSpanExporter(endpoint="otel-collector:4317", insecure=True)
   span_processor = BatchSpanProcessor(otlp_exporter)
   tracer_provider.add_span_processor(span_processor)
   trace.set_tracer_provider(tracer_provider)

   # Instrument HTTP requests
   RequestsInstrumentor().instrument()

Handle incoming requests:

.. code-block:: python

   from opentelemetry import trace
   from opentelemetry.propagate import extract, inject
   import requests

   def handle_request(request):
       # Extract the trace context from the incoming headers
       context = extract(request.headers)
       tracer = trace.get_tracer(__name__)

       with tracer.start_as_current_span("process_customer_request", context=context):
           # Example of processing a customer request
           print("Processing customer request...")

           # Prepare headers for the outgoing request to the payment service
           headers = {}
           inject(headers)

           # Make an outgoing request to an external service (e.g., payment gateway)
           response = requests.get("http://payment-service/api", headers=headers)

           print(f"Payment service response: {response.content}")


AI Agent Tracing Visualization Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is an example of tracing for an AI-powered customer support system.
A customer interacts with AI agents, which forward their requests through different
specialized services and external systems.

::

    +--------------------------+
    |  Customer Interaction    |
    +--------------------------+
                 |
                 v
    +--------------------------+       +--------------------------+
    |  Agent 1 (Main - Arch)   | ----> | External Payment Service |
    +--------------------------+       +--------------------------+
                 |                                  |
                 v                                  v
    +--------------------------+       +--------------------------+
    | Agent 2 (Support - Arch) | ----> |  Internal Tech Support   |
    +--------------------------+       +--------------------------+
                 |                                  |
                 v                                  v
    +--------------------------+       +--------------------------+
    | Agent 3 (Orders - Arch)  | ----> |   Inventory Management   |
    +--------------------------+       +--------------------------+

Trace Breakdown:
****************

- **Customer Interaction**:

  - Span 1: Customer initiates a request via the AI-powered chatbot for billing support (e.g., asking for payment details).

- **AI Agent 1 (Main - Arch)**:

  - Span 2: AI Agent 1 (Main) processes the request and identifies it as related to billing, forwarding the request
    to the external payment service.
  - Span 3: AI Agent 1 determines that additional technical support is needed for processing and forwards the request
    to AI Agent 2.

- **External Payment Service**:

  - Span 4: The external payment service processes the payment-related request (e.g., verifying payment status) and sends
    the response back to AI Agent 1.

- **AI Agent 2 (Support - Arch)**:

  - Span 5: AI Agent 2, responsible for technical queries, processes a request forwarded from AI Agent 1 (e.g., checking for
    any account issues).
  - Span 6: AI Agent 2 forwards the query to Internal Tech Support for further investigation.

- **Internal Tech Support**:

  - Span 7: Internal Tech Support processes the request (e.g., resolving account access issues) and responds to AI Agent 2.

- **AI Agent 3 (Orders - Arch)**:

  - Span 8: AI Agent 3 handles order-related queries. AI Agent 1 forwards the request to AI Agent 3 after payment verification
    is completed.
  - Span 9: AI Agent 3 forwards a request to the Inventory Management system to confirm product availability for a pending order.

- **Inventory Management**:

  - Span 10: The Inventory Management system checks stock and availability and returns the information to AI Agent 3.

Integrating with Tracing Tools
------------------------------

AWS X-Ray
~~~~~~~~~

To send tracing data to `AWS X-Ray <https://aws.amazon.com/xray/>`_:

1. **Configure OpenTelemetry Collector**: Set up the collector to export traces to AWS X-Ray.

   Collector configuration (``otel-collector-config.yaml``):

   .. code-block:: yaml

      receivers:
        otlp:
          protocols:
            grpc:

      processors:
        batch:

      exporters:
        awsxray:
          region: your-aws-region

      service:
        pipelines:
          traces:
            receivers: [otlp]
            processors: [batch]
            exporters: [awsxray]

2. **Deploy the Collector**: Run the collector as a Docker container, Kubernetes pod, or standalone service.
3. **Ensure AWS Credentials**: Provide AWS credentials to the collector, preferably via IAM roles.
4. **Verify Traces**: Access the AWS X-Ray console to view your traces.

Datadog
~~~~~~~

To send tracing data to `Datadog <https://docs.datadoghq.com/getting_started/tracing/>`_:

1. **Configure OpenTelemetry Collector**: Set up the collector to export traces to Datadog.

   Collector configuration (``otel-collector-config.yaml``):

   .. code-block:: yaml

      receivers:
        otlp:
          protocols:
            grpc:

      processors:
        batch:

      exporters:
        datadog:
          api:
            key: "${DD_API_KEY}"
            site: "${DD_SITE}"

      service:
        pipelines:
          traces:
            receivers: [otlp]
            processors: [batch]
            exporters: [datadog]

2. **Set Environment Variables**: Provide your Datadog API key and site.

   .. code-block:: bash

      export DD_API_KEY=your_datadog_api_key
      export DD_SITE=datadoghq.com  # Or datadoghq.eu

3. **Deploy the Collector**: Run the collector in your environment.
4. **Verify Traces**: Access the Datadog APM dashboard to view your traces.


Best Practices
--------------

- **Consistent Instrumentation**: Ensure all services propagate the ``traceparent`` header.
- **Secure Configuration**: Protect sensitive data and secure communication between services.
- **Performance Monitoring**: Be mindful of the performance impact of tracing and adjust sampling rates accordingly.
- **Error Handling**: Implement proper error handling to prevent tracing issues from affecting your application.

|
||||
----------
|
||||
|
||||
By leveraging the ``traceparent`` header for trace context propagation, Arch enables developers to implement
|
||||
tracing efficiently. This approach simplifies the process of collecting and analyzing tracing data in common
|
||||
tools like AWS X-Ray and Datadog, enhancing observability and facilitating faster debugging and optimization.
|
||||
|
||||
Additional Resources
|
||||
--------------------
|
||||
|
||||
- **OpenTelemetry Documentation**: https://opentelemetry.io/docs/
|
||||
- **W3C Trace Context Specification**: https://www.w3.org/TR/trace-context/
|
||||
- **AWS X-Ray Exporter**: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/awsxrayexporter
|
||||
- **Datadog Exporter**: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/datadogexporter
|
||||
|
||||
.. Note::
|
||||
Replace placeholders like ``your-aws-region``, and ``DD_API_KEY`` with your actual configurations.
|
||||
|
||||
|
||||
|
|
@ -2,10 +2,11 @@ Arch Documentation
==================

.. toctree::
   :maxdepth: 2

   intro/intro
   getting_started/getting_started
   getting_started/use_cases
   getting_started/sample_apps
   observability/observability
   llms/llms
   configuration_reference