mirror of
https://github.com/katanemo/plano.git
synced 2026-05-12 09:12:43 +02:00
Docs branch - v1 of our tech docs (#69)
* added the first set of docs for our technical docs
* more documentation changes
* added support for prompt processing and updated life of a request
* updated docs to include getting-help sections and updated life of a request
* committing local changes for getting started guide, sample applications, and full reference spec for prompt-config
* updated configuration reference, added sample app skeleton, updated favicon
* fixed the configuration reference file, and made minor changes to the intent detection. commit v1 for now
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Co-authored-by: Adil Hafeez <adil@katanemo.com>
This commit is contained in:
parent
233976a568
commit
80c554ce1a
34 changed files with 1040 additions and 0 deletions
docs/source/intro/architecture/architecture.rst (new file, +10 lines)

Technical Architecture
======================

.. toctree::
   :maxdepth: 2

   intro/terminology
   intro/threading_model
   listeners/listeners
   prompt_processing/prompt_processing
docs/source/intro/architecture/intro/terminology.rst (new file, +44 lines)

Terminology
===========

A few definitions before we dive into the main architecture documentation. Arch borrows from Envoy's terminology
to keep things consistent in logs, traces, and in code.

**Downstream**: A downstream client (web application, etc.) connects to Arch, sends requests, and receives responses.

**Upstream**: An upstream host receives connections and prompts from Arch, and returns context or responses for a prompt.

.. image:: /_static/img/network-topology-ingress-egress.jpg
   :width: 100%
   :align: center

**Listener**: A listener is a named network location (e.g., port, address, path, etc.) that Arch listens on to process prompts
before forwarding them to your application server endpoints. Arch enables you to configure one listener for downstream connections
(like port 80 or 443) and creates a separate internal listener for calls that originate from your application code to LLMs.

.. note::

   When you start Arch, you specify the listener address/port that you want to bind for downstream connections. In addition, Arch
   uses a predefined address, 127.0.0.1:10000, that you can use for outbound calls to LLMs and other services.
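As a sketch, the downstream listener might be declared in the prompt-config file along these lines. The key names (``listener``, ``address``, ``port``) are illustrative assumptions, not the authoritative schema; consult the configuration reference for the real field names.

```yaml
# Hypothetical listener stanza -- key names are assumptions for illustration.
listener:
  address: 0.0.0.0   # downstream bind address you choose
  port: 443          # downstream bind port you choose

# Outbound LLM traffic from your application goes through Arch's
# predefined internal listener at 127.0.0.1:10000 -- no stanza needed.
```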
**Instance**: An instance of the Arch gateway. When you start Arch, it creates at most two processes: one to handle Layer 7
networking operations (auth, TLS, observability, etc.) and a second to serve the models that enable it to make smart
decisions on how to accept, handle, and forward prompts. The second process is optional, as the model serving service could be
hosted on a different network (behind an API call), but the two processes are considered a single instance of Arch.

**System Prompt**: An initial text or message, provided by the developer, that Arch can use when calling an upstream LLM
to generate a response. The system prompt can be thought of as the input or query that the model
uses to generate its response. The quality and specificity of the system prompt can have a significant impact on the relevance
and accuracy of the model's response. Therefore, it is important to provide a clear and concise system prompt that accurately
conveys the user's intended message or question.

**Prompt Targets**: Arch offers a primitive called "prompt targets" to help separate business logic from undifferentiated
work in building generative AI apps. Prompt targets are endpoints that receive prompts processed by Arch.
For example, Arch enriches incoming prompts with metadata, like knowing when a request is a follow-up or clarifying prompt,
so that you can build faster, more accurate RAG apps. To support agentic apps driven by prompts, like scheduling travel plans
or sharing comments on a document, Arch uses its function calling abilities to extract the critical information from the incoming
prompt (or a set of prompts) needed by a backend API or function call before calling it directly.
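A prompt target might be declared roughly as follows. The field names (``prompt_targets``, ``name``, ``description``, ``endpoint``) and the endpoint URL are assumptions for illustration only; the configuration reference documents the real schema.

```yaml
# Hypothetical prompt target -- key names and URL are illustrative only.
prompt_targets:
  - name: schedule_travel
    description: Create or modify a travel itinerary from the user's prompt
    endpoint: http://localhost:8080/agent/travel   # your backend API
```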
**Error Targets**: Error targets are the endpoints that receive forwarded errors from Arch when issues arise,
such as failing to properly call a function/API, detecting violations of guardrails, or encountering other processing errors.
These errors are communicated to the application via headers (X-Arch-[ERROR-TYPE]), allowing it to handle them gracefully and take appropriate action.
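An application endpoint can pick the error headers off a forwarded request. A minimal sketch follows; the concrete header name used here (``X-Arch-Guardrail-Violation``) is hypothetical, since the docs only specify the ``X-Arch-[ERROR-TYPE]`` pattern.

```python
# Sketch of collecting Arch error headers in an application endpoint.
# The concrete header names are hypothetical; only the X-Arch-[ERROR-TYPE]
# pattern is specified by the docs.

def arch_errors(headers: dict) -> dict:
    """Collect any X-Arch-* error headers from a forwarded request."""
    return {
        name: value
        for name, value in headers.items()
        if name.lower().startswith("x-arch-")
    }

errors = arch_errors({
    "Content-Type": "application/json",
    "X-Arch-Guardrail-Violation": "jailbreak_detected",
})
# errors == {"X-Arch-Guardrail-Violation": "jailbreak_detected"}
```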
docs/source/intro/architecture/intro/threading_model.rst (new file, +21 lines)

.. _arch_overview_threading:

Threading model
===============

Arch builds on top of Envoy's single-process, multi-threaded architecture.

A single *primary* thread handles various sporadic coordination tasks, while some number of *worker*
threads perform filtering and forwarding.

Once a connection is accepted, it spends the rest of its lifetime bound to a single worker
thread. All the functionality around prompt handling from a downstream client is handled in a separate worker thread.
This allows the majority of Arch to be largely single threaded (embarrassingly parallel), with a small amount
of more complex code handling coordination between the worker threads.

Generally, Arch is written to be 100% non-blocking.

.. tip::

   For most workloads we recommend configuring the number of worker threads to be equal to the number of
   hardware threads on the machine.
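The recommendation in the tip above can be derived programmatically; a tiny sketch (this only computes the suggested count, it does not configure Arch, which may expose such a setting elsewhere):

```python
import os

# Number of hardware threads visible to the OS; a reasonable default
# for the recommended worker-thread count.
hardware_threads = os.cpu_count() or 1
print(hardware_threads)
```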
docs/source/intro/architecture/listeners/listeners.rst (new file, +27 lines)

.. _arch_overview_listeners:

Listener
========

Arch leverages Envoy's Listener subsystem to streamline connection management for developers.
By building on Envoy's robust architecture, Arch simplifies the configuration required to bind incoming
connections from downstream clients and efficiently manages internal listeners for outgoing connections
to LLM hosts and APIs.

**Listener Subsystem Overview**

- **Downstream Connections**: Arch uses Envoy's Listener subsystem to accept connections from downstream clients.
  A listener acts as the primary entry point for incoming traffic, handling initial connection setup, including network
  filtering and security checks such as SNI and TLS termination. For more details on the listener subsystem, refer to the
  `Envoy Listener Configuration <https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/listeners>`_.

- **Internal Listeners for Outgoing Connections**: Arch automatically configures internal listeners to route requests
  for prompts originating from your application services to the appropriate upstream targets, including LLM hosts and backend APIs.
  This abstracts away complex networking setups, allowing developers to focus on business logic rather than the
  intricacies of connection management and juggling multiple SDKs to work with different LLM providers.

- **Simplified Configuration**: Arch minimizes the complexity of traditional Envoy setups by pre-defining essential
  listener settings, making it easier for developers to bind connections without deep knowledge of Envoy's configuration model.
  This simplification ensures that connections are secure, reliable, and optimized for performance.

Arch's reliance on Envoy's Listener subsystem provides a powerful, developer-friendly interface for managing connections,
enhancing the overall efficiency of handling prompts and routing them to the correct endpoints within a generative AI application.
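A hypothetical sketch of binding a TLS-terminating downstream listener, to make the points above concrete. Every key and path shown is an illustrative assumption, not the documented schema:

```yaml
# Illustrative only -- consult the configuration reference for real keys.
listener:
  address: 0.0.0.0
  port: 443
  tls:
    cert: /etc/arch/certs/server.pem       # hypothetical paths
    key: /etc/arch/certs/server-key.pem

# Internal listeners for outbound LLM/API traffic are created
# automatically; they require no configuration here.
```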
(new file, +60 lines)

.. _arch_overview_prompt_handling:

Prompt Processing
=================

.. contents::
   :local:
   :depth: 2

Arch's model serving process is designed to securely handle incoming prompts by detecting jailbreak attempts,
processing the prompts, and routing them to appropriate functions or prompt targets based on intent detection.
The serving workflow integrates several key components, each playing a crucial role in managing generative AI interactions:

Jailbreak and Toxicity Guardrails
---------------------------------

Arch employs Arch-Guard, a security layer powered by a compact, high-performing LLM that monitors incoming prompts to detect
and reject jailbreak attempts, ensuring that unauthorized or harmful behaviors are intercepted early in the process. Arch-Guard
is the leading model in the industry for jailbreak and toxicity detection. Configuring guardrails is straightforward; see the
example below.

.. literalinclude:: /_config/getting-started.yml
   :language: yaml
   :linenos:
   :emphasize-lines: 18-21
   :caption: :download:`arch-getting-started.yml </_config/getting-started.yml>`
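As a rough sketch of the kind of stanza such a config might contain (the key names below are assumptions for illustration; the downloadable file referenced above is the authoritative example):

```yaml
# Hypothetical guardrails stanza -- key names are illustrative only.
prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: "Looks like you're trying to jailbreak the model."
```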
Prompt Targets
--------------

Once a prompt passes the security checks, Arch processes the content and identifies whether any specific functions need to be called.
Arch-FC1B, a dedicated function calling module, extracts critical information from the prompt and executes the necessary
backend API calls or internal functions. This capability allows for efficient handling of agentic tasks, such as scheduling or
data retrieval, by dynamically interacting with backend services.

.. image:: /_static/img/function-calling-network-flow.jpg
   :width: 100%
   :align: center
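Conceptually, function calling turns a free-form prompt into a structured call. A toy sketch follows; the extraction stub stands in for Arch-FC1B, and the function names and arguments are invented:

```python
# Toy sketch of function-calling dispatch. In Arch the extraction is done
# by the Arch-FC1B model; here a trivial keyword stub stands in for it.

def extract_call(prompt: str) -> dict:
    """Stub: map a prompt to a function name plus arguments."""
    if "weather" in prompt.lower():
        return {"name": "get_weather", "args": {"city": "Seattle"}}
    return {"name": "fallback", "args": {}}

def get_weather(city: str) -> str:
    # Stand-in for a real backend API call.
    return f"forecast for {city}"

FUNCTIONS = {"get_weather": get_weather, "fallback": lambda: "no match"}

call = extract_call("What's the weather in Seattle?")
result = FUNCTIONS[call["name"]](**call["args"])
# result == "forecast for Seattle"
```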
Intent Detection and Prompt Matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Arch uses Natural Language Inference (NLI) and embedding-based approaches to detect the intent of each incoming prompt.
This intent detection phase analyzes the prompt's content and matches it against predefined prompt targets, ensuring that each prompt
is forwarded to the most appropriate endpoint. Arch's intent detection framework considers both the name and description of each prompt target,
enhancing accuracy in forwarding decisions.

- **Embedding Approaches**: By embedding the prompt and comparing it to known target vectors, Arch effectively identifies the closest match,
  ensuring that the prompt is handled by the correct downstream service.

- **NLI Integration**: Natural Language Inference techniques further refine the matching process by evaluating the semantic alignment
  between the prompt and potential targets.
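The embedding approach above boils down to a nearest-neighbour search over target vectors. A minimal sketch using cosine similarity; the 3-d vectors and target names are made up for illustration, and Arch's actual embedding model and thresholds are not specified here:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Made-up embeddings for two prompt targets and one incoming prompt.
targets = {
    "schedule_travel": [0.9, 0.1, 0.0],
    "document_comments": [0.1, 0.9, 0.1],
}
prompt_vec = [0.8, 0.2, 0.1]

# Pick the target whose vector is closest to the prompt's embedding.
best = max(targets, key=lambda name: cosine(prompt_vec, targets[name]))
# best == "schedule_travel"
```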
Forwarding Prompts to Downstream Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After determining the correct target, Arch forwards the prompt to the designated endpoint, such as an LLM host or API service.
This seamless routing mechanism integrates with Arch's broader ecosystem, enabling efficient communication and response generation tailored to the user's intent.

Arch's model serving process combines robust security measures with advanced intent detection and function calling capabilities, creating a reliable and adaptable environment for managing generative AI workflows. This approach not only enhances the accuracy and relevance of responses but also safeguards against malicious usage patterns, aligning with best practices in AI governance.