Fix errors and improve Doc (#143)

* Fix link issues and add icons

* Improve Doc

* fix test

* making minor modifications to shuguangs' doc changes

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Co-authored-by: Adil Hafeez <adil@katanemo.com>
This commit is contained in:
Shuguang Chen 2024-10-08 13:18:34 -07:00 committed by GitHub
parent 3ed50e61d2
commit b30ad791f7
27 changed files with 396 additions and 329 deletions


@@ -83,7 +83,7 @@ Here's a step-by-step guide to configuring function calling within your Arch s
Step 1: Define the Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Create or identify the backend function you want Arch to call. This could be an API endpoint, a script, or any other executable backend logic.
+First, create or identify the backend function you want Arch to call. This could be an API endpoint, a script, or any other executable backend logic.
.. code-block:: python
   :caption: Example Function
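The function body itself is elided in this diff; a minimal sketch of such a backend function might look like the following (the `get_weather` name mirrors the prompt target configured below, but the signature and return shape are illustrative assumptions):

```python
def get_weather(location: str, days: int = 1) -> dict:
    """Hypothetical backend function: return a stubbed weather
    forecast for the given location over the requested days."""
    return {
        "location": location,
        "days": days,
        # a real implementation would call a weather API here
        "forecast": ["sunny"] * days,
    }
```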
@@ -112,11 +112,11 @@ Create or identify the backend function you want Arch to call. This could be an
Step 2: Configure Prompt Targets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Map the function to a prompt target, defining the intent and parameters that Arch will extract from the user's prompt.
+Next, map the function to a prompt target, defining the intent and parameters that Arch will extract from the user's prompt.
Specify the parameters your function needs and how Arch should interpret these.
.. code-block:: yaml
-   :caption: Example Config
+   :caption: Prompt Target Example Configuration
   prompt_targets:
     - name: get_weather
@@ -134,10 +134,10 @@ Map the function to a prompt target, defining the intent and parameters that Arc
           name: api_server
           path: /weather
-Step 3: Validate Parameters
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Arch will validate parameters and ensure that the required parameters (e.g., location) are present in the prompt, and add validation rules if necessary.
+Step 3: Arch Takes Over
+~~~~~~~~~~~~~~~~~~~~~~~
+Once you have defined the functions and configured the prompt targets, Arch takes care of the remaining work.
+It will automatically validate parameters, ensure that the required parameters (e.g., location) are present in the prompt, and apply validation rules if necessary.
Here is an example validation schema using the `jsonschema <https://json-schema.org/docs>`_ library:
.. code-block:: python
@@ -191,12 +191,8 @@ Here is an example validation schema using the `jsonschema <https://json-schema
print(weather_info)
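The validation example is truncated by the diff; a minimal sketch of what schema validation with the `jsonschema` library can look like is below (the schema contents and function name are illustrative assumptions, not Arch's internal implementation):

```python
from jsonschema import ValidationError, validate

# Hypothetical schema for the get_weather parameters: location is
# required, days is an optional bounded integer.
weather_params_schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "days": {"type": "integer", "minimum": 1, "maximum": 10},
    },
    "required": ["location"],
}

def is_valid_weather_request(params: dict) -> bool:
    """Check extracted parameters against the schema before the
    backend function is called."""
    try:
        validate(instance=params, schema=weather_params_schema)
        return True
    except ValidationError:
        return False
```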
Step 4: Execute and Return the Response
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Once the function is called, format the response and send it back to Arch-Function.
+Next, Arch-Function provides users with coherent and user-friendly responses.
+Once the functions are called, Arch formats the response and delivers it back to users.
By completing these setup steps, you enable Arch to manage the process from validation to response, ensuring users receive consistent, reliable results.
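To illustrate the final step, the backend endpoint could return its result as JSON for Arch to relay to the user (a hypothetical sketch; the function and field names are assumptions):

```python
import json

def format_weather_response(result: dict) -> str:
    """Build the JSON payload the backend returns from /weather;
    Arch relays a user-friendly rendering of it to the user."""
    return json.dumps({
        "location": result["location"],
        "forecast": result.get("forecast", []),
    })
```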
Example Use Cases
-----------------


@@ -0,0 +1,68 @@
version: v0.1

listener:
  address: 0.0.0.0 # or 127.0.0.1
  port: 10000
  # Defines how Arch should parse the content from application/json or text/plain Content-type in the http request
  message_format: huggingface

# Centralized way to manage LLMs: keys, retry logic, failover and limits
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: OPENAI_API_KEY
    model: gpt-4o
    default: true
    stream: true

# default system prompt used by all prompt targets
system_prompt: You are a network assistant that just offers facts; not advice on manufacturers or purchasing decisions.

prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance within my programmed parameters.

prompt_targets:
  - name: information_extraction
    default: true
    description: handle all scenarios that are question-and-answer in nature, like summarization, information extraction, etc.
    endpoint:
      name: app_server
      path: /agent/summary
    # Arch uses the default LLM and treats the response from the endpoint as the prompt to send to the LLM
    auto_llm_dispatch_on_response: true
    # override system prompt for this prompt target
    system_prompt: You are a helpful information extraction assistant. Use the information that is provided to you.

  - name: reboot_network_device
    description: Perform device operations like rebooting a device.
    endpoint:
      name: app_server
      path: /agent/action
    parameters:
      - name: device_id
        type: str
        description: Identifier of the network device to reboot.
        required: true
      - name: confirmation
        type: bool
        description: Confirmation flag to proceed with reboot.
        default: false
        enum: [true, false]

error_target:
  endpoint:
    name: error_target_1
    path: /error

# Arch creates a round-robin load balancing between different endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
    # value could be an ip address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: 127.0.0.1:80
    # max time to wait for a connection to be established
    connect_timeout: 0.005s


@@ -47,6 +47,16 @@ It excels at detecting explicitly malicious prompts and assessing toxic content,
By embedding Arch-Guard within the Arch architecture, we empower developers to build robust, LLM-powered applications while prioritizing security and safety. With Arch-Guard, you can navigate the complexities of prompt management with confidence, knowing you have a reliable defense against malicious input.
Example Configuration
~~~~~~~~~~~~~~~~~~~~~
Here is an example of using Arch-Guard in Arch:
.. literalinclude:: includes/arch_config.yaml
   :language: yaml
   :linenos:
   :lines: 22-26
   :caption: Arch-Guard Example Configuration
How Arch-Guard Works
----------------------