Quickstart

Follow this guide to learn how to quickly set up Arch and integrate it into your generative AI applications.

Prerequisites

Before you begin, ensure you have the following:

  • Docker & Python installed on your system

  • API Keys for LLM providers (if using external LLMs)

The fastest way to get started with Arch is to use the pre-built katanemo/archgw binaries. You can also build it from source.

Step 1: Install Arch

Arch’s CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:

$ pip install archgw

This will install the archgw command-line tool globally on your system.

Tip

We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that archgw and its dependencies do not interfere with other packages on your system.

To create and activate a virtual environment, you can run the following commands:

$ python -m venv venv
$ source venv/bin/activate   # On Windows, use: venv\Scripts\activate
$ pip install archgw

Step 2: Configure Arch

Arch operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, and more. Below is an example configuration to get you started, including:

  • listener: Specifies where Arch listens for incoming prompts.

  • llm_providers: Lists the LLM providers Arch can route prompts to.

  • system_prompt: Defines a default prompt that sets the context for interactions.

  • prompt_targets: Defines endpoints that handle specific types of prompts.

  • endpoints: Specifies the upstream application servers that prompt targets forward to.

Additional keys, such as prompt_guards (rules to detect and reject undesirable prompts) and error_target (where to route errors for handling), can also be configured but are not shown in this example.

version: v0.1
listener:
  address: 127.0.0.1
  port: 8080 # if you configure port 443, you'll need to update the listener with tls_certificates
  message_format: huggingface

# Centralized way to manage LLMs: access keys, retry logic, failover, and limits
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: $OPENAI_API_KEY
    model: gpt-3.5-turbo
    default: true

# default system prompt used by all prompt targets
system_prompt: |
  You are a network assistant that helps operators better understand network traffic flow and perform networking operations. Do not give advice on manufacturers or purchasing decisions.

prompt_targets:
    - name: device_summary
      description: Retrieve network statistics for specific devices within a time range
      endpoint:
        name: app_server
        path: /agent/device_summary
      parameters:
        - name: device_ids
          type: list
          description: A list of device identifiers (IDs) to retrieve statistics for.
          required: true  # device_ids are required to get device statistics
        - name: days
          type: int
          description: The number of days for which to gather device statistics.
          default: "7"

# Arch performs round-robin load balancing across endpoints, managed via the cluster subsystem.
endpoints:
  app_server:
    # value could be ip address or a hostname with port
    # this could also be a list of endpoints for load balancing
    # for example endpoint: [ ip1:port, ip2:port ]
    endpoint: host.docker.internal:18083
    # max time to wait for a connection to be established
    connect_timeout: 0.005s
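The prompt target above forwards requests to an application server listening on port 18083. As a minimal, stdlib-only sketch (the handler logic, request shape, and response fields are illustrative assumptions, not part of Arch itself), such a server might look like:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def device_summary(device_ids, days=7):
    """Illustrative stand-in for a real statistics lookup."""
    return {
        "device_ids": device_ids,
        "days": days,
        "summary": f"Statistics for {len(device_ids)} device(s) over {days} day(s)",
    }

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only serve the path configured under prompt_targets above
        if self.path != "/agent/device_summary":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(
            device_summary(params.get("device_ids", []), int(params.get("days", 7)))
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Matches the endpoint configured above (host.docker.internal:18083)
    HTTPServer(("0.0.0.0", 18083), AgentHandler).serve_forever()
```

When a prompt matches the device_summary target, Arch extracts the declared parameters (device_ids, days) and calls this endpoint with them.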

Step 3: Start Arch Gateway

$ archgw up [path_to_config]
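Once the gateway is running, you can send it a prompt over HTTP on the listener address configured above (127.0.0.1:8080). The sketch below assumes an OpenAI-style chat completions path; check the Arch docs for the exact request path and schema your version exposes.

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://127.0.0.1:8080"):
    """Build an OpenAI-style chat completion request for the gateway."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",  # assumed path; verify against your Arch version
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("Show me traffic stats for device d100 over the last 7 days")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```

Arch inspects the prompt, matches it against your prompt_targets, extracts parameters, and either calls your application server or forwards the conversation to the configured LLM provider.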

For detailed CLI usage, refer to the setup instructions in the archgw CLI README: <https://github.com/katanemo/arch/blob/main/arch/tools/README.md#setup-instructionsuser-archgw-cli>

Next Steps

Congratulations! You’ve successfully set up Arch and made your first prompt-based request. To further enhance your GenAI applications, explore the rest of the Arch documentation.

With Arch, building scalable, fast, and personalized GenAI applications has never been easier. Dive deeper into Arch’s capabilities and start creating innovative AI-driven experiences today!