diff --git a/README.md b/README.md
index d0bc7ad9..8e27af9f 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,7 @@
+Arch is an **intelligent proxy server designed for prompts** - to help you protect, observe, and build fully agentic apps by integrating with (existing) backend APIs. Built by the contributors of [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:
+
+>Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – outside core business logic.*
+
 ![alt text](docs/source/_static/img/arch-logo.png)
 
 Arch - Build fast, hyper-personalized agents with intelligent infra | Product Hunt
@@ -7,11 +11,6 @@
 [![e2e tests](https://github.com/katanemo/arch/actions/workflows/e2e_tests.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/e2e_tests.yml)
 [![Build and Deploy Documentation](https://github.com/katanemo/arch/actions/workflows/static.yml/badge.svg)](https://github.com/katanemo/arch/actions/workflows/static.yml)
 
-## Fast, observable, and personalized agentic applciations.
-
-Arch is an intelligent proxy server designed for prompts - to help you protect, observe, and build fully agentic experiences by simply writing APIs. Built on (and by the contributors of) [Envoy Proxy](https://www.envoyproxy.io/) with the belief that:
-
->Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems for personalization – outside core business logic.*
 
 Arch is engineered with purpose-built LLMs to handle critical but undifferentiated tasks related to the handling and processing of prompts.
 This includes detecting and rejecting [jailbreak](https://github.com/verazuo/jailbreak_llms) attempts, intelligent task routing for improved accuracy, mapping user requests into "backend" functions, and managing the observability of prompts and LLM API calls in a centralized way.