updating readme and docs with note about Arch-Function (#285)

* updating readme and docs with note about Arch-Function

* minor fixes to README

* a few more minor updates to the README

---------

Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
Salman Paracha 2024-11-19 08:43:56 -08:00 committed by GitHub
parent 33ab24292c
commit 970db68575
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
2 changed files with 9 additions and 4 deletions


@@ -9,7 +9,7 @@
## Build fast, observable, and personalized AI agents.
-Arch is an intelligent [Layer 7](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) gateway designed to protect, observe, and personalize AI agents (assistants, co-pilots) with your APIs.
+Arch is an intelligent [Layer 7](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) distributed proxy designed to protect, observe, and personalize AI agents with your APIs.
Engineered with purpose-built LLMs, Arch handles the critical but undifferentiated tasks related to the handling and processing of prompts, including detecting and rejecting [jailbreak](https://github.com/verazuo/jailbreak_llms) attempts, intelligently calling "backend" APIs to fulfill the user's request represented in a prompt, routing to and offering disaster recovery between upstream LLMs, and managing the observability of prompts and LLM interactions in a centralized way.
@@ -26,6 +26,12 @@ Engineered with purpose-built LLMs, Arch handles the critical but undifferentiat
**Jump to our [docs](https://docs.archgw.com)** to learn how you can use Arch to improve the speed, security and personalization of your GenAI apps.
+> [!NOTE]
+> Today, the function calling LLM (Arch-Function) designed for agentic and RAG scenarios is hosted free of charge in the US-central region.
+> To offer consistent latencies and throughput, and to manage our expenses, we will soon enable access to the hosted version via developer keys, and give you the option to run that
+> LLM locally. Pricing for the hosted version of Arch-Function will be ~$0.10/M output tokens (100x cheaper than GPT-4o for function calling scenarios).
## Contact
To get in touch with us, please join our [discord server](https://discord.gg/pGZf2gcwEc). We will be monitoring that actively and offering support there.
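The function calling scenarios this note refers to revolve around an LLM mapping a user prompt onto a structured tool schema. As a minimal sketch (the tool name, parameters, and model identifier below are hypothetical illustrations, not part of this commit or the Arch API), an OpenAI-style function-calling request could be assembled like this:

```python
import json

def build_function_call_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat request with one tool schema.

    The `get_weather` tool and the "Arch-Function" model name are
    placeholders for illustration only.
    """
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Fetch the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"},
                    },
                    "required": ["city"],
                },
            },
        }
    ]
    return {
        "model": "Arch-Function",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
    }

request = build_function_call_request("What's the weather in Seattle?")
print(json.dumps(request, indent=2))
```

A function-calling model then responds with the tool name and arguments to invoke, which a gateway like Arch can use to call the matching backend API on the user's behalf.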


@@ -59,8 +59,8 @@
}
div.bold-text {
font-size: 1.4rem;
-margin-bottom: 5px;
-line-height: 3rem;
+margin-bottom: 10px;
+margin-top: 10px;
}
.subheading {
font-size: 1rem;
@@ -170,7 +170,6 @@
}
</style>
</head>
-<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-F1XYQ9H653"></script>
<script>
window.dataLayer = window.dataLayer || [];