Agentic (Text-to-Action) Apps
==============================

Arch helps you easily personalize your applications by calling application-specific (API) functions
via user prompts. This covers any predefined functions or APIs you want to expose to users to perform tasks,
gather information, or manipulate data. This capability is generally referred to as **function calling**, where
you have the flexibility to support “agentic” apps tailored to specific use cases - from updating insurance
claims to creating ad campaigns - via prompts.

Arch analyzes prompts, extracts critical information from them, engages in lightweight conversation with
the user to gather any missing parameters, and makes API calls so that you can focus on writing business logic.
Arch does this via its purpose-built :ref:`Arch-FC LLM <llms_in_arch>` - the fastest (200ms p90, 10x faster than GPT-4o)
and cheapest (100x cheaper than GPT-4o) function-calling LLM that matches the performance of frontier models.

______________________________________________________________________________________________

.. image:: /_static/img/function-calling-network-flow.jpg
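To make the idea concrete, a prompt target maps a user intent to a backend function and declares the parameters Arch should extract from the prompt. The sketch below is illustrative only - the field names are assumptions for this example, not the exact Arch configuration schema (full YAML examples are referenced later on this page):

.. code-block:: yaml

   # Hypothetical sketch of a single prompt target; adapt field names
   # to the actual Arch config schema used by your deployment.
   prompt_targets:
     - name: update_insurance_claim
       description: Update an existing insurance claim on behalf of the user
       parameters:
         - name: claim_id
           type: string
           required: true
         - name: notes
           type: string
       endpoint:
         path: /v1/claims/update

With a target like this, a prompt such as "add a note to claim C-42" would resolve to `update_insurance_claim` with `claim_id` and `notes` filled in from the conversation.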
Single Function Call
--------------------
In the most common scenario, users will request a single action via prompts, and Arch efficiently processes the
request by extracting relevant parameters, validating the input, and calling the designated function or API. Here
is how you would go about enabling this scenario with Arch:
Step 2: Process request parameters in Flask
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once the prompt targets are configured as above, handling those parameters is straightforward:

.. literalinclude:: /_include/parameter_handling_flask.py
   :language: python
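The referenced file is not reproduced here, but at its core such a handler is ordinary parameter extraction and validation. A minimal sketch follows - the function, parameter names, and response shape are assumptions for illustration, not Arch's API:

```python
# Minimal sketch of handling parameters that Arch forwards to your app.
# The parameter names ("claim_id", "notes") are hypothetical; adapt them
# to whatever your prompt targets define.

def handle_update_claim(params: dict) -> dict:
    """Validate forwarded parameters and perform the business action."""
    claim_id = params.get("claim_id")
    if not claim_id:
        # Arch gathers missing parameters conversationally, but the
        # endpoint should still validate defensively.
        return {"error": "claim_id is required"}
    notes = params.get("notes", "")
    # ... call your business logic here ...
    return {"claim_id": claim_id, "status": "updated", "notes": notes}
```

In a Flask app this function would back a route (e.g. a ``POST`` view reading ``request.json``), keeping the web layer thin and the validation testable on its own.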
Parallel/Multiple Function Calling
-----------------------------------
In more complex use cases, users may request multiple actions or need multiple APIs/functions to be called
simultaneously or sequentially. With Arch, you can handle these scenarios efficiently using parallel or multiple
function calling. This allows your application to engage in a broader range of interactions, such as updating
different datasets, triggering events across systems, or collecting results from multiple services in one prompt.

Arch-FC1B is built to manage these parallel tasks efficiently, ensuring low latency and high throughput, even
when multiple functions are invoked. It provides two mechanisms to handle these cases:

Step 1: Define Multiple Function Targets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When enabling multiple function calling, define the prompt targets in a way that supports multiple functions or
API calls based on the user's prompt. These targets can be triggered in parallel or sequentially, depending on
the user's intent.

Example of Multiple Prompt Targets in YAML:
   :language: yaml
   :linenos:
   :emphasize-lines: 16-37
   :caption: Define prompt targets that can enable users to engage with API and backend functions of an app
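Once Arch resolves a prompt to several function calls, your application receives them and can dispatch the independent ones concurrently. A minimal fan-out sketch using only the standard library - the handler names and call list are illustrative assumptions, not part of Arch's API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical handlers your app exposes as prompt targets.
def update_claim(claim_id: str) -> str:
    return f"claim {claim_id} updated"

def create_campaign(name: str) -> str:
    return f"campaign {name} created"

def dispatch_parallel(calls):
    """Run independent function calls concurrently; collect results in call order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, **kwargs) for fn, kwargs in calls]
        return [f.result() for f in futures]

results = dispatch_parallel([
    (update_claim, {"claim_id": "C-42"}),
    (create_campaign, {"name": "spring-sale"}),
])
```

Sequential calls (where one call's output feeds the next) would instead be invoked in order, outside the thread pool.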