Update README.md, NOTICE.md and app_client.py

akhisud3195 2025-01-14 15:09:14 +05:30
parent cb8d845c19
commit 244811ac57
3 changed files with 100 additions and 22 deletions

apps/agents/NOTICE.md Normal file

@@ -0,0 +1,44 @@
# Attribution to OpenAI Swarm
- The Rowboat Agents framework has been built upon [OpenAI Swarm](https://github.com/openai/swarm), with modifications and improvements.
- OpenAI Swarm is available under the [MIT license](https://github.com/openai/swarm/blob/main/LICENSE) as of this writing, and is hence available for use by other projects.
- The original OpenAI Swarm is an experimental sample framework at the time of this writing. It is not intended for production use and therefore has no official support.
### OpenAI Swarm License
Below is the license text from OpenAI Swarm, as required by the MIT license:
```
MIT License
Copyright (c) 2024 OpenAI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# High-level changes
These are the high-level changes made to OpenAI Swarm in building RowBoat's custom implementation.
Please note that this is not an exhaustive list:
- Added localized agent-level history
- Added parent-child agent relationships with parents' history containing children's history
- Added usage tracking of tokens per LLM
- Added turn-level error handling
- Added conversation turn limits
- Removed streaming support as RowBoat Agents does not support streaming currently
- Modified the `Agent` and `Response` classes to be more comprehensive

apps/agents/README.md

@@ -1,7 +1,49 @@
## Agents
Please visit https://www.rowboatlabs.com/developers to learn more about RowBoat Labs for developers
# RowBoat Labs
Please visit https://www.rowboatlabs.com/developers to learn more about RowBoat Labs
# Agents
## Overview
- RowBoat Agents is a multi-agent framework which powers agentic workflows. The best way to configure these workflows is via the RowBoat Studio (UI), the source code for which is at [rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat/tree/dev/apps/rowboat)
- The Rowboat Agents framework has been built upon [OpenAI Swarm](https://github.com/openai/swarm), with modifications and improvements. Please see NOTICE.md in this directory for attribution notes and more details. OpenAI Swarm is available under the MIT license as of the time of this writing.
- Multi-agent systems like OpenAI Swarm are typically implemented as graph-based systems, where each agent is a node in the graph. At every turn of conversation, the graph is traversed based on a) the `state`, which is updated at every turn, and b) the current set of `messages`.
- RowBoat Agents is a stateless implementation of such a graph-based system (specifically, a DAG or directed acyclic graph). At every turn of conversation, the incoming request JSON is parsed to extract `messages`, `state` and the `workflow`. The `workflow` is a representation of the DAG containing agents, with each agent having a set of tools and connected children agents. See `tests/sample_requests/default_example.json` for an example of a complete request JSON from an upstream service.
- At each turn of conversation (i.e., a request from upstream), the agent graph object is created from scratch. The graph is then run, which produces the next set of `messages` and `state`. The `messages` will be shown to the user by the upstream service. Additionally, if the `messages` contain tool calls, then the upstream service must invoke the necessary tools and send the results back to the framework as the next turn.
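The per-turn request described above can be sketched from the upstream side. The three top-level keys (`messages`, `state`, `workflow`) come from this README; the agent fields shown are illustrative only — see `tests/sample_requests/default_example.json` for the actual schema.

```python
import json

def build_turn_request(messages, state, workflow):
    """Assemble one turn's request JSON for the agents service.

    Field names beyond `messages`, `state` and `workflow` are
    illustrative; the real schema is in
    tests/sample_requests/default_example.json.
    """
    return {"messages": messages, "state": state, "workflow": workflow}

# Hypothetical minimal workflow: a single agent with no tools or children.
workflow = {
    "agents": [
        {"name": "assistant", "tools": [], "children": []},
    ],
}

request = build_turn_request(
    messages=[{"role": "user", "content": "Hello"}],  # OpenAI messages format
    state={},                                         # empty state on the first turn
    workflow=workflow,
)
print(json.dumps(request, indent=2))
```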
## Specifics
- **Format**: Uses OpenAI's messages format when passing messages.
- **LLMs**: Currently, only OpenAI LLMs (e.g. gpt-4o, gpt-4o-mini) are supported. It is straightforward to extend to other LLMs like Claude, Gemini or self-hosted models.
- **Responses**: Here are some examples of responses that the framework can return:
- A list of one user-facing message
- A list of one or more tool calls
- A list of one user-facing message and one or more tool calls
- **Errors**: Errors are thrown as a tool call `raise_error` with the error message as the argument. Error handling will have to be managed by the upstream service.
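In the OpenAI messages format, the response shapes above (and the `raise_error` convention) can be illustrated as follows. The tool name `lookup_order`, the call IDs, and the error-argument key are made up for the example:

```python
import json

# One user-facing message.
reply_text = [{"role": "assistant", "content": "Your order has shipped."}]

# One or more tool calls, to be executed by the upstream service.
reply_tools = [{
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",  # illustrative ID
        "type": "function",
        "function": {
            "name": "lookup_order",  # hypothetical tool
            "arguments": json.dumps({"order_id": "123"}),
        },
    }],
}]

# An error surfaces as a raise_error tool call; the argument layout
# shown here is an assumption for illustration.
reply_error = [{
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_2",
        "type": "function",
        "function": {
            "name": "raise_error",
            "arguments": json.dumps({"error": "Order not found"}),
        },
    }],
}]

def contains_error(messages):
    """Return the raise_error arguments string if the turn failed, else None."""
    for msg in messages:
        for call in msg.get("tool_calls") or []:
            if call["function"]["name"] == "raise_error":
                return call["function"]["arguments"]
    return None
```

An upstream service can run `contains_error` over each turn's messages before rendering them to the user.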
## Limitations
- Does not support streaming currently.
- Does not support multiple user-facing messages in the same turn.
# Important directories and files
- `src/`: Contains all source code for the agents app
- `src/app/`: Contains Flask app which exposes the framework as a service
- `src/graph/`: Contains logic to run every turn of the conversation
- `src/graph/core.py`: Core graph implementation which parses the workflow config, creates agents from it and runs the turn of conversation (through the `run_turn` function)
- `src/swarm/`: RowBoat's custom implementation of OpenAI Swarm, which is used by `src/graph/core.py`
- `tests/`: Contains sample requests, an interactive client and a test client which mocks an upstream service
- `configs/`: Contains configurations to run every turn
- `tests/sample_requests/`: Contains sample request files for the agents app
# High-level flow
- `app/main.py` receives the request JSON from an upstream service, parses it and sends it to `src/graph/core.py`
- `src/graph/core.py` creates the agent graph object from scratch and uses `src/swarm/core.py` to run the turn
- `src/swarm/core.py` runs the turn by performing the actual LLM calls and internal tool invocations to transition between agents
- `src/graph/core.py` returns the response messages and the new state to `app/main.py`, which relays it back to the upstream service
- The upstream service appends any new user messages to the history of messages and sends them back, along with the new state, to `app/main.py` as part of the next request. The process repeats until the upstream service completes its conversation with the user.
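Seen from a mock upstream service, the loop above might look like the sketch below (stdlib only). The endpoint and port come from this README; the response field names `messages` and `state` are assumed to mirror the request:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:4040/chat"

def post_json(url, payload):
    """POST a JSON payload and decode the JSON reply."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def run_conversation(workflow, user_inputs, url=CHAT_URL, post=post_json):
    """Drive the per-turn loop described above.

    Appends each user message to the history, posts the full history
    plus state to the service, then folds the returned messages and
    state back in for the next turn.
    """
    messages, state = [], {}
    for text in user_inputs:
        messages.append({"role": "user", "content": text})
        reply = post(url, {"messages": messages, "state": state, "workflow": workflow})
        messages.extend(reply["messages"])  # shown to the user upstream
        state = reply["state"]              # carried into the next turn
    return messages, state
```

The `post` parameter is injectable so the loop can be exercised without a running server; tool-call results would be appended to `messages` the same way user messages are.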
# Using the framework
## Set up conda env
Standard conda env setup process:
- `conda create -n myenv python=3.12`
- `conda activate myenv`
- Note: Python >= 3.10 is required
@@ -9,11 +51,11 @@ Please visit https://www.rowboatlabs.com/developers to learn more about RowBoat
## Install dependencies
Install either using poetry or using pip
### Using poetry
### If using poetry
- `pip install poetry`
- `poetry install`
### Using pip
### If using pip
`pip install -r requirements.txt`
## Set up .env file
@@ -30,5 +72,5 @@ Copy `.env.copy` to `.env` and add your API keys
- For local testing: `flask --app src.app.main run --port=4040`
- To set up the server on remote: `gunicorn -b 0.0.0.0:4040 src.app.main:app`
## Run client test
`python -m tests.app_client`
## Run test client
`python -m tests.app_client --sample_request default_example.json`

tests/app_client.py

@@ -1,27 +1,19 @@
from src.utils.common import common_logger, read_json_from_file
logger = common_logger
logger.info("Running app_client.py")
import requests
from pprint import pprint
if __name__ == "__main__":
request = read_json_from_file("./tests/sample_requests/example4.json").get("lastRequest", {})
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--sample_request', type=str, required=True, help='Sample request JSON file name under tests/sample_requests/')
args = parser.parse_args()
request = read_json_from_file(f"./tests/sample_requests/{args.sample_request}").get("lastRequest", {})
print("Sending request...")
logger.info("Sending request...")
response = requests.post(
"http://localhost:4040/chat",
json=request
).json()
print("Output: ")
logger.info("Output: ")
# for k, v in response.items():
# print(f"{k}: {v}")
# print('*'*200)
# print('*'*200)
# logger.info(f"{k}: {v}")
# logger.info('*'*200)
# logger.info('='*200)
pprint(response, indent=2)
logger.info(response)
print(response)