Merge branch 'main' of https://gitlab.deepwisdomai.com/pub/MetaGPT into mgx_intent

This commit is contained in:
stellahsr 2024-03-30 12:06:38 +08:00
commit 9e5947bdb9
261 changed files with 8649 additions and 776 deletions


@ -19,6 +19,7 @@
- LLM type and model name:
- System version:
- Python version:
- MetaGPT version or branch:
<!-- Dependent packages: the package versions that cause the bug (like `pydantic 1.10.8`), and the installation method (like `pip install metagpt`, `pip install from source`, or `run in docker`) -->


@ -1,8 +1,9 @@
name: Build and upload python package
on:
workflow_dispatch:
release:
types: [created]
types: [created, published]
jobs:
deploy:

.gitignore vendored

@ -1,7 +1,7 @@
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
__pycache__
*.py[cod]
*$py.class

MANIFEST.in Normal file

@ -0,0 +1,3 @@
recursive-include metagpt/ext/stanford_town/prompts *.txt
recursive-include metagpt/ext/stanford_town/static_dirs *.csv
recursive-include metagpt/ext/stanford_town/static_dirs *.json


@ -26,7 +26,7 @@ # MetaGPT: The Multi-Agent Framework
</p>
## News
🚀 Mar. 14, 2024: Our Data Interpreter paper is on [arxiv](https://arxiv.org/abs/2402.18679). Check the [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/) and [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)!
🚀 Mar. 14, 2024: Our **Data Interpreter** paper is on [arxiv](https://arxiv.org/abs/2402.18679). Check the [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/) and [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)!
🚀 Feb. 08, 2024: [v0.7.0](https://github.com/geekan/MetaGPT/releases/tag/v0.7.0) released, supporting assigning different LLMs to different Roles. We also introduced [Data Interpreter](https://github.com/geekan/MetaGPT/blob/main/examples/di/README.md), a powerful agent capable of solving a wide range of real-world problems.
@ -55,9 +55,9 @@ ## Software Company as Multi-Agent System
<p align="center">Software Company Multi-Agent Schematic (Gradually Implementing)</p>
## Install
## Get Started
### Pip installation
### Installation
> Ensure that Python 3.9+ is installed on your system. You can check this by using: `python --version`.
> You can use conda like this: `conda create -n metagpt python=3.9 && conda activate metagpt`
@ -68,6 +68,9 @@ # or `pip install --upgrade git+https://github.com/geekan/MetaGPT.git`
# or `git clone https://github.com/geekan/MetaGPT && cd MetaGPT && pip install --upgrade -e .`
```
For detailed installation guidance, please refer to [cli_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-stable-version)
or [docker_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-with-docker)
### Configuration
You can initialize the MetaGPT configuration by running the following command, or by manually creating the `~/.metagpt/config2.yaml` file:
@ -88,13 +91,13 @@ # Check https://docs.deepwisdom.ai/main/en/guide/get_started/configuration.html
### Usage
After installation, you can use it as CLI
After installation, you can use MetaGPT from the CLI
```bash
metagpt "Create a 2048 game" # this will create a repo in ./workspace
```
or you can use it as library
or use it as a library
```python
from metagpt.software_company import generate_repo, ProjectRepo
@ -102,47 +105,19 @@ ### Usage
print(repo) # it will print the repo structure with files
```
For detailed installation, please refer to [cli_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-stable-version)
or [docker_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-with-docker)
You can also use its [Data Interpreter](https://github.com/geekan/MetaGPT/tree/main/examples/di)
### Docker installation
<details><summary><strong>⏬ Step 1: Download metagpt image and prepare config2.yaml </strong><i>:: click to expand ::</i></summary>
<div>
```python
import asyncio
from metagpt.roles.di.data_interpreter import DataInterpreter
```bash
docker pull metagpt/metagpt:latest
mkdir -p /opt/metagpt/{config,workspace}
docker run --rm metagpt/metagpt:latest cat /app/metagpt/config/config2.yaml > /opt/metagpt/config/config2.yaml
vim /opt/metagpt/config/config2.yaml # Change the config
async def main():
di = DataInterpreter()
await di.run("Run data analysis on sklearn Iris dataset, include a plot")
asyncio.run(main()) # or await main() in a jupyter notebook setting
```
</div>
</details>
<details><summary><strong>⏬ Step 2: Run metagpt container </strong><i>:: click to expand ::</i></summary>
<div>
```bash
docker run --name metagpt -d \
--privileged \
-v /opt/metagpt/config/config2.yaml:/app/metagpt/config/config2.yaml \
-v /opt/metagpt/workspace:/app/metagpt/workspace \
metagpt/metagpt:latest
```
</div>
</details>
<details><summary><strong>⏬ Step 3: Use metagpt </strong><i>:: click to expand ::</i></summary>
<div>
```bash
docker exec -it metagpt /bin/bash
$ metagpt "Create a 2048 game" # this will create a repo in ./workspace
```
</div>
</details>
### QuickStart & Demo Video
- Try it on [MetaGPT Huggingface Space](https://huggingface.co/spaces/deepwisdom/MetaGPT)
@ -162,6 +137,7 @@ ## Tutorial
- 🧑‍💻 Contribution
- [Develop Roadmap](docs/ROADMAP.md)
- 🔖 Use Cases
- [Data Interpreter](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/interpreter/intro.html)
- [Debate](https://docs.deepwisdom.ai/main/en/guide/use_cases/multi_agent/debate.html)
- [Researcher](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/researcher.html)
- [Receipt Assistant](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/receipt_assistant.html)


@ -4,6 +4,7 @@ llm:
api_key: "YOUR_API_KEY"
model: "gpt-4-turbo-preview" # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
proxy: "YOUR_PROXY" # for LLM API requests
# timeout: 600 # Optional. If set to 0, default value is 300.
pricing_plan: "" # Optional. If invalid, it will be automatically filled in with the value of the `model`.
# Azure-exclusive pricing plan mappings
# - gpt-3.5-turbo 4k: "gpt-3.5-turbo-1106"


@ -1,10 +1,5 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/01/15
@Author : mannaandpoem
@File : imitate_webpage.py
"""
from metagpt.roles.di.data_interpreter import DataInterpreter


@ -0,0 +1,36 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/3/22 10:54
@Author : alexanderwu
@File : custom_tool.py
"""
from metagpt.roles.di.data_interpreter import DataInterpreter
from metagpt.tools.tool_registry import register_tool
@register_tool()
def magic_function(arg1: str, arg2: int) -> dict:
"""
The magic function that does something.
Args:
arg1 (str): ...
arg2 (int): ...
Returns:
dict: ...
"""
return {"arg1": arg1 * 3, "arg2": arg2 * 5}
async def main():
di = DataInterpreter(tools=["magic_function"])
await di.run("Just call the magic function with arg1 'A' and arg2 2. Tell me the result.")
if __name__ == "__main__":
import asyncio
asyncio.run(main())
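The `register_tool` decorator above can be read as a registry pattern: the function is recorded under its name so a `DataInterpreter` can be handed `tools=["magic_function"]` later. A minimal sketch of the idea follows; the `TOOL_REGISTRY` dict and the decorator internals here are illustrative assumptions, not MetaGPT's actual implementation (which also records schemas and docstrings):

```python
# Hypothetical sketch of a tool-registry decorator; the real register_tool
# in MetaGPT does more (schema extraction), this only keeps the callable.
TOOL_REGISTRY: dict = {}

def register_tool():
    def decorator(func):
        TOOL_REGISTRY[func.__name__] = func  # key tools by function name
        return func  # return the function unchanged so it stays callable
    return decorator

@register_tool()
def magic_function(arg1: str, arg2: int) -> dict:
    """The magic function that does something."""
    return {"arg1": arg1 * 3, "arg2": arg2 * 5}
```

A consumer can then resolve tools by name, e.g. `TOOL_REGISTRY["magic_function"]("A", 2)`.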


@ -1,14 +1,17 @@
import asyncio
from metagpt.logs import logger
from metagpt.roles.di.data_interpreter import DataInterpreter
from metagpt.utils.recovery_util import save_history
async def main(requirement: str = ""):
di = DataInterpreter()
await di.run(requirement)
rsp = await di.run(requirement)
logger.info(rsp)
save_history(role=di)
if __name__ == "__main__":
requirement = "Run data analysis on sklearn Iris dataset, include a plot"
asyncio.run(main(requirement))


@ -11,9 +11,13 @@ from metagpt.rag.schema import (
BM25RetrieverConfig,
ChromaIndexConfig,
ChromaRetrieverConfig,
ElasticsearchIndexConfig,
ElasticsearchRetrieverConfig,
ElasticsearchStoreConfig,
FAISSRetrieverConfig,
LLMRankerConfig,
)
from metagpt.utils.exceptions import handle_exception
DOC_PATH = EXAMPLE_DATA_PATH / "rag/writer.txt"
QUESTION = "What are key qualities to be a good writer?"
@ -39,12 +43,22 @@ class Player(BaseModel):
class RAGExample:
"""Show how to use RAG."""
def __init__(self):
self.engine = SimpleEngine.from_docs(
input_files=[DOC_PATH],
retriever_configs=[FAISSRetrieverConfig(), BM25RetrieverConfig()],
ranker_configs=[LLMRankerConfig()],
)
def __init__(self, engine: SimpleEngine = None):
self._engine = engine
@property
def engine(self):
if not self._engine:
self._engine = SimpleEngine.from_docs(
input_files=[DOC_PATH],
retriever_configs=[FAISSRetrieverConfig(), BM25RetrieverConfig()],
ranker_configs=[LLMRankerConfig()],
)
return self._engine
@engine.setter
def engine(self, value: SimpleEngine):
self._engine = value
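The refactor above replaces eager construction in `__init__` with a lazily built engine that can also be injected. The same pattern in isolation (class and field names here are generic placeholders, not the MetaGPT API):

```python
# Minimal sketch of the lazy-property pattern used by RAGExample:
# the expensive object is built on first access, but can be injected
# (via the constructor or the setter) to skip construction, e.g. in tests.
class LazyHolder:
    def __init__(self, engine=None):
        self._engine = engine
        self.build_count = 0  # track how many times we actually build

    def _build(self):
        self.build_count += 1
        return object()  # stands in for SimpleEngine.from_docs(...)

    @property
    def engine(self):
        if self._engine is None:
            self._engine = self._build()
        return self._engine

    @engine.setter
    def engine(self, value):
        self._engine = value
```

Repeated access returns the same instance, so the expensive build happens at most once.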
async def run_pipeline(self, question=QUESTION, print_title=True):
"""This example runs the RAG pipeline with the FAISS & BM25 retrievers and the LLM ranker, and will print something like:
@ -97,6 +111,7 @@ class RAGExample:
self.engine.add_docs([travel_filepath])
await self.run_pipeline(question=travel_question, print_title=False)
@handle_exception
async def add_objects(self, print_title=True):
"""This example shows how to add objects.
@ -154,20 +169,41 @@ class RAGExample:
"""
self._print_title("Init And Query ChromaDB")
# save index
# 1. save index
output_dir = DATA_PATH / "rag"
SimpleEngine.from_docs(
input_files=[TRAVEL_DOC_PATH],
retriever_configs=[ChromaRetrieverConfig(persist_path=output_dir)],
)
# load index
engine = SimpleEngine.from_index(
index_config=ChromaIndexConfig(persist_path=output_dir),
# 2. load index
engine = SimpleEngine.from_index(index_config=ChromaIndexConfig(persist_path=output_dir))
# 3. query
answer = await engine.aquery(TRAVEL_QUESTION)
self._print_query_result(answer)
@handle_exception
async def init_and_query_es(self):
"""This example shows how to use Elasticsearch: how to save and load an index. It will print something like:
Query Result:
Bob likes traveling.
"""
self._print_title("Init And Query Elasticsearch")
# 1. create es index and save docs
store_config = ElasticsearchStoreConfig(index_name="travel", es_url="http://127.0.0.1:9200")
engine = SimpleEngine.from_docs(
input_files=[TRAVEL_DOC_PATH],
retriever_configs=[ElasticsearchRetrieverConfig(store_config=store_config)],
)
# query
answer = engine.query(TRAVEL_QUESTION)
# 2. load index
engine = SimpleEngine.from_index(index_config=ElasticsearchIndexConfig(store_config=store_config))
# 3. query
answer = await engine.aquery(TRAVEL_QUESTION)
self._print_query_result(answer)
@staticmethod
@ -205,6 +241,7 @@ async def main():
await e.add_objects()
await e.init_objects()
await e.init_and_query_chromadb()
await e.init_and_query_es()
if __name__ == "__main__":


@ -13,7 +13,7 @@ async def main():
question = "What are the most interesting human facts?"
search = Config.default().search
kwargs = {"api_key": search.api_key, "cse_id": search.cse_id, "proxy": None}
kwargs = search.model_dump()
await Searcher(search_engine=SearchEngine(engine=search.api_type, **kwargs)).run(question)
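The change above swaps a hand-built kwargs dict for `model_dump()`, which serializes every configured field and forwards it wholesale. A sketch of the idea, with a plain dataclass and `asdict` standing in for the pydantic model and `model_dump()` (names are illustrative):

```python
from dataclasses import dataclass, asdict

# Sketch: pass a whole config object as keyword arguments instead of
# cherry-picking fields by hand; asdict plays the role of model_dump().
@dataclass
class SearchConfig:
    api_type: str = "google"
    api_key: str = ""
    cse_id: str = ""

def make_engine(**kwargs) -> dict:
    # stands in for SearchEngine(engine=..., **kwargs)
    return kwargs

search = SearchConfig(api_key="k", cse_id="c")
engine = make_engine(**asdict(search))
```

The upside is that newly added config fields flow through automatically; the trade-off is that the callee must tolerate extra keys.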


@ -0,0 +1,93 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : entry of Stanford Town(ST/st) game
import asyncio
from typing import Optional
import fire
from metagpt.ext.stanford_town.roles.st_role import STRole
from metagpt.ext.stanford_town.stanford_town import StanfordTown
from metagpt.ext.stanford_town.utils.const import STORAGE_PATH
from metagpt.ext.stanford_town.utils.mg_ga_transform import (
get_reverie_meta,
write_curr_sim_code,
write_curr_step,
)
from metagpt.ext.stanford_town.utils.utils import copy_folder
from metagpt.logs import logger
async def startup(
idea: str, fork_sim_code: str, sim_code: str, temp_storage_path: str, investment: float = 30.0, n_round: int = 500
):
town = StanfordTown()
logger.info("StanfordTown init environment")
# copy `storage/{fork_sim_code}` to `storage/{sim_code}`
copy_folder(str(STORAGE_PATH.joinpath(fork_sim_code)), str(STORAGE_PATH.joinpath(sim_code)))
# get role names from `storage/{simulation_name}/reverie/meta.json` and then init roles
reverie_meta = get_reverie_meta(fork_sim_code)
roles = []
sim_path = STORAGE_PATH.joinpath(sim_code)
sim_path.mkdir(exist_ok=True)
for idx, role_name in enumerate(reverie_meta["persona_names"]):
has_inner_voice = idx == 0  # only the first role receives the inner voice
role = STRole(
name=role_name,
profile=role_name,
sim_code=sim_code,
step=reverie_meta.get("step", 0),
start_time=reverie_meta.get("start_date"),
curr_time=reverie_meta.get("curr_time"),
sec_per_step=reverie_meta.get("sec_per_step"),
has_inner_voice=has_inner_voice,
)
roles.append(role)
# init temp_storage
write_curr_sim_code({"sim_code": sim_code}, temp_storage_path)
write_curr_step({"step": reverie_meta.get("step", 0)}, temp_storage_path)
await town.hire(roles)
town.invest(investment)
town.run_project(idea)
await town.run(n_round)
def main(
idea: str,
fork_sim_code: str,
sim_code: str,
temp_storage_path: Optional[str] = None,
investment: float = 30.0,
n_round: int = 500,
):
"""
Args:
idea: idea works as an `inner voice` to the first agent.
fork_sim_code: old simulation name to start with, choose one inside `generative_agents/environment/frontend_server/storage/`
sim_code: new simulation name to save simulation result
temp_storage_path: generative_agents temp_storage path inside `environment/frontend_server` to interact.
investment: the investment budget for running the agents
n_round: the number of rounds to run the agents
"""
asyncio.run(
startup(
idea=idea,
fork_sim_code=fork_sim_code,
sim_code=sim_code,
temp_storage_path=temp_storage_path,
investment=investment,
n_round=n_round,
)
)
if __name__ == "__main__":
fire.Fire(main)


@ -0,0 +1,4 @@
# path to store simulation data
test_*
unittest*
July*


@ -0,0 +1,26 @@
{
"Isabella Rodriguez": {
"maze": "the_ville",
"x": 72,
"y": 14
},
"Klaus Mueller": {
"maze": "the_ville",
"x": 126,
"y": 46
},
"Maria Lopez": {
"maze": "the_ville",
"x": 123,
"y": 57
}
}


@ -0,0 +1,51 @@
{
"vision_r": 8,
"att_bandwidth": 8,
"retention": 8,
"curr_time": null,
"curr_tile": null,
"daily_plan_req": "Isabella Rodriguez opens Hobbs Cafe at 8am everyday, and works at the counter until 8pm, at which point she closes the cafe.",
"name": "Isabella Rodriguez",
"first_name": "Isabella",
"last_name": "Rodriguez",
"age": 34,
"innate": "friendly, outgoing, hospitable",
"learned": "Isabella Rodriguez is a cafe owner of Hobbs Cafe who loves to make people feel welcome. She is always looking for ways to make the cafe a place where people can come to relax and enjoy themselves.",
"currently": "Isabella Rodriguez is planning on having a Valentine's Day party at Hobbs Cafe with her customers on February 14th, 2023 at 5pm. She is gathering party material, and is telling everyone to join the party at Hobbs Cafe on February 14th, 2023, from 5pm to 7pm.",
"lifestyle": "Isabella Rodriguez goes to bed around 11pm, awakes up around 6am.",
"living_area": "the Ville:Isabella Rodriguez's apartment:main room",
"concept_forget": 100,
"daily_reflection_time": 180,
"daily_reflection_size": 5,
"overlap_reflect_th": 4,
"kw_strg_event_reflect_th": 10,
"kw_strg_thought_reflect_th": 9,
"recency_w": 1,
"relevance_w": 1,
"importance_w": 1,
"recency_decay": 0.995,
"importance_trigger_max": 150,
"importance_trigger_curr": 150,
"importance_ele_n": 0,
"thought_count": 5,
"daily_req": [],
"f_daily_schedule": [],
"f_daily_schedule_hourly_org": [],
"act_address": null,
"act_start_time": null,
"act_duration": null,
"act_description": null,
"act_pronunciatio": null,
"act_event": ["Isabella Rodriguez", null, null],
"act_obj_description": null,
"act_obj_pronunciatio": null,
"act_obj_event": [null, null, null],
"chatting_with": null,
"chat": null,
"chatting_with_buffer": {},
"chatting_end_time": null,
"act_path_set": false,
"planned_path": []
}


@ -0,0 +1,66 @@
{
"the Ville": {
"Hobbs Cafe": {
"cafe": [
"refrigerator",
"cafe customer seating",
"cooking area",
"kitchen sink",
"behind the cafe counter",
"piano"
]
},
"Isabella Rodriguez's apartment": {
"main room": [
"bed",
"desk",
"refrigerator",
"closet",
"shelf"
]
},
"The Rose and Crown Pub": {
"pub": [
"shelf",
"refrigerator",
"bar customer seating",
"behind the bar counter",
"kitchen sink",
"cooking area",
"microphone"
]
},
"Harvey Oak Supply Store": {
"supply store": [
"supply store product shelf",
"behind the supply store counter",
"supply store counter"
]
},
"The Willows Market and Pharmacy": {
"store": [
"behind the pharmacy counter",
"pharmacy store shelf",
"pharmacy store counter",
"grocery store shelf",
"behind the grocery counter",
"grocery store counter"
]
},
"Dorm for Oak Hill College": {
"garden": [
"dorm garden"
],
"common room": [
"common room sofa",
"pool table",
"common room table"
]
},
"Johnson Park": {
"park": [
"park garden"
]
}
}
}


@ -0,0 +1,2 @@
{"kw_strength_event": {},
"kw_strength_thought": {}}


@ -0,0 +1,51 @@
{
"vision_r": 8,
"att_bandwidth": 8,
"retention": 8,
"curr_time": null,
"curr_tile": null,
"daily_plan_req": "Klaus Mueller goes to the library at Oak Hill College early in the morning, spends his days writing, and eats at Hobbs Cafe.",
"name": "Klaus Mueller",
"first_name": "Klaus",
"last_name": "Mueller",
"age": 20,
"innate": "kind, inquisitive, passionate",
"learned": "Klaus Mueller is a student at Oak Hill College studying sociology. He is passionate about social justice and loves to explore different perspectives.",
"currently": "Klaus Mueller is writing a research paper on the effects of gentrification in low-income communities.",
"lifestyle": "Klaus Mueller goes to bed around 11pm, awakes up around 7am, eats dinner around 5pm.",
"living_area": "the Ville:Dorm for Oak Hill College:Klaus Mueller's room",
"concept_forget": 100,
"daily_reflection_time": 180,
"daily_reflection_size": 5,
"overlap_reflect_th": 4,
"kw_strg_event_reflect_th": 10,
"kw_strg_thought_reflect_th": 9,
"recency_w": 1,
"relevance_w": 1,
"importance_w": 1,
"recency_decay": 0.99,
"importance_trigger_max": 150,
"importance_trigger_curr": 150,
"importance_ele_n": 0,
"thought_count": 5,
"daily_req": [],
"f_daily_schedule": [],
"f_daily_schedule_hourly_org": [],
"act_address": null,
"act_start_time": null,
"act_duration": null,
"act_description": null,
"act_pronunciatio": null,
"act_event": ["Klaus Mueller", null, null],
"act_obj_description": null,
"act_obj_pronunciatio": null,
"act_obj_event": [null, null, null],
"chatting_with": null,
"chat": null,
"chatting_with_buffer": {},
"chatting_end_time": null,
"act_path_set": false,
"planned_path": []
}


@ -0,0 +1,86 @@
{
"the Ville": {
"Oak Hill College": {
"hallway": [],
"library": [
"library sofa",
"library table",
"bookshelf"
],
"classroom": [
"blackboard",
"classroom podium",
"classroom student seating"
]
},
"Dorm for Oak Hill College": {
"garden": [
"dorm garden"
],
"Klaus Mueller's room": [
"bed",
"game console",
"closet",
"desk"
],
"woman's bathroom": [
"toilet",
"shower",
"bathroom sink"
],
"common room": [
"common room sofa",
"pool table",
"common room table"
],
"man's bathroom": [
"shower",
"bathroom sink",
"toilet"
]
},
"The Willows Market and Pharmacy": {
"store": [
"grocery store shelf",
"behind the grocery counter",
"grocery store counter",
"pharmacy store shelf",
"pharmacy store counter",
"behind the pharmacy counter"
]
},
"Harvey Oak Supply Store": {
"supply store": [
"supply store product shelf",
"behind the supply store counter",
"supply store counter"
]
},
"Johnson Park": {
"park": [
"park garden"
]
},
"The Rose and Crown Pub": {
"pub": [
"shelf",
"refrigerator",
"bar customer seating",
"behind the bar counter",
"kitchen sink",
"cooking area",
"microphone"
]
},
"Hobbs Cafe": {
"cafe": [
"refrigerator",
"cafe customer seating",
"cooking area",
"kitchen sink",
"behind the cafe counter",
"piano"
]
}
}
}


@ -0,0 +1,2 @@
{"kw_strength_event": {},
"kw_strength_thought": {}}


@ -0,0 +1,51 @@
{
"vision_r": 8,
"att_bandwidth": 8,
"retention": 8,
"curr_time": null,
"curr_tile": null,
"daily_plan_req": "Maria Lopez spends at least 3 hours a day Twitch streaming or gaming.",
"name": "Maria Lopez",
"first_name": "Maria",
"last_name": "Lopez",
"age": 21,
"innate": "energetic, enthusiastic, inquisitive",
"learned": "Maria Lopez is a student at Oak Hill College studying physics and a part time Twitch game streamer who loves to connect with people and explore new ideas.",
"currently": "Maria Lopez is working on her physics degree and streaming games on Twitch to make some extra money. She visits Hobbs Cafe for studying and eating just about everyday.",
"lifestyle": "Maria Lopez goes to bed around 2am, awakes up around 9am, eats dinner around 6pm. She likes to hang out at Hobbs Cafe if it's before 6pm.",
"living_area": "the Ville:Dorm for Oak Hill College:Maria Lopez's room",
"concept_forget": 100,
"daily_reflection_time": 180,
"daily_reflection_size": 5,
"overlap_reflect_th": 4,
"kw_strg_event_reflect_th": 10,
"kw_strg_thought_reflect_th": 9,
"recency_w": 1,
"relevance_w": 1,
"importance_w": 1,
"recency_decay": 0.99,
"importance_trigger_max": 150,
"importance_trigger_curr": 150,
"importance_ele_n": 0,
"thought_count": 5,
"daily_req": [],
"f_daily_schedule": [],
"f_daily_schedule_hourly_org": [],
"act_address": null,
"act_start_time": null,
"act_duration": null,
"act_description": null,
"act_pronunciatio": null,
"act_event": ["Maria Lopez", null, null],
"act_obj_description": null,
"act_obj_pronunciatio": null,
"act_obj_event": [null, null, null],
"chatting_with": null,
"chat": null,
"chatting_with_buffer": {},
"chatting_end_time": null,
"act_path_set": false,
"planned_path": []
}


@ -0,0 +1,87 @@
{
"the Ville": {
"Oak Hill College": {
"hallway": [],
"library": [
"library sofa",
"library table",
"bookshelf"
],
"classroom": [
"blackboard",
"classroom podium",
"classroom student seating"
]
},
"Dorm for Oak Hill College": {
"garden": [
"dorm garden"
],
"Maria Lopez's room": [
"closet",
"desk",
"bed",
"computer",
"blackboard"
],
"woman's bathroom": [
"toilet",
"shower",
"bathroom sink"
],
"common room": [
"common room sofa",
"pool table",
"common room table"
],
"man's bathroom": [
"shower",
"bathroom sink",
"toilet"
]
},
"The Willows Market and Pharmacy": {
"store": [
"grocery store shelf",
"behind the grocery counter",
"grocery store counter",
"pharmacy store shelf",
"pharmacy store counter",
"behind the pharmacy counter"
]
},
"Harvey Oak Supply Store": {
"supply store": [
"supply store product shelf",
"behind the supply store counter",
"supply store counter"
]
},
"Johnson Park": {
"park": [
"park garden"
]
},
"The Rose and Crown Pub": {
"pub": [
"shelf",
"refrigerator",
"bar customer seating",
"behind the bar counter",
"kitchen sink",
"cooking area",
"microphone"
]
},
"Hobbs Cafe": {
"cafe": [
"refrigerator",
"cafe customer seating",
"cooking area",
"kitchen sink",
"behind the cafe counter",
"piano"
]
}
}
}


@ -0,0 +1,13 @@
{
"fork_sim_code": "base_the_ville_isabella_maria_klaus",
"start_date": "February 13, 2023",
"curr_time": "February 13, 2023, 00:00:00",
"sec_per_step": 10,
"maze_name": "the_ville",
"persona_names": [
"Isabella Rodriguez",
"Maria Lopez",
"Klaus Mueller"
],
"step": 0
}


@ -17,6 +17,7 @@ from pydantic import BaseModel, Field, create_model, model_validator
from tenacity import retry, stop_after_attempt, wait_random_exponential
from metagpt.actions.action_outcls_registry import register_action_outcls
from metagpt.const import USE_CONFIG_TIMEOUT
from metagpt.llm import BaseLLM
from metagpt.logs import logger
from metagpt.provider.postprocess.llm_output_postprocess import llm_output_postprocess
@ -330,7 +331,7 @@ class ActionNode:
def compile_to(self, i: Dict, schema, kv_sep) -> str:
if schema == "json":
return json.dumps(i, indent=4)
return json.dumps(i, indent=4, ensure_ascii=False)
elif schema == "markdown":
return dict_to_markdown(i, kv_sep=kv_sep)
else:
@ -339,10 +340,7 @@ class ActionNode:
def tagging(self, text, schema, tag="") -> str:
if not tag:
return text
if schema == "json":
return f"[{tag}]\n" + text + f"\n[/{tag}]"
else: # markdown
return f"[{tag}]\n" + text + f"\n[/{tag}]"
return f"[{tag}]\n{text}\n[/{tag}]"
def _compile_f(self, schema, mode, tag, format_func, kv_sep, exclude=None) -> str:
nodes = self.to_dict(format_func=format_func, mode=mode, exclude=exclude)
@ -374,7 +372,7 @@ class ActionNode:
schema="markdown": 编译context, example(markdown), instruction(markdown), constraint, action
"""
if schema == "raw":
return context + "\n\n## Actions\n" + LANGUAGE_CONSTRAINT + "\n" + self.instruction
return f"{context}\n\n## Actions\n{LANGUAGE_CONSTRAINT}\n{self.instruction}"
### 直接使用 pydantic BaseModel 生成 instruction 与 example仅限 JSON
# child_class = self._create_children_class()
@ -416,7 +414,7 @@ class ActionNode:
images: Optional[Union[str, list[str]]] = None,
system_msgs: Optional[list[str]] = None,
schema="markdown", # compatible to original format
timeout=3,
timeout=USE_CONFIG_TIMEOUT,
) -> (str, BaseModel):
"""Use ActionOutput to wrap the output of aask"""
content = await self.llm.aask(prompt, system_msgs, images=images, timeout=timeout)
@ -448,7 +446,9 @@ class ActionNode:
def set_context(self, context):
self.set_recursive("context", context)
async def simple_fill(self, schema, mode, images: Optional[Union[str, list[str]]] = None, timeout=3, exclude=None):
async def simple_fill(
self, schema, mode, images: Optional[Union[str, list[str]]] = None, timeout=USE_CONFIG_TIMEOUT, exclude=None
):
prompt = self.compile(context=self.context, schema=schema, mode=mode, exclude=exclude)
if schema != "raw":
@ -473,7 +473,7 @@ class ActionNode:
mode="auto",
strgy="simple",
images: Optional[Union[str, list[str]]] = None,
timeout=3,
timeout=USE_CONFIG_TIMEOUT,
exclude=[],
):
"""Fill the node(s) with mode.


@ -18,7 +18,7 @@ from metagpt.prompts.di.write_analysis_code import (
STRUCTUAL_PROMPT,
)
from metagpt.schema import Message, Plan
from metagpt.utils.common import CodeParser, process_message, remove_comments
from metagpt.utils.common import CodeParser, remove_comments
class WriteAnalysisCode(Action):
@ -50,7 +50,7 @@ class WriteAnalysisCode(Action):
)
working_memory = working_memory or []
context = process_message([Message(content=structual_prompt, role="user")] + working_memory)
context = self.llm.format_msg([Message(content=structual_prompt, role="user")] + working_memory)
# LLM call
if use_reflection:


@ -10,6 +10,7 @@ from typing import Optional
from pydantic import field_validator
from metagpt.const import LLM_API_TIMEOUT
from metagpt.utils.yaml_model import YamlModel
@ -74,7 +75,7 @@ class LLMConfig(YamlModel):
stream: bool = False
logprobs: Optional[bool] = None # https://cookbook.openai.com/examples/using_logprobs
top_logprobs: Optional[int] = None
timeout: int = 60
timeout: int = 600
# For Network
proxy: Optional[str] = None
@ -88,3 +89,8 @@ class LLMConfig(YamlModel):
if v in ["", None, "YOUR_API_KEY"]:
raise ValueError("Please set your API key in config2.yaml")
return v
@field_validator("timeout")
@classmethod
def check_timeout(cls, v):
return v or LLM_API_TIMEOUT


@ -7,6 +7,8 @@
"""
from typing import Callable, Optional
from pydantic import Field
from metagpt.tools import SearchEngineType
from metagpt.utils.yaml_model import YamlModel
@ -18,3 +20,11 @@ class SearchConfig(YamlModel):
api_key: str = ""
cse_id: str = "" # for google
search_func: Optional[Callable] = None
params: dict = Field(
default_factory=lambda: {
"engine": "google",
"google_domain": "google.com",
"gl": "us",
"hl": "en",
}
)


@ -123,7 +123,6 @@ BASE64_FORMAT = "base64"
# REDIS
REDIS_KEY = "REDIS_KEY"
LLM_API_TIMEOUT = 300
# Message id
IGNORED_MESSAGE_ID = "0"
@ -132,3 +131,7 @@ IGNORED_MESSAGE_ID = "0"
GENERALIZATION = "Generalize"
COMPOSITION = "Composite"
AGGREGATION = "Aggregate"
# Timeout
USE_CONFIG_TIMEOUT = 0 # Using llm.timeout configuration.
LLM_API_TIMEOUT = 300
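The `USE_CONFIG_TIMEOUT = 0` sentinel lets call sites (like `ActionNode.fill`) defer to the configured `llm.timeout`, with `LLM_API_TIMEOUT` as a last-resort default. The resolution logic this convention implies reduces to an `or` chain; the sketch below shows that idea, not the exact MetaGPT code path:

```python
# Sketch of the timeout-resolution convention implied by the constants:
# 0 (USE_CONFIG_TIMEOUT) means "use the configured value"; if that is
# also falsy, fall back to the hard default LLM_API_TIMEOUT.
USE_CONFIG_TIMEOUT = 0
LLM_API_TIMEOUT = 300

def resolve_timeout(call_timeout: int, config_timeout: int) -> int:
    return call_timeout or config_timeout or LLM_API_TIMEOUT
```

This matches the `check_timeout` validator's `return v or LLM_API_TIMEOUT` behavior on the config side.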


@ -21,7 +21,7 @@ ## Usage
from metagpt.environment.api.env_api import EnvAPIAbstract
# get screenshot from ExtEnv
screenshot_path: Path = env.observe(
screenshot_path: Path = await env.observe(
EnvAPIAbstract(
api_name="get_screenshot", kwargs={"ss_name": f"{round_count}_before", "local_save_dir": task_dir}
)
@ -34,5 +34,5 @@ # do a `tap` action on the screen
## TODO
- add android app operation assistant under `examples/android_assistant`
- migrate roles/actions of werewolf game from old version into current version
- migrate roles/actions of mincraft game from old version into current version
- migrate roles/actions of minecraft game from old version into current version
- migrate roles/actions of stanford_town game from old version into current version


@ -3,11 +3,10 @@
# @Desc :
from metagpt.environment.base_env import Environment
from metagpt.environment.android_env.android_env import AndroidEnv
from metagpt.environment.mincraft_env.mincraft_env import MincraftExtEnv
from metagpt.environment.werewolf_env.werewolf_env import WerewolfEnv
from metagpt.environment.stanford_town_env.stanford_town_env import StanfordTownEnv
from metagpt.environment.software_env.software_env import SoftwareEnv
from metagpt.environment.android.android_env import AndroidEnv
from metagpt.environment.werewolf.werewolf_env import WerewolfEnv
from metagpt.environment.stanford_town.stanford_town_env import StanfordTownEnv
from metagpt.environment.software.software_env import SoftwareEnv
__all__ = ["AndroidEnv", "MincraftExtEnv", "WerewolfEnv", "StanfordTownEnv", "SoftwareEnv", "Environment"]
__all__ = ["AndroidEnv", "WerewolfEnv", "StanfordTownEnv", "SoftwareEnv", "Environment"]


@ -4,7 +4,7 @@
from pydantic import Field
from metagpt.environment.android_env.android_ext_env import AndroidExtEnv
from metagpt.environment.android.android_ext_env import AndroidExtEnv
from metagpt.environment.base_env import Environment


@ -8,8 +8,9 @@ from typing import Any, Optional
from pydantic import Field
from metagpt.environment.android_env.const import ADB_EXEC_FAIL
from metagpt.environment.android.const import ADB_EXEC_FAIL
from metagpt.environment.base_env import ExtEnv, mark_as_readable, mark_as_writeable
from metagpt.environment.base_env_space import BaseEnvAction, BaseEnvObsParams
class AndroidExtEnv(ExtEnv):
@ -19,6 +20,20 @@ class AndroidExtEnv(ExtEnv):
width: int = Field(default=720, description="device screen width")
height: int = Field(default=1080, description="device screen height")
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
pass
def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
pass
def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
pass
def __init__(self, **data: Any):
super().__init__(**data)
if data.get("device_id"):


@ -3,9 +3,12 @@
# @Desc : base env of executing environment
import asyncio
from abc import abstractmethod
from enum import Enum
from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Set, Union
from gymnasium import spaces
from gymnasium.core import ActType, ObsType
from pydantic import BaseModel, ConfigDict, Field, SerializeAsAny, model_validator
from metagpt.context import Context
@ -14,6 +17,7 @@ from metagpt.environment.api.env_api import (
ReadAPIRegistry,
WriteAPIRegistry,
)
from metagpt.environment.base_env_space import BaseEnvAction, BaseEnvObsParams
from metagpt.logs import logger
from metagpt.schema import Message
from metagpt.utils.common import get_function_schema, is_coroutine_func, is_send_to
@ -26,7 +30,7 @@ class EnvType(Enum):
ANDROID = "Android"
GYM = "Gym"
WEREWOLF = "Werewolf"
MINCRAFT = "Mincraft"
MINECRAFT = "Minecraft"
STANFORDTOWN = "StanfordTown"
@ -49,6 +53,11 @@ def mark_as_writeable(func):
class ExtEnv(BaseModel):
"""External Env to integrate actual game environment"""
model_config = ConfigDict(arbitrary_types_allowed=True)
action_space: spaces.Space[ActType] = Field(default_factory=spaces.Space, exclude=True)
observation_space: spaces.Space[ObsType] = Field(default_factory=spaces.Space, exclude=True)
def _check_api_exist(self, rw_api: Optional[str] = None):
if not rw_api:
raise ValueError(f"{rw_api} not exists")
@ -61,39 +70,56 @@ class ExtEnv(BaseModel):
else:
return env_write_api_registry.get_apis()
async def observe(self, env_action: Union[str, EnvAPIAbstract]):
async def read_from_api(self, env_action: Union[str, EnvAPIAbstract]):
"""get observation from particular api of ExtEnv"""
if isinstance(env_action, str):
read_api = env_read_api_registry.get(api_name=env_action)["func"]
self._check_api_exist(read_api)
if is_coroutine_func(read_api):
res = await read_api(self)
env_read_api = env_read_api_registry.get(api_name=env_action)["func"]
self._check_api_exist(env_read_api)
if is_coroutine_func(env_read_api):
res = await env_read_api(self)
else:
res = read_api(self)
res = env_read_api(self)
elif isinstance(env_action, EnvAPIAbstract):
read_api = env_read_api_registry.get(api_name=env_action.api_name)["func"]
self._check_api_exist(read_api)
if is_coroutine_func(read_api):
res = await read_api(self, *env_action.args, **env_action.kwargs)
env_read_api = env_read_api_registry.get(api_name=env_action.api_name)["func"]
self._check_api_exist(env_read_api)
if is_coroutine_func(env_read_api):
res = await env_read_api(self, *env_action.args, **env_action.kwargs)
else:
res = read_api(self, *env_action.args, **env_action.kwargs)
res = env_read_api(self, *env_action.args, **env_action.kwargs)
return res
async def step(self, env_action: Union[str, Message, EnvAPIAbstract, list[EnvAPIAbstract]]):
async def write_thru_api(self, env_action: Union[str, Message, EnvAPIAbstract, list[EnvAPIAbstract]]):
"""execute through particular api of ExtEnv"""
res = None
if isinstance(env_action, Message):
self.publish_message(env_action)
elif isinstance(env_action, EnvAPIAbstract):
write_api = env_write_api_registry.get(env_action.api_name)["func"]
self._check_api_exist(write_api)
if is_coroutine_func(write_api):
res = await write_api(self, *env_action.args, **env_action.kwargs)
env_write_api = env_write_api_registry.get(env_action.api_name)["func"]
self._check_api_exist(env_write_api)
if is_coroutine_func(env_write_api):
res = await env_write_api(self, *env_action.args, **env_action.kwargs)
else:
res = write_api(self, *env_action.args, **env_action.kwargs)
res = env_write_api(self, *env_action.args, **env_action.kwargs)
return res
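Both `read_from_api` and `write_thru_api` share the same dispatch rule: look the handler up in a registry, then await it if it is a coroutine function, otherwise call it directly. A minimal stdlib sketch of that pattern (the registry and API names here are hypothetical stand-ins, not MetaGPT's actual registries):

```python
import asyncio
import inspect

# Hypothetical registry mapping api names to callables (sync or async).
registry = {}

def register(name):
    def deco(func):
        registry[name] = func
        return func
    return deco

@register("sync_ping")
def sync_ping():
    return "pong"

@register("async_ping")
async def async_ping():
    return "pong"

async def call_api(name):
    func = registry.get(name)
    if func is None:
        raise ValueError(f"{name} does not exist")
    # Same branching ExtEnv uses: await coroutines, call plain functions.
    if inspect.iscoroutinefunction(func):
        return await func()
    return func()

result_sync = asyncio.run(call_api("sync_ping"))
result_async = asyncio.run(call_api("async_ping"))
```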
@abstractmethod
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
"""Implement this to get init observation"""
@abstractmethod
def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
"""Implement this if you want to get partial observation from the env"""
@abstractmethod
def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
"""Implement this to feed an action and then get a new observation from the env"""
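The three new abstract methods follow the Gymnasium contract: `reset` returns `(observation, info)` and `step` returns `(observation, reward, terminated, truncated, info)`. A stand-alone toy env sketching that contract (the counter env is an invented example, not part of MetaGPT):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class TinyEnv(ABC):
    @abstractmethod
    def reset(self, *, seed: Optional[int] = None, options: Optional[dict] = None): ...
    @abstractmethod
    def observe(self, obs_params: Any = None): ...
    @abstractmethod
    def step(self, action: Any): ...

class CounterEnv(TinyEnv):
    """Toy env: the observation is a counter, each step increments it."""
    def __init__(self):
        self.count = 0

    def reset(self, *, seed=None, options=None):
        self.count = 0
        return {"count": self.count}, {}  # observation, info

    def observe(self, obs_params=None):
        return {"count": self.count}

    def step(self, action):
        self.count += 1
        terminated = self.count >= 3
        # observation, reward, terminated, truncated, info
        return {"count": self.count}, 1.0, terminated, False, {}

env = CounterEnv()
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(None)
```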
class Environment(ExtEnv):
"""Environment, hosting a batch of roles. Roles can publish messages to the environment and be observed by other roles
@ -108,6 +134,20 @@ class Environment(ExtEnv):
history: str = "" # For debug
context: Context = Field(default_factory=Context, exclude=True)
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
pass
def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
pass
def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
pass
@model_validator(mode="after")
def init_roles(self):
self.add_roles(self.roles.values())


@ -0,0 +1,33 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc :
from enum import IntEnum
from pydantic import BaseModel, ConfigDict, Field
class BaseEnvActionType(IntEnum):
# e.g. NONE = 0  # no action to run, just get observation
pass
class BaseEnvAction(BaseModel):
"""env action type and its related params of action functions/apis"""
model_config = ConfigDict(arbitrary_types_allowed=True)
action_type: int = Field(default=0, description="action type")
class BaseEnvObsType(IntEnum):
# e.g. NONE = 0  # get whole observation from env
pass
class BaseEnvObsParams(BaseModel):
"""observation params for different EnvObsType to get its observe result"""
model_config = ConfigDict(arbitrary_types_allowed=True)
obs_type: int = Field(default=0, description="observation type")
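Concrete environments are expected to enumerate their own action and observation types on top of these bases, as the Stanford Town env does with its `EnvActionType`/`EnvObsType`. A stdlib-only sketch of that pattern (the members and the plain-dict action are illustrative, standing in for the pydantic models):

```python
from enum import IntEnum

# Concrete envs subclass the base IntEnum types with real members.
class EnvActionType(IntEnum):
    NONE = 0            # no action to run, just get observation
    ADD_TILE_EVENT = 1  # add an event triple to a tile

class EnvObsType(IntEnum):
    NONE = 0      # whole observation
    TILE_NBR = 1  # neighbors of a given tile

# A plain dict stands in for the pydantic BaseEnvAction model here:
# an action is an action_type plus its parameters.
action = {"action_type": EnvActionType.ADD_TILE_EVENT, "coord": (3, 4)}
```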


@ -4,8 +4,8 @@
from metagpt.const import METAGPT_ROOT
# For Mincraft Game Agent
MC_CKPT_DIR = METAGPT_ROOT / "data/mincraft/ckpt"
# For Minecraft Game Agent
MC_CKPT_DIR = METAGPT_ROOT / "data/minecraft/ckpt"
MC_LOG_DIR = METAGPT_ROOT / "logs"
MC_DEFAULT_WARMUP = {
"context": 15,


@ -1,6 +1,6 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : MG Mincraft Env
# @Desc : MG Minecraft Env
# refs to `voyager voyager.py`
import json
@ -8,19 +8,19 @@ import re
import time
from typing import Any, Iterable
from llama_index.vector_stores.chroma import ChromaVectorStore
from pydantic import ConfigDict, Field
from metagpt.config2 import config as CONFIG
from metagpt.environment.base_env import Environment
from metagpt.environment.mincraft_env.const import MC_CKPT_DIR
from metagpt.environment.mincraft_env.mincraft_ext_env import MincraftExtEnv
from metagpt.environment.minecraft.const import MC_CKPT_DIR
from metagpt.environment.minecraft.minecraft_ext_env import MinecraftExtEnv
from metagpt.logs import logger
from metagpt.rag.vector_stores.chroma import ChromaVectorStore
from metagpt.utils.common import load_mc_skills_code, read_json_file, write_json_file
class MincraftEnv(Environment, MincraftExtEnv):
"""MincraftEnv, including shared memory of cache and information between roles"""
class MinecraftEnv(Environment, MinecraftExtEnv):
"""MinecraftEnv, including shared memory of cache and information between roles"""
model_config = ConfigDict(arbitrary_types_allowed=True)
@ -282,7 +282,7 @@ class MincraftEnv(Environment, MincraftExtEnv):
position = event["status"]["position"]
blocks.append(block)
positions.append(position)
new_events = self.step(
new_events = self._step(
f"await givePlacedItemBack(bot, {json.dumps(blocks)}, {json.dumps(positions)})",
programs=self.programs,
)
@ -323,7 +323,7 @@ class MincraftEnv(Environment, MincraftExtEnv):
Exception: If there is an issue retrieving events.
"""
try:
self.reset(
self._reset(
options={
"mode": "soft",
"wait_ticks": 20,
@ -332,13 +332,13 @@ class MincraftEnv(Environment, MincraftExtEnv):
# difficulty = "easy" if len(self.completed_tasks) > 15 else "peaceful"
difficulty = "peaceful"
events = self.step("bot.chat(`/time set ${getNextTime()}`);\n" + f"bot.chat('/difficulty {difficulty}');")
events = self._step("bot.chat(`/time set ${getNextTime()}`);\n" + f"bot.chat('/difficulty {difficulty}');")
self.update_event(events)
return events
except Exception as e:
time.sleep(3) # wait for mineflayer to exit
# reset bot status here
events = self.reset(
events = self._reset(
options={
"mode": "hard",
"wait_ticks": 20,
@ -365,7 +365,7 @@ class MincraftEnv(Environment, MincraftExtEnv):
Exception: If there is an issue retrieving events.
"""
try:
events = self.step(
events = self._step(
code=self.code,
programs=self.programs,
)
@ -374,7 +374,7 @@ class MincraftEnv(Environment, MincraftExtEnv):
except Exception as e:
time.sleep(3) # wait for mineflayer to exit
# reset bot status here
events = self.reset(
events = self._reset(
options={
"mode": "hard",
"wait_ticks": 20,


@ -1,28 +1,29 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : The Mincraft external environment to integrate with Mincraft game
# @Desc : The Minecraft external environment to integrate with Minecraft game
# refs to `voyager bridge.py`
import json
import time
from typing import Optional
from typing import Any, Optional
import requests
from pydantic import ConfigDict, Field, model_validator
from metagpt.environment.base_env import ExtEnv, mark_as_writeable
from metagpt.environment.mincraft_env.const import (
from metagpt.environment.base_env_space import BaseEnvAction, BaseEnvObsParams
from metagpt.environment.minecraft.const import (
MC_CKPT_DIR,
MC_CORE_INVENTORY_ITEMS,
MC_CURRICULUM_OB,
MC_DEFAULT_WARMUP,
METAGPT_ROOT,
)
from metagpt.environment.mincraft_env.process_monitor import SubprocessMonitor
from metagpt.environment.minecraft.process_monitor import SubprocessMonitor
from metagpt.logs import logger
class MincraftExtEnv(ExtEnv):
class MinecraftExtEnv(ExtEnv):
model_config = ConfigDict(arbitrary_types_allowed=True)
mc_port: Optional[int] = Field(default=None)
@ -38,6 +39,20 @@ class MincraftExtEnv(ExtEnv):
server_paused: bool = Field(default=False)
warm_up: dict = Field(default=dict())
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
pass
def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
pass
def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
pass
@property
def server(self) -> str:
return f"{self.server_host}:{self.server_port}"
@ -48,7 +63,7 @@ class MincraftExtEnv(ExtEnv):
self.mineflayer = SubprocessMonitor(
commands=[
"node",
METAGPT_ROOT.joinpath("metagpt", "environment", "mincraft_env", "mineflayer", "index.js"),
METAGPT_ROOT.joinpath("metagpt", "environment", "minecraft", "mineflayer", "index.js"),
str(self.server_port),
],
name="mineflayer",
@ -115,7 +130,7 @@ class MincraftExtEnv(ExtEnv):
return res.json()
@mark_as_writeable
def reset(self, *, seed=None, options=None) -> dict:
def _reset(self, *, seed=None, options=None) -> dict:
if options is None:
options = {}
if options.get("inventory", {}) and options.get("mode", "hard") != "hard":
@ -145,7 +160,7 @@ class MincraftExtEnv(ExtEnv):
return json.loads(returned_data)
@mark_as_writeable
def step(self, code: str, programs: str = "") -> dict:
def _step(self, code: str, programs: str = "") -> dict:
if not self.has_reset:
raise RuntimeError("Environment has not been reset yet")
self.check_process()


@ -0,0 +1,105 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc :
from typing import Any, Optional, Union
import numpy as np
import numpy.typing as npt
from gymnasium import spaces
from pydantic import ConfigDict, Field, field_validator
from metagpt.environment.base_env_space import (
BaseEnvAction,
BaseEnvActionType,
BaseEnvObsParams,
BaseEnvObsType,
)
class EnvActionType(BaseEnvActionType):
NONE = 0 # no action to run, just get observation
ADD_TILE_EVENT = 1 # Add an event triple to a tile
RM_TILE_EVENT = 2 # Remove an event triple from a tile
TURN_TILE_EVENT_IDLE = 3 # Turn an event triple from a tile into idle
RM_TITLE_SUB_EVENT = 4 # Remove an event triple that has the input subject from a tile
class EnvAction(BaseEnvAction):
"""env action type and its related params of action functions/apis"""
model_config = ConfigDict(arbitrary_types_allowed=True)
action_type: int = Field(default=EnvActionType.NONE, description="action type")
coord: npt.NDArray[np.int64] = Field(
default_factory=lambda: np.zeros(2, dtype=np.int64), description="tile coordinate"
)
subject: str = Field(default="", description="subject name of first element in event")
event: tuple[str, Optional[str], Optional[str], Optional[str]] = Field(
default=("", None, None, None), description="tile event"
)
@field_validator("coord", mode="before")
@classmethod
def check_coord(cls, coord) -> npt.NDArray[np.int64]:
if not isinstance(coord, np.ndarray):
return np.array(coord)
return coord
class EnvObsType(BaseEnvObsType):
"""get part observation with specific params"""
NONE = 0 # get whole observation from env
GET_TITLE = 1 # get the tile detail dictionary with given tile coord
TILE_PATH = 2 # get the tile address with given tile coord
TILE_NBR = 3 # get the neighbors of given tile coord and its vision radius
class EnvObsParams(BaseEnvObsParams):
"""observation params for different EnvObsType"""
model_config = ConfigDict(arbitrary_types_allowed=True)
obs_type: int = Field(default=EnvObsType.NONE, description="observation type")
coord: npt.NDArray[np.int64] = Field(
default_factory=lambda: np.zeros(2, dtype=np.int64), description="tile coordinate"
)
level: str = Field(default="", description="different level of tile")
vision_radius: int = Field(default=0, description="the vision radius of current tile")
@field_validator("coord", mode="before")
@classmethod
def check_coord(cls, coord) -> npt.NDArray[np.int64]:
if not isinstance(coord, np.ndarray):
return np.array(coord)
return coord
EnvObsValType = Union[list[list[str]], dict[str, set[tuple[int, int]]], list[list[dict[str, Any]]]]
def get_observation_space() -> spaces.Dict:
# a placeholder space: Discrete(2) entries stand in for the complex nested observation values
space = spaces.Dict(
{"collision_maze": spaces.Discrete(2), "tiles": spaces.Discrete(2), "address_tiles": spaces.Discrete(2)}
)
return space
def get_action_space(maze_shape: tuple[int, int]) -> spaces.Dict:
"""The fields defined by the space correspond to the input parameters of the action except `action_type`"""
space = spaces.Dict(
{
"action_type": spaces.Discrete(len(EnvActionType)),
"coord": spaces.Box(
np.array([0, 0], dtype=np.int64), np.array([maze_shape[0], maze_shape[1]], dtype=np.int64)
), # coord of the tile
"subject": spaces.Text(256), # the first element of a tile event
"event": spaces.Tuple(
(spaces.Text(256), spaces.Text(256), spaces.Text(256), spaces.Text(256))
), # event is a tuple of four str
}
)
return space
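The `Box` bound on `coord` constrains tile coordinates to the maze shape. A hedged, stdlib-only sketch of the equivalent bounds check (gymnasium itself would do this via `space.contains`; the function name here is invented):

```python
def coord_in_space(coord, maze_shape):
    # Mirrors Box(low=[0, 0], high=[maze_width, maze_height]) on `coord`.
    (x, y), (w, h) = coord, maze_shape
    return 0 <= x <= w and 0 <= y <= h

ok = coord_in_space((10, 20), (140, 100))
out_of_bounds = coord_in_space((150, 20), (140, 100))
```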


@ -0,0 +1,10 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : MG StanfordTown Env
from metagpt.environment.base_env import Environment
from metagpt.environment.stanford_town.stanford_town_ext_env import StanfordTownExtEnv
class StanfordTownEnv(StanfordTownExtEnv, Environment):
pass


@ -5,11 +5,20 @@
import math
from pathlib import Path
from typing import Optional, Tuple
from typing import Any, Optional
from pydantic import ConfigDict, Field, model_validator
from metagpt.environment.base_env import ExtEnv, mark_as_readable, mark_as_writeable
from metagpt.environment.stanford_town.env_space import (
EnvAction,
EnvActionType,
EnvObsParams,
EnvObsType,
EnvObsValType,
get_action_space,
get_observation_space,
)
from metagpt.utils.common import read_csv_to_list, read_json_file
@ -197,15 +206,82 @@ class StanfordTownExtEnv(ExtEnv):
else:
address_tiles[add] = set([(j, i)])
values["address_tiles"] = address_tiles
values["action_space"] = get_action_space((maze_width, maze_height))
values["observation_space"] = get_observation_space()
return values
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, EnvObsValType], dict[str, Any]]:
"""reset env and get the init observation
Return results corresponding to `observation, info`
"""
super().reset(seed=seed, options=options)
obs = self._get_obs()
return obs, {}
def _get_obs(self) -> dict[str, EnvObsValType]:
"""Get observation"""
return {
"collision_maze": self.get_collision_maze(),
"tiles": self.tiles,
"address_tiles": self.get_address_tiles(),
}
def observe(self, obs_params: Optional[EnvObsParams] = None) -> Any:
"""Get partial or full observation from the env"""
obs_type = obs_params.obs_type if obs_params else EnvObsType.NONE
if obs_type == EnvObsType.NONE:
obs = self._get_obs()
elif obs_type == EnvObsType.GET_TITLE:
obs = self.access_tile(tile=obs_params.coord)
elif obs_type == EnvObsType.TILE_PATH:
obs = self.get_tile_path(tile=obs_params.coord, level=obs_params.level)
elif obs_type == EnvObsType.TILE_NBR:
obs = self.get_nearby_tiles(tile=obs_params.coord, vision_r=obs_params.vision_radius)
return obs
def step(self, action: EnvAction) -> tuple[dict[str, EnvObsValType], float, bool, bool, dict[str, Any]]:
"""Execute action and then return observation
Return results corresponding to `observation, reward, terminated, truncated, info`
"""
terminated = False
try:
self._execute_env_action(action)
except Exception:
terminated = True
obs = self._get_obs()
ret = (obs, 1.0, terminated, False, {})
return ret
def _execute_env_action(self, action: EnvAction):
action_type = action.action_type
if action_type == EnvActionType.NONE:
pass
elif action_type == EnvActionType.ADD_TILE_EVENT:
self.add_event_from_tile(curr_event=action.event, tile=action.coord)
elif action_type == EnvActionType.RM_TILE_EVENT:
self.remove_event_from_tile(curr_event=action.event, tile=action.coord)
elif action_type == EnvActionType.TURN_TILE_EVENT_IDLE:
self.turn_event_from_tile_idle(curr_event=action.event, tile=action.coord)
elif action_type == EnvActionType.RM_TITLE_SUB_EVENT:
self.remove_subject_events_from_tile(subject=action.subject, tile=action.coord)
def turn_coordinate_to_tile(self, px_coordinate: tuple[int, int]) -> tuple[int, int]:
"""
Turns a pixel coordinate to a tile coordinate.
"""
x = math.ceil(px_coordinate[0] / self.sq_tile_size)
y = math.ceil(px_coordinate[1] / self.sq_tile_size)
return (x, y)
return x, y
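`turn_coordinate_to_tile` is a ceiling division: any pixel inside a tile maps to that tile's 1-based index. A small worked sketch (the 32-pixel tile size is an assumed example, not the env's actual `sq_tile_size`):

```python
import math

def turn_coordinate_to_tile(px_coordinate, sq_tile_size=32):
    # Each tile covers sq_tile_size pixels per axis; ceil maps any pixel
    # inside a tile to that tile's 1-based index.
    x = math.ceil(px_coordinate[0] / sq_tile_size)
    y = math.ceil(px_coordinate[1] / sq_tile_size)
    return x, y

tile = turn_coordinate_to_tile((65, 32))  # pixel x=65 falls in tile 3, y=32 in tile 1
```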
@mark_as_readable
def get_collision_maze(self) -> list:
@ -316,10 +392,6 @@ class StanfordTownExtEnv(ExtEnv):
nearby_tiles += [(i, j)]
return nearby_tiles
@mark_as_writeable
def add_tiles_event(self, pt_y: int, pt_x: int, event: Tuple[str, str, str, str]):
self.tiles[pt_y][pt_x]["events"].add(event)
@mark_as_writeable
def add_event_from_tile(self, curr_event: tuple[str], tile: tuple[int, int]) -> None:
"""


@ -1,12 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : MG StanfordTown Env
from metagpt.environment.base_env import Environment
from metagpt.environment.stanford_town_env.stanford_town_ext_env import (
StanfordTownExtEnv,
)
class StanfordTownEnv(Environment, StanfordTownExtEnv):
pass


@ -5,7 +5,7 @@
from pydantic import Field
from metagpt.environment.base_env import Environment
from metagpt.environment.werewolf_env.werewolf_ext_env import WerewolfExtEnv
from metagpt.environment.werewolf.werewolf_ext_env import WerewolfExtEnv
from metagpt.logs import logger
from metagpt.schema import Message


@ -5,11 +5,12 @@
import random
from collections import Counter
from enum import Enum
from typing import Callable, Optional
from typing import Any, Callable, Optional
from pydantic import ConfigDict, Field
from metagpt.environment.base_env import ExtEnv, mark_as_readable, mark_as_writeable
from metagpt.environment.base_env_space import BaseEnvAction, BaseEnvObsParams
from metagpt.logs import logger
@ -128,6 +129,20 @@ class WerewolfExtEnv(ExtEnv):
player_poisoned: Optional[str] = Field(default=None)
player_current_dead: list[str] = Field(default=[])
def reset(
self,
*,
seed: Optional[int] = None,
options: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
pass
def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
pass
def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
pass
@property
def living_players(self) -> list[str]:
player_names = []

metagpt/ext/__init__.py Normal file

@ -0,0 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc :


@ -0,0 +1,36 @@
## Stanford Town Game
### Pre-Description
To connect with the frontend of GA ([generative_agents](https://github.com/joonspk-research/generative_agents)) without changing its code, pass `temp_storage_path` pointing at the `temp_storage` path of `generative_agents` when starting `run_st_game.py`. For example:
`python3 run_st_game.py --temp_storage_path path/to/ga/temp_storage xxx`
Or change the paths in `const.py` as below:
```
STORAGE_PATH = EXAMPLE_PATH.joinpath("storage")
TEMP_STORAGE_PATH = EXAMPLE_PATH.joinpath("temp_storage")
# updated
STORAGE_PATH = Path("{path/to/ga/storage}")
TEMP_STORAGE_PATH = Path("{path/to/ga/temp_storage}")
```
This connects the simulation data without changing GA's code; otherwise, the GA code would have to be modified to fit MetaGPT's output path.
If you don't want to start from scratch, copy one of the simulation directories under `generative_agents/environment/frontend_server/storage/` to `examples/stanford_town/storage`, and pick a directory name as the `fork_sim_code`.
### Backend service startup
The execution entry is `python3 run_st_game.py "Host a open lunch party at 13:00 pm" "base_the_ville_isabella_maria_klaus" "test_sim" 10`
or
`python3 run_st_game.py "Host a open lunch party at 13:00 pm" "base_the_ville_isabella_maria_klaus" "test_sim" 10 --temp_storage_path path/to/ga/temp_storage`
`idea` is the user's message to the first agent; it spreads from that agent through the town, so you can observe whether the agents ultimately host or join the event.
### Frontend service startup
Enter the `generative_agents` project folder.
Enter `environment/frontend_server` and run `python3 manage.py runserver` to start the frontend service.
Visit `http://localhost:8000/simulator_home` to view the current simulation.
## Appreciation
This reproduction is based on [generative_agents](https://github.com/joonspk-research/generative_agents); we gratefully acknowledge the original work.


@ -0,0 +1,35 @@
## Stanford Town Game
### Pre-Description
To connect with the frontend of GA ([generative_agents](https://github.com/joonspk-research/generative_agents)) without changing its code, pass `temp_storage_path` pointing at the `temp_storage` path of `generative_agents` when starting `run_st_game.py`. For example:
`python3 run_st_game.py --temp_storage_path path/to/ga/temp_storage xxx`
Or update the following in `const.py`:
```
STORAGE_PATH = EXAMPLE_PATH.joinpath("storage")
TEMP_STORAGE_PATH = EXAMPLE_PATH.joinpath("temp_storage")
# update to
STORAGE_PATH = Path("{path/to/ga/storage}")
TEMP_STORAGE_PATH = Path("{path/to/ga/temp_storage}")
```
This connects the simulation data without changing GA's code; otherwise, the GA code would have to be modified to fit MetaGPT's output path.
If you don't want to start from scratch, copy one of the simulation directories under `generative_agents/environment/frontend_server/storage/` to `examples/stanford_town/storage`, and pick a directory name as the `fork_sim_code`.
### Backend service startup
The entry point is: `python3 run_st_game.py "Host a open lunch party at 13:00 pm" "base_the_ville_isabella_maria_klaus" "test_sim" 10`
or
`python3 run_st_game.py "Host a open lunch party at 13:00 pm" "base_the_ville_isabella_maria_klaus" "test_sim" 10 --temp_storage_path path/to/ga/temp_storage`
`idea` is the user's message to the first agent; it spreads from that agent through the town, so you can observe whether the agents ultimately host or join the event.
### Frontend service startup
Enter the `generative_agents` project folder.
Enter `environment/frontend_server` and run `python3 manage.py runserver` to start the frontend service.
Visit `http://localhost:8000/simulator_home` to view the current simulation.
## Appreciation
This reproduction is based on [generative_agents](https://github.com/joonspk-research/generative_agents); we gratefully acknowledge the original work.


@ -0,0 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : stanford town implement


@ -0,0 +1,3 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc :


@ -0,0 +1,39 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : summarize relationship in an agent chat
from metagpt.ext.stanford_town.actions.st_action import STAction
from metagpt.logs import logger
class AgentChatSumRel(STAction):
name: str = "AgentChatSumRel"
def _func_validate(self, llm_resp: str, prompt: str) -> bool:
resp = False
try:
_ = llm_resp.split('"')[0].strip()
resp = True
except Exception:
pass
return resp
def _func_cleanup(self, llm_resp: str, prompt: str) -> str:
return llm_resp.split('"')[0].strip()
def _func_fail_default_resp(self) -> str:
pass
async def run(self, init_role: "STRole", target_role: "STRole", statements: str) -> str:
def create_prompt_input(init_role: "STRole", target_role: "STRole", statements: str) -> str:
prompt_input = [statements, init_role.name, target_role.name]
return prompt_input
prompt_input = create_prompt_input(init_role, target_role, statements)
prompt = self.generate_prompt_with_tmpl_filename(prompt_input, "summarize_chat_relationship_v2.txt")
example_output = "Jane Doe is working on a project"
special_instruction = "The output should be a string that responds to the question."
output = await self._run_gpt35(prompt, example_output, special_instruction)
logger.info(f"Role: {init_role.name} Action: {self.cls_name} output: {output}")
return output
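The `_func_validate`/`_func_cleanup` pair above simply trims the LLM response at the first double quote. A stand-alone sketch of that parsing (function names are abbreviations of the class methods, not MetaGPT APIs):

```python
def func_cleanup(llm_resp: str) -> str:
    # Keep only the text before the first double quote, whitespace-trimmed.
    return llm_resp.split('"')[0].strip()

def func_validate(llm_resp: str) -> bool:
    # The response is considered valid as long as cleanup does not raise.
    try:
        func_cleanup(llm_resp)
        return True
    except Exception:
        return False

cleaned = func_cleanup('Jane Doe is working on a project" trailing tokens')
valid = func_validate("any response")
```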


@ -0,0 +1,97 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : decide to talk to another role, return yes or no
from metagpt.ext.stanford_town.actions.st_action import STAction
from metagpt.logs import logger
class DecideToTalk(STAction):
name: str = "DecideToTalk"
def _func_validate(self, llm_resp: str, prompt: str) -> bool:
resp = False
try:
if llm_resp.split("Answer in yes or no:")[-1].strip().lower() in ["yes", "no"]:
resp = True
except Exception:
pass
return resp
def _func_cleanup(self, llm_resp: str, prompt: str) -> str:
return llm_resp.split("Answer in yes or no:")[-1].strip().lower()
def _func_fail_default_resp(self) -> str:
return "yes"
async def run(self, init_role: "STRole", target_role: "STRole", retrieved: dict, *args, **kwargs) -> bool:
"""Run action"""
def create_prompt_input(init_role: "STRole", target_role: "STRole", retrieved: dict) -> str:
scratch = init_role.rc.scratch
target_scratch = target_role.rc.scratch
last_chat = init_role.rc.memory.get_last_chat(target_role.name)
last_chatted_time = ""
last_chat_about = ""
if last_chat:
last_chatted_time = last_chat.created.strftime("%B %d, %Y, %H:%M:%S")
last_chat_about = last_chat.description
context = ""
for c_node in retrieved["events"]:
curr_desc = c_node.description.split(" ")
curr_desc[2:3] = ["was"]
curr_desc = " ".join(curr_desc)
context += f"{curr_desc}. "
context += "\n"
for c_node in retrieved["thoughts"]:
context += f"{c_node.description}. "
curr_time = scratch.curr_time.strftime("%B %d, %Y, %H:%M:%S %p")
init_act_desc = scratch.act_description
if "(" in init_act_desc:
init_act_desc = init_act_desc.split("(")[-1][:-1]
if len(scratch.planned_path) == 0 and "waiting" not in init_act_desc:
init_p_desc = f"{init_role.name} is already {init_act_desc}"
elif "waiting" in init_act_desc:
init_p_desc = f"{init_role.name} is {init_act_desc}"
else:
init_p_desc = f"{init_role.name} is on the way to {init_act_desc}"
target_act_desc = target_scratch.act_description
if "(" in target_act_desc:
target_act_desc = target_act_desc.split("(")[-1][:-1]
if len(target_scratch.planned_path) == 0 and "waiting" not in target_act_desc:
target_p_desc = f"{target_role.name} is already {target_act_desc}"
elif "waiting" in target_act_desc:
target_p_desc = f"{target_role.name} is {target_act_desc}"
else:
target_p_desc = f"{target_role.name} is on the way to {target_act_desc}"
prompt_input = []
prompt_input += [context]
prompt_input += [curr_time]
prompt_input += [init_role.name]
prompt_input += [target_role.name]
prompt_input += [last_chatted_time]
prompt_input += [last_chat_about]
prompt_input += [init_p_desc]
prompt_input += [target_p_desc]
prompt_input += [init_role.name]
prompt_input += [target_role.name]
return prompt_input
prompt_input = create_prompt_input(init_role, target_role, retrieved)
prompt = self.generate_prompt_with_tmpl_filename(
prompt_input=prompt_input, tmpl_filename="decide_to_talk_v2.txt"
)
self.fail_default_resp = self._func_fail_default_resp()
output = await self._run_gpt35_max_tokens(prompt, max_tokens=20) # yes or no
result = output == "yes"
logger.info(f"Role: {init_role.name} Action: {self.cls_name} output: {result}")
return result
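`DecideToTalk` reduces the LLM output to a boolean by slicing after the final `Answer in yes or no:` marker, with `yes` as the fail-safe default. A stdlib sketch of that decision parsing (the helper name is invented):

```python
def parse_yes_no(llm_resp: str, default: str = "yes") -> bool:
    # Take whatever follows the last marker, normalized to lowercase.
    answer = llm_resp.split("Answer in yes or no:")[-1].strip().lower()
    if answer not in ("yes", "no"):
        answer = default  # mirrors _func_fail_default_resp
    return answer == "yes"

talk = parse_yes_no("...context...\nAnswer in yes or no: Yes")
no_talk = parse_yes_no("Answer in yes or no: no")
```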

Some files were not shown because too many files have changed in this diff.