Mirror of https://github.com/FoundationAgents/MetaGPT.git (synced 2026-04-25 00:36:55 +02:00)

Merge branch 'main' into reasoning

commit 08587f392f
339 changed files with 22997 additions and 1437 deletions
.gitignore (vendored, 3 lines changed)
@@ -162,6 +162,8 @@ examples/graph_store.json
examples/image__vector_store.json
examples/index_store.json
.chroma
.chroma_exp_data
.role_memory_data
*~$*
workspace/*
tmp
@@ -189,6 +191,7 @@ cov.xml
*-structure.json
*.dot
.python-version
tests/data/requirements/*.jpg
*.csv
metagpt/ext/sela/results/*
.chainlit/
Dockerfile
@@ -3,7 +3,7 @@ FROM nikolaik/python-nodejs:python3.9-nodejs20-slim

# Install Debian software needed by MetaGPT and clean up in one RUN command to reduce image size
RUN apt update &&\
-    apt install -y libgomp1 git chromium fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 --no-install-recommends &&\
+    apt install -y libgomp1 git chromium fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 --no-install-recommends file &&\
    apt clean && rm -rf /var/lib/apt/lists/*

# Install Mermaid CLI globally
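The single chained `RUN` keeps the whole install-and-clean sequence in one image layer, and `&&` makes it transactional: if any step fails, the chain stops and later steps never run. A minimal illustration of that short-circuit behavior with stand-in commands (not the actual apt calls):

```shell
step() { echo "ran:$1"; }               # stands in for a successful apt command
fail() { echo "failed:$1"; return 1; }  # stands in for a failing apt command

# Mirrors `apt update && apt install ... && apt clean`: the failing middle
# command short-circuits the chain, so "clean" never runs.
result=$(step update && fail install && step clean)
echo "$result"
```

Because the cleanup (`apt clean && rm -rf /var/lib/apt/lists/*`) happens in the same layer as the install, the package caches never get baked into the image.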
News.md (new file, 22 lines)
@@ -0,0 +1,22 @@
## Earlier news

🚀 Oct. 29, 2024: We introduced three papers: [AFLOW](https://arxiv.org/abs/2410.10762), [FACT](https://arxiv.org/abs/2410.21012), and [SELA](https://arxiv.org/abs/2410.17238); check the [code](examples)!

🚀 Mar. 29, 2024: [v0.8.0](https://github.com/geekan/MetaGPT/releases/tag/v0.8.0) released. Now you can use Data Interpreter ([arxiv](https://arxiv.org/abs/2402.18679), [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/), [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)) via the pypi package import. Meanwhile, we integrated the RAG module and added support for several new LLMs.

🚀 Feb. 08, 2024: [v0.7.0](https://github.com/geekan/MetaGPT/releases/tag/v0.7.0) released, supporting the assignment of different LLMs to different Roles. We also introduced [Data Interpreter](https://github.com/geekan/MetaGPT/blob/main/examples/di/README.md), a powerful agent capable of solving a wide range of real-world problems.

🚀 Jan. 16, 2024: Our paper [MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework](https://openreview.net/forum?id=VtmBAGCN7o) was accepted for **oral presentation (top 1.2%)** at ICLR 2024, **ranking #1** in the LLM-based Agent category.

🚀 Jan. 03, 2024: [v0.6.0](https://github.com/geekan/MetaGPT/releases/tag/v0.6.0) released. New features include serialization, an upgraded OpenAI package, support for multiple LLMs, and a [minimal example for debate](https://github.com/geekan/MetaGPT/blob/main/examples/debate_simple.py).

🚀 Dec. 15, 2023: [v0.5.0](https://github.com/geekan/MetaGPT/releases/tag/v0.5.0) released, introducing experimental features such as incremental development, multilingual support, and multiple programming languages.

🔥 Nov. 08, 2023: MetaGPT was selected for [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html).

🔥 Sep. 01, 2023: MetaGPT topped GitHub Trending Monthly for the **17th time**, in August 2023.

🌟 Jun. 30, 2023: MetaGPT is now open source.

🌟 Apr. 24, 2023: First line of MetaGPT code committed.
README.md (88 lines changed)
@@ -27,30 +27,16 @@ # MetaGPT: The Multi-Agent Framework
</p>

## News

🚀 Feb. 19, 2025: Today we are officially launching our natural language programming product: MGX (MetaGPT X) - the world's first AI agent development team. [Official website](https://mgx.dev/) [Twitter](https://x.com/MetaGPT_/status/1892199535130329356)

🚀 Mar. 4, 2025: 🎉 [mgx.dev](https://mgx.dev/) is the #1 Product of the Day on @ProductHunt! 🏆

🚀 Feb. 19, 2025: Today we are officially launching our natural language programming product: [MGX (MetaGPT X)](https://mgx.dev/) - the world's first AI agent development team. More details on [Twitter](https://x.com/MetaGPT_/status/1892199535130329356).

🚀 Feb. 17, 2025: We introduced two papers: [SPO](https://arxiv.org/pdf/2502.06855) and [AOT](https://arxiv.org/pdf/2502.12018); check the [code](examples)!

🚀 Jan. 22, 2025: Our paper [AFlow: Automating Agentic Workflow Generation](https://openreview.net/forum?id=z5uVAKwmjf) was accepted for **oral presentation (top 1.8%)** at ICLR 2025, **ranking #2** in the LLM-based Agent category.

🚀 Oct. 29, 2024: We introduced three papers: [AFLOW](https://arxiv.org/abs/2410.10762), [FACT](https://arxiv.org/abs/2410.21012), and [SELA](https://arxiv.org/abs/2410.17238); check the [code](examples)!

🚀 Mar. 29, 2024: [v0.8.0](https://github.com/geekan/MetaGPT/releases/tag/v0.8.0) released. Now you can use Data Interpreter ([arxiv](https://arxiv.org/abs/2402.18679), [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/), [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)) via the pypi package import. Meanwhile, we integrated the RAG module and added support for several new LLMs.

🚀 Feb. 08, 2024: [v0.7.0](https://github.com/geekan/MetaGPT/releases/tag/v0.7.0) released, supporting the assignment of different LLMs to different Roles. We also introduced [Data Interpreter](https://github.com/geekan/MetaGPT/blob/main/examples/di/README.md), a powerful agent capable of solving a wide range of real-world problems.

🚀 Jan. 16, 2024: Our paper [MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework](https://openreview.net/forum?id=VtmBAGCN7o) was accepted for **oral presentation (top 1.2%)** at ICLR 2024, **ranking #1** in the LLM-based Agent category.

🚀 Jan. 03, 2024: [v0.6.0](https://github.com/geekan/MetaGPT/releases/tag/v0.6.0) released. New features include serialization, an upgraded OpenAI package, support for multiple LLMs, and a [minimal example for debate](https://github.com/geekan/MetaGPT/blob/main/examples/debate_simple.py).

🚀 Dec. 15, 2023: [v0.5.0](https://github.com/geekan/MetaGPT/releases/tag/v0.5.0) released, introducing experimental features such as incremental development, multilingual support, and multiple programming languages.

🔥 Nov. 08, 2023: MetaGPT was selected for [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html).

🔥 Sep. 01, 2023: MetaGPT topped GitHub Trending Monthly for the **17th time**, in August 2023.

🌟 Jun. 30, 2023: MetaGPT is now open source.

🌟 Apr. 24, 2023: First line of MetaGPT code committed.

👉👉 [Earlier news](News.md)

## Software Company as Multi-Agent System
@@ -75,6 +61,8 @@ # or `pip install --upgrade git+https://github.com/geekan/MetaGPT.git`
# or `git clone https://github.com/geekan/MetaGPT && cd MetaGPT && pip install --upgrade -e .`
```

**Install [node](https://nodejs.org/en/download) and [pnpm](https://pnpm.io/installation#using-npm) before use.**

For detailed installation guidance, please refer to [cli_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-stable-version) or [docker_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-with-docker).
@@ -107,7 +95,9 @@ ### Usage
or use it as a library:

```python
-from metagpt.software_company import generate_repo, ProjectRepo
+from metagpt.software_company import generate_repo
+from metagpt.utils.project_repo import ProjectRepo

repo: ProjectRepo = generate_repo("Create a 2048 game")  # or ProjectRepo("<path>")
print(repo)  # prints the repo structure with files
```
@@ -173,7 +163,7 @@ ## Citation

To stay updated with the latest research and development, follow [@MetaGPT_](https://twitter.com/MetaGPT_) on Twitter.

-To cite [MetaGPT](https://openreview.net/forum?id=VtmBAGCN7o) or [Data Interpreter](https://arxiv.org/abs/2402.18679) in publications, please use the following BibTeX entries.
+To cite [MetaGPT](https://openreview.net/forum?id=VtmBAGCN7o) in publications, please use the following BibTeX entry.

```bibtex
@inproceedings{hong2024metagpt,
@@ -183,54 +173,6 @@ ## Citation
  year={2024},
  url={https://openreview.net/forum?id=VtmBAGCN7o}
}
@misc{teng2025atom,
  title={Atom of Thoughts for Markov LLM Test-Time Scaling},
  author={Fengwei Teng and Zhaoyang Yu and Quan Shi and Jiayi Zhang and Chenglin Wu and Yuyu Luo},
  year={2025},
  eprint={2502.12018},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.12018},
}
@misc{xiang2025self,
  title={Self-Supervised Prompt Optimization},
  author={Jinyu Xiang and Jiayi Zhang and Zhaoyang Yu and Fengwei Teng and Jinhao Tu and Xinbing Liang and Sirui Hong and Chenglin Wu and Yuyu Luo},
  year={2025},
  eprint={2502.06855},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.06855},
}
@inproceedings{wang2025fact,
  title={FACT: Examining the Effectiveness of Iterative Context Rewriting for Multi-fact Retrieval},
  author={Jinlin Wang and Suyuchen Wang and Ziwen Xia and Sirui Hong and Yun Zhu and Bang Liu and Chenglin Wu},
  booktitle={The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL},
  year={2025},
  url={https://openreview.net/forum?id=VXOircx5h3}
}
@misc{chi2024sela,
  title={SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning},
  author={Yizhou Chi and Yizhang Lin and Sirui Hong and Duyi Pan and Yaying Fei and Guanghao Mei and Bangbang Liu and Tianqi Pang and Jacky Kwok and Ceyao Zhang and Bang Liu and Chenglin Wu},
  year={2024},
  eprint={2410.17238},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2410.17238},
}
@inproceedings{zhang2025aflow,
  title={{AF}low: Automating Agentic Workflow Generation},
  author={Jiayi Zhang and Jinyu Xiang and Zhaoyang Yu and Fengwei Teng and Xiong-Hui Chen and Jiaqi Chen and Mingchen Zhuge and Xin Cheng and Sirui Hong and Jinlin Wang and Bingnan Zheng and Bang Liu and Yuyu Luo and Chenglin Wu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=z5uVAKwmjf}
}
@misc{hong2024data,
  title={Data Interpreter: An LLM Agent For Data Science},
  author={Sirui Hong and Yizhang Lin and Bang Liu and Bangbang Liu and Binhao Wu and Danyang Li and Jiaqi Chen and Jiayi Zhang and Jinlin Wang and Li Zhang and Lingyao Zhang and Min Yang and Mingchen Zhuge and Taicheng Guo and Tuo Zhou and Wei Tao and Wenyi Wang and Xiangru Tang and Xiangtao Lu and Xiawu Zheng and Xinbing Liang and Yaying Fei and Yuheng Cheng and Zongze Xu and Chenglin Wu},
  year={2024},
  eprint={2402.18679},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2402.18679},
}
```

For more work, please refer to [Academic Work](academic_work.md).
60
academic_work.md
Normal file
60
academic_work.md
Normal file
@@ -0,0 +1,60 @@
```bibtex
@inproceedings{hong2024metagpt,
  title={Meta{GPT}: Meta Programming for A Multi-Agent Collaborative Framework},
  author={Sirui Hong and Mingchen Zhuge and Jonathan Chen and Xiawu Zheng and Yuheng Cheng and Jinlin Wang and Ceyao Zhang and Zili Wang and Steven Ka Shing Yau and Zijuan Lin and Liyang Zhou and Chenyu Ran and Lingfeng Xiao and Chenglin Wu and J{\"u}rgen Schmidhuber},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=VtmBAGCN7o}
}

@misc{teng2025atom,
  title={Atom of Thoughts for Markov LLM Test-Time Scaling},
  author={Fengwei Teng and Zhaoyang Yu and Quan Shi and Jiayi Zhang and Chenglin Wu and Yuyu Luo},
  year={2025},
  eprint={2502.12018},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.12018},
}
@misc{xiang2025self,
  title={Self-Supervised Prompt Optimization},
  author={Jinyu Xiang and Jiayi Zhang and Zhaoyang Yu and Fengwei Teng and Jinhao Tu and Xinbing Liang and Sirui Hong and Chenglin Wu and Yuyu Luo},
  year={2025},
  eprint={2502.06855},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.06855},
}
@inproceedings{wang2025fact,
  title={FACT: Examining the Effectiveness of Iterative Context Rewriting for Multi-fact Retrieval},
  author={Jinlin Wang and Suyuchen Wang and Ziwen Xia and Sirui Hong and Yun Zhu and Bang Liu and Chenglin Wu},
  booktitle={The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL},
  year={2025},
  url={https://openreview.net/forum?id=VXOircx5h3}
}
@misc{chi2024sela,
  title={SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning},
  author={Yizhou Chi and Yizhang Lin and Sirui Hong and Duyi Pan and Yaying Fei and Guanghao Mei and Bangbang Liu and Tianqi Pang and Jacky Kwok and Ceyao Zhang and Bang Liu and Chenglin Wu},
  year={2024},
  eprint={2410.17238},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2410.17238},
}
@inproceedings{zhang2025aflow,
  title={{AF}low: Automating Agentic Workflow Generation},
  author={Jiayi Zhang and Jinyu Xiang and Zhaoyang Yu and Fengwei Teng and Xiong-Hui Chen and Jiaqi Chen and Mingchen Zhuge and Xin Cheng and Sirui Hong and Jinlin Wang and Bingnan Zheng and Bang Liu and Yuyu Luo and Chenglin Wu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=z5uVAKwmjf}
}
@misc{hong2024data,
  title={Data Interpreter: An LLM Agent For Data Science},
  author={Sirui Hong and Yizhang Lin and Bang Liu and Bangbang Liu and Binhao Wu and Danyang Li and Jiaqi Chen and Jiayi Zhang and Jinlin Wang and Li Zhang and Lingyao Zhang and Min Yang and Mingchen Zhuge and Taicheng Guo and Tuo Zhou and Wei Tao and Wenyi Wang and Xiangru Tang and Xiangtao Lu and Xiawu Zheng and Xinbing Liang and Yaying Fei and Yuheng Cheng and Zongze Xu and Chenglin Wu},
  year={2024},
  eprint={2402.18679},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2402.18679},
}
```
@@ -20,6 +20,37 @@ embedding:
  embed_batch_size: 100
  dimensions: # output dimension of embedding model

# Role's custom configuration
roles:
  - role: "ProductManager" # role's className or role's role_id
    llm:
      api_type: "openai" # or azure / ollama / open_llm etc. Check LLMType for more options
      base_url: "YOUR_BASE_URL"
      api_key: "YOUR_API_KEY"
      proxy: "YOUR_PROXY" # for LLM API requests
      model: "gpt-4-turbo-1106"
  - role: "Architect"
    llm:
      api_type: "openai" # or azure / ollama / open_llm etc. Check LLMType for more options
      base_url: "YOUR_BASE_URL"
      api_key: "YOUR_API_KEY"
      proxy: "YOUR_PROXY" # for LLM API requests
      model: "gpt-35-turbo"
  - role: "ProjectManager"
    llm:
      api_type: "azure"
      base_url: "YOUR_BASE_URL"
      api_key: "YOUR_API_KEY"
      api_version: "YOUR_API_VERSION"
      model: "gpt-4-1106"
  - role: "Engineer"
    llm:
      api_type: "azure"
      base_url: "YOUR_BASE_URL"
      api_key: "YOUR_API_KEY"
      api_version: "YOUR_API_VERSION"
      model: "gpt-35-turbo-1106"

repair_llm_output: true # when the output is not valid JSON, try to repair it

proxy: "YOUR_PROXY" # for tools like requests, playwright, selenium, etc.
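The `roles` section above gives each role its own `llm` entry that overrides the global LLM settings. A rough sketch of that override semantics, using a hypothetical resolver and plain dicts rather than MetaGPT's actual config loader: per-role keys shadow global ones, and unlisted roles fall back to the global LLM.

```python
# Global LLM config (mirrors the top-level `llm` section of the config file).
GLOBAL_LLM = {"api_type": "openai", "model": "gpt-4-turbo-1106"}

# Per-role overrides (mirrors two of the `roles` entries above).
ROLE_LLMS = {
    "Architect": {"api_type": "openai", "model": "gpt-35-turbo"},
    "Engineer": {"api_type": "azure", "model": "gpt-35-turbo-1106"},
}

def resolve_llm(role: str) -> dict:
    # Per-role settings win; anything unspecified falls back to the global LLM.
    return {**GLOBAL_LLM, **ROLE_LLMS.get(role, {})}

print(resolve_llm("Engineer")["model"])     # gpt-35-turbo-1106
print(resolve_llm("DataAnalyst")["model"])  # gpt-4-turbo-1106 (global fallback)
```

The dict-merge pattern means a role only has to state the keys it changes; everything else inherits from the global block.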
@@ -50,6 +81,21 @@ s3:
  secure: false
  bucket: "test"

exp_pool:
  enabled: false
  enable_read: false
  enable_write: false
  persist_path: .chroma_exp_data # the persistence directory
  retrieval_type: bm25 # default is `bm25`; can be set to `chroma` for vector storage, which requires setting up embedding
  use_llm_ranker: true # default is `true`; uses an LLM reranker to get better results
  collection_name: experience_pool # when `retrieval_type` is `chroma`, `collection_name` is the collection name in chromadb
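The experience pool stores past (request, response) pairs and retrieves them for similar new requests; `retrieval_type` chooses bm25 or chroma, and `use_llm_ranker` adds an LLM reranking pass. A toy sketch of the read/write cycle, with naive token-overlap scoring standing in for bm25 (not the real implementation):

```python
from collections import Counter

pool: list[tuple[str, str]] = []  # in-memory stand-in for the persisted store

def write_exp(req: str, resp: str) -> None:
    """Record a request/response pair as an experience."""
    pool.append((req, resp))

def read_exp(query: str, top_k: int = 1) -> list[str]:
    """Return responses of the top_k most similar stored requests."""
    q = Counter(query.lower().split())
    scored = [(sum((q & Counter(req.lower().split())).values()), resp)
              for req, resp in pool]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [resp for score, resp in scored[:top_k] if score > 0]

write_exp("write a 2048 game in javascript",
          "plan: index.html, style.css, script.js")
print(read_exp("implement 2048 game"))  # ['plan: index.html, style.css, script.js']
```

The real pool additionally persists to `persist_path` and, with `retrieval_type: chroma`, embeds requests into a vector store instead of lexical matching.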

role_zero:
  enable_longterm_memory: false # whether to use long-term memory; default is `false`
  longterm_memory_persist_path: .role_memory_data # the directory to save data
  memory_k: 200 # the capacity of short-term memory
  similarity_top_k: 5 # the number of long-term memories to retrieve
  use_llm_ranker: false # whether to use an LLM reranker to get better results; default is `false`
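`memory_k` caps the role's short-term memory at the most recent K messages. A small sketch of that windowing (a simplified model, not MetaGPT's memory classes), where evicted messages become candidates for the long-term store that `similarity_top_k` later retrieves from:

```python
from collections import deque

MEMORY_K = 3  # the config default above is 200; kept tiny here for illustration

short_term = deque(maxlen=MEMORY_K)  # bounded recent-message window
long_term: list[str] = []            # stand-in for the persisted long-term store

def remember(msg: str) -> None:
    # The oldest message is about to be evicted once the window is full;
    # move it to long-term storage before appending the new one.
    if len(short_term) == short_term.maxlen:
        long_term.append(short_term[0])
    short_term.append(msg)

for m in ["m1", "m2", "m3", "m4", "m5"]:
    remember(m)

print(list(short_term))  # ['m3', 'm4', 'm5']
print(long_term)         # ['m1', 'm2']
```

`deque(maxlen=...)` drops the oldest element automatically, so the explicit check is only needed to capture the evicted message.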

azure_tts_subscription_key: "YOUR_SUBSCRIPTION_KEY"
azure_tts_region: "eastus"
@@ -81,5 +127,3 @@ models:
  # # timeout: 600 # Optional. If set to 0, the default value is 300.
  # # Details: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
  # pricing_plan: "" # Optional. Used for Azure LLM when its model name differs from OpenAI's

agentops_api_key: "YOUR_AGENTOPS_API_KEY" # get a key from https://app.agentops.ai/settings/projects
config/vault.example.yaml (new file, 48 lines)
@@ -0,0 +1,48 @@
# Usage:
# 1. Get a value:
# >>> from metagpt.tools.libs.env import get_env
# >>> access_token = await get_env(key="access_token", app_name="github")
# >>> print(access_token)
# YOUR_ACCESS_TOKEN
#
# 2. Get descriptions for LLM understanding:
# >>> from metagpt.tools.libs.env import get_env_description
# >>> descriptions = await get_env_description()
# >>> for k, desc in descriptions.items():
# >>>     print(f"{k}:{desc}")
# await get_env(key="access_token", app_name="github"):Get github access token
# await get_env(key="access_token", app_name="gitlab"):Get gitlab access token
# ...

vault:
  github:
    values:
      access_token: "YOUR_ACCESS_TOKEN"
    descriptions:
      access_token: "Get github access token"
  gitlab:
    values:
      access_token: "YOUR_ACCESS_TOKEN"
    descriptions:
      access_token: "Get gitlab access token"
  iflytek_tts:
    values:
      api_id: "YOUR_APP_ID"
      api_key: "YOUR_API_KEY"
      api_secret: "YOUR_API_SECRET"
    descriptions:
      api_id: "Get the API ID of IFlyTek Text to Speech"
      api_key: "Get the API KEY of IFlyTek Text to Speech"
      api_secret: "Get the API SECRET of IFlyTek Text to Speech"
  azure_tts:
    values:
      subscription_key: "YOUR_SUBSCRIPTION_KEY"
      region: "YOUR_REGION"
    descriptions:
      subscription_key: "Get the subscription key of Azure Text to Speech."
      region: "Get the region of Azure Text to Speech."
  default: # all key-value pairs whose app name is an empty string are placed below
    values:
      proxy: "YOUR_PROXY"
    descriptions:
      proxy: "Get proxy for tools like requests, playwright, selenium, etc."
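The vault maps an app name to its `values` and `descriptions`, with `default` serving keys whose app name is empty. A minimal synchronous sketch of a `get_env`-style lookup over that structure (a hypothetical stand-in; the real `metagpt.tools.libs.env.get_env` is async and backed by this YAML):

```python
# Mirrors a subset of the vault structure above.
vault = {
    "github": {"values": {"access_token": "YOUR_ACCESS_TOKEN"},
               "descriptions": {"access_token": "Get github access token"}},
    "default": {"values": {"proxy": "YOUR_PROXY"},
                "descriptions": {"proxy": "Get proxy for tools"}},
}

def get_env(key: str, app_name: str = "") -> str:
    """Look up `key` under `app_name`; an empty app name maps to `default`."""
    app = vault.get(app_name or "default")
    if app is None or key not in app["values"]:
        raise KeyError(f"{app_name or 'default'}/{key} not found in vault")
    return app["values"][key]

print(get_env("access_token", "github"))  # YOUR_ACCESS_TOKEN
print(get_env("proxy"))                   # YOUR_PROXY (default app)
```

Keeping descriptions alongside values is what lets an LLM enumerate the available secrets without ever seeing the secret values themselves.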
examples/cr.py (new file, 15 lines)
@@ -0,0 +1,15 @@
import fire

from metagpt.roles.di.engineer2 import Engineer2
from metagpt.tools.libs.cr import CodeReview


async def main(msg):
    role = Engineer2(tools=["Plan", "Editor:write,read", "RoleZero", "ValidateAndRewriteCode", "CodeReview"])
    cr = CodeReview()
    role.tool_execution_map.update({"CodeReview.review": cr.review, "CodeReview.fix": cr.fix})
    await role.run(msg)


if __name__ == "__main__":
    fire.Fire(main)
examples/data/di/dog.jpg (new binary file, 15 KiB)
examples/data/di/receipt_shopping.jpg (new binary file, 33 KiB)
examples/data/exp_pool/engineer_exps.json (new file, 24 lines)
@@ -0,0 +1,24 @@
[
  {
    "req": [
      {
        "role": "user",
        "content": "\n# Current Plan\n{'goal': \"Please write a 1048 game using JavaScript and HTML code without using any frameworks, user can play with keyboard. Refer to the system design located at '/tmp/system_design.json' and the project schedule at '/tmp/project_schedule.json' for detailed information.\", 'tasks': []}\n\n# Current Task\n\n\n# Instruction\nBased on the context, write a plan or modify an existing plan to achieve the goal. A plan consists of one to 3 tasks.\nIf plan is created, you should track the progress and update the plan accordingly, such as Plan.finish_current_task, Plan.append_task, Plan.reset_task, Plan.replace_task, etc.\nWhen presented a current task, tackle the task using the available commands.\nPay close attention to new user message, review the conversation history, use RoleZero.reply_to_human to respond to new user requirement.\nNote:\n1. If you keeping encountering errors, unexpected situation, or you are not sure of proceeding, use RoleZero.ask_human to ask for help.\n2. Carefully review your progress at the current task, if your actions so far has not fulfilled the task instruction, you should continue with current task. Otherwise, finish current task.\n3. Each time you finish a task, use RoleZero.reply_to_human to report your progress.\n4. Each time you write a code in your response, write with the Editor directly without preparing a repetitive code block beforehand.\n5. Take on ONE task and write ONE code file in each response. DON'T attempt all tasks in one response.\n6. When not specified, you should write files in a folder named \"src\". If you know the project path, then write in a \"src\" folder under the project path.\n7. When provided system design or project schedule, you MUST read them first before making a plan, then adhere to them in your implementation, especially in the programming language, package, or framework. You MUST implement all code files prescribed in the system design or project schedule. You can create a plan first with each task corresponding to implementing one code file.\n8. Write at most one file per task, do your best to implement THE ONLY ONE FILE. CAREFULLY CHECK THAT YOU DONT MISS ANY NECESSARY CLASS/FUNCTION IN THIS FILE.\n9. COMPLETE CODE: Your code will be part of the entire project, so please implement complete, reliable, reusable code snippets.\n10. When provided system design, YOU MUST FOLLOW \"Data structures and interfaces\". DONT CHANGE ANY DESIGN. Do not use public member functions that do not exist in your design.\n11. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n12. To modify code in a file, read the entire file, make changes, and update the file with the complete code, ensuring that no line numbers are included in the final write.\n13. When a system design or project schedule is provided, at the end of the plan, add a CodeRview Task for each file; for example, if there are three files, add three CodeRview Tasks. For each CodeRview Task, just call ValidateAndRewriteCode.run.\n"
      }
    ],
"resp": "\nExplanation: To begin the task of creating a 2048 game using JavaScript and HTML, I need to first read the system design and project schedule documents to understand the specific requirements and architecture outlined for the game development.\n\n```json\n[\n {\n \"command_name\": \"Editor.read\",\n \"args\": {\n \"path\": \"/tmp/system_design.json\"\n }\n },\n {\n \"command_name\": \"Editor.read\",\n \"args\": {\n \"path\": \"/tmp/project_schedule.json\"\n }\n }\n]\n```\n"
  },
  {
    "req": [
      {
        "role": "user",
        "content": "Command Editor.read executed: file_path='/tmp/system_design.json' block_content='001|{\"Implementation approach\":\"We will implement the 2048 game using plain JavaScript and HTML, ensuring no frameworks are used. The game logic will handle tile movements, merging, and game state updates. The UI will be simple and clean, with a responsive design to fit different screen sizes. We will use CSS for styling and ensure the game is playable with keyboard arrow keys. The game will display the current score, have a restart button, and show a game over message when no more moves are possible.\",\"File list\":[\"index.html\",\"style.css\",\"script.js\"],\"Data structures and interfaces\":\"\\\\nclassDiagram\\\\n class Game {\\\\n -grid: int[][]\\\\n -score: int\\\\n +init(): void\\\\n +move(direction: str): void\\\\n +merge(direction: str): void\\\\n +isGameOver(): bool\\\\n +restart(): void\\\\n }\\\\n class UI {\\\\n -game: Game\\\\n +init(): void\\\\n +update(): void\\\\n +showGameOver(): void\\\\n +bindEvents(): void\\\\n }\\\\n Game --> UI\\\\n\",\"Program call flow\":\"\\\\nsequenceDiagram\\\\n participant U as UI\\\\n participant G as Game\\\\n U->>G: init()\\\\n G-->>U: return\\\\n U->>U: bindEvents()\\\\n U->>G: move(direction)\\\\n G->>G: merge(direction)\\\\n G->>U: update()\\\\n U->>U: update()\\\\n U->>G: isGameOver()\\\\n G-->>U: return bool\\\\n alt Game Over\\\\n U->>U: showGameOver()\\\\n end\\\\n U->>G: restart()\\\\n G-->>U: return\\\\n\",\"Anything UNCLEAR\":\"Clarify if there are any specific design preferences or additional features required beyond the basic 2048 game functionality.\"}\\n'\n\nCommand Editor.read executed: file_path='/tmp/project_schedule.json' block_content='001|{\"Required packages\":[\"No third-party dependencies required\"],\"Required Other language third-party packages\":[\"No third-party dependencies required\"],\"Logic Analysis\":[[\"script.js\",\"Contains Game and UI classes, and their methods: init, move, merge, isGameOver, restart, update, showGameOver, bindEvents\"],[\"index.html\",\"Contains the HTML structure for the game UI\"],[\"style.css\",\"Contains the CSS styles for the game UI\"]],\"Task list\":[\"index.html\",\"style.css\",\"script.js\"],\"Full API spec\":\"\",\"Shared Knowledge\":\"The `script.js` file will contain the core game logic and UI handling. The `index.html` file will provide the structure for the game, and `style.css` will handle the styling.\",\"Anything UNCLEAR\":\"Clarify if there are any specific design preferences or additional features required beyond the basic 2048 game functionality.\"}\\n'"
      },
      {
        "role": "user",
        "content": "\n# Current Plan\n{'goal': \"Please write a 1048 game using JavaScript and HTML code without using any frameworks, user can play with keyboard. Refer to the system design located at '/tmp/system_design.json' and the project schedule at '/tmp/project_schedule.json' for detailed information.\", 'tasks': []}\n\n# Current Task\n\n\n# Instruction\nBased on the context, write a plan or modify an existing plan to achieve the goal. A plan consists of one to 3 tasks.\nIf plan is created, you should track the progress and update the plan accordingly, such as Plan.finish_current_task, Plan.append_task, Plan.reset_task, Plan.replace_task, etc.\nWhen presented a current task, tackle the task using the available commands.\nPay close attention to new user message, review the conversation history, use RoleZero.reply_to_human to respond to new user requirement.\nNote:\n1. If you keeping encountering errors, unexpected situation, or you are not sure of proceeding, use RoleZero.ask_human to ask for help.\n2. Carefully review your progress at the current task, if your actions so far has not fulfilled the task instruction, you should continue with current task. Otherwise, finish current task.\n3. Each time you finish a task, use RoleZero.reply_to_human to report your progress.\n4. Each time you write a code in your response, write with the Editor directly without preparing a repetitive code block beforehand.\n5. Take on ONE task and write ONE code file in each response. DON'T attempt all tasks in one response.\n6. When not specified, you should write files in a folder named \"src\". If you know the project path, then write in a \"src\" folder under the project path.\n7. When provided system design or project schedule, you MUST read them first before making a plan, then adhere to them in your implementation, especially in the programming language, package, or framework. You MUST implement all code files prescribed in the system design or project schedule. You can create a plan first with each task corresponding to implementing one code file.\n8. Write at most one file per task, do your best to implement THE ONLY ONE FILE. CAREFULLY CHECK THAT YOU DONT MISS ANY NECESSARY CLASS/FUNCTION IN THIS FILE.\n9. COMPLETE CODE: Your code will be part of the entire project, so please implement complete, reliable, reusable code snippets.\n10. When provided system design, YOU MUST FOLLOW \"Data structures and interfaces\". DONT CHANGE ANY DESIGN. Do not use public member functions that do not exist in your design.\n11. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n12. To modify code in a file, read the entire file, make changes, and update the file with the complete code, ensuring that no line numbers are included in the final write.\n13. When a system design or project schedule is provided, at the end of the plan, add a CodeRview Task for each file; for example, if there are three files, add three CodeRview Tasks. For each CodeRview Task, just call ValidateAndRewriteCode.run.\n"
      }
    ],
    "resp": "\nExplanation: Based on the system design and project schedule, I will create a plan to implement the 2048 game. The plan will consist of tasks to create the necessary files (`index.html`, `style.css`, and `script.js`) and review each file to ensure they meet the design specifications.\n\nHere's the plan:\n\n1. **Task 1**: Create `index.html` - This file will contain the HTML structure necessary for the game's UI.\n2. **Task 2**: Create `style.css` - This file will define the CSS styles to make the game visually appealing and responsive.\n3. **Task 3**: Create `script.js` - This file will contain the JavaScript code for the game logic and UI interactions.\n4. **Code Review Tasks**: Review each file to ensure they meet the project requirements and adhere to the system design.\n\nLet's start by appending the first task to the plan.\n\n```json\n[\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"1\",\n      \"dependent_task_ids\": [],\n      \"instruction\": \"Create the index.html file with the basic HTML structure for the 2048 game.\",\n      \"assignee\": \"Alex\"\n    }\n  },\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"2\",\n      \"dependent_task_ids\": [\"1\"],\n      \"instruction\": \"Create the style.css file with the necessary CSS to style the 2048 game.\",\n      \"assignee\": \"Alex\"\n    }\n  },\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"3\",\n      \"dependent_task_ids\": [\"1\", \"2\"],\n      \"instruction\": \"Create the script.js file containing the JavaScript logic for the 2048 game.\",\n      \"assignee\": \"Alex\"\n    }\n  },\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"4\",\n      \"dependent_task_ids\": [\"1\"],\n      \"instruction\": \"Use ValidateAndRewriteCode to review the code in index.html to ensure it meets the design specifications.\",\n      \"assignee\": \"Alex\"\n    }\n  },\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"5\",\n      \"dependent_task_ids\": [\"2\"],\n      \"instruction\": \"Use ValidateAndRewriteCode to review the code in style.css to ensure it meets the design specifications.\",\n      \"assignee\": \"Alex\"\n    }\n  },\n  {\n    \"command_name\": \"Plan.append_task\",\n    \"args\": {\n      \"task_id\": \"6\",\n      \"dependent_task_ids\": [\"3\"],\n      \"instruction\": \"Use ValidateAndRewriteCode to review the code in script.js to ensure it meets the design specifications.\",\n      \"assignee\": \"Alex\"\n    }\n  }\n]\n```\n"
}
]
22 examples/data/exp_pool/team_leader_exps.json Normal file
@@ -0,0 +1,22 @@
[{
"req": [{
"role": "user",
"content": "\n# Current Plan\n{'goal': \"from to {'<all>'}: Write a 1024 game using JavaScript and HTML code without using any frameworks, user can play with keyboard.\", 'tasks': []}\n\n# Current Task\n\n\n# Instruction\nYou are a team leader, and you are responsible for drafting tasks and routing tasks to your team members.\nYour team member:\nTim: Team Leader, \nAlice: Product Manager, efficiently create a successful product that meets market demands and user expectations\nBob: Architect, design a concise, usable, complete software system\nEve: Project Manager, break down tasks according to PRD/technical design, generate a task list, and analyze task dependencies to start with the prerequisite modules\nAlex: Engineer, Take on game, app, and web development\nDavid: DataAnalyst, Take on any data-related tasks, such as data analysis, machine learning, deep learning, web browsing, web scraping, web searching, web deployment, terminal operation, git and github operation, etc.\n\nYou should NOT assign consecutive tasks to the same team member, instead, assign an aggregated task (or the complete requirement) and let the team member to decompose it.\nWhen creating a new plan involving multiple members, create all tasks at once.\nIf plan is created, you should track the progress based on team member feedback message, and update plan accordingly, such as Plan.finish_current_task, Plan.reset_task, Plan.replace_task, etc.\nYou should use TeamLeader.publish_team_message to team members, asking them to start their task. DONT omit any necessary info such as path, link, environment, programming language, framework, requirement, constraint from original content to team members because you are their sole info source.\nPay close attention to new user message, review the conversation history, use RoleZero.reply_to_human to respond to the user directly, DON'T ask your team members.\n\nNote:\n1. 
If the requirement is a pure DATA-RELATED requirement, such as bug fixes, issue reporting, environment setup, terminal operations, pip install, web browsing, web scraping, web searching, web imitation, data science, data analysis, machine learning, deep learning, text-to-image etc. DON'T decompose it, assign a single task with the original user requirement as instruction directly to Data Analyst.\n2. If the requirement is developing a software, game, app, or website, excluding the above data-related tasks, you should decompose the requirement into multiple tasks and assign them to different team members based on their expertise, usually the sequence of Product Manager -> Architect -> Project Manager -> Engineer -> (optional: QaEngine if present) -> (optional: DataAnalyst if user requests deployment), each assigned ONE task. When publishing message to Product Manager, you should directly copy the full original user requirement.\n3. If the requirement contains both DATA-RELATED part mentioned in 1 and software development part mentioned in 2, you should decompose the software development part and assign them to different team members based on their expertise, and assign the DATA-RELATED part to Data Analyst David directly.\n4. If the requirement is a common-sense, logical, or math problem, you should respond directly without assigning any task to team members.\n5. If you think the requirement is not clear or ambiguous, you should ask the user for clarification immediately. Assign tasks only after all info is clear.\n6. It is helpful for Engineer to have both the system design and the project schedule for writing the code, so include paths of both files (if available) and remind Engineer to definitely read them when publishing message to Engineer.\n7. If the requirement is writing a TRD and software framework, you should assign it to Architect. When publishing message to Architect, you should directly copy the full original user requirement.\n"
}],
"resp": "\nExplanation: The requirement is about software development. Assign each tasks to a different team member based on their expertise. When publishing message to Product Manager, we copy original user requirement directly to ensure no information loss.\n```json\n[\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"1\",\n \"dependent_task_ids\": [],\n \"instruction\": \"Create a product requirement document (PRD) outlining the features, user interface, and user experience of the CLI python snake game.\",\n \"assignee\": \"Alice\"\n }\n },\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"2\",\n \"dependent_task_ids\": [\"1\"],\n \"instruction\": \"Design the software architecture for the CLI snake game, including the choice of programming language, libraries, and data flow.\",\n \"assignee\": \"Bob\"\n }\n },\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"3\",\n \"dependent_task_ids\": [\"2\"],\n \"instruction\": \"Break down the architecture into manageable tasks, identify task dependencies, and prepare a detailed task list for implementation.\",\n \"assignee\": \"Eve\"\n }\n },\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"4\",\n \"dependent_task_ids\": [\"3\"],\n \"instruction\": \"Implement the core game logic for the CLI snake game, including snake movement, food generation, and score tracking.\",\n \"assignee\": \"Alex\"\n }\n },\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"5\",\n \"dependent_task_ids\": [\"4\"],\n \"instruction\": \"Write comprehensive tests for the game logic and user interface to ensure functionality and reliability.\",\n \"assignee\": \"Edward\"\n }\n },\n {\n \"command_name\": \"TeamLeader.publish_message\",\n \"args\": {\n \"content\": \"Create a cli snake game using Python\",\n \"send_to\": \"Alice\"\n }\n },\n {\n \"command_name\": \"RoleZero.reply_to_human\",\n \"args\": {\n 
\"content\": \"I have assigned the tasks to the team members. Alice will create the PRD, Bob will design the software architecture, Eve will break down the architecture into tasks, Alex will implement the core game logic, and Edward will write comprehensive tests. The team will work on the project accordingly\"\n }\n },\n {\n \"command_name\": \"end\"\n }\n]\n```\n"
}, {
"req": [{
"role": "user",
"content": "\n# Current Plan\n{'goal': \"from to {'<all>'}: Run data analysis on sklearn Wine recognition dataset, include a plot, and train a model to predict wine class (20% as validation), and show validation accuracy.\", 'tasks': []}\n\n# Current Task\n\n\n# Instruction\nYou are a team leader, and you are responsible for drafting tasks and routing tasks to your team members.\nYour team member:\nTim: Team Leader, \nAlice: Product Manager, efficiently create a successful product that meets market demands and user expectations\nBob: Architect, design a concise, usable, complete software system\nEve: Project Manager, break down tasks according to PRD/technical design, generate a task list, and analyze task dependencies to start with the prerequisite modules\nAlex: Engineer, Take on game, app, and web development\nDavid: DataAnalyst, Take on any data-related tasks, such as data analysis, machine learning, deep learning, web browsing, web scraping, web searching, web deployment, terminal operation, git and github operation, etc.\n\nYou should NOT assign consecutive tasks to the same team member, instead, assign an aggregated task (or the complete requirement) and let the team member to decompose it.\nWhen creating a new plan involving multiple members, create all tasks at once.\nIf plan is created, you should track the progress based on team member feedback message, and update plan accordingly, such as Plan.finish_current_task, Plan.reset_task, Plan.replace_task, etc.\nYou should use TeamLeader.publish_team_message to team members, asking them to start their task. DONT omit any necessary info such as path, link, environment, programming language, framework, requirement, constraint from original content to team members because you are their sole info source.\nPay close attention to new user message, review the conversation history, use RoleZero.reply_to_human to respond to the user directly, DON'T ask your team members.\n\nNote:\n1. 
If the requirement is a pure DATA-RELATED requirement, such as bug fixes, issue reporting, environment setup, terminal operations, pip install, web browsing, web scraping, web searching, web imitation, data science, data analysis, machine learning, deep learning, text-to-image etc. DON'T decompose it, assign a single task with the original user requirement as instruction directly to Data Analyst.\n2. If the requirement is developing a software, game, app, or website, excluding the above data-related tasks, you should decompose the requirement into multiple tasks and assign them to different team members based on their expertise, usually the sequence of Product Manager -> Architect -> Project Manager -> Engineer -> (optional: QaEngine if present) -> (optional: DataAnalyst if user requests deployment), each assigned ONE task. When publishing message to Product Manager, you should directly copy the full original user requirement.\n3. If the requirement contains both DATA-RELATED part mentioned in 1 and software development part mentioned in 2, you should decompose the software development part and assign them to different team members based on their expertise, and assign the DATA-RELATED part to Data Analyst David directly.\n4. If the requirement is a common-sense, logical, or math problem, you should respond directly without assigning any task to team members.\n5. If you think the requirement is not clear or ambiguous, you should ask the user for clarification immediately. Assign tasks only after all info is clear.\n6. It is helpful for Engineer to have both the system design and the project schedule for writing the code, so include paths of both files (if available) and remind Engineer to definitely read them when publishing message to Engineer.\n7. If the requirement is writing a TRD and software framework, you should assign it to Architect. When publishing message to Architect, you should directly copy the full original user requirement.\n"
}],
"resp": "\nExplanation: DON'T decompose requirement if it is a DATA-RELATED task, assign a single task directly to Data Analyst David. He will manage the decomposition and implementation.\n```json\n[\n {\n \"command_name\": \"Plan.append_task\",\n \"args\": {\n \"task_id\": \"1\",\n \"dependent_task_ids\": [],\n \"instruction\": \"Run data analysis on sklearn Wine recognition dataset, include a plot, and train a model to predict wine class (20% as validation), and show validation accuracy.\",\n \"assignee\": \"David\"\n }\n },\n {\n \"command_name\": \"TeamLeader.publish_message\",\n \"args\": {\n \"content\": \"Run data analysis on sklearn Wine recognition dataset, include a plot, and train a model to predict wine class (20% as validation), and show validation accuracy.\",\n \"send_to\": \"David\"\n }\n },\n {\n \"command_name\": \"RoleZero.reply_to_human\",\n \"args\": {\n \"content\": \"I have assigned the task to David. He will break down the task further by himself and starts solving it.\"\n }\n },\n {\n \"command_name\": \"end\"\n }\n]\n```\n"
}, {
"req": "Conversation History:\n[\n ...,\n {'role': 'assistant', 'content': 'from Alice(Product Manager) to {'Bob'}: {'docs': {'20240424153821.json': {'root_path': 'docs/prd', 'filename': '20240424153821.json', 'content': '{\"Language\":\"en_us\",\"Programming Language\":\"Python\",\"Original Requirements\":\"create a cli snake game\",\"Project Name\":\"snake_game\",\"Product Goals\":[\"Develop an intuitive and addictive snake game\",...], ...}}}}},\n]\n",
"resp": "\nExplanation: You received a message from Alice, the Product Manager, that she has completed the PRD, use Plan.finish_current_task to mark her task as finished and moves the plan to the next task. Based on plan status, next task is for Bob (Architect), publish a message asking him to start. The message content should contain important path info.\n```json\n[\n {\n \"command_name\": \"Plan.finish_current_task\",\n \"args\": {}\n },\n {\n \"command_name\": \"TeamLeader.publish_message\",\n \"args\": {\n \"content\": \"Please design the software architecture for the snake game based on the PRD created by Alice. The PRD is at 'docs/prd/20240424153821.json'. Include the choice of programming language, libraries, and data flow, etc.\",\n \"send_to\": \"Bob\"\n }\n },\n {\n \"command_name\": \"RoleZero.reply_to_human\",\n \"args\": {\n \"content\": \"Alice has completed the PRD. I have marked her task as finished and sent the PRD to Bob. Bob will work on the software architecture.\"\n }\n },\n {\n \"command_name\": \"end\"\n }\n]\n```\n"
}, {
"req": [{
"role": "user",
"content": "\n# Current Plan\n{'goal': \"from to {'<all>'}: how does the project go?\", 'tasks': []}\n\n# Current Task\n\n\n# Instruction\nYou are a team leader, and you are responsible for drafting tasks and routing tasks to your team members.\nYour team member:\nTim: Team Leader, \nAlice: Product Manager, efficiently create a successful product that meets market demands and user expectations\nBob: Architect, design a concise, usable, complete software system\nEve: Project Manager, break down tasks according to PRD/technical design, generate a task list, and analyze task dependencies to start with the prerequisite modules\nAlex: Engineer, Take on game, app, and web development\nDavid: DataAnalyst, Take on any data-related tasks, such as data analysis, machine learning, deep learning, web browsing, web scraping, web searching, web deployment, terminal operation, git and github operation, etc.\n\nYou should NOT assign consecutive tasks to the same team member, instead, assign an aggregated task (or the complete requirement) and let the team member to decompose it.\nWhen creating a new plan involving multiple members, create all tasks at once.\nIf plan is created, you should track the progress based on team member feedback message, and update plan accordingly, such as Plan.finish_current_task, Plan.reset_task, Plan.replace_task, etc.\nYou should use TeamLeader.publish_team_message to team members, asking them to start their task. DONT omit any necessary info such as path, link, environment, programming language, framework, requirement, constraint from original content to team members because you are their sole info source.\nPay close attention to new user message, review the conversation history, use RoleZero.reply_to_human to respond to the user directly, DON'T ask your team members.\n\nNote:\n1. 
If the requirement is a pure DATA-RELATED requirement, such as bug fixes, issue reporting, environment setup, terminal operations, pip install, web browsing, web scraping, web searching, web imitation, data science, data analysis, machine learning, deep learning, text-to-image etc. DON'T decompose it, assign a single task with the original user requirement as instruction directly to Data Analyst.\n2. If the requirement is developing a software, game, app, or website, excluding the above data-related tasks, you should decompose the requirement into multiple tasks and assign them to different team members based on their expertise, usually the sequence of Product Manager -> Architect -> Project Manager -> Engineer -> (optional: QaEngine if present) -> (optional: DataAnalyst if user requests deployment), each assigned ONE task. When publishing message to Product Manager, you should directly copy the full original user requirement.\n3. If the requirement contains both DATA-RELATED part mentioned in 1 and software development part mentioned in 2, you should decompose the software development part and assign them to different team members based on their expertise, and assign the DATA-RELATED part to Data Analyst David directly.\n4. If the requirement is a common-sense, logical, or math problem, you should respond directly without assigning any task to team members.\n5. If you think the requirement is not clear or ambiguous, you should ask the user for clarification immediately. Assign tasks only after all info is clear.\n6. It is helpful for Engineer to have both the system design and the project schedule for writing the code, so include paths of both files (if available) and remind Engineer to definitely read them when publishing message to Engineer.\n7. If the requirement is writing a TRD and software framework, you should assign it to Architect. When publishing message to Architect, you should directly copy the full original user requirement.\n"
}],
"resp": "\nExplanation: The user is asking for a general update on the project status. Give a straight answer about the current task the team is working on and provide a summary of the completed tasks.\n```json\n[\n {\n \"command_name\": \"RoleZero.reply_to_human\",\n \"args\": {\n \"content\": \"The team is currently working on ... We have completed ...\"\n }\n },\n {\n \"command_name\": \"end\"\n }\n]\n```\n"
}]
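These experience entries store the model's raw `resp` string with the command list embedded in a fenced json block. As a hedged illustration (not MetaGPT's actual parsing code), extracting and loading that command list might look like:

```python
import json
import re

FENCE = "`" * 3  # build the fence marker to avoid literal backtick fences in this snippet


def extract_commands(resp: str) -> list:
    """Pull the JSON command list out of a fenced json block in an LLM response."""
    pattern = FENCE + r"json\s*(.*?)" + FENCE
    match = re.search(pattern, resp, re.DOTALL)
    if not match:
        raise ValueError("no fenced json block found")
    return json.loads(match.group(1))


resp = FENCE + 'json\n[{"command_name": "Plan.append_task", "args": {"task_id": "1"}}]\n' + FENCE
commands = extract_commands(resp)
assert commands[0]["command_name"] == "Plan.append_task"
```

Each extracted dict pairs a `command_name` such as `Plan.append_task` or `TeamLeader.publish_message` with its `args`, which the role can then dispatch.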
Binary file not shown.
@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from metagpt.roles.di.data_interpreter import DataInterpreter
+from metagpt.tools.libs.web_scraping import view_page_element_to_scrape


async def main():
@@ -10,7 +11,7 @@ async def main():
    prompt = f"""This is a collection of arxiv urls: '{urls}' .
    Record each article, remove duplicates by title (they may have multiple tags), filter out papers related to
    large language model / agent / llm, print top 100 and visualize the word count of the titles"""
-    di = DataInterpreter(react_mode="react", tools=["scrape_web_playwright"])
+    di = DataInterpreter(react_mode="react", tools=[view_page_element_to_scrape.__name__])

    await di.run(prompt)
65 examples/di/atomization_capacity_plan.py Normal file
@@ -0,0 +1,65 @@
import fire

from metagpt.logs import logger
from metagpt.roles.di.team_leader import TeamLeader


async def main():
    tl = TeamLeader()
    logger.info("\n=== Adding Initial Tasks ===")
    tl.planner.plan.append_task(
        task_id="T1", dependent_task_ids=[], instruction="Create Product Requirements Document (PRD)", assignee="Alice"
    )
    tl.planner.plan.append_task(
        task_id="T2", dependent_task_ids=["T1"], instruction="Design System Architecture", assignee="Bob"
    )

    # 1. Add Development Tasks
    logger.info("\n=== Adding Development Tasks ===")
    tl.planner.plan.append_task(
        task_id="T3", dependent_task_ids=["T2"], instruction="Implement Core Function Modules", assignee="Alex"
    )

    tl.planner.plan.append_task(
        task_id="T4", dependent_task_ids=["T2"], instruction="Implement User Interface", assignee="Alex"
    )

    # 2. Complete Some Tasks
    logger.info("\n=== Execute and Complete Tasks ===")
    logger.info(f"Current Task: {tl.planner.plan.current_task.instruction}")
    tl.planner.plan.finish_current_task()  # Complete T1

    logger.info(f"Current Task: {tl.planner.plan.current_task.instruction}")
    tl.planner.plan.finish_current_task()  # Complete T2

    # 3. Replace Tasks
    logger.info("\n=== Replace Task ===")
    tl.planner.plan.replace_task(
        task_id="T3",
        new_dependent_task_ids=["T2"],
        new_instruction="Implement Core Function Modules (Add New Features)",
        new_assignee="Senior_Developer",
    )

    # 4. Add Testing Tasks
    logger.info("\n=== Add Testing Tasks ===")
    tl.planner.plan.append_task(
        task_id="T5", dependent_task_ids=["T3", "T4"], instruction="Execute Integration Tests", assignee="Edward"
    )

    # 5. Reset Task Demonstration
    logger.info("\n=== Reset Task ===")
    logger.info("Reset Task T3 (This will also reset T5 which depends on it)")
    tl.planner.plan.reset_task("T3")

    # Display Final Status
    logger.info("\n=== Final Status ===")
    logger.info(f"Completed Tasks: {len([t for t in tl.planner.plan.tasks if t.is_finished])}")
    logger.info(f"Current Task: {tl.planner.plan.current_task.instruction}")
    logger.info("All Tasks:")
    for task in tl.planner.plan.tasks:
        logger.info(f"- {task.task_id}: {task.instruction} (Completed: {task.is_finished})")


if __name__ == "__main__":
    fire.Fire(main)
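The plan operations exercised above (append, finish in order, reset with a cascade to dependent tasks) can be pictured with a small self-contained sketch. This is a toy model for intuition, not MetaGPT's actual `Plan` class:

```python
from dataclasses import dataclass, field


@dataclass
class Task:
    task_id: str
    instruction: str
    dependent_task_ids: list = field(default_factory=list)
    is_finished: bool = False


class MiniPlan:
    """Toy plan: tasks finish in order; resetting a task also resets
    every task that (transitively) depends on it."""

    def __init__(self):
        self.tasks = []

    def append_task(self, task_id, dependent_task_ids, instruction):
        self.tasks.append(Task(task_id, instruction, dependent_task_ids))

    @property
    def current_task(self):
        return next(t for t in self.tasks if not t.is_finished)

    def finish_current_task(self):
        self.current_task.is_finished = True

    def reset_task(self, task_id):
        target = next(t for t in self.tasks if t.task_id == task_id)
        target.is_finished = False
        for t in self.tasks:  # cascade to direct dependents (recursion covers transitive ones)
            if task_id in t.dependent_task_ids:
                self.reset_task(t.task_id)


plan = MiniPlan()
plan.append_task("T1", [], "PRD")
plan.append_task("T2", ["T1"], "Architecture")
plan.append_task("T3", ["T2"], "Code")
plan.finish_current_task()  # finishes T1
plan.finish_current_task()  # finishes T2
plan.reset_task("T1")       # cascades: T2 and T3 are reset too
assert [t.is_finished for t in plan.tasks] == [False, False, False]
```

The cascade is why the example above notes that resetting T3 also resets T5: once a prerequisite is invalidated, everything built on it must be redone.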
22 examples/di/automated_planning_of_tasks.py Normal file
@@ -0,0 +1,22 @@
import fire

from metagpt.logs import logger
from metagpt.roles.di.team_leader import TeamLeader


async def main():
    # Create an instance of TeamLeader
    tl = TeamLeader()

    # Update the plan with the goal to create a 2048 game
    # This will auto generate tasks needed to accomplish the goal
    await tl.planner.update_plan(goal="create a 2048 game.")

    # Iterate through all tasks in the plan
    # Log each task's ID, instruction and completion status
    for task in tl.planner.plan.tasks:
        logger.info(f"- {task.task_id}: {task.instruction} (Completed: {task.is_finished})")


if __name__ == "__main__":
    fire.Fire(main)
@@ -6,16 +6,18 @@
"""

from metagpt.roles.di.data_interpreter import DataInterpreter
+from metagpt.tools.libs.web_scraping import view_page_element_to_scrape

PAPER_LIST_REQ = """"
Get data from `paperlist` table in https://papercopilot.com/statistics/iclr-statistics/iclr-2024-statistics/,
-and save it to a csv file. paper title must include `multiagent` or `large language model`. *notice: print key variables*
+and save it to a csv file. paper title must include `multiagent` or `large language model`.
+**Notice: view the page element before writing scraping code**
"""

ECOMMERCE_REQ = """
Get products data from website https://scrapeme.live/shop/ and save it as a csv file.
**Notice: Firstly parse the web page encoding and the text HTML structure;
-The first page product name, price, product URL, and image URL must be saved in the csv;**
+The first page product name, price, product URL, and image URL must be saved in the csv.
+**Notice: view the page element before writing scraping code**
"""

NEWS_36KR_REQ = """从36kr创投平台https://pitchhub.36kr.com/financing-flash 所有初创企业融资的信息, **注意: 这是一个中文网站**;
@@ -25,11 +27,12 @@ NEWS_36KR_REQ = """从36kr创投平台https://pitchhub.36kr.com/financing-flash
3. 反思*快讯的html内容示例*中的规律, 设计正则匹配表达式来获取*`快讯`*的标题、链接、时间;
4. 筛选最近3天的初创企业融资*`快讯`*, 以list[dict]形式打印前5个。
5. 将全部结果存在本地csv中
+**Notice: view the page element before writing scraping code**
"""


async def main():
-    di = DataInterpreter(tools=["scrape_web_playwright"])
+    di = DataInterpreter(tools=[view_page_element_to_scrape.__name__])

    await di.run(ECOMMERCE_REQ)
40 examples/di/data_analyst_write_code.py Normal file
@@ -0,0 +1,40 @@
import fire

from metagpt.logs import logger
from metagpt.roles.di.data_analyst import DataAnalyst


async def main():
    # Create an instance of DataAnalyst role
    analyst = DataAnalyst()

    # Set the main goal for the planner - constructing a 2D array
    analyst.planner.plan.goal = "construct a two-dimensional array"

    # Add a specific task to the planner with detailed parameters:
    # - task_id: Unique identifier for the task
    # - dependent_task_ids: List of tasks that need to be completed before this one (empty in this case)
    # - instruction: Description of what needs to be done
    # - assignee: Who will execute the task (David)
    # - task_type: Category of the task (DATA_ANALYSIS)
    analyst.planner.plan.append_task(
        task_id="1",
        dependent_task_ids=[],
        instruction="construct a two-dimensional array",
        assignee="David",
        task_type="DATA_ANALYSIS",
    )

    # Execute the code generation and execution for creating a 2D array
    # The write_and_exec_code method will:
    # 1. Generate the necessary code for creating a 2D array
    # 2. Execute the generated code
    # 3. Return the result
    result = await analyst.write_and_exec_code("construct a two-dimensional array")

    # Log the result of the code execution
    logger.info(result)


if __name__ == "__main__":
    fire.Fire(main)
32 examples/di/fix_github_issue.py Normal file
@@ -0,0 +1,32 @@
"""This example is from a real issue from MetaGPT: https://github.com/geekan/MetaGPT/issues/1067 with corresponding bugfix as https://github.com/geekan/MetaGPT/pull/1069
We demonstrate that DataInterpreter has the capability to fix such issues.
Prerequisite: You need to manually add the bug back to your local file metagpt/utils/repair_llm_raw_output.py to test DataInterpreter's debugging ability. For detail, please check the issue and PR link above.
"""

import asyncio

from metagpt.roles.di.data_interpreter import DataInterpreter

REQ = """
# Requirement
Below is a github issue, solve it. Use Editor to search for the function, understand it, and modify the relevant code.
Write a new test file test.py with Editor and use Terminal to python the test file to ensure you have fixed the issue.
When writing test.py, you should import the function from the file you modified and test it with the given input.
Notice: Don't write all codes in one response, each time, just write code for one step.

# Issue
>> s = "-1"
>> print(extract_state_value_from_output(s))
>> 1
The extract_state_value_from_output function will process -1 into 1,
resulted in an infinite loop for the react mode.
"""


async def main():
    di = DataInterpreter(tools=["Terminal", "Editor"], react_mode="react")
    await di.run(REQ)


if __name__ == "__main__":
    asyncio.run(main())
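The issue quoted in this example is a sign-handling bug in a regex-based extractor: `-1` comes back as `1`. A hedged sketch of the bug class follows; the functions below are illustrative only, not MetaGPT's actual `extract_state_value_from_output`:

```python
import re


def extract_state_buggy(output: str) -> str:
    # A digit-only pattern silently drops a leading minus sign.
    match = re.search(r"(\d+)", output)
    return match.group(1) if match else "-1"


def extract_state_fixed(output: str) -> str:
    # Allow an optional leading minus so sentinel states like -1 survive.
    match = re.search(r"(-?\d+)", output)
    return match.group(1) if match else "-1"


assert extract_state_buggy("-1") == "1"   # sign lost: the react loop never sees the stop state
assert extract_state_fixed("-1") == "-1"
```

When `-1` is the "terminate" sentinel, losing the sign keeps the react loop running forever, which is exactly the symptom the issue reports.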
@@ -11,10 +11,10 @@ from metagpt.roles.di.data_interpreter import DataInterpreter
async def main():
    web_url = "https://pytorch.org/"
    prompt = f"""This is a URL of webpage: '{web_url}' .
-    Firstly, utilize Selenium and WebDriver for rendering.
-    Secondly, convert image to a webpage including HTML, CSS and JS in one go.
+    Firstly, open the page and take a screenshot of the page.
+    Secondly, convert the image to a webpage including HTML, CSS and JS in one go.
    Note: All required dependencies and environments have been fully installed and configured."""
-    di = DataInterpreter(tools=["GPTvGenerator"])
+    di = DataInterpreter(tools=["GPTvGenerator", "Browser"])

    await di.run(prompt)
38 examples/di/interacting_with_human.py Normal file
@@ -0,0 +1,38 @@
import fire

from metagpt.environment.mgx.mgx_env import MGXEnv
from metagpt.logs import logger
from metagpt.roles.di.team_leader import TeamLeader
from metagpt.schema import Message


async def main():
    # Initialize the MetaGPT environment
    env = MGXEnv()
    # Add a TeamLeader role to the environment
    env.add_roles([TeamLeader()])

    # Get input from human user about what they want to do
    human_rsp = await env.ask_human("What do you want to do?")

    # Log the human response for tracking
    logger.info(human_rsp)
    # Create and publish a message with the human response in the environment
    env.publish_message(Message(content=human_rsp, role="user"))

    # Get the TeamLeader role instance named 'Mike'
    tl = env.get_role("Mike")
    # Execute the TeamLeader's tasks
    await tl.run()

    # Log information about each task in the TeamLeader's plan
    for task in tl.planner.plan.tasks:
        logger.info(f"- {task.task_id}: {task.instruction} (Completed: {task.is_finished})")

    # Send an empty response back to the human and log it
    resp = await env.reply_to_human("")
    logger.info(resp)


if __name__ == "__main__":
    fire.Fire(main)
@@ -1,17 +1,17 @@
+from metagpt.const import EXAMPLE_DATA_PATH
from metagpt.roles.di.data_interpreter import DataInterpreter


async def main():
    # Notice: pip install metagpt[ocr] before using this example
-    image_path = "image.jpg"
+    image_path = EXAMPLE_DATA_PATH / "di/receipt_shopping.jpg"
    language = "English"
    requirement = f"""This is a {language} receipt image.
    Your goal is to perform OCR on images using PaddleOCR, output text content from the OCR results and discard
-    coordinates and confidence levels, then recognize the total amount from ocr text content, and finally save as table.
+    coordinates and confidence levels, then recognize the total amount from ocr text content, and finally save as csv table.
    Image path: {image_path}.
    NOTE: The environments for Paddle and PaddleOCR are all ready and has been fully installed."""
-    di = DataInterpreter()
+    di = DataInterpreter(react_mode="react")
    await di.run(requirement)
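A PaddleOCR-style result is commonly a list of `[bounding_box, (text, confidence)]` entries. A minimal sketch of the post-processing the requirement asks for, using an assumed result shape and mocked data rather than a real OCR call:

```python
import re

# Mocked PaddleOCR-style output: one [box, (text, confidence)] entry per detected line.
ocr_result = [
    [[[10, 10], [90, 10], [90, 30], [10, 30]], ("Apples", 0.98)],
    [[[10, 40], [90, 40], [90, 60], [10, 60]], ("Total: $12.50", 0.95)],
]

# Keep the text only, discarding coordinates and confidence levels.
texts = [line[1][0] for line in ocr_result]

# Recognize the total amount from the OCR text content.
total_line = next(t for t in texts if "total" in t.lower())
amount = re.search(r"[\d.]+", total_line).group()

assert texts == ["Apples", "Total: $12.50"]
assert amount == "12.50"
```

The last step of the example, saving as a csv table, would write `texts` (and the extracted amount) out with the csv module; that part depends on the table layout the agent chooses.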
@@ -1,5 +1,6 @@
import asyncio

+from metagpt.const import DEFAULT_WORKSPACE_ROOT, EXAMPLE_DATA_PATH
from metagpt.roles.di.data_interpreter import DataInterpreter


@@ -9,7 +10,7 @@ async def main(requirement: str = ""):


if __name__ == "__main__":
-    image_path = "/your/path/to/the/image.jpeg"
-    save_path = "/your/intended/save/path/for/image_rm_bg.png"
+    image_path = EXAMPLE_DATA_PATH / "di/dog.jpg"
+    save_path = DEFAULT_WORKSPACE_ROOT / "image_rm_bg.png"
    requirement = f"This is a image, you need to use python toolkit rembg to remove the background of the image and save the result. image path:{image_path}; save path:{save_path}."
    asyncio.run(main(requirement))
19 examples/di/run_flask.py Normal file
@@ -0,0 +1,19 @@
import asyncio

from metagpt.roles.di.data_interpreter import DataInterpreter

USE_GOT_REPO_REQ = """
Write a service using Flask, create a conda environment and run it, and call the service's interface for validation.
Notice: Don't write all codes in one response, each time, just write code for one step.
"""
# If you have created a conda environment, you can say:
# I have created the conda environment '{env_name}', please use this environment to execute.


async def main():
    di = DataInterpreter(tools=["Terminal", "Editor"])
    await di.run(USE_GOT_REPO_REQ)


if __name__ == "__main__":
    asyncio.run(main())
39  examples/di/software_company.py  Normal file
@@ -0,0 +1,39 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import fire

from metagpt.roles.di.data_interpreter import DataInterpreter


async def main():
    prompt = """
This is a software requirement:
```text
write a snake game
```
---
1. Writes a PRD based on software requirements.
2. Writes a design to the project repository, based on the PRD of the project.
3. Writes a project plan to the project repository, based on the design of the project.
4. Writes codes to the project repository, based on the project plan of the project.
5. Run QA test on the project repository.
6. Stage and commit changes for the project repository using Git.
Note: All required dependencies and environments have been fully installed and configured.
"""
    di = DataInterpreter(
        tools=[
            "WritePRD",
            "WriteDesign",
            "WritePlan",
            "WriteCode",
            "RunCode",
            "DebugError",
            # "git_archive",
        ]
    )

    await di.run(prompt)


if __name__ == "__main__":
    fire.Fire(main)
29  examples/di/use_browser.py  Normal file
@@ -0,0 +1,29 @@
import asyncio

from metagpt.roles.di.data_interpreter import DataInterpreter

MG_LLM_CONFIG_REQ = """
This is a link to the doc site of MetaGPT project: https://docs.deepwisdom.ai/main/en/
Check where you can go to on the site and try to find out the list of LLM APIs supported by MetaGPT.
Don't write all codes in one response, each time, just write code for one step.
"""

PAPER_LIST_REQ = """
At https://papercopilot.com/statistics/iclr-statistics/iclr-2024-statistics/,
find the first paper whose title includes `multiagent`, open it and summarize its abstract.
Don't write all codes in one response, each time, just write code for one step.
"""

DESCRIBE_GITHUB_ISSUE_REQ = """
Visit https://github.com/geekan/MetaGPT, navigate to Issues page, open the first issue related to DataInterpreter, then summarize what the issue is in one sentence.
Don't write all codes in one response, each time, just write code for one step.
"""


async def main():
    di = DataInterpreter(tools=["Browser"], react_mode="react")
    await di.run(MG_LLM_CONFIG_REQ)


if __name__ == "__main__":
    asyncio.run(main())
19  examples/di/use_github_repo.py  Normal file
@@ -0,0 +1,19 @@
import asyncio

from metagpt.roles.di.data_interpreter import DataInterpreter

USE_GOT_REPO_REQ = """
This is a link to the GOT github repo: https://github.com/spcl/graph-of-thoughts.git.
Clone it, read the README to understand the usage, install it, and finally run the quick start example.
**Note the config for LLM is at `config/config_got.json`, it's outside the repo path, before using it, you need to copy it into graph-of-thoughts.
** Don't write all codes in one response, each time, just write code for one step.
"""


async def main():
    di = DataInterpreter(tools=["Terminal"])
    await di.run(USE_GOT_REPO_REQ)


if __name__ == "__main__":
    asyncio.run(main())
20  examples/exp_pool/README.md  Normal file
@@ -0,0 +1,20 @@
# Experience Pool

## Prerequisites
- Ensure the RAG module is installed: https://docs.deepwisdom.ai/main/en/guide/in_depth_guides/rag_module.html
- Set embedding: https://docs.deepwisdom.ai/main/en/guide/in_depth_guides/rag_module.html
- Set `enabled`, `enable_read` and `enable_write` to `true` in the `exp_pool` section of `config2.yaml`

## Example Files

### 1. decorator.py
Showcases the implementation of the `@exp_cache` decorator.

### 2. init_exp_pool.py
Demonstrates the process of initializing the experience pool.

### 3. manager.py
Illustrates CRUD (Create, Read, Update, Delete) operations for managing experiences in the pool.

### 4. scorer.py
Outlines methods for evaluating and scoring experiences within the pool.
28  examples/exp_pool/decorator.py  Normal file
@@ -0,0 +1,28 @@
"""
This script demonstrates how to automatically store experiences using @exp_cache and query the stored experiences.
"""

import asyncio
import uuid

from metagpt.exp_pool import exp_cache, get_exp_manager
from metagpt.logs import logger


@exp_cache()
async def produce(req=""):
    return f"{req} {uuid.uuid4().hex}"


async def main():
    req = "Water"

    resp = await produce(req=req)
    logger.info(f"The response of `produce({req})` is: {resp}")

    exps = await get_exp_manager().query_exps(req)
    logger.info(f"Find experiences: {exps}")


if __name__ == "__main__":
    asyncio.run(main())
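Conceptually, a decorator like `@exp_cache` intercepts the call, reuses a stored "experience" for a request it has seen before, and records the response otherwise. The sketch below is a deliberately naive, hypothetical stand-in (the real decorator also scores experiences and persists them through the experience manager rather than an in-memory dict):

```python
# Naive sketch of the exp_cache idea: reuse a stored response per request.
# This is NOT MetaGPT's implementation; names here are illustrative only.
import asyncio
import functools


def naive_exp_cache(func):
    store = {}  # req -> previously produced response ("experience")

    @functools.wraps(func)
    async def wrapper(*, req=""):
        if req in store:
            return store[req]  # serve the cached experience
        resp = await func(req=req)
        store[req] = resp  # record the new experience
        return resp

    return wrapper


calls = {"count": 0}


@naive_exp_cache
async def produce(req=""):
    calls["count"] += 1
    return f"{req} processed"


async def demo():
    first = await produce(req="Water")
    second = await produce(req="Water")
    assert first == second
    assert calls["count"] == 1  # second call was served from the cache


asyncio.run(demo())
```

The real decorator additionally decides whether a stored experience is good enough to reuse, which is where the scorer below comes in.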
97  examples/exp_pool/init_exp_pool.py  Normal file
@@ -0,0 +1,97 @@
"""Init experience pool.

Put some useful experiences into the experience pool.
"""

import asyncio
import json
from pathlib import Path

from metagpt.const import EXAMPLE_DATA_PATH
from metagpt.exp_pool import get_exp_manager
from metagpt.exp_pool.schema import EntryType, Experience, Metric, Score
from metagpt.logs import logger
from metagpt.utils.common import aread


async def load_file(filepath) -> list[dict]:
    """Asynchronously loads and parses a JSON file.

    Args:
        filepath: Path to the JSON file.

    Returns:
        A list of dictionaries parsed from the JSON file.
    """

    return json.loads(await aread(filepath))


async def add_exp(req: str, resp: str, tag: str, metric: Metric = None):
    """Adds a new experience to the experience pool.

    Args:
        req: The request string.
        resp: The response string.
        tag: A tag for categorizing the experience.
        metric: Optional metric for the experience. Defaults to a score of 10.
    """

    exp = Experience(
        req=req,
        resp=resp,
        entry_type=EntryType.MANUAL,
        tag=tag,
        metric=metric or Metric(score=Score(val=10, reason="Manual")),
    )
    exp_manager = get_exp_manager()
    exp_manager.is_writable = True

    exp_manager.create_exp(exp)
    logger.info(f"New experience created for the request `{req[:10]}`.")


async def add_exps(exps: list, tag: str):
    """Adds multiple experiences to the experience pool.

    Args:
        exps: A list of experience dictionaries.
        tag: A tag for categorizing the experiences.
    """
    tasks = [
        add_exp(req=exp["req"] if isinstance(exp["req"], str) else json.dumps(exp["req"]), resp=exp["resp"], tag=tag)
        for exp in exps
    ]
    await asyncio.gather(*tasks)


async def add_exps_from_file(tag: str, filepath: Path):
    """Loads experiences from a file and adds them to the experience pool.

    Args:
        tag: A tag for categorizing the experiences.
        filepath: Path to the file containing experiences.
    """

    exps = await load_file(filepath)
    await add_exps(exps, tag)


def query_exps_count():
    """Queries and logs the total count of experiences in the pool."""
    exp_manager = get_exp_manager()
    count = exp_manager.get_exps_count()
    logger.info(f"Experiences Count: {count}")


async def main():
    await add_exps_from_file("TeamLeader.llm_cached_aask", EXAMPLE_DATA_PATH / "exp_pool/team_leader_exps.json")
    await add_exps_from_file("Engineer2.llm_cached_aask", EXAMPLE_DATA_PATH / "exp_pool/engineer_exps.json")
    query_exps_count()


if __name__ == "__main__":
    asyncio.run(main())
85  examples/exp_pool/load_exps_from_log.py  Normal file
@@ -0,0 +1,85 @@
"""Load and save experiences from the log file."""

import json
from pathlib import Path

from metagpt.exp_pool import get_exp_manager
from metagpt.exp_pool.schema import LOG_NEW_EXPERIENCE_PREFIX, Experience
from metagpt.logs import logger


def load_exps(log_file_path: str) -> list[Experience]:
    """Loads experiences from a log file.

    Args:
        log_file_path (str): The path to the log file.

    Returns:
        list[Experience]: A list of Experience objects loaded from the log file.
    """

    if not Path(log_file_path).exists():
        logger.warning(f"`load_exps` called with a non-existent log file path: {log_file_path}")
        return

    exps = []
    with open(log_file_path, "r") as log_file:
        for line in log_file:
            if LOG_NEW_EXPERIENCE_PREFIX in line:
                json_str = line.split(LOG_NEW_EXPERIENCE_PREFIX, 1)[1].strip()
                exp_data = json.loads(json_str)

                exp = Experience(**exp_data)
                exps.append(exp)

    logger.info(f"Loaded {len(exps)} experiences from log file: {log_file_path}")

    return exps


def save_exps(exps: list[Experience]):
    """Saves a list of experiences to the experience pool.

    Args:
        exps (list[Experience]): The list of experiences to save.
    """

    if not exps:
        logger.warning("`save_exps` called with an empty list of experiences.")
        return

    manager = get_exp_manager()
    manager.is_writable = True

    manager.create_exps(exps)
    logger.info(f"Saved {len(exps)} experiences.")


def get_log_file_path() -> str:
    """Retrieves the path to the log file.

    Returns:
        str: The path to the log file.

    Raises:
        ValueError: If the log file path cannot be found.
    """

    handlers = logger._core.handlers

    for handler in handlers.values():
        if "log" in handler._name:
            return handler._name[1:-1]

    raise ValueError("Log file not found")


def main():
    log_file_path = get_log_file_path()

    exps = load_exps(log_file_path)
    save_exps(exps)


if __name__ == "__main__":
    main()
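The parsing loop in `load_exps` follows a simple pattern: scan each log line for a marker, and treat everything after the first occurrence of the marker as a JSON payload. A self-contained sketch of that pattern (the prefix value here is made up; the real one is `LOG_NEW_EXPERIENCE_PREFIX` from `metagpt.exp_pool.schema`):

```python
# Minimal, standalone version of the marker-then-JSON log parsing pattern.
# PREFIX is a hypothetical stand-in for LOG_NEW_EXPERIENCE_PREFIX.
import json

PREFIX = "NEW_EXPERIENCE: "


def parse_exp_lines(lines):
    exps = []
    for line in lines:
        if PREFIX in line:
            # Split once: everything after the first marker is the JSON body.
            json_str = line.split(PREFIX, 1)[1].strip()
            exps.append(json.loads(json_str))
    return exps


log = [
    "2024-06-13 10:00:00 | INFO | unrelated line",
    '2024-06-13 10:00:01 | INFO | NEW_EXPERIENCE: {"req": "Simple req", "resp": "Simple resp"}',
]
assert parse_exp_lines(log) == [{"req": "Simple req", "resp": "Simple resp"}]
```

Splitting with `maxsplit=1` matters: a JSON payload that happened to contain the marker text would otherwise be truncated.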
31  examples/exp_pool/manager.py  Normal file
@@ -0,0 +1,31 @@
"""
Demonstrate the creation and querying of experiences.

This script creates a new experience, logs its creation, and then queries for experiences matching the same request.
"""

import asyncio

from metagpt.exp_pool import get_exp_manager
from metagpt.exp_pool.schema import EntryType, Experience
from metagpt.logs import logger


async def main():
    # Define the simple request and response
    req = "Simple req"
    resp = "Simple resp"

    # Add the new experience
    exp = Experience(req=req, resp=resp, entry_type=EntryType.MANUAL)
    exp_manager = get_exp_manager()
    exp_manager.create_exp(exp)
    logger.info(f"New experience created for the request `{req}`.")

    # Query for experiences matching the request
    exps = await exp_manager.query_exps(req)
    logger.info(f"Got experiences: {exps}")


if __name__ == "__main__":
    asyncio.run(main())
44  examples/exp_pool/scorer.py  Normal file
@@ -0,0 +1,44 @@
import asyncio

from metagpt.exp_pool.scorers import SimpleScorer

# Request to implement quicksort in Python
REQ = "Write a program to implement quicksort in python."

# First response: Quicksort implementation without base case
RESP1 = """
def quicksort(arr):
    return quicksort([x for x in arr[1:] if x <= arr[0]]) + [arr[0]] + quicksort([x for x in arr[1:] if x > arr[0]])
"""

# Second response: Quicksort implementation with base case
RESP2 = """
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    return quicksort([x for x in arr[1:] if x <= arr[0]]) + [arr[0]] + quicksort([x for x in arr[1:] if x > arr[0]])
"""


async def simple():
    """Evaluates two quicksort implementations using SimpleScorer.

    Example:
        {
            "val": 3,
            "reason": "The response attempts to implement quicksort but contains a critical flaw: it lacks a base case to terminate the recursion, which will lead to a maximum recursion depth exceeded error for non-empty lists. Additionally, the function does not handle empty lists properly. A correct implementation should include a base case to handle lists of length 0 or 1."
        }
    """

    scorer = SimpleScorer()

    await scorer.evaluate(req=REQ, resp=RESP1)
    await scorer.evaluate(req=REQ, resp=RESP2)


async def main():
    await simple()


if __name__ == "__main__":
    asyncio.run(main())
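The two candidate responses differ only in the base case, and that difference is exactly what the scorer penalizes: without it, the recursion never bottoms out. A standalone check, independent of MetaGPT (the function names below are ours, not part of the example), confirms this:

```python
# RESP1's quicksort, renamed: no base case, so quicksort([]) calls itself forever.
def quicksort_no_base_case(arr):
    return (
        quicksort_no_base_case([x for x in arr[1:] if x <= arr[0]])
        + [arr[0]]
        + quicksort_no_base_case([x for x in arr[1:] if x > arr[0]])
    )


# RESP2's quicksort, renamed: terminates once the slice has 0 or 1 elements.
def quicksort_with_base_case(arr):
    if len(arr) <= 1:
        return arr
    return (
        quicksort_with_base_case([x for x in arr[1:] if x <= arr[0]])
        + [arr[0]]
        + quicksort_with_base_case([x for x in arr[1:] if x > arr[0]])
    )


assert quicksort_with_base_case([3, 1, 2, 2]) == [1, 2, 2, 3]

# The no-base-case version recurses down to the empty list and then keeps
# calling itself with [], exceeding the recursion limit.
raised = False
try:
    quicksort_no_base_case([3, 1, 2])
except RecursionError:
    raised = True
assert raised
```

This matches the scorer's stated reason for the low score on the first response.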
125  examples/mgx_write_project_framework.py  Normal file
@@ -0,0 +1,125 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : write_project_framework.py
@Desc    : The implementation of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
import asyncio
import json
import uuid
from json import JSONDecodeError
from pathlib import Path
from typing import Dict, List

import typer
from pydantic import BaseModel

from metagpt.config2 import Config
from metagpt.const import DEFAULT_WORKSPACE_ROOT
from metagpt.context import Context
from metagpt.environment import Environment
from metagpt.environment.mgx.mgx_env import MGXEnv
from metagpt.logs import logger
from metagpt.roles import Architect
from metagpt.roles.di.team_leader import TeamLeader
from metagpt.schema import AIMessage, UserMessage
from metagpt.strategy.experience_retriever import TRDToolExpRetriever
from metagpt.utils.common import aread

app = typer.Typer(add_completion=False)


class EnvBuilder(BaseModel):
    context: Context
    user_requirements: List[str]
    actors: Dict[str, str]
    technical_constraint: str
    output_dir: Path

    def build(self) -> Environment:
        env = MGXEnv(context=self.context)
        team_leader = TeamLeader()
        architect = Architect(experience_retriever=TRDToolExpRetriever())

        # Prepare context
        use_case_actors = "".join([f"- {v}: {k}\n" for k, v in self.actors.items()])
        msg = """
The content of "Actor, System, External System" provides an explanation of actors and systems that appear in UML Use Case diagram.
## Actor, System, External System
{use_case_actors}
"""
        architect.rc.memory.add(AIMessage(content=msg.format(use_case_actors=use_case_actors)))

        # Prepare technical requirements
        msg = """
"Additional Technical Requirements" specifies the additional technical requirements that the generated software framework code must meet.
## Additional Technical Requirements
{technical_requirements}
"""
        architect.rc.memory.add(AIMessage(content=msg.format(technical_requirements=self.technical_constraint)))

        env.add_roles([team_leader, architect])
        return env


async def develop(
    context: Context,
    user_requirement_filename: str,
    actors_filename: str,
    constraint_filename: str,
    output_dir: str,
):
    output_dir = Path(output_dir) if output_dir else DEFAULT_WORKSPACE_ROOT / uuid.uuid4().hex

    v = await aread(filename=user_requirement_filename)
    try:
        user_requirements = json.loads(v)
    except JSONDecodeError:
        user_requirements = [v]
    v = await aread(filename=actors_filename)
    actors = json.loads(v)
    technical_constraint = await aread(filename=constraint_filename)
    env_builder = EnvBuilder(
        context=context,
        user_requirements=user_requirements,
        actors=actors,
        technical_constraint=technical_constraint,
        output_dir=output_dir,
    )
    env = env_builder.build()
    msg = """
Given the user requirement of "User Requirements", write out the software framework.
## User Requirements
{user_requirements}
"""
    env.publish_message(
        UserMessage(content=msg.format(user_requirements="\n".join(user_requirements)), send_to="Bob"),
        user_defined_recipient="Bob",
    )

    while not env.is_idle:
        await env.run()


@app.command()
def startup(
    user_requirement_filename: str = typer.Argument(..., help="The filename of the user requirements."),
    actors_filename: str = typer.Argument(..., help="The filename of UML use case actors description."),
    llm_config: str = typer.Option(default="", help="Low-cost LLM config"),
    constraint_filename: str = typer.Option(default="", help="What technical dependency constraints are."),
    output_dir: str = typer.Option(default="", help="Output directory."),
):
    if llm_config and Path(llm_config).exists():
        config = Config.from_yaml_file(Path(llm_config))
    else:
        logger.info("GPT 4 turbo is recommended")
        config = Config.default()
    ctx = Context(config=config)

    asyncio.run(develop(ctx, user_requirement_filename, actors_filename, constraint_filename, output_dir))


if __name__ == "__main__":
    app()
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 """RAG benchmark pipeline"""

 import asyncio
@@ -1,72 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import asyncio
import shutil
from pathlib import Path

import typer

from metagpt.actions.rebuild_class_view import RebuildClassView
from metagpt.actions.rebuild_sequence_view import RebuildSequenceView
from metagpt.context import Context
from metagpt.llm import LLM
from metagpt.logs import logger
from metagpt.utils.git_repository import GitRepository
from metagpt.utils.project_repo import ProjectRepo

app = typer.Typer(add_completion=False, pretty_exceptions_show_locals=False)


@app.command("", help="Python project reverse engineering.")
def startup(
    project_root: str = typer.Argument(
        default="",
        help="Specify the root directory of the existing project for reverse engineering.",
    ),
    output_dir: str = typer.Option(default="", help="Specify the output directory path for reverse engineering."),
):
    package_root = Path(project_root)
    if not package_root.exists():
        raise FileNotFoundError(f"{project_root} not exists")
    if not _is_python_package_root(package_root):
        raise FileNotFoundError(f'There are no "*.py" files under "{project_root}".')
    init_file = package_root / "__init__.py"  # used by pyreverse
    init_file_exists = init_file.exists()
    if not init_file_exists:
        init_file.touch()

    if not output_dir:
        output_dir = package_root / "../reverse_engineering_output"
    logger.info(f"output dir:{output_dir}")
    try:
        asyncio.run(reverse_engineering(package_root, Path(output_dir)))
    finally:
        if not init_file_exists:
            init_file.unlink(missing_ok=True)
        tmp_dir = package_root / "__dot__"
        if tmp_dir.exists():
            shutil.rmtree(tmp_dir, ignore_errors=True)


def _is_python_package_root(package_root: Path) -> bool:
    for file_path in package_root.iterdir():
        if file_path.is_file():
            if file_path.suffix == ".py":
                return True
    return False


async def reverse_engineering(package_root: Path, output_dir: Path):
    ctx = Context()
    ctx.git_repo = GitRepository(output_dir)
    ctx.repo = ProjectRepo(ctx.git_repo)
    action = RebuildClassView(name="ReverseEngineering", i_context=str(package_root), llm=LLM(), context=ctx)
    await action.run()

    action = RebuildSequenceView(name="ReverseEngineering", llm=LLM(), context=ctx)
    await action.run()


if __name__ == "__main__":
    app()
27  examples/search_enhanced_qa.py  Normal file
@@ -0,0 +1,27 @@
"""
This script demonstrates how to use the SearchEnhancedQA action to answer questions
by leveraging web search results. It showcases a simple example of querying about
the current weather in Beijing.

The SearchEnhancedQA action combines web search capabilities with natural language
processing to provide informative answers to user queries.
"""

import asyncio

from metagpt.actions.search_enhanced_qa import SearchEnhancedQA


async def main():
    """Runs a sample query through SearchEnhancedQA and prints the result."""

    action = SearchEnhancedQA()

    query = "What is the weather like in Beijing today?"
    answer = await action.run(query)

    print(f"The answer to '{query}' is:\n\n{answer}")


if __name__ == "__main__":
    asyncio.run(main())
25  examples/serialize_model.py  Normal file
@@ -0,0 +1,25 @@
from metagpt.environment.mgx.mgx_env import MGXEnv
from metagpt.logs import logger


def main():
    """Demonstrates serialization and deserialization using SerializationMixin.

    This example creates an instance of MGXEnv, serializes it to a file,
    and then deserializes it back to an instance.

    If executed correctly, the following log messages will be output:
        MGXEnv serialization successful. File saved at: /.../workspace/storage/MGXEnv.json
        MGXEnv deserialization successful. Instance created from file: /.../workspace/storage/MGXEnv.json
        The instance is MGXEnv()
    """

    env = MGXEnv()
    env.serialize()

    env: MGXEnv = MGXEnv.deserialize()
    logger.info(f"The instance is {repr(env)}")


if __name__ == "__main__":
    main()
@@ -1,3 +1,5 @@
+from pathlib import Path
+
 import chainlit as cl
 from init_setup import ChainlitEnv

@@ -67,8 +69,8 @@ async def startup(message: cl.Message) -> None:

     await company.run(n_round=5)

-    workdir = company.env.context.git_repo.workdir
-    files = company.env.context.git_repo.get_files(workdir)
+    workdir = Path(company.env.context.config.project_path)
+    files = [file.name for file in workdir.iterdir() if file.is_file()]
     files = "\n".join([f"{workdir}/{file}" for file in files if not file.startswith(".git")])

     await cl.Message(
@@ -5,15 +5,22 @@ Author: garylin2099
 """
 import asyncio

-from metagpt.context import Context
+from metagpt.environment.mgx.mgx_env import MGXEnv
 from metagpt.logs import logger
+from metagpt.roles.di.team_leader import TeamLeader
 from metagpt.roles.product_manager import ProductManager
+from metagpt.schema import Message


 async def main():
     msg = "Write a PRD for a snake game"
-    context = Context()  # Used to share repo path information between multiple actions within the role.
-    role = ProductManager(context=context)
+    env = MGXEnv()
+    env.add_roles([TeamLeader(), ProductManager()])
+    env.publish_message(Message(content=msg, role="user"))
+    tl = env.get_role("Mike")
+    await tl.run()
+
+    role = env.get_role("Alice")
     result = await role.run(msg)
     logger.info(result.content[:100])
24  examples/write_design.py  Normal file
@@ -0,0 +1,24 @@
import asyncio

from metagpt.environment.mgx.mgx_env import MGXEnv
from metagpt.logs import logger
from metagpt.roles.architect import Architect
from metagpt.roles.di.team_leader import TeamLeader
from metagpt.schema import Message


async def main():
    msg = "Write a TRD for a snake game"
    env = MGXEnv()
    env.add_roles([TeamLeader(), Architect()])
    env.publish_message(Message(content=msg, role="user"))
    tl = env.get_role("Mike")
    await tl.run()

    role = env.get_role("Bob")
    result = await role.run(msg)
    logger.info(result)


if __name__ == "__main__":
    asyncio.run(main())
42  examples/write_game_code.py  Normal file
@@ -0,0 +1,42 @@
import asyncio
import time

from metagpt.environment.mgx.mgx_env import MGXEnv
from metagpt.roles.di.engineer2 import Engineer2
from metagpt.roles.di.team_leader import TeamLeader
from metagpt.schema import Message


async def main(requirement="", user_defined_recipient="", enable_human_input=False, allow_idle_time=30):
    env = MGXEnv()
    env.add_roles([TeamLeader(), Engineer2()])

    msg = Message(content=requirement)
    env.attach_images(msg)  # attach image content if applicable

    if user_defined_recipient:
        msg.send_to = {user_defined_recipient}
        env.publish_message(msg, user_defined_recipient=user_defined_recipient)
    else:
        env.publish_message(msg)

    allow_idle_time = allow_idle_time if enable_human_input else 1
    start_time = time.time()
    while time.time() - start_time < allow_idle_time:
        if not env.is_idle:
            await env.run()
            start_time = time.time()  # reset start time


if __name__ == "__main__":
    requirement = "Write code for a 2048 game"
    user_defined_recipient = ""

    asyncio.run(
        main(
            requirement=requirement,
            user_defined_recipient=user_defined_recipient,
            enable_human_input=False,
            allow_idle_time=60,
        )
    )
@@ -50,9 +50,9 @@ async def generate_novel():
         "Fill the empty nodes with your own ideas. Be creative! Use your own words!"
         "I will tip you $100,000 if you write a good novel."
     )
-    novel_node = await ActionNode.from_pydantic(Novel).fill(context=instruction, llm=LLM())
+    novel_node = await ActionNode.from_pydantic(Novel).fill(req=instruction, llm=LLM())
     chap_node = await ActionNode.from_pydantic(Chapters).fill(
-        context=f"### instruction\n{instruction}\n### novel\n{novel_node.content}", llm=LLM()
+        req=f"### instruction\n{instruction}\n### novel\n{novel_node.content}", llm=LLM()
    )
     print(chap_node.instruct_content)
@@ -24,7 +24,6 @@ from metagpt.schema import (
     SerializationMixin,
     TestingContext,
 )
-from metagpt.utils.project_repo import ProjectRepo


 class Action(SerializationMixin, ContextMixin, BaseModel):

@@ -51,12 +50,6 @@ class Action(SerializationMixin, ContextMixin, BaseModel):
         data.llm = llm
         return data

-    @property
-    def repo(self) -> ProjectRepo:
-        if not self.context.repo:
-            self.context.repo = ProjectRepo(self.context.git_repo)
-        return self.context.repo
-
     @property
     def prompt_schema(self):
         return self.config.prompt_schema

@@ -112,10 +105,15 @@ class Action(SerializationMixin, ContextMixin, BaseModel):
             msgs = args[0]
             context = "## History Messages\n"
             context += "\n".join([f"{idx}: {i}" for idx, i in enumerate(reversed(msgs))])
-        return await self.node.fill(context=context, llm=self.llm)
+        return await self.node.fill(req=context, llm=self.llm)

     async def run(self, *args, **kwargs):
         """Run action"""
         if self.node:
             return await self._run_action_node(*args, **kwargs)
         raise NotImplementedError("The run method should be implemented in a subclass.")
+
+    def override_context(self):
+        """Set `private_context` and `context` to the same `Context` object."""
+        if not self.private_context:
+            self.private_context = self.context
@@ -18,7 +18,9 @@ from pydantic import BaseModel, Field, create_model, model_validator
 from tenacity import retry, stop_after_attempt, wait_random_exponential

 from metagpt.actions.action_outcls_registry import register_action_outcls
-from metagpt.const import USE_CONFIG_TIMEOUT
+from metagpt.const import MARKDOWN_TITLE_PREFIX, USE_CONFIG_TIMEOUT
+from metagpt.exp_pool import exp_cache
+from metagpt.exp_pool.serializers import ActionNodeSerializer
 from metagpt.llm import BaseLLM
 from metagpt.logs import logger
 from metagpt.provider.postprocess.llm_output_postprocess import llm_output_postprocess

@@ -123,7 +125,7 @@ Follow format example's {prompt_schema} format, generate output and make sure it
 """


-def dict_to_markdown(d, prefix="- ", kv_sep="\n", postfix="\n"):
+def dict_to_markdown(d, prefix=MARKDOWN_TITLE_PREFIX, kv_sep="\n", postfix="\n"):
     markdown_str = ""
     for key, value in d.items():
         markdown_str += f"{prefix}{key}{kv_sep}{value}{postfix}"

@@ -591,9 +593,11 @@ class ActionNode:

         return extracted_data

+    @exp_cache(serializer=ActionNodeSerializer())
     async def fill(
         self,
-        context,
+        *,
+        req,
         llm,
         schema="json",
         mode="auto",

@@ -605,7 +609,7 @@ class ActionNode:
     ):
         """Fill the node(s) with mode.

-        :param context: Everything we should know when filling node.
+        :param req: Everything we should know when filling node.
         :param llm: Large Language Model with pre-defined system message.
         :param schema: json/markdown, determine example and output format.
             - raw: free form text

@@ -624,7 +628,7 @@ class ActionNode:
         :return: self
         """
         self.set_llm(llm)
-        self.set_context(context)
+        self.set_context(req)
         if self.schema:
             schema = self.schema
metagpt/actions/analyze_requirements.py (new file, +76)
@@ -0,0 +1,76 @@
+from metagpt.actions import Action
+
+ANALYZE_REQUIREMENTS = """
+# Example
+{examples}
+
+----------------
+
+# Requirements
+{requirements}
+
+# Instructions
+{instructions}
+
+# Output Format
+{output_format}
+
+Follow the instructions and output format. Do not include any additional content.
+"""
+
+EXAMPLES = """
+Example 1
+Requirements:
+创建一个贪吃蛇,只需要给出设计文档和代码
+Outputs:
+[User Restrictions] : 只需要给出设计文档和代码.
+[Language Restrictions] : The response, message and instruction must be in Chinese.
+[Programming Language] : HTML (*.html), CSS (*.css), and JavaScript (*.js)
+
+Example 2
+Requirements:
+Create 2048 game using Python. Do not write PRD.
+Outputs:
+[User Restrictions] : Do not write PRD.
+[Language Restrictions] : The response, message and instruction must be in English.
+[Programming Language] : Python
+
+Example 3
+Requirements:
+You must ignore create PRD and TRD. Help me write a schedule display program for the Paris Olympics.
+Outputs:
+[User Restrictions] : You must ignore create PRD and TRD.
+[Language Restrictions] : The response, message and instruction must be in English.
+[Programming Language] : HTML (*.html), CSS (*.css), and JavaScript (*.js)
+"""
+
+INSTRUCTIONS = """
+You must respond in the same language as the Requirements.
+First, determine the natural language you must respond in; it should be consistent with the language used in the requirement description. If the requirements specify a particular language, follow that instruction. The default response language is English.
+Second, extract the restrictions in the requirements, specifically the steps. Do not include detailed demand descriptions; focus only on the restrictions.
+Third, if the requirements describe software development, extract the programming language. If no specific programming language is required, use HTML (*.html), CSS (*.css), and JavaScript (*.js).
+
+Note:
+1. If there are no restrictions, requirements_restrictions must be "".
+2. If the requirements do not describe software development, the programming language must be "".
+"""
+
+OUTPUT_FORMAT = """
+[User Restrictions] : the restrictions in the requirements
+[Language Restrictions] : The response, message and instruction must be in {{language}}
+[Programming Language] : Your program must use ...
+"""
+
+
+class AnalyzeRequirementsRestrictions(Action):
+    """Analyze the restrictions and required response language in the given requirements."""
+
+    name: str = "AnalyzeRequirementsRestrictions"
+
+    async def run(self, requirements, instructions=INSTRUCTIONS, output_format=OUTPUT_FORMAT):
+        """Analyze the constraints and the language used in the requirements."""
+        prompt = ANALYZE_REQUIREMENTS.format(
+            examples=EXAMPLES, requirements=requirements, instructions=instructions, output_format=output_format
+        )
+        rsp = await self.llm.aask(prompt)
+        return rsp
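`AnalyzeRequirementsRestrictions.run` is a single format-prompt-and-ask step, so the formatting half can be exercised without an LLM. A minimal sketch (the template is copied from the new file above; the requirement string and the elided placeholder values are illustrative):

```python
# Minimal sketch: format the analysis prompt exactly as run() does, without calling an LLM.
ANALYZE_REQUIREMENTS = """
# Example
{examples}

----------------

# Requirements
{requirements}

# Instructions
{instructions}

# Output Format
{output_format}

Follow the instructions and output format. Do not include any additional content.
"""

prompt = ANALYZE_REQUIREMENTS.format(
    examples="(examples omitted)",
    requirements="Create 2048 game using Python. Do not write PRD.",
    instructions="(instructions omitted)",
    output_format="(output format omitted)",
)
print(prompt)
```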
@@ -9,13 +9,15 @@
 2. According to Section 2.2.3.1 of RFC 135, replace file data in the message with the file name.
 """
 import re
+from typing import Optional
 
-from pydantic import Field
+from pydantic import BaseModel, Field
 
 from metagpt.actions.action import Action
 from metagpt.logs import logger
 from metagpt.schema import RunCodeContext, RunCodeResult
 from metagpt.utils.common import CodeParser
+from metagpt.utils.project_repo import ProjectRepo
 
 PROMPT_TEMPLATE = """
 NOTICE
@@ -47,6 +49,8 @@ Now you should start rewriting the code:
 
 class DebugError(Action):
     i_context: RunCodeContext = Field(default_factory=RunCodeContext)
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)
 
     async def run(self, *args, **kwargs) -> str:
         output_doc = await self.repo.test_outputs.get(filename=self.i_context.output_filename)
@@ -59,9 +63,7 @@ class DebugError(Action):
             return ""
 
         logger.info(f"Debug and rewrite {self.i_context.test_filename}")
-        code_doc = await self.repo.with_src_path(self.context.src_workspace).srcs.get(
-            filename=self.i_context.code_filename
-        )
+        code_doc = await self.repo.srcs.get(filename=self.i_context.code_filename)
         if not code_doc:
             return ""
         test_doc = await self.repo.tests.get(filename=self.i_context.test_filename)
@@ -70,6 +72,6 @@ class DebugError(Action):
         prompt = PROMPT_TEMPLATE.format(code=code_doc.content, test_code=test_doc.content, logs=output_detail.stderr)
 
         rsp = await self._aask(prompt)
-        code = CodeParser.parse_code(block="", text=rsp)
+        code = CodeParser.parse_code(text=rsp)
 
         return code
@@ -8,12 +8,15 @@
 1. According to Section 2.2.3.1 of RFC 135, replace file data in the message with the file name.
 2. According to the design in Section 2.2.3.5.3 of RFC 135, add incremental iteration functionality.
 @Modified By: mashenquan, 2023/12/5. Move the generation logic of the project name to WritePRD.
+@Modified By: mashenquan, 2024/5/31. Implement Chapter 3 of RFC 236.
 """
 import json
 from pathlib import Path
-from typing import Optional
+from typing import List, Optional, Union
 
-from metagpt.actions import Action, ActionOutput
+from pydantic import BaseModel, Field
+
+from metagpt.actions import Action
 from metagpt.actions.design_api_an import (
     DATA_STRUCTURES_AND_INTERFACES,
     DESIGN_API_NODE,
@@ -24,8 +27,18 @@ from metagpt.actions.design_api_an import (
 )
 from metagpt.const import DATA_API_DESIGN_FILE_REPO, SEQ_FLOW_FILE_REPO
 from metagpt.logs import logger
-from metagpt.schema import Document, Documents, Message
+from metagpt.schema import AIMessage, Document, Documents, Message
+from metagpt.tools.tool_registry import register_tool
+from metagpt.utils.common import (
+    aread,
+    awrite,
+    rectify_pathname,
+    save_json_to_markdown,
+    to_markdown_code_block,
+)
 from metagpt.utils.mermaid import mermaid_to_file
 from metagpt.utils.project_repo import ProjectRepo
+from metagpt.utils.report import DocsReporter, GalleryReporter
 
 NEW_REQ_TEMPLATE = """
 ### Legacy Content
@@ -36,6 +49,7 @@ NEW_REQ_TEMPLATE = """
 """
 
 
+@register_tool(include_functions=["run"])
 class WriteDesign(Action):
     name: str = ""
     i_context: Optional[str] = None
@@ -44,21 +58,98 @@ class WriteDesign(Action):
         "data structures, library tables, processes, and paths. Please provide your design, feedback "
         "clearly and in detail."
     )
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)
 
-    async def run(self, with_messages: Message, schema: str = None):
-        # Use `git status` to identify which PRD documents have been modified in the `docs/prd` directory.
-        changed_prds = self.repo.docs.prd.changed_files
-        # Use `git status` to identify which design documents in the `docs/system_designs` directory have undergone
-        # changes.
-        changed_system_designs = self.repo.docs.system_design.changed_files
+    async def run(
+        self,
+        with_messages: List[Message] = None,
+        *,
+        user_requirement: str = "",
+        prd_filename: str = "",
+        legacy_design_filename: str = "",
+        extra_info: str = "",
+        output_pathname: str = "",
+        **kwargs,
+    ) -> Union[AIMessage, str]:
+        """
+        Write a system design.
+
+        Args:
+            user_requirement (str): The user's requirements for the system design.
+            prd_filename (str, optional): The filename of the Product Requirement Document (PRD).
+            legacy_design_filename (str, optional): The filename of the legacy design document.
+            extra_info (str, optional): Additional information to be included in the system design.
+            output_pathname (str, optional): The output file path of the document.
+
+        Returns:
+            str: The file path of the generated system design.
+
+        Example:
+            # Write a new system design and save to the path name.
+            >>> user_requirement = "Write system design for a snake game"
+            >>> extra_info = "Your extra information"
+            >>> output_pathname = "snake_game/docs/system_design.json"
+            >>> action = WriteDesign()
+            >>> result = await action.run(user_requirement=user_requirement, extra_info=extra_info, output_pathname=output_pathname)
+            >>> print(result)
+            System Design filename: "/absolute/path/to/snake_game/docs/system_design.json"
+
+            # Rewrite an existing system design and save to the path name.
+            >>> user_requirement = "Write system design for a snake game, include new features such as a web UI"
+            >>> extra_info = "Your extra information"
+            >>> legacy_design_filename = "/absolute/path/to/snake_game/docs/system_design.json"
+            >>> output_pathname = "/absolute/path/to/snake_game/docs/system_design_new.json"
+            >>> action = WriteDesign()
+            >>> result = await action.run(user_requirement=user_requirement, extra_info=extra_info, legacy_design_filename=legacy_design_filename, output_pathname=output_pathname)
+            >>> print(result)
+            System Design filename: "/absolute/path/to/snake_game/docs/system_design_new.json"
+
+            # Write a new system design with the given PRD (Product Requirement Document) and save to the path name.
+            >>> user_requirement = "Write system design for a snake game based on the PRD at /absolute/path/to/snake_game/docs/prd.json"
+            >>> extra_info = "Your extra information"
+            >>> prd_filename = "/absolute/path/to/snake_game/docs/prd.json"
+            >>> output_pathname = "/absolute/path/to/snake_game/docs/system_design.json"
+            >>> action = WriteDesign()
+            >>> result = await action.run(user_requirement=user_requirement, extra_info=extra_info, prd_filename=prd_filename, output_pathname=output_pathname)
+            >>> print(result)
+            System Design filename: "/absolute/path/to/snake_game/docs/system_design.json"
+
+            # Rewrite an existing system design with the given PRD (Product Requirement Document) and save to the path name.
+            >>> user_requirement = "Write system design for a snake game, include new features such as a web UI"
+            >>> extra_info = "Your extra information"
+            >>> prd_filename = "/absolute/path/to/snake_game/docs/prd.json"
+            >>> legacy_design_filename = "/absolute/path/to/snake_game/docs/system_design.json"
+            >>> output_pathname = "/absolute/path/to/snake_game/docs/system_design_new.json"
+            >>> action = WriteDesign()
+            >>> result = await action.run(user_requirement=user_requirement, extra_info=extra_info, prd_filename=prd_filename, legacy_design_filename=legacy_design_filename, output_pathname=output_pathname)
+            >>> print(result)
+            System Design filename: "/absolute/path/to/snake_game/docs/system_design_new.json"
+        """
+        if not with_messages:
+            return await self._execute_api(
+                user_requirement=user_requirement,
+                prd_filename=prd_filename,
+                legacy_design_filename=legacy_design_filename,
+                extra_info=extra_info,
+                output_pathname=output_pathname,
+            )
+
+        self.input_args = with_messages[-1].instruct_content
+        self.repo = ProjectRepo(self.input_args.project_path)
+        changed_prds = self.input_args.changed_prd_filenames
+        changed_system_designs = [
+            str(self.repo.docs.system_design.workdir / i)
+            for i in list(self.repo.docs.system_design.changed_files.keys())
+        ]
 
         # For those PRDs and design documents that have undergone changes, regenerate the design content.
         changed_files = Documents()
-        for filename in changed_prds.keys():
+        for filename in changed_prds:
             doc = await self._update_system_design(filename=filename)
             changed_files.docs[filename] = doc
 
-        for filename in changed_system_designs.keys():
+        for filename in changed_system_designs:
             if filename in changed_files.docs:
                 continue
             doc = await self._update_system_design(filename=filename)
@@ -67,54 +158,122 @@ class WriteDesign(Action):
             logger.info("Nothing has changed.")
         # Wait until all files under `docs/system_designs/` are processed before sending the publish message,
         # leaving room for global optimization in subsequent steps.
-        return ActionOutput(content=changed_files.model_dump_json(), instruct_content=changed_files)
+        kvs = self.input_args.model_dump()
+        kvs["changed_system_design_filenames"] = [
+            str(self.repo.docs.system_design.workdir / i)
+            for i in list(self.repo.docs.system_design.changed_files.keys())
+        ]
+        return AIMessage(
+            content="Designing is complete. "
+            + "\n".join(
+                list(self.repo.docs.system_design.changed_files.keys())
+                + list(self.repo.resources.data_api_design.changed_files.keys())
+                + list(self.repo.resources.seq_flow.changed_files.keys())
+            ),
+            instruct_content=AIMessage.create_instruct_value(kvs=kvs, class_name="WriteDesignOutput"),
+            cause_by=self,
+        )
 
     async def _new_system_design(self, context):
-        node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)
+        node = await DESIGN_API_NODE.fill(req=context, llm=self.llm, schema=self.prompt_schema)
         return node
 
     async def _merge(self, prd_doc, system_design_doc):
         context = NEW_REQ_TEMPLATE.format(old_design=system_design_doc.content, context=prd_doc.content)
-        node = await REFINED_DESIGN_NODE.fill(context=context, llm=self.llm)
+        node = await REFINED_DESIGN_NODE.fill(req=context, llm=self.llm, schema=self.prompt_schema)
         system_design_doc.content = node.instruct_content.model_dump_json()
         return system_design_doc
 
     async def _update_system_design(self, filename) -> Document:
-        prd = await self.repo.docs.prd.get(filename)
-        old_system_design_doc = await self.repo.docs.system_design.get(filename)
-        if not old_system_design_doc:
-            system_design = await self._new_system_design(context=prd.content)
-            doc = await self.repo.docs.system_design.save(
-                filename=filename,
-                content=system_design.instruct_content.model_dump_json(),
-                dependencies={prd.root_relative_path},
-            )
-        else:
-            doc = await self._merge(prd_doc=prd, system_design_doc=old_system_design_doc)
-            await self.repo.docs.system_design.save_doc(doc=doc, dependencies={prd.root_relative_path})
-        await self._save_data_api_design(doc)
-        await self._save_seq_flow(doc)
-        await self.repo.resources.system_design.save_pdf(doc=doc)
+        root_relative_path = Path(filename).relative_to(self.repo.workdir)
+        prd = await Document.load(filename=filename, project_path=self.repo.workdir)
+        old_system_design_doc = await self.repo.docs.system_design.get(root_relative_path.name)
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "design"}, "meta")
+            if not old_system_design_doc:
+                system_design = await self._new_system_design(context=prd.content)
+                doc = await self.repo.docs.system_design.save(
+                    filename=prd.filename,
+                    content=system_design.instruct_content.model_dump_json(),
+                    dependencies={prd.root_relative_path},
+                )
+            else:
+                doc = await self._merge(prd_doc=prd, system_design_doc=old_system_design_doc)
+                await self.repo.docs.system_design.save_doc(doc=doc, dependencies={prd.root_relative_path})
+            await self._save_data_api_design(doc)
+            await self._save_seq_flow(doc)
+            md = await self.repo.resources.system_design.save_pdf(doc=doc)
+            await reporter.async_report(self.repo.workdir / md.root_relative_path, "path")
         return doc
 
-    async def _save_data_api_design(self, design_doc):
+    async def _save_data_api_design(self, design_doc, output_filename: Path = None):
         m = json.loads(design_doc.content)
         data_api_design = m.get(DATA_STRUCTURES_AND_INTERFACES.key) or m.get(REFINED_DATA_STRUCTURES_AND_INTERFACES.key)
         if not data_api_design:
             return
-        pathname = self.repo.workdir / DATA_API_DESIGN_FILE_REPO / Path(design_doc.filename).with_suffix("")
+        pathname = output_filename or self.repo.workdir / DATA_API_DESIGN_FILE_REPO / Path(
+            design_doc.filename
+        ).with_suffix("")
         await self._save_mermaid_file(data_api_design, pathname)
         logger.info(f"Save class view to {str(pathname)}")
 
-    async def _save_seq_flow(self, design_doc):
+    async def _save_seq_flow(self, design_doc, output_filename: Path = None):
         m = json.loads(design_doc.content)
         seq_flow = m.get(PROGRAM_CALL_FLOW.key) or m.get(REFINED_PROGRAM_CALL_FLOW.key)
         if not seq_flow:
             return
-        pathname = self.repo.workdir / Path(SEQ_FLOW_FILE_REPO) / Path(design_doc.filename).with_suffix("")
+        pathname = output_filename or self.repo.workdir / Path(SEQ_FLOW_FILE_REPO) / Path(
+            design_doc.filename
+        ).with_suffix("")
         await self._save_mermaid_file(seq_flow, pathname)
         logger.info(f"Saving sequence flow to {str(pathname)}")
 
     async def _save_mermaid_file(self, data: str, pathname: Path):
         pathname.parent.mkdir(parents=True, exist_ok=True)
         await mermaid_to_file(self.config.mermaid.engine, data, pathname)
+        image_path = pathname.parent / f"{pathname.name}.svg"
+        if image_path.exists():
+            await GalleryReporter().async_report(image_path, "path")
+
+    async def _execute_api(
+        self,
+        user_requirement: str = "",
+        prd_filename: str = "",
+        legacy_design_filename: str = "",
+        extra_info: str = "",
+        output_pathname: str = "",
+    ) -> str:
+        prd_content = ""
+        if prd_filename:
+            prd_filename = rectify_pathname(path=prd_filename, default_filename="prd.json")
+            prd_content = await aread(filename=prd_filename)
+        context = "### User Requirements\n{user_requirement}\n### Extra_info\n{extra_info}\n### PRD\n{prd}\n".format(
+            user_requirement=to_markdown_code_block(user_requirement),
+            extra_info=to_markdown_code_block(extra_info),
+            prd=to_markdown_code_block(prd_content),
+        )
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "design"}, "meta")
+            if not legacy_design_filename:
+                node = await self._new_system_design(context=context)
+                design = Document(content=node.instruct_content.model_dump_json())
+            else:
+                old_design_content = await aread(filename=legacy_design_filename)
+                design = await self._merge(
+                    prd_doc=Document(content=context), system_design_doc=Document(content=old_design_content)
+                )
+
+            if not output_pathname:
+                output_pathname = Path(output_pathname) / "docs" / "system_design.json"
+            elif not Path(output_pathname).is_absolute():
+                output_pathname = self.config.workspace.path / output_pathname
+            output_pathname = rectify_pathname(path=output_pathname, default_filename="system_design.json")
+            await awrite(filename=output_pathname, data=design.content)
+            output_filename = output_pathname.parent / f"{output_pathname.stem}-class-diagram"
+            await self._save_data_api_design(design_doc=design, output_filename=output_filename)
+            output_filename = output_pathname.parent / f"{output_pathname.stem}-sequence-diagram"
+            await self._save_seq_flow(design_doc=design, output_filename=output_filename)
+            md_output_filename = output_pathname.with_suffix(".md")
+            await save_json_to_markdown(content=design.content, output_filename=md_output_filename)
+            await reporter.async_report(md_output_filename, "path")
+        return f'System Design filename: "{str(output_pathname)}". \n The System Design has been completed.'
@@ -13,7 +13,7 @@ from metagpt.utils.mermaid import MMC1, MMC2
 IMPLEMENTATION_APPROACH = ActionNode(
     key="Implementation approach",
     expected_type=str,
-    instruction="Analyze the difficult points of the requirements, select the appropriate open-source framework",
+    instruction="Analyze the difficult points of the requirements, select the appropriate open-source framework.",
     example="We will ...",
 )
@@ -33,8 +33,8 @@ PROJECT_NAME = ActionNode(
 FILE_LIST = ActionNode(
     key="File list",
     expected_type=List[str],
-    instruction="Only need relative paths. ALWAYS write a main.py or app.py here",
-    example=["main.py", "game.py"],
+    instruction="Only need relative paths. Succinctly designate the correct entry file for your project based on the programming language: use main.js for JavaScript, main.py for Python, and so on for other languages.",
+    example=["a.js", "b.py", "c.css", "d.html"],
 )
 
 REFINED_FILE_LIST = ActionNode(
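The new `File list` instruction asks the model to pick the entry file by language instead of always writing `main.py`. The convention it describes can be sketched as a lookup (the mapping and fallback below are illustrative only, not part of MetaGPT):

```python
# Illustrative language -> conventional entry file mapping, per the new instruction.
ENTRY_FILES = {"javascript": "main.js", "python": "main.py", "go": "main.go"}


def entry_file(language: str) -> str:
    # Fall back to main.py for languages not in the table (an arbitrary choice here).
    return ENTRY_FILES.get(language.lower(), "main.py")


print(entry_file("JavaScript"))
```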
@@ -3,7 +3,7 @@ from __future__ import annotations
 from typing import Tuple
 
 from metagpt.actions import Action
-from metagpt.logs import logger
+from metagpt.logs import get_human_input, logger
 from metagpt.schema import Message, Plan
@@ -50,7 +50,7 @@ class AskReview(Action):
             "Please type your review below:\n"
         )
 
-        rsp = input(prompt)
+        rsp = await get_human_input(prompt)
 
         if rsp.lower() in ReviewConst.EXIT_WORDS:
             exit()
@@ -13,9 +13,10 @@ from typing import Literal, Tuple
 
 import nbformat
 from nbclient import NotebookClient
-from nbclient.exceptions import CellTimeoutError, DeadKernelError
+from nbclient.exceptions import CellExecutionComplete, CellTimeoutError, DeadKernelError
+from nbclient.util import ensure_async
 from nbformat import NotebookNode
-from nbformat.v4 import new_code_cell, new_markdown_cell, new_output
+from nbformat.v4 import new_code_cell, new_markdown_cell, new_output, output_from_msg
 from rich.box import MINIMAL
 from rich.console import Console, Group
 from rich.live import Live
@@ -25,29 +26,79 @@ from rich.syntax import Syntax
 
 from metagpt.actions import Action
 from metagpt.logs import logger
+from metagpt.utils.report import NotebookReporter
 
+INSTALL_KEEPLEN = 500
+INI_CODE = """import warnings
+import logging
+
+root_logger = logging.getLogger()
+root_logger.setLevel(logging.ERROR)
+warnings.filterwarnings('ignore')"""
+
+
+class RealtimeOutputNotebookClient(NotebookClient):
+    """Realtime output of Notebook execution."""
+
+    def __init__(self, *args, notebook_reporter=None, **kwargs) -> None:
+        super().__init__(*args, **kwargs)
+        self.notebook_reporter = notebook_reporter or NotebookReporter()
+
+    async def _async_poll_output_msg(self, parent_msg_id: str, cell: NotebookNode, cell_index: int) -> None:
+        """Implement a feature to enable sending messages."""
+        assert self.kc is not None
+        while True:
+            msg = await ensure_async(self.kc.iopub_channel.get_msg(timeout=None))
+            await self._send_msg(msg)
+
+            if msg["parent_header"].get("msg_id") == parent_msg_id:
+                try:
+                    # Will raise CellExecutionComplete when completed
+                    self.process_message(msg, cell, cell_index)
+                except CellExecutionComplete:
+                    return
+
+    async def _send_msg(self, msg: dict):
+        msg_type = msg.get("header", {}).get("msg_type")
+        if msg_type not in ["stream", "error", "execute_result"]:
+            return
+
+        await self.notebook_reporter.async_report(output_from_msg(msg), "content")
+
+
 class ExecuteNbCode(Action):
     """execute notebook code block, return result to llm, and display it."""
 
     nb: NotebookNode
-    nb_client: NotebookClient
+    nb_client: RealtimeOutputNotebookClient = None
     console: Console
     interaction: str
     timeout: int = 600
 
-    def __init__(
-        self,
-        nb=nbformat.v4.new_notebook(),
-        timeout=600,
-    ):
+    def __init__(self, nb=nbformat.v4.new_notebook(), timeout=600):
         super().__init__(
             nb=nb,
-            nb_client=NotebookClient(nb, timeout=timeout),
             timeout=timeout,
             console=Console(),
             interaction=("ipython" if self.is_ipython() else "terminal"),
         )
+        self.reporter = NotebookReporter()
+        self.set_nb_client()
+        self.init_called = False
+
+    async def init_code(self):
+        if not self.init_called:
+            await self.run(INI_CODE)
+            self.init_called = True
+
+    def set_nb_client(self):
+        self.nb_client = RealtimeOutputNotebookClient(
+            self.nb,
+            timeout=self.timeout,
+            resources={"metadata": {"path": self.config.workspace.path}},
+            notebook_reporter=self.reporter,
+            coalesce_streams=True,
+        )
 
     async def build(self):
         if self.nb_client.kc is None or not await self.nb_client.kc.is_alive():
@@ -82,7 +133,7 @@ class ExecuteNbCode(Action):
         # sleep 1s to wait for the kernel to be cleaned up completely
         await asyncio.sleep(1)
         await self.build()
-        self.nb_client = NotebookClient(self.nb, timeout=self.timeout)
+        self.set_nb_client()
 
     def add_code_cell(self, code: str):
         self.nb.cells.append(new_code_cell(source=code))
@@ -106,7 +157,7 @@ class ExecuteNbCode(Action):
         else:
             cell["outputs"].append(new_output(output_type="stream", name="stdout", text=str(output)))
 
-    def parse_outputs(self, outputs: list[str], keep_len: int = 2000) -> Tuple[bool, str]:
+    def parse_outputs(self, outputs: list[str], keep_len: int = 5000) -> Tuple[bool, str]:
         """Parses the outputs received from notebook execution."""
        assert isinstance(outputs, list)
        parsed_output, is_success = [], True
@@ -135,9 +186,12 @@ class ExecuteNbCode(Action):
             is_success = False
 
         output_text = remove_escape_and_color_codes(output_text)
+        if is_success:
+            output_text = remove_log_and_warning_lines(output_text)
         # The useful information of the exception is at the end,
         # the useful information of normal output is at the beginning.
-        output_text = output_text[:keep_len] if is_success else output_text[-keep_len:]
+        if "<!DOCTYPE html>" not in output_text:
+            output_text = output_text[:keep_len] if is_success else output_text[-keep_len:]
 
         parsed_output.append(output_text)
         return is_success, ",".join(parsed_output)
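The `keep_len` rule above keeps the head of successful output but the tail of failed output, since tracebacks put the useful information last. A minimal standalone sketch of that rule:

```python
# Minimal sketch of the keep_len truncation rule used in parse_outputs.
def truncate(output_text: str, is_success: bool, keep_len: int = 5000) -> str:
    # Successful output: keep the beginning; errors: keep the end (traceback tail).
    return output_text[:keep_len] if is_success else output_text[-keep_len:]


print(truncate("abcdef", True, 3))
print(truncate("abcdef", False, 3))
```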
@@ -172,6 +226,8 @@ class ExecuteNbCode(Action):
         """set timeout for run code.
         returns the success or failure of the cell execution, and an optional error message.
         """
+        await self.reporter.async_report(cell, "content")
+
         try:
             await self.nb_client.async_execute_cell(cell, cell_index)
             return self.parse_outputs(self.nb.cells[-1].outputs)
@@ -193,29 +249,45 @@ class ExecuteNbCode(Action):
         """
         self._display(code, language)
 
-        if language == "python":
-            # add code to the notebook
-            self.add_code_cell(code=code)
+        async with self.reporter:
+            if language == "python":
+                # add code to the notebook
+                self.add_code_cell(code=code)
 
-            # build code executor
-            await self.build()
+                # build code executor
+                await self.build()
 
-            # run code
-            cell_index = len(self.nb.cells) - 1
-            success, outputs = await self.run_cell(self.nb.cells[-1], cell_index)
+                # run code
+                cell_index = len(self.nb.cells) - 1
+                success, outputs = await self.run_cell(self.nb.cells[-1], cell_index)
 
-            if "!pip" in code:
-                success = False
+                if "!pip" in code:
+                    success = False
+                    outputs = outputs[-INSTALL_KEEPLEN:]
+                elif "git clone" in code:
+                    outputs = outputs[:INSTALL_KEEPLEN] + "..." + outputs[-INSTALL_KEEPLEN:]
 
-            return outputs, success
+            elif language == "markdown":
+                # add markdown content to a markdown cell in the notebook.
+                self.add_markdown_cell(code)
+                # return True, because there is no execution failure for a markdown cell.
+                outputs, success = code, True
+            else:
+                raise ValueError(f"Only support for language: python, markdown, but got {language}, ")
 
-        elif language == "markdown":
-            # add markdown content to markdown cell in a notebook.
-            self.add_markdown_cell(code)
-            # return True, because there is no execution failure for markdown cell.
-            return code, True
-        else:
-            raise ValueError(f"Only support for language: python, markdown, but got {language}, ")
+            file_path = self.config.workspace.path / "code.ipynb"
+            nbformat.write(self.nb, file_path)
+            await self.reporter.async_report(file_path, "path")
+
+            return outputs, success
+
+
+def remove_log_and_warning_lines(input_str: str) -> str:
+    delete_lines = ["[warning]", "warning:", "[cv]", "[info]"]
+    result = "\n".join(
+        [line for line in input_str.split("\n") if not any(dl in line.lower() for dl in delete_lines)]
+    ).strip()
+    return result
 
 
 def remove_escape_and_color_codes(input_str: str):
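The new `remove_log_and_warning_lines` helper drops noisy log lines before truncation; matching is a case-insensitive substring test, so a line like `WARNING: deprecated` is removed. Reproduced standalone:

```python
# Standalone copy of the helper added in this diff.
def remove_log_and_warning_lines(input_str: str) -> str:
    delete_lines = ["[warning]", "warning:", "[cv]", "[info]"]
    result = "\n".join(
        [line for line in input_str.split("\n") if not any(dl in line.lower() for dl in delete_lines)]
    ).strip()
    return result


print(remove_log_and_warning_lines("ok\nWARNING: deprecated\n[INFO] start\ndone"))
```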
metagpt/actions/di/run_command.py (new file, +5)

@@ -0,0 +1,5 @@
+from metagpt.actions import Action
+
+
+class RunCommand(Action):
+    """A dummy RunCommand action used as a symbol only"""
@@ -40,6 +40,7 @@ class WriteAnalysisCode(Action):
        tool_info: str = "",
        working_memory: list[Message] = None,
        use_reflection: bool = False,
        memory: list[Message] = None,
        **kwargs,
    ) -> str:
        structual_prompt = STRUCTUAL_PROMPT.format(
@@ -49,14 +50,15 @@ class WriteAnalysisCode(Action):
        )

        working_memory = working_memory or []
        context = self.llm.format_msg([Message(content=structual_prompt, role="user")] + working_memory)
        memory = memory or []
        context = self.llm.format_msg(memory + [Message(content=structual_prompt, role="user")] + working_memory)

        # LLM call
        if use_reflection:
            code = await self._debug_with_reflection(context=context, working_memory=working_memory)
        else:
            rsp = await self.llm.aask(context, system_msgs=[INTERPRETER_SYSTEM_MSG], **kwargs)
            code = CodeParser.parse_code(block=None, text=rsp)
            code = CodeParser.parse_code(text=rsp, lang="python")

        return code
@@ -68,5 +70,5 @@ class CheckData(Action):
        code_written = "\n\n".join(code_written)
        prompt = CHECK_DATA_PROMPT.format(code_written=code_written)
        rsp = await self._aask(prompt)
        code = CodeParser.parse_code(block=None, text=rsp)
        code = CodeParser.parse_code(text=rsp)
        return code
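The hunks above switch from `CodeParser.parse_code(block=None, text=rsp)` to `CodeParser.parse_code(text=rsp, lang="python")`: the parser's job is to pull a fenced code block out of an LLM reply. A rough standalone sketch of that idea, not MetaGPT's actual implementation:

```python
import re

FENCE = "`" * 3  # triple backtick, built indirectly to keep this example well-formed


def parse_code(text: str, lang: str = "") -> str:
    # Grab the first fenced code block; prefer a fence tagged with `lang` when given.
    match = re.search(rf"{FENCE}{lang}\s*\n(.*?){FENCE}", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    # Fall back to the raw text when no fence is found.
    return text.strip()


rsp = f"Here is the code:\n{FENCE}python\nprint('hello')\n{FENCE}\nDone."
print(parse_code(rsp, lang="python"))  # -> print('hello')
```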
@@ -16,38 +16,38 @@ from metagpt.schema import Message, Plan, Task
from metagpt.strategy.task_type import TaskType
from metagpt.utils.common import CodeParser

PROMPT_TEMPLATE: str = """
# Context:
{context}
# Available Task Types:
{task_type_desc}
# Task:
Based on the context, write a plan or modify an existing plan of what you should do to achieve the goal. A plan consists of one to {max_tasks} tasks.
If you are modifying an existing plan, carefully follow the instruction, don't make unnecessary changes. Give the whole plan unless instructed to modify only one task of the plan.
If you encounter errors on the current task, revise and output the current single task only.
Output a list of jsons following the format:
```json
[
    {{
        "task_id": str = "unique identifier for a task in plan, can be an ordinal",
        "dependent_task_ids": list[str] = "ids of tasks prerequisite to this task",
        "instruction": "what you should do in this task, one short phrase or sentence.",
        "task_type": "type of this task, should be one of Available Task Types.",
    }},
    ...
]
```
"""


class WritePlan(Action):
    PROMPT_TEMPLATE: str = """
# Context:
{context}
# Available Task Types:
{task_type_desc}
# Task:
Based on the context, write a plan or modify an existing plan of what you should do to achieve the goal. A plan consists of one to {max_tasks} tasks.
If you are modifying an existing plan, carefully follow the instruction, don't make unnecessary changes. Give the whole plan unless instructed to modify only one task of the plan.
If you encounter errors on the current task, revise and output the current single task only.
Output a list of jsons following the format:
```json
[
    {{
        "task_id": str = "unique identifier for a task in plan, can be an ordinal",
        "dependent_task_ids": list[str] = "ids of tasks prerequisite to this task",
        "instruction": "what you should do in this task, one short phrase or sentence",
        "task_type": "type of this task, should be one of Available Task Types",
    }},
    ...
]
```
"""

    async def run(self, context: list[Message], max_tasks: int = 5) -> str:
        task_type_desc = "\n".join([f"- **{tt.type_name}**: {tt.value.desc}" for tt in TaskType])
        prompt = self.PROMPT_TEMPLATE.format(
        prompt = PROMPT_TEMPLATE.format(
            context="\n".join([str(ct) for ct in context]), max_tasks=max_tasks, task_type_desc=task_type_desc
        )
        rsp = await self._aask(prompt)
        rsp = CodeParser.parse_code(block=None, text=rsp)
        rsp = CodeParser.parse_code(text=rsp)
        return rsp
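The prompt above asks the model to return the plan as a fenced JSON list of task objects. Once the fence is stripped, the reply can be parsed and checked against the four required fields. A sketch with a made-up reply (the `task_type` values and validation code are illustrative, not the repo's):

```python
import json

# An illustrative model reply after the surrounding JSON fence has been stripped.
rsp = """
[
    {"task_id": "1", "dependent_task_ids": [], "instruction": "Load and inspect the dataset", "task_type": "eda"},
    {"task_id": "2", "dependent_task_ids": ["1"], "instruction": "Train a baseline model", "task_type": "model train"}
]
"""

tasks = json.loads(rsp)
# Every task must carry the four fields the template demands.
required = {"task_id", "dependent_task_ids", "instruction", "task_type"}
assert all(required <= task.keys() for task in tasks)
print([t["task_id"] for t in tasks])  # -> ['1', '2']
```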
@@ -65,10 +65,14 @@ def update_plan_from_rsp(rsp: str, current_plan: Plan):
        # handle a single task
        if current_plan.has_task_id(tasks[0].task_id):
            # replace an existing task
            current_plan.replace_task(tasks[0])
            current_plan.replace_task(
                tasks[0].task_id, tasks[0].dependent_task_ids, tasks[0].instruction, tasks[0].assignee
            )
        else:
            # append one task
            current_plan.append_task(tasks[0])
            current_plan.append_task(
                tasks[0].task_id, tasks[0].dependent_task_ids, tasks[0].instruction, tasks[0].assignee
            )
    else:
        # add tasks in general
metagpt/actions/extract_readme.py (new file, +123)
@@ -0,0 +1,123 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Module Description: This script defines the ExtractReadMe class, which is an action to learn from the contents of
a README.md file.
Author: mashenquan
Date: 2024-3-20
"""
from pathlib import Path
from typing import Optional

from pydantic import Field

from metagpt.actions import Action
from metagpt.const import GRAPH_REPO_FILE_REPO
from metagpt.schema import Message
from metagpt.utils.common import aread
from metagpt.utils.di_graph_repository import DiGraphRepository
from metagpt.utils.graph_repository import GraphKeyword, GraphRepository


class ExtractReadMe(Action):
    """
    An action to extract summary, installation, configuration, usages from the contents of a README.md file.

    Attributes:
        graph_db (Optional[GraphRepository]): A graph database repository.
        install_to_path (Optional[str]): The path that the repository is installed to.
    """

    graph_db: Optional[GraphRepository] = None
    install_to_path: Optional[str] = Field(default="/TO/PATH")
    _readme: Optional[str] = None
    _filename: Optional[str] = None

    async def run(self, with_messages=None, **kwargs):
        """
        Implementation of `Action`'s `run` method.

        Args:
            with_messages (Optional[Type]): An optional argument specifying messages to react to.
        """
        graph_repo_pathname = self.context.git_repo.workdir / GRAPH_REPO_FILE_REPO / self.context.git_repo.workdir.name
        self.graph_db = await DiGraphRepository.load_from(str(graph_repo_pathname.with_suffix(".json")))
        summary = await self._summarize()
        await self.graph_db.insert(subject=self._filename, predicate=GraphKeyword.HAS_SUMMARY, object_=summary)
        install = await self._extract_install()
        await self.graph_db.insert(subject=self._filename, predicate=GraphKeyword.HAS_INSTALL, object_=install)
        conf = await self._extract_configuration()
        await self.graph_db.insert(subject=self._filename, predicate=GraphKeyword.HAS_CONFIG, object_=conf)
        usage = await self._extract_usage()
        await self.graph_db.insert(subject=self._filename, predicate=GraphKeyword.HAS_USAGE, object_=usage)

        await self.graph_db.save()

        return Message(content="", cause_by=self)

    async def _summarize(self) -> str:
        readme = await self._get()
        summary = await self.llm.aask(
            readme,
            system_msgs=[
                "You are a tool that can summarize a git repository README.md file.",
                "Return a summary of what the repository is.",
            ],
            stream=False,
        )
        return summary

    async def _extract_install(self) -> str:
        await self._get()
        install = await self.llm.aask(
            self._readme,
            system_msgs=[
                "You are a tool that can install a git repository according to its README.md file.",
                "Return a bash code block of markdown including:\n"
                f"1. git clone the repository to the directory `{self.install_to_path}`;\n"
                f"2. cd `{self.install_to_path}`;\n"
                f"3. install the repository.",
            ],
            stream=False,
        )
        return install

    async def _extract_configuration(self) -> str:
        await self._get()
        configuration = await self.llm.aask(
            self._readme,
            system_msgs=[
                "You are a tool that can configure a git repository according to its README.md file.",
                "Return a bash code block of markdown to configure the repository if necessary, otherwise return"
                " an empty bash code block of markdown",
            ],
            stream=False,
        )
        return configuration

    async def _extract_usage(self) -> str:
        await self._get()
        usage = await self.llm.aask(
            self._readme,
            system_msgs=[
                "You are a tool that can summarize all usages of a git repository according to its README.md file.",
                "Return a list of markdown code blocks that demonstrate the usage of the repository.",
            ],
            stream=False,
        )
        return usage

    async def _get(self) -> str:
        if self._readme is not None:
            return self._readme
        root = Path(self.i_context).resolve()
        filename = None
        for file_path in root.iterdir():
            if file_path.is_file() and file_path.stem == "README":
                filename = file_path
                break
        if not filename:
            return ""
        self._readme = await aread(filename=filename, encoding="utf-8")
        self._filename = str(filename)
        return self._readme
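`_get` above locates the README by scanning only the top level of the project root for any file whose stem is `README`, whatever its extension. That lookup can be sketched on its own:

```python
import tempfile
from pathlib import Path


def find_readme(root: Path):
    # Return the first top-level file named README.* (any extension, or none).
    for file_path in root.iterdir():
        if file_path.is_file() and file_path.stem == "README":
            return file_path
    return None


with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "README.md").write_text("# demo")
    (root / "setup.py").write_text("")
    found = find_readme(root)
    print(found.name)  # -> README.md
```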
@@ -22,4 +22,4 @@ class GenerateQuestions(Action):
    name: str = "GenerateQuestions"

    async def run(self, context) -> ActionNode:
        return await QUESTIONS.fill(context=context, llm=self.llm)
        return await QUESTIONS.fill(req=context, llm=self.llm)
metagpt/actions/import_repo.py (new file, +226)
@@ -0,0 +1,226 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This script defines an action to import a Git repository into the MetaGPT project format, enabling incremental
appending of requirements.
The MetaGPT project format encompasses a structured representation of project data compatible with MetaGPT's
capabilities, facilitating the integration of Git repositories into MetaGPT workflows while allowing for the gradual
addition of requirements.
"""
import json
import re
from pathlib import Path
from typing import List, Optional

from pydantic import BaseModel

from metagpt.actions import Action
from metagpt.actions.extract_readme import ExtractReadMe
from metagpt.actions.rebuild_class_view import RebuildClassView
from metagpt.actions.rebuild_sequence_view import RebuildSequenceView
from metagpt.const import GRAPH_REPO_FILE_REPO
from metagpt.logs import logger
from metagpt.schema import Message
from metagpt.tools.libs.git import git_clone
from metagpt.utils.common import (
    aread,
    awrite,
    list_files,
    parse_json_code_block,
    split_namespace,
)
from metagpt.utils.di_graph_repository import DiGraphRepository
from metagpt.utils.file_repository import FileRepository
from metagpt.utils.git_repository import GitRepository
from metagpt.utils.graph_repository import GraphKeyword, GraphRepository
from metagpt.utils.project_repo import ProjectRepo


class ImportRepo(Action):
    """
    An action to import a Git repository into a graph database and create related artifacts.

    Attributes:
        repo_path (str): The URL of the Git repository to import.
        graph_db (Optional[GraphRepository]): The output graph database of the Git repository.
        rid (str): The output requirement ID.
    """

    repo_path: str  # input, git repo url.
    graph_db: Optional[GraphRepository] = None  # output, graph db of the git repository
    rid: str = ""  # output, requirement ID.

    async def run(self, with_messages: List[Message] = None, **kwargs) -> Message:
        """
        Runs the import process for the Git repository.

        Args:
            with_messages (List[Message], optional): Additional messages to include.
            **kwargs: Additional keyword arguments.

        Returns:
            Message: A message indicating the completion of the import process.
        """
        await self._create_repo()
        await self._create_prd()
        await self._create_system_design()
        self.context.git_repo.archive(comments="Import")

    async def _create_repo(self):
        path = await git_clone(url=self.repo_path, output_dir=self.config.workspace.path)
        self.repo_path = str(path)
        self.config.project_path = path
        self.context.git_repo = GitRepository(local_path=path, auto_init=True)
        self.context.repo = ProjectRepo(self.context.git_repo)
        self.context.src_workspace = await self._guess_src_workspace()
        await awrite(
            filename=self.context.repo.workdir / ".src_workspace",
            data=str(self.context.src_workspace.relative_to(self.context.repo.workdir)),
        )

    async def _create_prd(self):
        action = ExtractReadMe(i_context=str(self.context.repo.workdir), context=self.context)
        await action.run()
        graph_repo_pathname = self.context.git_repo.workdir / GRAPH_REPO_FILE_REPO / self.context.git_repo.workdir.name
        self.graph_db = await DiGraphRepository.load_from(str(graph_repo_pathname.with_suffix(".json")))
        rows = await self.graph_db.select(predicate=GraphKeyword.HAS_SUMMARY)
        prd = {"Project Name": self.context.repo.workdir.name}
        for r in rows:
            if Path(r.subject).stem == "README":
                prd["Original Requirements"] = r.object_
                break
        self.rid = FileRepository.new_filename()
        await self.repo.docs.prd.save(filename=self.rid + ".json", content=json.dumps(prd))

    async def _create_system_design(self):
        action = RebuildClassView(
            name="ReverseEngineering", i_context=str(self.context.src_workspace), context=self.context
        )
        await action.run()
        rows = await action.graph_db.select(predicate="hasMermaidClassDiagramFile")
        class_view_filename = rows[0].object_
        logger.info(f"class view:{class_view_filename}")

        rows = await action.graph_db.select(predicate=GraphKeyword.HAS_PAGE_INFO)
        tag = "__name__:__main__"
        entries = []
        src_workspace = self.context.src_workspace.relative_to(self.context.repo.workdir)
        for r in rows:
            if tag in r.subject:
                path = split_namespace(r.subject)[0]
            elif tag in r.object_:
                path = split_namespace(r.object_)[0]
            else:
                continue
            if Path(path).is_relative_to(src_workspace):
                entries.append(Path(path))
        main_entry = await self._guess_main_entry(entries)
        full_path = RebuildSequenceView.get_full_filename(self.context.repo.workdir, main_entry)
        action = RebuildSequenceView(context=self.context, i_context=str(full_path))
        try:
            await action.run()
        except Exception as e:
            logger.warning(f"{e}, use the last successful version.")
        files = list_files(self.context.repo.resources.data_api_design.workdir)
        pattern = re.compile(r"[^a-zA-Z0-9]")
        name = re.sub(pattern, "_", str(main_entry))
        filename = Path(name).with_suffix(".sequence_diagram.mmd")
        postfix = str(filename)
        sequence_files = [i for i in files if postfix in str(i)]
        content = await aread(filename=sequence_files[0])
        await self.context.repo.resources.data_api_design.save(
            filename=self.repo.workdir.stem + ".sequence_diagram.mmd", content=content
        )
        await self._save_system_design()

    async def _save_system_design(self):
        class_view = await self.context.repo.resources.data_api_design.get(
            filename=self.repo.workdir.stem + ".class_diagram.mmd"
        )
        sequence_view = await self.context.repo.resources.data_api_design.get(
            filename=self.repo.workdir.stem + ".sequence_diagram.mmd"
        )
        file_list = self.context.git_repo.get_files(relative_path=".", root_relative_path=self.context.src_workspace)
        data = {
            "Data structures and interfaces": class_view.content,
            "Program call flow": sequence_view.content,
            "File list": [str(i) for i in file_list],
        }
        await self.context.repo.docs.system_design.save(filename=self.rid + ".json", content=json.dumps(data))

    async def _guess_src_workspace(self) -> Path:
        files = list_files(self.context.repo.workdir)
        dirs = [i.parent for i in files if i.name == "__init__.py"]
        distinct = set()
        for i in dirs:
            done = False
            for j in distinct:
                if i.is_relative_to(j):
                    done = True
                    break
                if j.is_relative_to(i):
                    break
            if not done:
                distinct = {j for j in distinct if not j.is_relative_to(i)}
                distinct.add(i)
        if len(distinct) == 1:
            return list(distinct)[0]
        prompt = "\n".join([f"- {str(i)}" for i in distinct])
        rsp = await self.llm.aask(
            prompt,
            system_msgs=[
                "You are a tool to choose the source code path from a list of paths based on the directory name.",
                "You should identify the source code path among paths such as unit test path, examples path, etc.",
                "Return a markdown JSON object containing:\n"
                '- a "src" field containing the source code path;\n'
                '- a "reason" field explaining why the other paths are not the source code path\n',
            ],
        )
        logger.debug(rsp)
        json_blocks = parse_json_code_block(rsp)

        class Data(BaseModel):
            src: str
            reason: str

        data = Data.model_validate_json(json_blocks[0])
        logger.info(f"src_workspace: {data.src}")
        return Path(data.src)

    async def _guess_main_entry(self, entries: List[Path]) -> Path:
        if len(entries) == 1:
            return entries[0]

        file_list = "## File List\n"
        file_list += "\n".join([f"- {i}" for i in entries])

        rows = await self.graph_db.select(predicate=GraphKeyword.HAS_USAGE)
        usage = "## Usage\n"
        for r in rows:
            if Path(r.subject).stem == "README":
                usage += r.object_

        prompt = file_list + "\n---\n" + usage
        rsp = await self.llm.aask(
            prompt,
            system_msgs=[
                'You are a tool to choose the source file path from "File List" which is used in "Usage".',
                'You choose the source file path based on the name of the file and the class name and package name used in "Usage".',
                "Return a markdown JSON object containing:\n"
                '- a "filename" field containing the chosen source file path from "File List" which is used in "Usage";\n'
                '- a "reason" field explaining why.',
            ],
            stream=False,
        )
        logger.debug(rsp)
        json_blocks = parse_json_code_block(rsp)

        class Data(BaseModel):
            filename: str
            reason: str

        data = Data.model_validate_json(json_blocks[0])
        logger.info(f"main: {data.filename}")
        return Path(data.filename)
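`_guess_src_workspace` above first reduces all package directories (those containing an `__init__.py`) to their outermost ancestors, and only asks the LLM to pick when more than one candidate survives. The reduction loop can be exercised in isolation (a standalone re-implementation of just that step, with made-up paths):

```python
from pathlib import Path


def outermost_dirs(dirs):
    # Keep only directories that are not nested inside another candidate.
    distinct = set()
    for i in dirs:
        done = False
        for j in distinct:
            if i.is_relative_to(j):  # i is inside an already-kept dir; skip it
                done = True
                break
        if not done:
            # Drop previously kept dirs that turn out to be inside i, then keep i.
            distinct = {j for j in distinct if not j.is_relative_to(i)}
            distinct.add(i)
    return distinct


dirs = [Path("repo/pkg/sub"), Path("repo/pkg"), Path("repo/tests")]
print(sorted(str(p) for p in outermost_dirs(dirs)))  # -> ['repo/pkg', 'repo/tests']
```

Note `Path.is_relative_to` requires Python 3.9+.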
@@ -9,12 +9,14 @@
"""
import shutil
from pathlib import Path
from typing import Optional
from typing import Dict, Optional

from metagpt.actions import Action, ActionOutput
from metagpt.actions import Action, UserRequirement
from metagpt.const import REQUIREMENT_FILENAME
from metagpt.logs import logger
from metagpt.schema import AIMessage
from metagpt.utils.common import any_to_str
from metagpt.utils.file_repository import FileRepository
from metagpt.utils.git_repository import GitRepository
from metagpt.utils.project_repo import ProjectRepo


@@ -23,12 +25,19 @@ class PrepareDocuments(Action):

    name: str = "PrepareDocuments"
    i_context: Optional[str] = None
    key_descriptions: Optional[Dict[str, str]] = None
    send_to: str

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        if not self.key_descriptions:
            self.key_descriptions = {"project_path": 'the project path if exists in "Original Requirement"'}

    @property
    def config(self):
        return self.context.config

    def _init_repo(self):
    def _init_repo(self) -> ProjectRepo:
        """Initialize the Git environment."""
        if not self.config.project_path:
            name = self.config.project_name or FileRepository.new_filename()
@@ -37,16 +46,45 @@ class PrepareDocuments(Action):
        path = Path(self.config.project_path)
        if path.exists() and not self.config.inc:
            shutil.rmtree(path)
        self.config.project_path = path
        self.context.git_repo = GitRepository(local_path=path, auto_init=True)
        self.context.repo = ProjectRepo(self.context.git_repo)
        self.context.kwargs.project_path = path
        self.context.kwargs.inc = self.config.inc
        return ProjectRepo(path)

    async def run(self, with_messages, **kwargs):
        """Create and initialize the workspace folder, initialize the Git environment."""
        self._init_repo()
        user_requirements = [i for i in with_messages if i.cause_by == any_to_str(UserRequirement)]
        if not self.config.project_path and user_requirements and self.key_descriptions:
            args = await user_requirements[0].parse_resources(llm=self.llm, key_descriptions=self.key_descriptions)
            for k, v in args.items():
                if not v or k in ["resources", "reason"]:
                    continue
                self.context.kwargs.set(k, v)
                logger.info(f"{k}={v}")
            if self.context.kwargs.project_path:
                self.config.update_via_cli(
                    project_path=self.context.kwargs.project_path,
                    project_name="",
                    inc=False,
                    reqa_file=self.context.kwargs.reqa_file or "",
                    max_auto_summarize_code=0,
                )

        repo = self._init_repo()

        # Write the newly added requirements from the main parameter idea to `docs/requirement.txt`.
        doc = await self.repo.docs.save(filename=REQUIREMENT_FILENAME, content=with_messages[0].content)
        await repo.docs.save(filename=REQUIREMENT_FILENAME, content=with_messages[0].content)
        # Send a Message notification to the WritePRD action, instructing it to process requirements using
        # `docs/requirement.txt` and `docs/prd/`.
        return ActionOutput(content=doc.content, instruct_content=doc)
        return AIMessage(
            content="",
            instruct_content=AIMessage.create_instruct_value(
                kvs={
                    "project_path": str(repo.workdir),
                    "requirements_filename": str(repo.docs.workdir / REQUIREMENT_FILENAME),
                    "prd_filenames": [str(repo.docs.prd.workdir / i) for i in repo.docs.prd.all_files],
                },
                class_name="PrepareDocumentsOutput",
            ),
            cause_by=self,
            send_to=self.send_to,
        )
@@ -22,4 +22,4 @@ class PrepareInterview(Action):
    name: str = "PrepareInterview"

    async def run(self, context):
        return await QUESTIONS.fill(context=context, llm=self.llm)
        return await QUESTIONS.fill(req=context, llm=self.llm)
@ -8,17 +8,30 @@
|
|||
1. Divide the context into three components: legacy code, unit test code, and console log.
|
||||
2. Move the document storage operations related to WritePRD from the save operation of WriteDesign.
|
||||
3. According to the design in Section 2.2.3.5.4 of RFC 135, add incremental iteration functionality.
|
||||
@Modified By: mashenquan, 2024/5/31. Implement Chapter 3 of RFC 236.
|
||||
"""
|
||||
|
||||
import json
|
||||
from typing import Optional
|
||||
from pathlib import Path
|
||||
from typing import List, Optional, Union
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from metagpt.actions.action import Action
|
||||
from metagpt.actions.action_output import ActionOutput
|
||||
from metagpt.actions.project_management_an import PM_NODE, REFINED_PM_NODE
|
||||
from metagpt.const import PACKAGE_REQUIREMENTS_FILENAME
|
||||
from metagpt.logs import logger
|
||||
from metagpt.schema import Document, Documents
|
||||
from metagpt.schema import AIMessage, Document, Documents, Message
|
||||
from metagpt.tools.tool_registry import register_tool
|
||||
from metagpt.utils.common import (
|
||||
aread,
|
||||
awrite,
|
||||
rectify_pathname,
|
||||
save_json_to_markdown,
|
||||
to_markdown_code_block,
|
||||
)
|
||||
from metagpt.utils.project_repo import ProjectRepo
|
||||
from metagpt.utils.report import DocsReporter
|
||||
|
||||
NEW_REQ_TEMPLATE = """
|
||||
### Legacy Content
|
||||
|
|
@ -29,19 +42,67 @@ NEW_REQ_TEMPLATE = """
|
|||
"""
|
||||
|
||||
|
||||
@register_tool(include_functions=["run"])
|
||||
class WriteTasks(Action):
|
||||
name: str = "CreateTasks"
|
||||
i_context: Optional[str] = None
|
||||
repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
|
||||
input_args: Optional[BaseModel] = Field(default=None, exclude=True)
|
||||
|
||||
async def run(self, with_messages):
|
||||
changed_system_designs = self.repo.docs.system_design.changed_files
|
||||
changed_tasks = self.repo.docs.task.changed_files
|
||||
async def run(
|
||||
self,
|
||||
with_messages: List[Message] = None,
|
||||
*,
|
||||
user_requirement: str = "",
|
||||
design_filename: str = "",
|
||||
output_pathname: str = "",
|
||||
**kwargs,
|
||||
) -> Union[AIMessage, str]:
|
||||
"""
|
||||
Write a project schedule given a project system design file.
|
||||
|
||||
Args:
|
||||
user_requirement (str, optional): A string specifying the user's requirements. Defaults to an empty string.
|
||||
design_filename (str): The output file path of the document. Defaults to an empty string.
|
||||
output_pathname (str, optional): The output path name of file that the project schedule should be saved to.
|
||||
**kwargs: Additional keyword arguments.
|
||||
|
||||
Returns:
|
||||
str: Path to the generated project schedule.
|
||||
|
||||
Example:
|
||||
# Write a project schedule with a given system design.
|
||||
>>> design_filename = "/absolute/path/to/snake_game/docs/system_design.json"
|
||||
>>> output_pathname = "/absolute/path/to/snake_game/docs/project_schedule.json"
|
||||
>>> user_requirement = "Write project schedule for a snake game following these requirements:..."
|
||||
>>> action = WriteTasks()
|
||||
>>> result = await action.run(user_requirement=user_requirement, design_filename=design_filename, output_pathname=output_pathname)
|
||||
>>> print(result)
|
||||
The project schedule is at /absolute/path/to/snake_game/docs/project_schedule.json
|
||||
|
||||
# Write a project schedule with a user requirement.
|
||||
>>> user_requirement = "Write project schedule for a snake game following these requirements: ..."
|
||||
>>> output_pathname = "/absolute/path/to/snake_game/docs/project_schedule.json"
|
||||
>>> action = WriteTasks()
|
||||
>>> result = await action.run(user_requirement=user_requirement, output_pathname=output_pathname)
|
||||
>>> print(result)
|
||||
The project schedule is at /absolute/path/to/snake_game/docs/project_schedule.json
|
||||
"""
|
||||
if not with_messages:
|
||||
return await self._execute_api(
|
||||
user_requirement=user_requirement, design_filename=design_filename, output_pathname=output_pathname
|
||||
)
|
||||
|
||||
self.input_args = with_messages[-1].instruct_content
|
||||
self.repo = ProjectRepo(self.input_args.project_path)
|
||||
changed_system_designs = self.input_args.changed_system_design_filenames
|
||||
changed_tasks = [str(self.repo.docs.task.workdir / i) for i in list(self.repo.docs.task.changed_files.keys())]
|
||||
change_files = Documents()
|
||||
# Rewrite the system designs that have undergone changes based on the git head diff under
|
||||
# `docs/system_designs/`.
|
||||
for filename in changed_system_designs:
|
||||
task_doc = await self._update_tasks(filename=filename)
|
||||
change_files.docs[filename] = task_doc
|
||||
change_files.docs[str(self.repo.docs.task.workdir / task_doc.filename)] = task_doc
|
||||
|
||||
# Rewrite the task files that have undergone changes based on the git head diff under `docs/tasks/`.
|
||||
for filename in changed_tasks:
|
||||
|
|
@ -54,31 +115,50 @@ class WriteTasks(Action):
|
|||
logger.info("Nothing has changed.")
|
||||
# Wait until all files under `docs/tasks/` are processed before sending the publish_message, leaving room for
|
||||
# global optimization in subsequent steps.
|
||||
return ActionOutput(content=change_files.model_dump_json(), instruct_content=change_files)
|
||||
kvs = self.input_args.model_dump()
|
||||
kvs["changed_task_filenames"] = [
|
||||
str(self.repo.docs.task.workdir / i) for i in list(self.repo.docs.task.changed_files.keys())
|
||||
]
|
||||
kvs["python_package_dependency_filename"] = str(self.repo.workdir / PACKAGE_REQUIREMENTS_FILENAME)
|
||||
return AIMessage(
|
||||
content="WBS is completed. "
|
||||
+ "\n".join(
|
||||
[PACKAGE_REQUIREMENTS_FILENAME]
|
||||
+ list(self.repo.docs.task.changed_files.keys())
|
||||
+ list(self.repo.resources.api_spec_and_task.changed_files.keys())
|
||||
),
|
||||
instruct_content=AIMessage.create_instruct_value(kvs=kvs, class_name="WriteTaskOutput"),
|
||||
cause_by=self,
|
||||
)
|
||||
|
||||
async def _update_tasks(self, filename):
|
||||
system_design_doc = await self.repo.docs.system_design.get(filename)
|
||||
task_doc = await self.repo.docs.task.get(filename)
|
||||
if task_doc:
|
||||
task_doc = await self._merge(system_design_doc=system_design_doc, task_doc=task_doc)
|
||||
await self.repo.docs.task.save_doc(doc=task_doc, dependencies={system_design_doc.root_relative_path})
|
||||
else:
|
||||
rsp = await self._run_new_tasks(context=system_design_doc.content)
|
||||
task_doc = await self.repo.docs.task.save(
|
||||
filename=filename,
|
||||
content=rsp.instruct_content.model_dump_json(),
|
||||
dependencies={system_design_doc.root_relative_path},
|
||||
)
|
||||
await self._update_requirements(task_doc)
|
||||
root_relative_path = Path(filename).relative_to(self.repo.workdir)
|
||||
system_design_doc = await Document.load(filename=filename, project_path=self.repo.workdir)
|
||||
task_doc = await self.repo.docs.task.get(root_relative_path.name)
|
||||
async with DocsReporter(enable_llm_stream=True) as reporter:
|
||||
await reporter.async_report({"type": "task"}, "meta")
|
||||
if task_doc:
|
||||
task_doc = await self._merge(system_design_doc=system_design_doc, task_doc=task_doc)
|
||||
await self.repo.docs.task.save_doc(doc=task_doc, dependencies={system_design_doc.root_relative_path})
|
||||
else:
|
||||
rsp = await self._run_new_tasks(context=system_design_doc.content)
|
||||
task_doc = await self.repo.docs.task.save(
|
||||
filename=system_design_doc.filename,
|
||||
content=rsp.instruct_content.model_dump_json(),
|
||||
dependencies={system_design_doc.root_relative_path},
|
||||
)
|
||||
await self._update_requirements(task_doc)
|
||||
md = await self.repo.resources.api_spec_and_task.save_pdf(doc=task_doc)
|
||||
await reporter.async_report(self.repo.workdir / md.root_relative_path, "path")
|
||||
return task_doc
|
||||
|
||||
async def _run_new_tasks(self, context):
|
||||
node = await PM_NODE.fill(context, self.llm, schema=self.prompt_schema)
|
||||
async def _run_new_tasks(self, context: str):
|
||||
node = await PM_NODE.fill(req=context, llm=self.llm, schema=self.prompt_schema)
|
||||
return node
|
||||
|
||||
async def _merge(self, system_design_doc, task_doc) -> Document:
|
||||
context = NEW_REQ_TEMPLATE.format(context=system_design_doc.content, old_task=task_doc.content)
|
||||
node = await REFINED_PM_NODE.fill(context, self.llm, schema=self.prompt_schema)
|
||||
node = await REFINED_PM_NODE.fill(req=context, llm=self.llm, schema=self.prompt_schema)
|
||||
task_doc.content = node.instruct_content.model_dump_json()
|
||||
return task_doc
|
||||
|
||||
|
|
@@ -94,3 +174,28 @@ class WriteTasks(Action):
                continue
            packages.add(pkg)
        await self.repo.save(filename=PACKAGE_REQUIREMENTS_FILENAME, content="\n".join(packages))
+
+    async def _execute_api(
+        self, user_requirement: str = "", design_filename: str = "", output_pathname: str = ""
+    ) -> str:
+        context = to_markdown_code_block(user_requirement)
+        if design_filename:
+            design_filename = rectify_pathname(path=design_filename, default_filename="system_design.md")
+            content = await aread(filename=design_filename)
+            context += to_markdown_code_block(content)
+
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "task"}, "meta")
+            node = await self._run_new_tasks(context)
+            file_content = node.instruct_content.model_dump_json()
+
+            if not output_pathname:
+                output_pathname = Path(output_pathname) / "docs" / "project_schedule.json"
+            elif not Path(output_pathname).is_absolute():
+                output_pathname = self.config.workspace.path / output_pathname
+            output_pathname = rectify_pathname(path=output_pathname, default_filename="project_schedule.json")
+            await awrite(filename=output_pathname, data=file_content)
+            md_output_filename = output_pathname.with_suffix(".md")
+            await save_json_to_markdown(content=file_content, output_filename=md_output_filename)
+            await reporter.async_report(md_output_filename, "path")
+        return f'Project Schedule filename: "{str(output_pathname)}"'
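`_execute_api` funnels every user-supplied path through `rectify_pathname` before writing. A dependency-free sketch of the resolution logic, with a toy `rectify_pathname` that mirrors the behavior this code appears to assume (append a default filename when the path has no suffix); this is not MetaGPT's actual implementation:

```python
from pathlib import Path

# Toy stand-in: if the path looks like a directory (no file suffix),
# append the default filename; otherwise keep it as-is.
def rectify_pathname(path, default_filename: str) -> Path:
    p = Path(path)
    return p if p.suffix else p / default_filename

workspace = Path("/tmp/workspace")          # assumed workspace root
output_pathname = "schedules"               # relative path, no filename
if not Path(output_pathname).is_absolute():
    output_pathname = workspace / output_pathname
output_pathname = rectify_pathname(output_pathname, "project_schedule.json")
```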
@@ -12,7 +12,7 @@ from metagpt.actions.action_node import ActionNode
REQUIRED_PACKAGES = ActionNode(
    key="Required packages",
    expected_type=Optional[List[str]],
-    instruction="Provide required third-party packages in requirements.txt format.",
+    instruction="Provide required packages The response language should correspond to the context and requirements.",
    example=["flask==1.1.2", "bcrypt==3.2.0"],
)

@@ -27,7 +27,9 @@ LOGIC_ANALYSIS = ActionNode(
    key="Logic Analysis",
    expected_type=List[List[str]],
    instruction="Provide a list of files with the classes/methods/functions to be implemented, "
-    "including dependency analysis and imports.",
+    "including dependency analysis and imports."
+    "Ensure consistency between System Design and Logic Analysis; the files must match exactly. "
+    "If the file is written in Vue or React, use Tailwind CSS for styling.",
    example=[
        ["game.py", "Contains Game class and ... functions"],
        ["main.py", "Contains main function, from game import Game"],
@@ -14,7 +14,6 @@ from typing import Optional, Set, Tuple
import aiofiles

from metagpt.actions import Action
-from metagpt.config2 import config
from metagpt.const import (
    AGGREGATION,
    COMPOSITION,

@@ -40,7 +39,7 @@ class RebuildClassView(Action):

    graph_db: Optional[GraphRepository] = None

-    async def run(self, with_messages=None, format=config.prompt_schema):
+    async def run(self, with_messages=None, format=None):
        """
        Implementation of `Action`'s `run` method.

@@ -48,6 +47,7 @@ class RebuildClassView(Action):
            with_messages (Optional[Type]): An optional argument specifying messages to react to.
            format (str): The format for the prompt schema.
        """
+        format = format if format else self.config.prompt_schema
        graph_repo_pathname = self.context.git_repo.workdir / GRAPH_REPO_FILE_REPO / self.context.git_repo.workdir.name
        self.graph_db = await DiGraphRepository.load_from(str(graph_repo_pathname.with_suffix(".json")))
        repo_parser = RepoParser(base_directory=Path(self.i_context))
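The change from `format=config.prompt_schema` to `format=None` with an in-body fallback is more than style: a default expression is evaluated once, when the function is defined, so later configuration changes are silently ignored. A minimal sketch of the difference, using a toy `Config` class in place of MetaGPT's real config object:

```python
# Toy stand-in for a mutable configuration object.
class Config:
    prompt_schema = "json"

config = Config()

def run_frozen(format=config.prompt_schema):
    # Default captured at definition time; never sees config updates.
    return format

def run_live(format=None):
    # None sentinel resolved at call time, as the diff now does.
    return format if format else config.prompt_schema

config.prompt_schema = "markdown"  # change config after definition
```

After the update, `run_frozen()` still returns the stale `"json"` while `run_live()` picks up `"markdown"`, which is why the diff moves the lookup into the function body.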
@@ -18,7 +18,6 @@ from pydantic import BaseModel
from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
-from metagpt.config2 import config
from metagpt.const import GRAPH_REPO_FILE_REPO
from metagpt.logs import logger
from metagpt.repo_parser import CodeBlockInfo, DotClassInfo

@@ -84,7 +83,7 @@ class RebuildSequenceView(Action):

    graph_db: Optional[GraphRepository] = None

-    async def run(self, with_messages=None, format=config.prompt_schema):
+    async def run(self, with_messages=None, format=None):
        """
        Implementation of `Action`'s `run` method.

@@ -92,6 +91,7 @@ class RebuildSequenceView(Action):
            with_messages (Optional[Type]): An optional argument specifying messages to react to.
            format (str): The format for the prompt schema.
        """
+        format = format if format else self.config.prompt_schema
        graph_repo_pathname = self.context.git_repo.workdir / GRAPH_REPO_FILE_REPO / self.context.git_repo.workdir.name
        self.graph_db = await DiGraphRepository.load_from(str(graph_repo_pathname.with_suffix(".json")))
        if not self.i_context:

@@ -244,15 +244,6 @@ class RebuildSequenceView(Action):
        class_view = await self._get_uml_class_view(ns_class_name)
        source_code = await self._get_source_code(ns_class_name)

-        # prompt_blocks = [
-        #     "## Instruction\n"
-        #     "You are a python code to UML 2.0 Use Case translator.\n"
-        #     'The generated UML 2.0 Use Case must include the roles or entities listed in "Participants".\n'
-        #     "The functional descriptions of Actors and Use Cases in the generated UML 2.0 Use Case must not "
-        #     'conflict with the information in "Mermaid Class Views".\n'
-        #     'The section under `if __name__ == "__main__":` of "Source Code" contains information about external '
-        #     "system interactions with the internal system.\n"
-        # ]
        prompt_blocks = []
        block = "## Participants\n"
        for p in participants:

@@ -340,6 +331,7 @@ class RebuildSequenceView(Action):
            system_msgs=[
                "You are a Mermaid Sequence Diagram translator in function detail.",
+                "Translate the markdown text to a Mermaid Sequence Diagram.",
                "Response must be concise.",
                "Return a markdown mermaid code block.",
            ],
            stream=False,

@@ -440,7 +432,7 @@ class RebuildSequenceView(Action):
        rows = await self.graph_db.select(subject=ns_class_name, predicate=GraphKeyword.HAS_PAGE_INFO)
        filename = split_namespace(ns_class_name=ns_class_name)[0]
        if not rows:
-            src_filename = RebuildSequenceView._get_full_filename(root=self.i_context, pathname=filename)
+            src_filename = RebuildSequenceView.get_full_filename(root=self.i_context, pathname=filename)
            if not src_filename:
                return ""
            return await aread(filename=src_filename, encoding="utf-8")

@@ -450,7 +442,7 @@ class RebuildSequenceView(Action):
        )

    @staticmethod
-    def _get_full_filename(root: str | Path, pathname: str | Path) -> Path | None:
+    def get_full_filename(root: str | Path, pathname: str | Path) -> Path | None:
        """
        Convert package name to the full path of the module.

@@ -466,7 +458,7 @@ class RebuildSequenceView(Action):
        "metagpt/management/skill_manager.py", then the returned value will be
        "/User/xxx/github/MetaGPT/metagpt/management/skill_manager.py"
        """
-        if re.match(r"^/.+", pathname):
+        if re.match(r"^/.+", str(pathname)):
            return pathname
        files = list_files(root=root)
        postfix = "/" + str(pathname)
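The `str(pathname)` fix in the last hunk matters because `re.match` raises `TypeError` when handed a `Path` object. Below is a self-contained sketch of the suffix-matching idea the docstring describes: resolve a repo-relative module path against a list of absolute files. `list_files` is replaced here by a hard-coded list, and the logic is an illustration rather than MetaGPT's exact implementation:

```python
from pathlib import Path

def get_full_filename(files, pathname):
    """Return the absolute file whose path ends with /<pathname>, if any."""
    if str(pathname).startswith("/"):   # already absolute; note the str() cast
        return Path(pathname)
    postfix = "/" + str(pathname)
    for f in files:
        if str(f).endswith(postfix):
            return Path(f)
    return None

files = [
    "/User/xxx/github/MetaGPT/metagpt/management/skill_manager.py",
    "/User/xxx/github/MetaGPT/metagpt/actions/action.py",
]
hit = get_full_filename(files, "metagpt/management/skill_manager.py")
```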
metagpt/actions/requirement_analysis/__init__.py (new file, 11 lines)

@@ -0,0 +1,11 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/6/13
@Author : mashenquan
@File : __init__.py
@Desc : The implementation of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
from metagpt.actions.requirement_analysis.evaluate_action import EvaluationData, EvaluateAction

__all__ = [EvaluationData, EvaluateAction]
metagpt/actions/requirement_analysis/evaluate_action.py (new file, 74 lines)

@@ -0,0 +1,74 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/6/13
@Author : mashenquan
@File : evaluate_action.py
@Desc : The implementation of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
from typing import Optional

from pydantic import BaseModel
from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
from metagpt.logs import logger
from metagpt.utils.common import CodeParser, general_after_log, to_markdown_code_block


class EvaluationData(BaseModel):
    """Model to represent evaluation data.

    Attributes:
        is_pass (bool): Indicates if the evaluation passed or failed.
        conclusion (Optional[str]): Conclusion or remarks about the evaluation.
    """

    is_pass: bool
    conclusion: Optional[str] = None


class EvaluateAction(Action):
    """The base class for an evaluation action.

    This class provides methods to evaluate prompts using a specified language model.
    """

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _evaluate(self, prompt: str) -> (bool, str):
        """Evaluates a given prompt.

        Args:
            prompt (str): The prompt to be evaluated.

        Returns:
            tuple: A tuple containing:
                - bool: Indicates if the evaluation passed.
                - str: The JSON string containing the evaluation data.
        """
        rsp = await self.llm.aask(prompt)
        json_data = CodeParser.parse_code(text=rsp, lang="json")
        data = EvaluationData.model_validate_json(json_data)
        return data.is_pass, to_markdown_code_block(val=json_data, type_="json")

    async def _vote(self, prompt: str) -> EvaluationData:
        """Evaluates a prompt multiple times and returns the consensus.

        Args:
            prompt (str): The prompt to be evaluated.

        Returns:
            EvaluationData: An object containing the evaluation result and a summary of evaluations.
        """
        evaluations = {}
        for i in range(3):
            vote, evaluation = await self._evaluate(prompt)
            val = evaluations.get(vote, [])
            val.append(evaluation)
            if len(val) > 1:
                return EvaluationData(is_pass=vote, conclusion="\n".join(val))
            evaluations[vote] = val
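`_vote` implements a best-of-three consensus: it asks the model up to three times and returns as soon as one verdict (pass or fail) has collected two votes. A synchronous sketch of the same loop, with a stub `evaluate` callable standing in for the LLM-backed `_evaluate`:

```python
def vote(evaluate):
    """Return (verdict, joined details) once a verdict has two votes."""
    evaluations = {}
    for _ in range(3):
        verdict, detail = evaluate()
        val = evaluations.get(verdict, [])
        val.append(detail)
        if len(val) > 1:            # two matching verdicts: consensus reached
            return verdict, "\n".join(val)
        evaluations[verdict] = val

# Stubbed sequence of model answers: pass, fail, pass -> consensus is "pass".
results = iter([(True, "ok-1"), (False, "issue"), (True, "ok-2")])
consensus = vote(lambda: next(results))
```

With three binary votes, the pigeonhole principle guarantees some verdict reaches two by the third call, so the loop always returns.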
metagpt/actions/requirement_analysis/framework/__init__.py (new file, 86 lines)

@@ -0,0 +1,86 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/6/13
@Author : mashenquan
@File : __init__.py
@Desc : The implementation of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
import json
import uuid
from datetime import datetime
from pathlib import Path
from typing import Optional, Union, List

from pydantic import BaseModel

from metagpt.actions.requirement_analysis.framework.evaluate_framework import EvaluateFramework
from metagpt.actions.requirement_analysis.framework.write_framework import WriteFramework
from metagpt.config2 import config
from metagpt.utils.common import awrite


async def save_framework(
    dir_data: str, trd: Optional[str] = None, output_dir: Optional[Union[str, Path]] = None
) -> List[str]:
    """
    Saves framework data to files based on input JSON data and optionally saves a TRD (technical requirements document).

    Args:
        dir_data (str): JSON data in string format enclosed in triple backticks ("```json" "...data..." "```").
        trd (str, optional): Technical requirements document content to be saved. Defaults to None.
        output_dir (Union[str, Path], optional): Output directory path where files will be saved. If not provided,
            a default directory is created based on the current timestamp and a random UUID suffix.

    Returns:
        List[str]: List of file paths where data was saved.

    Raises:
        Any exceptions raised during file writing operations.

    Notes:
        - JSON data should be provided in the format "```json ...data... ```".
        - The function ensures that paths and filenames are correctly formatted and creates necessary directories.

    Example:
        ```python
        dir_data = "```json\n[{\"path\": \"/folder\", \"filename\": \"file1.txt\", \"content\": \"Some content\"}]\n```"
        trd = "Technical requirements document content."
        output_dir = '/path/to/output/dir'
        saved_files = await save_framework(dir_data, trd, output_dir)
        print(saved_files)
        ```
    """
    output_dir = (
        Path(output_dir)
        if output_dir
        else config.workspace.path / (datetime.now().strftime("%Y%m%d%H%M%ST") + uuid.uuid4().hex[0:8])
    )
    output_dir.mkdir(parents=True, exist_ok=True)

    json_data = dir_data.removeprefix("```json").removesuffix("```")
    items = json.loads(json_data)

    class Data(BaseModel):
        path: str
        filename: str
        content: str

    if trd:
        pathname = output_dir / "TRD.md"
        await awrite(filename=pathname, data=trd)

    files = []
    for i in items:
        v = Data.model_validate(i)
        if v.path and v.path[0] == "/":
            v.path = "." + v.path
        pathname = output_dir / v.path
        pathname.mkdir(parents=True, exist_ok=True)
        pathname = pathname / v.filename
        await awrite(filename=pathname, data=v.content)
        files.append(str(pathname))
    return files


__all__ = [WriteFramework, EvaluateFramework]
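The core of `save_framework` is plain string and path handling: strip the ```` ```json ```` fence, parse the item list, rewrite absolute paths as relative so the files land inside the output directory, and write each item. A dependency-free sketch of that flow (using `write_text` in place of MetaGPT's async `awrite`):

```python
import json
import tempfile
from pathlib import Path

# Fenced model output, as save_framework expects it.
dir_data = '```json\n[{"path": "/src", "filename": "main.py", "content": "print(1)"}]\n```'
items = json.loads(dir_data.removeprefix("```json").removesuffix("```"))

output_dir = Path(tempfile.mkdtemp())
saved = []
for item in items:
    # "/src" would escape output_dir; "./src" keeps it inside.
    rel = "." + item["path"] if item["path"].startswith("/") else item["path"]
    target_dir = output_dir / rel
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / item["filename"]
    target.write_text(item["content"])
    saved.append(target)
```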
@@ -0,0 +1,106 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/6/13
@Author : mashenquan
@File : evaluate_framework.py
@Desc : The implementation of Chapter 2.1.8 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""

from metagpt.actions.requirement_analysis import EvaluateAction, EvaluationData
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import to_markdown_code_block


@register_tool(include_functions=["run"])
class EvaluateFramework(EvaluateAction):
    """EvaluateFramework deals with the following situations:
    1. Given a TRD and the software framework based on the TRD, evaluate the quality of the software framework.
    """

    async def run(
        self,
        *,
        use_case_actors: str,
        trd: str,
        acknowledge: str,
        legacy_output: str,
        additional_technical_requirements: str,
    ) -> EvaluationData:
        """
        Run the evaluation of the software framework based on the provided TRD and related parameters.

        Args:
            use_case_actors (str): A description of the actors involved in the use case.
            trd (str): The Technical Requirements Document (TRD) that outlines the requirements for the software framework.
            acknowledge (str): External acknowledgments or acknowledgments information related to the framework.
            legacy_output (str): The previous versions of software framework returned by `WriteFramework`.
            additional_technical_requirements (str): Additional technical requirements that need to be considered during evaluation.

        Returns:
            EvaluationData: An object containing the results of the evaluation.

        Example:
            >>> evaluate_framework = EvaluateFramework()
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> trd = "## TRD\\n..."
            >>> acknowledge = "## Interfaces\\n..."
            >>> framework = '{"path":"balabala", "filename":"...", ...'
            >>> constraint = "Using Java language, ..."
            >>> evaluation = await evaluate_framework.run(
            >>>     use_case_actors=use_case_actors,
            >>>     trd=trd,
            >>>     acknowledge=acknowledge,
            >>>     legacy_output=framework,
            >>>     additional_technical_requirements=constraint,
            >>> )
            >>> is_pass = evaluation.is_pass
            >>> print(is_pass)
            True
            >>> evaluation_conclusion = evaluation.conclusion
            >>> print(evaluation_conclusion)
            Balabala...
        """
        prompt = PROMPT.format(
            use_case_actors=use_case_actors,
            trd=to_markdown_code_block(val=trd),
            acknowledge=to_markdown_code_block(val=acknowledge),
            legacy_output=to_markdown_code_block(val=legacy_output),
            additional_technical_requirements=to_markdown_code_block(val=additional_technical_requirements),
        )
        return await self._vote(prompt)


PROMPT = """
## Actor, System, External System
{use_case_actors}

## Legacy TRD
{trd}

## Acknowledge
{acknowledge}

## Legacy Outputs
{legacy_output}

## Additional Technical Requirements
{additional_technical_requirements}

---
You are a tool that evaluates the quality of framework code based on the TRD content;
You need to refer to the content of the "Legacy TRD" section to check for any errors or omissions in the framework code found in "Legacy Outputs";
The content of "Actor, System, External System" provides an explanation of actors and systems that appear in UML Use Case diagram;
Information about the external system missing from the "Legacy TRD" can be found in the "Acknowledge" section;
Which interfaces defined in "Acknowledge" are used in the "Legacy TRD"?
Do not implement the interface in "Acknowledge" section until it is used in "Legacy TRD", you can check whether they are the same interface by looking at its ID or url;
Parts not mentioned in the "Legacy TRD" will be handled by other TRDs, therefore, processes not present in the "Legacy TRD" are considered ready;
"Additional Technical Requirements" specifies the additional technical requirements that the generated software framework code must meet;
Do the parameters of the interface of the external system used in the code comply with it's specifications in 'Acknowledge'?
Is there a lack of necessary configuration files?
Return a markdown JSON object with:
- an "issues" key containing a string list of natural text about the issues that need to addressed, found in the "Legacy Outputs" if any exits, each issue found must provide a detailed description and include reasons;
- a "conclusion" key containing the evaluation conclusion;
- a "misalignment" key containing the judgement detail of the natural text string list about the misalignment with "Legacy TRD";
- a "is_pass" key containing a true boolean value if there is not any issue in the "Legacy Outputs";
"""
@@ -0,0 +1,156 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/6/13
@Author : mashenquan
@File : write_framework.py
@Desc : The implementation of Chapter 2.1.8 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
import json

from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
from metagpt.logs import logger
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import general_after_log, to_markdown_code_block


@register_tool(include_functions=["run"])
class WriteFramework(Action):
    """WriteFramework deals with the following situations:
    1. Given a TRD, write out the software framework.
    """

    async def run(
        self,
        *,
        use_case_actors: str,
        trd: str,
        acknowledge: str,
        legacy_output: str,
        evaluation_conclusion: str,
        additional_technical_requirements: str,
    ) -> str:
        """
        Run the action to generate a software framework based on the provided TRD and related information.

        Args:
            use_case_actors (str): Description of the use case actors involved.
            trd (str): Technical Requirements Document detailing the requirements.
            acknowledge (str): External acknowledgements or acknowledgements required.
            legacy_output (str): Previous version of the software framework returned by `WriteFramework.run`.
            evaluation_conclusion (str): Conclusion from the evaluation of the requirements.
            additional_technical_requirements (str): Any additional technical requirements.

        Returns:
            str: The generated software framework as a string.

        Example:
            >>> write_framework = WriteFramework()
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> trd = "## TRD\\n..."
            >>> acknowledge = "## Interfaces\\n..."
            >>> legacy_output = '{"path":"balabala", "filename":"...", ...'
            >>> evaluation_conclusion = "Balabala..."
            >>> constraint = "Using Java language, ..."
            >>> framework = await write_framework.run(
            >>>     use_case_actors=use_case_actors,
            >>>     trd=trd,
            >>>     acknowledge=acknowledge,
            >>>     legacy_output=framework,
            >>>     evaluation_conclusion=evaluation_conclusion,
            >>>     additional_technical_requirements=constraint,
            >>> )
            >>> print(framework)
            {"path":"balabala", "filename":"...", ...

        """
        acknowledge = await self._extract_external_interfaces(trd=trd, knowledge=acknowledge)
        prompt = PROMPT.format(
            use_case_actors=use_case_actors,
            trd=to_markdown_code_block(val=trd),
            acknowledge=to_markdown_code_block(val=acknowledge),
            legacy_output=to_markdown_code_block(val=legacy_output),
            evaluation_conclusion=evaluation_conclusion,
            additional_technical_requirements=to_markdown_code_block(val=additional_technical_requirements),
        )
        return await self._write(prompt)

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _write(self, prompt: str) -> str:
        rsp = await self.llm.aask(prompt)
        # Do not use `CodeParser` here.
        tags = ["```json", "```"]
        bix = rsp.find(tags[0])
        eix = rsp.rfind(tags[1])
        if bix >= 0:
            rsp = rsp[bix : eix + len(tags[1])]
        json_data = rsp.removeprefix("```json").removesuffix("```")
        json.loads(json_data)  # validate
        return json_data

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _extract_external_interfaces(self, trd: str, knowledge: str) -> str:
        prompt = f"## TRD\n{to_markdown_code_block(val=trd)}\n\n## Knowledge\n{to_markdown_code_block(val=knowledge)}\n"
        rsp = await self.llm.aask(
            prompt,
            system_msgs=[
                "You are a tool that removes impurities from articles; you can remove irrelevant content from articles.",
                'Identify which interfaces are used in "TRD"? Remove the relevant content of the interfaces NOT used in "TRD" from "Knowledge" and return the simplified content of "Knowledge".',
            ],
        )
        return rsp


PROMPT = """
## Actor, System, External System
{use_case_actors}

## TRD
{trd}

## Acknowledge
{acknowledge}

## Legacy Outputs
{legacy_output}

## Evaluation Conclusion
{evaluation_conclusion}

## Additional Technical Requirements
{additional_technical_requirements}

---
You are a tool that generates software framework code based on TRD.
The content of "Actor, System, External System" provides an explanation of actors and systems that appear in UML Use Case diagram;
The descriptions of the interfaces of the external system used in the "TRD" can be found in the "Acknowledge" section; Do not implement the interface of the external system in "Acknowledge" section until it is used in "TRD";
"Legacy Outputs" contains the software framework code generated by you last time, which you can improve by addressing the issues raised in "Evaluation Conclusion";
"Additional Technical Requirements" specifies the additional technical requirements that the generated software framework code must meet;
Develop the software framework based on the "TRD", the output files should include:
- The `README.md` file should include:
  - The folder structure diagram of the entire project;
  - Correspondence between classes, interfaces, and functions with the content in the "TRD" section;
  - Prerequisites if necessary;
  - Installation if necessary;
  - Configuration if necessary;
  - Usage if necessary;
- The `CLASS.md` file should include the class diagram in PlantUML format based on the "TRD";
- The `SEQUENCE.md` file should include the sequence diagram in PlantUML format based on the "TRD";
- The source code files that implement the "TRD" and "Additional Technical Requirements"; do not add comments to source code files;
- The configuration files that required by the source code files, "TRD" and "Additional Technical Requirements";

Return a markdown JSON object list, each object containing:
- a "path" key with a value specifying its path;
- a "filename" key with a value specifying its file name;
- a "content" key with a value containing its file content;
"""
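`_write` deliberately avoids `CodeParser` and trims the response with `find`/`rfind`: the first ```` ```json ```` and the last ```` ``` ```` bracket the payload, so any prose the model wraps around the fence is discarded before validation. The same logic, extracted into a standalone function:

```python
import json

def extract_json_block(rsp: str) -> str:
    """Trim to the first ```json ... last ``` span, strip the fence, validate."""
    tags = ["```json", "```"]
    bix = rsp.find(tags[0])
    eix = rsp.rfind(tags[1])
    if bix >= 0:
        rsp = rsp[bix : eix + len(tags[1])]
    json_data = rsp.removeprefix("```json").removesuffix("```")
    json.loads(json_data)  # raises on malformed output, triggering the retry
    return json_data

rsp = 'Sure, here it is:\n```json\n[{"path": "/src"}]\n```\nHope that helps!'
json_data = extract_json_block(rsp)
```

Using `rfind` for the closing fence means embedded ```` ``` ```` runs inside the payload would confuse simpler parsers but not this one, as long as the true closing fence comes last.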
123
metagpt/actions/requirement_analysis/requirement/pic2txt.py
Normal file
123
metagpt/actions/requirement_analysis/requirement/pic2txt.py
Normal file
|
|
@ -0,0 +1,123 @@
|
|||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
@Time : 2024/6/27
|
||||
@Author : mashenquan
|
||||
@File : pic2txt.py
|
||||
"""
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
|
||||
from tenacity import retry, stop_after_attempt, wait_random_exponential
|
||||
|
||||
from metagpt.actions import Action
|
||||
from metagpt.logs import logger
|
||||
from metagpt.tools.tool_registry import register_tool
|
||||
from metagpt.utils.common import encode_image, general_after_log, to_markdown_code_block
|
||||
|
||||
|
||||
@register_tool(include_functions=["run"])
|
||||
class Pic2Txt(Action):
|
||||
"""Pic2Txt deal with the following situations:
|
||||
Given some pictures depicting user requirements alongside contextual description, write out the intact textual user requirements.
|
||||
"""
|
||||
|
||||
async def run(
|
||||
self,
|
||||
*,
|
||||
image_paths: List[str],
|
||||
textual_user_requirement: str = "",
|
||||
legacy_output: str = "",
|
||||
evaluation_conclusion: str = "",
|
||||
additional_technical_requirements: str = "",
|
||||
) -> str:
|
||||
"""
|
||||
Given some pictures depicting user requirements alongside contextual description, write out the intact textual user requirements
|
||||
|
||||
Args:
|
||||
image_paths (List[str]): A list of file paths to the input image(s) depicting user requirements.
|
||||
textual_user_requirement (str, optional): Textual user requirement that alongside the given images, if any.
|
||||
legacy_output (str, optional): The intact textual user requirements generated by you last time, if any.
|
||||
evaluation_conclusion (str, optional): Conclusion or evaluation based on the processed requirements.
|
||||
additional_technical_requirements (str, optional): Any supplementary technical details relevant to the process.
|
||||
|
||||
Returns:
|
||||
str: Textual representation of user requirements extracted from the provided image(s).
|
||||
|
||||
Raises:
|
||||
ValueError: If image_paths list is empty.
|
||||
OSError: If there is an issue accessing or reading the image files.
|
||||
|
||||
Example:
|
||||
>>> images = ["requirements/pic/1.png", "requirements/pic/2.png", "requirements/pic/3.png"]
|
||||
>>> textual_user_requirements = "User requirement paragraph 1 ..., . paragraph 2......"
|
||||
>>> action = Pic2Txt()
|
||||
>>> intact_textual_user_requirements = await action.run(image_paths=images, textual_user_requirement=textual_user_requirements)
|
||||
>>> print(intact_textual_user_requirements)
|
||||
"User requirement paragraph 1 ...,  This picture describes... paragraph 2......"
|
||||
|
||||
"""
|
||||
descriptions = {}
|
||||
for i in image_paths:
|
||||
filename = Path(i)
|
||||
base64_image = encode_image(filename)
|
||||
rsp = await self._pic2txt(
|
||||
"Generate a paragraph of text based on the content of the image, the language of the text is consistent with the language in the image.",
|
||||
base64_image=base64_image,
|
||||
)
|
||||
descriptions[filename.name] = rsp
|
||||
|
||||
prompt = PROMPT.format(
|
||||
textual_user_requirement=textual_user_requirement,
|
||||
acknowledge=to_markdown_code_block(val=json.dumps(descriptions), type_="json"),
|
||||
legacy_output=to_markdown_code_block(val=legacy_output),
|
||||
evaluation_conclusion=evaluation_conclusion,
|
||||
additional_technical_requirements=to_markdown_code_block(val=additional_technical_requirements),
|
||||
)
|
||||
return await self._write(prompt)
|
||||
|
||||
@retry(
|
||||
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _write(self, prompt: str) -> str:
        rsp = await self.llm.aask(prompt)
        return rsp

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _pic2txt(self, prompt: str, base64_image: str) -> str:
        rsp = await self.llm.aask(prompt, images=base64_image)
        return rsp


PROMPT = """
## Textual User Requirements
{textual_user_requirement}

## Acknowledge
{acknowledge}

## Legacy Outputs
{legacy_output}

## Evaluation Conclusion
{evaluation_conclusion}

## Additional Technical Requirements
{additional_technical_requirements}

---
You are a tool that generates intact textual user requirements given a few textual fragments of user requirements and some fragments of UI pictures.
The content of "Textual User Requirements" provides a few textual fragments of user requirements;
The content of "Acknowledge" provides the descriptions of the pictures used in "Textual User Requirements";
"Legacy Outputs" contains the intact textual user requirements you generated last time, which you can improve by addressing the issues raised in "Evaluation Conclusion";
"Additional Technical Requirements" specifies the additional technical requirements that the generated textual user requirements must meet;
You need to merge the text content of the corresponding images in "Acknowledge" into "Textual User Requirements" to generate a complete, natural, and coherent description of the user requirements;
Return the intact textual user requirements according to the given fragments of the user requirements in "Textual User Requirements" and the UI pictures.
"""
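Every LLM call in these actions is wrapped in the same tenacity decorator: `wait_random_exponential(min=1, max=20)` with `stop_after_attempt(6)`. As a rough sketch of what that backoff schedule looks like in plain stdlib terms (the `backoff_delays` helper and its exact jitter formula are illustrative approximations, not tenacity's implementation):

```python
import random


def backoff_delays(attempts: int, multiplier: float = 1.0, min_s: float = 1.0, max_s: float = 20.0):
    """Yield one randomized delay per retry attempt (jittered exponential backoff)."""
    for attempt in range(attempts):
        # Ceiling doubles each attempt but is capped at max_s.
        ceiling = min(max_s, multiplier * (2 ** attempt))
        # Draw uniformly between the floor and the exponentially growing ceiling.
        yield random.uniform(min_s, max(min_s, ceiling))


delays = list(backoff_delays(6))
print([round(d, 2) for d in delays])  # six delays, each within [1, 20] seconds
```

The jitter prevents many concurrently failing calls from retrying in lockstep against the same LLM endpoint.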
16 metagpt/actions/requirement_analysis/trd/__init__.py Normal file
@@ -0,0 +1,16 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : __init__.py
@Desc    : The implementation of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""

from metagpt.actions.requirement_analysis.trd.compress_external_interfaces import CompressExternalInterfaces
from metagpt.actions.requirement_analysis.trd.detect_interaction import DetectInteraction
from metagpt.actions.requirement_analysis.trd.evaluate_trd import EvaluateTRD
from metagpt.actions.requirement_analysis.trd.write_trd import WriteTRD

# __all__ entries must be strings, not class objects, for `from ... import *` to work.
__all__ = ["CompressExternalInterfaces", "DetectInteraction", "WriteTRD", "EvaluateTRD"]
@@ -0,0 +1,58 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : compress_external_interfaces.py
@Desc    : The implementation of Chapter 2.1.5 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
from metagpt.logs import logger
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import general_after_log


@register_tool(include_functions=["run"])
class CompressExternalInterfaces(Action):
    """CompressExternalInterfaces deals with the following situations:
    1. Given a natural text of acknowledgement, it extracts and compresses the information about external system interfaces.
    """

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def run(
        self,
        *,
        acknowledge: str,
    ) -> str:
        """
        Extracts and compresses information about external system interfaces from a given acknowledgement text.

        Args:
            acknowledge (str): A natural text of acknowledgement containing details about external system interfaces.

        Returns:
            str: A compressed version of the information about external system interfaces.

        Example:
            >>> compress_acknowledge = CompressExternalInterfaces()
            >>> acknowledge = "## Interfaces\\n..."
            >>> available_external_interfaces = await compress_acknowledge.run(acknowledge=acknowledge)
            >>> print(available_external_interfaces)
            ```json\n[\n{\n"id": 1,\n"inputs": {...
        """
        return await self.llm.aask(
            msg=acknowledge,
            system_msgs=[
                "Extracts and compresses the information about external system interfaces.",
                "Return a markdown JSON list of objects, each object containing:\n"
                '- an "id" key containing the interface id;\n'
                '- an "inputs" key containing a dict of input parameters that consist of name and description pairs;\n'
                '- an "outputs" key containing a dict of returns that consist of name and description pairs;\n',
            ],
        )
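`CompressExternalInterfaces.run` returns a "markdown JSON list", i.e. a JSON array inside a ```json fence. A downstream consumer has to strip the fence before parsing; a minimal, hypothetical helper (not part of MetaGPT) might look like:

```python
import json
import re


def parse_markdown_json(text: str):
    """Extract and parse the first ```json fenced block; fall back to the raw text."""
    m = re.search(r"```json\s*(.*?)\s*```", text, re.DOTALL)
    payload = m.group(1) if m else text
    return json.loads(payload)


# A response shaped like the docstring example above.
rsp = '```json\n[{"id": 1, "inputs": {"score": "game score"}, "outputs": {"ok": "status"}}]\n```'
interfaces = parse_markdown_json(rsp)
print(interfaces[0]["id"])  # → 1
```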
101 metagpt/actions/requirement_analysis/trd/detect_interaction.py Normal file
@@ -0,0 +1,101 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : detect_interaction.py
@Desc    : The implementation of Chapter 2.1.6 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
from metagpt.logs import logger
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import general_after_log, to_markdown_code_block


@register_tool(include_functions=["run"])
class DetectInteraction(Action):
    """DetectInteraction deals with the following situations:
    1. Given a natural text of user requirements, it identifies the interaction events and the participants of those interactions from the original text.
    """

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def run(
        self,
        *,
        user_requirements: str,
        use_case_actors: str,
        legacy_interaction_events: str,
        evaluation_conclusion: str,
    ) -> str:
        """
        Identifies interaction events and participants from the user requirements.

        Args:
            user_requirements (str): A natural language text detailing the user's requirements.
            use_case_actors (str): A description of the actors involved in the use case.
            legacy_interaction_events (str): The previous version of the interaction events identified by you.
            evaluation_conclusion (str): The external evaluation conclusions regarding the interaction events identified by you.

        Returns:
            str: A string summarizing the identified interaction events and their participants.

        Example:
            >>> detect_interaction = DetectInteraction()
            >>> user_requirements = "User requirements 1. ..."
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> previous_version_interaction_events = "['interaction ...', ...]"
            >>> evaluation_conclusion = "Issues: ..."
            >>> interaction_events = await detect_interaction.run(
            >>>    user_requirements=user_requirements,
            >>>    use_case_actors=use_case_actors,
            >>>    legacy_interaction_events=previous_version_interaction_events,
            >>>    evaluation_conclusion=evaluation_conclusion,
            >>> )
            >>> print(interaction_events)
            "['interaction ...', ...]"
        """
        msg = PROMPT.format(
            use_case_actors=use_case_actors,
            original_user_requirements=to_markdown_code_block(val=user_requirements),
            previous_version_of_interaction_events=legacy_interaction_events,
            the_evaluation_conclusion_of_previous_version_of_trd=evaluation_conclusion,
        )
        return await self.llm.aask(msg=msg)


PROMPT = """
## Actor, System, External System
{use_case_actors}

## User Requirements
{original_user_requirements}

## Legacy Interaction Events
{previous_version_of_interaction_events}

## Evaluation Conclusion
{the_evaluation_conclusion_of_previous_version_of_trd}

---
You are a tool for capturing interaction events.
"Actor, System, External System" provides the possible participants of the interaction events;
"Legacy Interaction Events" is the content of the interaction events that you output earlier;
Some descriptions in "Evaluation Conclusion" relate to the content of "User Requirements", and these descriptions address issues in the content of "Legacy Interaction Events";
You need to capture the interaction events occurring in the description within the content of "User Requirements" word-for-word, including:
1. Who is interacting with whom. An interaction event has a maximum of 2 participants. If there are more participants, multiple events have been combined into one event and should be further split;
2. When an interaction event occurs, who is the initiator? What data did the initiator enter?
3. What data does the interaction event ultimately return according to the "User Requirements"?

You can check the data flow described in the "User Requirements" to see if there are any missing interaction events;
Return a markdown JSON list of objects, each object containing:
- a "name" key containing the name of the interaction event;
- a "participants" key containing a string list of the names of the two participants;
- an "initiator" key containing the name of the participant who initiates the interaction;
- an "input" key containing a natural text description of the input data;
"""
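The prompt above constrains each interaction event to at most two participants, with an initiator and an input. A small, illustrative validator for that schema (hypothetical, not in the repository) shows how a caller could check the parsed events before feeding them to `WriteTRD`:

```python
def validate_interaction_events(events):
    """Return a list of problem descriptions; an empty list means the events are well-formed."""
    issues = []
    for i, ev in enumerate(events):
        participants = ev.get("participants", [])
        if len(participants) > 2:
            # The prompt requires oversized events to be split further.
            issues.append(f"event {i} ({ev.get('name')}): more than 2 participants, split it")
        if ev.get("initiator") not in participants:
            issues.append(f"event {i} ({ev.get('name')}): initiator is not a participant")
    return issues


events = [
    {"name": "start game", "participants": ["game player", "snake game"],
     "initiator": "game player", "input": "start command"},
]
print(validate_interaction_events(events))  # → []
```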
115 metagpt/actions/requirement_analysis/trd/evaluate_trd.py Normal file
@@ -0,0 +1,115 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : evaluate_trd.py
@Desc    : The implementation of Chapter 2.1.6~2.1.7 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""

from metagpt.actions.requirement_analysis import EvaluateAction, EvaluationData
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import to_markdown_code_block


@register_tool(include_functions=["run"])
class EvaluateTRD(EvaluateAction):
    """EvaluateTRD deals with the following situations:
    1. Given a TRD, it evaluates the quality and returns a conclusion.
    """

    async def run(
        self,
        *,
        user_requirements: str,
        use_case_actors: str,
        trd: str,
        interaction_events: str,
        legacy_user_requirements_interaction_events: str = "",
    ) -> EvaluationData:
        """
        Evaluates the given TRD based on user requirements, use case actors, interaction events, and optionally external legacy interaction events.

        Args:
            user_requirements (str): The requirements provided by the user.
            use_case_actors (str): The actors involved in the use case.
            trd (str): The TRD (Technical Requirements Document) to be evaluated.
            interaction_events (str): The interaction events related to the user requirements and the TRD.
            legacy_user_requirements_interaction_events (str, optional): External legacy interaction events tied to the user requirements. Defaults to an empty string.

        Returns:
            EvaluationData: The conclusion of the TRD evaluation.

        Example:
            >>> evaluate_trd = EvaluateTRD()
            >>> user_requirements = "User requirements 1. ..."
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> trd = "## TRD\\n..."
            >>> interaction_events = "['interaction ...', ...]"
            >>> legacy_user_requirements_interaction_events = ["user requirements 1. ...", ...]
            >>> evaluation = await evaluate_trd.run(
            >>>    user_requirements=user_requirements,
            >>>    use_case_actors=use_case_actors,
            >>>    trd=trd,
            >>>    interaction_events=interaction_events,
            >>>    legacy_user_requirements_interaction_events=str(legacy_user_requirements_interaction_events),
            >>> )
            >>> is_pass = evaluation.is_pass
            >>> print(is_pass)
            True
            >>> evaluation_conclusion = evaluation.conclusion
            >>> print(evaluation_conclusion)
            ## Conclusion\n ...
        """
        prompt = PROMPT.format(
            use_case_actors=use_case_actors,
            user_requirements=to_markdown_code_block(val=user_requirements),
            trd=to_markdown_code_block(val=trd),
            legacy_user_requirements_interaction_events=legacy_user_requirements_interaction_events,
            interaction_events=interaction_events,
        )
        return await self._vote(prompt)


PROMPT = """
## Actor, System, External System
{use_case_actors}

## User Requirements
{user_requirements}

## TRD Design
{trd}

## External Interaction Events
{legacy_user_requirements_interaction_events}

## Interaction Events
{legacy_user_requirements_interaction_events}
{interaction_events}

---
You are a tool to evaluate the TRD design.
"Actor, System, External System" provides all possible participants in interaction events;
"User Requirements" provides the original requirements description; any parts not mentioned in this description will be handled by other modules, so do not fabricate requirements;
"External Interaction Events" is provided by an external module for your use; its content is also included in the "Interaction Events" section, and the content in "External Interaction Events" can be assumed to be problem-free;
"External Interaction Events" provides some identified interaction events and the interacting participants based on part of the content of the "User Requirements";
"Interaction Events" provides some identified interaction events and the interacting participants based on the content of the "User Requirements";
"TRD Design" provides a comprehensive design of the implementation steps for the original requirements, incorporating the interaction events from "Interaction Events" and adding additional steps to connect the complete upstream and downstream data flows;
In order to integrate the full upstream and downstream data flow, the "TRD Design" may include steps that do not appear in the original requirements description, as long as they do not conflict with the steps explicitly described in the "User Requirements";
Which interactions from "Interaction Events" correspond to which steps in "TRD Design"? Please provide reasons.
Which aspects of "TRD Design" and "Interaction Events" do not align with the descriptions in "User Requirements"? Please provide detailed descriptions and reasons.
If the descriptions in "User Requirements" are divided into multiple steps in "TRD Design" and "Interaction Events", this can be considered compliant with "User Requirements" as long as it does not conflict with them;
There is a possibility of missing details in the descriptions of "User Requirements". Any additional steps in "TRD Design" and "Interaction Events" are considered compliant with "User Requirements" as long as they do not conflict with the descriptions provided in "User Requirements";
If there are interaction events with external systems in "TRD Design", the ID of the external interface used for each such interaction event must be explicitly specified, and the input and output parameters of the used external interface must explicitly match the input and output of the interaction event;
Does the sequence of steps in "Interaction Events" cause performance or cost issues? Please provide detailed descriptions and reasons;
If a step of "TRD Design" has input data, its input data must be provided either by the output of previous steps or by participants from "Actor, System, External System"; there should be no passive data;
Return a markdown JSON object with:
- an "issues" key containing a string list of natural text about the issues found in the "TRD Design" that need to be addressed, if any exist; each issue found must provide a detailed description and include reasons;
- a "conclusion" key containing the evaluation conclusion;
- a "correspondence_between" key containing a natural-text string list detailing the correspondence between "Interaction Events" and "TRD Design" steps;
- a "misalignment" key containing a natural-text string list detailing the misalignments with "User Requirements";
- an "is_pass" key containing a true boolean value if there are no issues in the "TRD Design";
"""
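`EvaluateTRD.run` delegates to `self._vote`, whose internals are not shown in this diff. Assuming a simple majority over several sampled evaluations (purely an illustration; the real `_vote` may differ), the aggregation could be sketched as:

```python
import json


def majority_vote(evaluations):
    """Hypothetical aggregation: pass only if more than half of the evaluations pass."""
    passes = sum(1 for e in evaluations if e.get("is_pass"))
    # Collect every issue raised by any evaluator so none is lost.
    issues = [i for e in evaluations for i in e.get("issues", [])]
    return {"is_pass": passes * 2 > len(evaluations), "issues": issues}


# Three sampled responses shaped like the markdown JSON object the prompt requests.
raw = [
    '{"is_pass": true, "issues": [], "conclusion": "OK"}',
    '{"is_pass": true, "issues": [], "conclusion": "OK"}',
    '{"is_pass": false, "issues": ["step 3 lacks an interface id"], "conclusion": "revise"}',
]
verdict = majority_vote([json.loads(r) for r in raw])
print(verdict["is_pass"], verdict["issues"])  # → True ['step 3 lacks an interface id']
```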
261 metagpt/actions/requirement_analysis/trd/write_trd.py Normal file
@@ -0,0 +1,261 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2024/6/13
@Author  : mashenquan
@File    : write_trd.py
@Desc    : The implementation of Chapter 2.1.6~2.1.7 of RFC243. https://deepwisdom.feishu.cn/wiki/QobGwPkImijoyukBUKHcrYetnBb
"""
from tenacity import retry, stop_after_attempt, wait_random_exponential

from metagpt.actions import Action
from metagpt.logs import logger
from metagpt.tools.tool_registry import register_tool
from metagpt.utils.common import general_after_log, to_markdown_code_block


@register_tool(include_functions=["run"])
class WriteTRD(Action):
    """WriteTRD deals with the following situations:
    1. Given some new user requirements, write out a new TRD (Technical Requirements Document).
    2. Given some incremental user requirements, update the legacy TRD.
    """

    async def run(
        self,
        *,
        user_requirements: str = "",
        use_case_actors: str,
        available_external_interfaces: str,
        evaluation_conclusion: str = "",
        interaction_events: str,
        previous_version_trd: str = "",
        legacy_user_requirements: str = "",
        legacy_user_requirements_trd: str = "",
        legacy_user_requirements_interaction_events: str = "",
    ) -> str:
        """
        Handles the writing or updating of a Technical Requirements Document (TRD) based on user requirements.

        Args:
            user_requirements (str): The new/incremental user requirements.
            use_case_actors (str): Description of the actors involved in the use case.
            available_external_interfaces (str): List of available external interfaces.
            evaluation_conclusion (str, optional): The conclusion of the evaluation of the TRD written by you. Defaults to an empty string.
            interaction_events (str): The interaction events related to the user requirements that you are handling.
            previous_version_trd (str, optional): The previous version of the TRD written by you, for updating.
            legacy_user_requirements (str, optional): Existing user requirements handled by an external object for your use. Defaults to an empty string.
            legacy_user_requirements_trd (str, optional): The TRD associated with the existing user requirements handled by an external object for your use. Defaults to an empty string.
            legacy_user_requirements_interaction_events (str, optional): Interaction events related to the existing user requirements handled by an external object for your use. Defaults to an empty string.

        Returns:
            str: The newly created or updated TRD written by you.

        Example:
            >>> # Given new user requirements, write out a new TRD.
            >>> user_requirements = "Write a 'snake game' TRD."
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> available_external_interfaces = "The available external interfaces returned by `CompressExternalInterfaces.run` are ..."
            >>> previous_version_trd = "TRD ..."  # The last version of the TRD written out, if there is one.
            >>> evaluation_conclusion = "Conclusion ..."  # The conclusion returned by `EvaluateTRD.run`, if there is one.
            >>> interaction_events = "Interaction ..."  # The interaction events returned by `DetectInteraction.run`.
            >>> write_trd = WriteTRD()
            >>> new_version_trd = await write_trd.run(
            >>>    user_requirements=user_requirements,
            >>>    use_case_actors=use_case_actors,
            >>>    available_external_interfaces=available_external_interfaces,
            >>>    evaluation_conclusion=evaluation_conclusion,
            >>>    interaction_events=interaction_events,
            >>>    previous_version_trd=previous_version_trd,
            >>> )
            >>> print(new_version_trd)
            ## Technical Requirements Document\n ...

            >>> # Given incremental requirements, update the legacy TRD.
            >>> legacy_user_requirements = ["User requirements 1. ...", "User requirements 2. ...", ...]
            >>> legacy_user_requirements_trd = "## Technical Requirements Document\\n ..."  # The TRD before integrating more user requirements.
            >>> legacy_user_requirements_interaction_events = ["The interaction events list of user requirements 1 ...", "The interaction events list of user requirements 2 ...", ...]
            >>> use_case_actors = "- Actor: game player;\\n- System: snake game; \\n- External System: game center;"
            >>> available_external_interfaces = "The available external interfaces returned by `CompressExternalInterfaces.run` are ..."
            >>> increment_requirements = "The incremental user requirements are ..."
            >>> evaluation_conclusion = "Conclusion ..."  # The conclusion returned by `EvaluateTRD.run`, if there is one.
            >>> previous_version_trd = "TRD ..."  # The last version of the TRD written out, if there is one.
            >>> write_trd = WriteTRD()
            >>> new_version_trd = await write_trd.run(
            >>>    user_requirements=increment_requirements,
            >>>    use_case_actors=use_case_actors,
            >>>    available_external_interfaces=available_external_interfaces,
            >>>    evaluation_conclusion=evaluation_conclusion,
            >>>    interaction_events=interaction_events,
            >>>    previous_version_trd=previous_version_trd,
            >>>    legacy_user_requirements=str(legacy_user_requirements),
            >>>    legacy_user_requirements_trd=legacy_user_requirements_trd,
            >>>    legacy_user_requirements_interaction_events=str(legacy_user_requirements_interaction_events),
            >>> )
            >>> print(new_version_trd)
            ## Technical Requirements Document\n ...
        """
        if legacy_user_requirements:
            return await self._write_incremental_trd(
                use_case_actors=use_case_actors,
                legacy_user_requirements=legacy_user_requirements,
                available_external_interfaces=available_external_interfaces,
                legacy_user_requirements_trd=legacy_user_requirements_trd,
                legacy_user_requirements_interaction_events=legacy_user_requirements_interaction_events,
                incremental_user_requirements=user_requirements,
                previous_version_trd=previous_version_trd,
                evaluation_conclusion=evaluation_conclusion,
                incremental_user_requirements_interaction_events=interaction_events,
            )
        return await self._write_new_trd(
            use_case_actors=use_case_actors,
            original_user_requirement=user_requirements,
            available_external_interfaces=available_external_interfaces,
            legacy_trd=previous_version_trd,
            evaluation_conclusion=evaluation_conclusion,
            interaction_events=interaction_events,
        )

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _write_new_trd(
        self,
        *,
        use_case_actors: str,
        original_user_requirement: str,
        available_external_interfaces: str,
        legacy_trd: str,
        evaluation_conclusion: str,
        interaction_events: str,
    ) -> str:
        prompt = NEW_PROMPT.format(
            use_case_actors=use_case_actors,
            original_user_requirement=to_markdown_code_block(val=original_user_requirement),
            available_external_interfaces=available_external_interfaces,
            legacy_trd=to_markdown_code_block(val=legacy_trd),
            evaluation_conclusion=evaluation_conclusion,
            interaction_events=interaction_events,
        )
        return await self.llm.aask(prompt)

    @retry(
        wait=wait_random_exponential(min=1, max=20),
        stop=stop_after_attempt(6),
        after=general_after_log(logger),
    )
    async def _write_incremental_trd(
        self,
        *,
        use_case_actors: str,
        legacy_user_requirements: str,
        available_external_interfaces: str,
        legacy_user_requirements_trd: str,
        legacy_user_requirements_interaction_events: str,
        incremental_user_requirements: str,
        previous_version_trd: str,
        evaluation_conclusion: str,
        incremental_user_requirements_interaction_events: str,
    ):
        prompt = INCREMENTAL_PROMPT.format(
            use_case_actors=use_case_actors,
            legacy_user_requirements=to_markdown_code_block(val=legacy_user_requirements),
            available_external_interfaces=available_external_interfaces,
            legacy_user_requirements_trd=to_markdown_code_block(val=legacy_user_requirements_trd),
            legacy_user_requirements_interaction_events=legacy_user_requirements_interaction_events,
            incremental_user_requirements=to_markdown_code_block(val=incremental_user_requirements),
            previous_version_trd=to_markdown_code_block(val=previous_version_trd),
            evaluation_conclusion=evaluation_conclusion,
            incremental_user_requirements_interaction_events=incremental_user_requirements_interaction_events,
        )
        return await self.llm.aask(prompt)


NEW_PROMPT = """
## Actor, System, External System
{use_case_actors}

## User Requirements
{original_user_requirement}

## Available External Interfaces
{available_external_interfaces}

## Legacy TRD
{legacy_trd}

## Evaluation Conclusion
{evaluation_conclusion}

## Interaction Events
{interaction_events}

---
You are a TRD generator.
The content of "Actor, System, External System" provides an explanation of the actors and systems that appear in the UML Use Case diagram;
The content of "Available External Interfaces" provides the candidate steps, along with the inputs and outputs of each step;
"User Requirements" provides the original requirements description; any parts not mentioned in this description will be handled by other modules, so do not fabricate requirements;
"Legacy TRD" provides the old version of the TRD based on the "User Requirements" and can serve as a reference for the new TRD;
"Evaluation Conclusion" provides a summary of the evaluation of the old TRD in the "Legacy TRD" and can serve as a reference for the new TRD;
"Interaction Events" provides some identified interaction events and the interacting participants based on the content of the "User Requirements";
1. What inputs and outputs are described in the "User Requirements"?
2. How many steps are needed to achieve the inputs and outputs described in the "User Requirements"? Which actors from the "Actor, System, External System" section are involved in each step? What are the inputs and outputs of each step? Where is each output used, for example, as input for which interface or where it is required in the requirements?
3. Output a complete Technical Requirements Document (TRD):
  3.1. In the description, use the actor and system names defined in the "Actor, System, External System" section to describe the interactors;
  3.2. The content should include the original text of the requirements from "User Requirements";
  3.3. In the TRD, each step can involve a maximum of two participants. If there are more than two participants, the step needs to be further split;
  3.4. In the TRD, each step must include detailed descriptions, inputs, outputs, participants, initiator, and the rationale for the step's existence. The rationale should reference the original text to justify it, such as specifying which interface requires the output of this step as parameters or where in the requirements this step is mandated;
  3.5. In the TRD, if you need to call interfaces of external systems, you must explicitly specify the interface IDs of the external systems you want to call;
"""

INCREMENTAL_PROMPT = """
## Actor, System, External System
{use_case_actors}

## Legacy User Requirements
{legacy_user_requirements}

## Available External Interfaces
{available_external_interfaces}

## The TRD of Legacy User Requirements
{legacy_user_requirements_trd}

## The Interaction Events of Legacy User Requirements
{legacy_user_requirements_interaction_events}

## Incremental Requirements
{incremental_user_requirements}

## Legacy TRD
{previous_version_trd}

## Evaluation Conclusion
{evaluation_conclusion}

## Interaction Events
{incremental_user_requirements_interaction_events}

---
You are a TRD generator.
The content of "Actor, System, External System" provides an explanation of the actors and systems that appear in the UML Use Case diagram;
The content of "Available External Interfaces" provides the candidate steps, along with the inputs and outputs of each step;
"Legacy User Requirements" provides the original requirements description handled by other modules for your use;
"The TRD of Legacy User Requirements" is the TRD generated by other modules based on the "Legacy User Requirements" for your use;
"The Interaction Events of Legacy User Requirements" is the interaction events list generated by other modules based on the "Legacy User Requirements" for your use;
"Incremental Requirements" provides the original requirements description that you need to address; any parts not mentioned in this description will be handled by other modules, so do not fabricate requirements;
The requirements in "Legacy User Requirements" combined with the "Incremental Requirements" form a complete set of requirements; therefore, you need to add the TRD portion of the "Incremental Requirements" to "The TRD of Legacy User Requirements", and the added content must not conflict with the original content of "The TRD of Legacy User Requirements";
"Legacy TRD" provides the old version of the TRD you previously wrote based on the "Incremental Requirements" and can serve as a reference for the new TRD;
"Evaluation Conclusion" provides a summary of the evaluation of the old TRD you generated in the "Legacy TRD", and the identified issues can serve as a reference for the new TRD you create;
"Interaction Events" provides some identified interaction events and the interacting participants based on the content of the "Incremental Requirements";
1. What inputs and outputs are described in the "Incremental Requirements"?
2. How many steps are needed to achieve the inputs and outputs described in the "Incremental Requirements"? Which actors from the "Actor, System, External System" section are involved in each step? What are the inputs and outputs of each step? Where is each output used, for example, as input for which interface or where it is required in the requirements?
3. Output a complete Technical Requirements Document (TRD):
  3.1. In the description, use the actor and system names defined in the "Actor, System, External System" section to describe the interactors;
  3.2. The content should include the original text of the requirements from "Incremental Requirements";
  3.3. In the TRD, each step can involve a maximum of two participants. If there are more than two participants, the step needs to be further split;
  3.4. In the TRD, each step must include detailed descriptions, inputs, outputs, participants, initiator, and the rationale for the step's existence. The rationale should reference the original text to justify it, such as specifying which interface requires the output of this step as parameters or where in the requirements this step is mandated.
"""
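Together these actions form an iterative loop: `WriteTRD` produces a draft, `EvaluateTRD` returns a conclusion, and the conclusion feeds the next `WriteTRD` call until `is_pass` is true. A stubbed, dependency-free sketch of that control flow (the `write`/`evaluate` callables here stand in for the real async actions, which this illustration does not reproduce):

```python
def refine_trd(write, evaluate, max_rounds: int = 3):
    """Sketch of the RFC243 loop: rewrite the TRD until the evaluator passes it."""
    trd, conclusion = "", ""
    for _ in range(max_rounds):
        # Each round sees the previous draft and the evaluator's feedback.
        trd = write(previous_trd=trd, evaluation_conclusion=conclusion)
        verdict = evaluate(trd)
        if verdict["is_pass"]:
            break
        conclusion = verdict["conclusion"]
    return trd


# Stub actions standing in for WriteTRD.run / EvaluateTRD.run.
drafts = iter(["TRD v1", "TRD v2"])
write = lambda previous_trd, evaluation_conclusion: next(drafts)
evaluate = lambda trd: {"is_pass": trd.endswith("v2"), "conclusion": "add interface ids"}
print(refine_trd(write, evaluate))  # → TRD v2
```

Capping the loop at `max_rounds` mirrors the bounded-retry style used throughout these actions, so a never-passing evaluation cannot spin forever.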
|
|
@@ -3,16 +3,17 @@
 from __future__ import annotations

 import asyncio
-from typing import Any, Callable, Optional, Union
+from datetime import datetime
+from typing import Any, Callable, Coroutine, Optional, Union

 from pydantic import TypeAdapter, model_validator

 from metagpt.actions import Action
 from metagpt.config2 import config
 from metagpt.logs import logger
 from metagpt.tools.search_engine import SearchEngine
 from metagpt.tools.web_browser_engine import WebBrowserEngine
 from metagpt.utils.common import OutputParser
 from metagpt.utils.parse_html import WebPage
 from metagpt.utils.text import generate_prompt_chunk, reduce_message_length

 LANG_PROMPT = "Please respond in {language}."
@@ -43,9 +44,10 @@ COLLECT_AND_RANKURLS_PROMPT = """### Topic
 {results}

 ### Requirements
-Please remove irrelevant search results that are not related to the query or topic. Then, sort the remaining search results \
-based on the link credibility. If two results have equal credibility, prioritize them based on the relevance. Provide the
-ranked results' indices in JSON format, like [0, 1, 3, 4, ...], without including other words.
+Please remove irrelevant search results that are not related to the query or topic.
+If the query is time-sensitive or specifies a certain time frame, please also remove search results that are outdated or outside the specified time frame. Notice that the current time is {time_stamp}.
+Then, sort the remaining search results based on the link credibility. If two results have equal credibility, prioritize them based on the relevance.
+Provide the ranked results' indices in JSON format, like [0, 1, 3, 4, ...], without including other words.
 """

 WEB_BROWSE_AND_SUMMARIZE_PROMPT = """### Requirements
@@ -133,8 +135,8 @@ class CollectLinks(Action):
             if len(remove) == 0:
                 break

-        model_name = config.llm.model
-        prompt = reduce_message_length(gen_msg(), model_name, system_text, config.llm.max_token)
+        model_name = self.config.llm.model
+        prompt = reduce_message_length(gen_msg(), model_name, system_text, self.config.llm.max_token)
         logger.debug(prompt)
         queries = await self._aask(prompt, [system_text])
         try:
@@ -148,23 +150,27 @@ class CollectLinks(Action):
             ret[query] = await self._search_and_rank_urls(topic, query, url_per_query)
         return ret

-    async def _search_and_rank_urls(self, topic: str, query: str, num_results: int = 4) -> list[str]:
+    async def _search_and_rank_urls(
+        self, topic: str, query: str, num_results: int = 4, max_num_results: int = None
+    ) -> list[str]:
         """Search and rank URLs based on a query.

         Args:
             topic: The research topic.
             query: The search query.
             num_results: The number of URLs to collect.
+            max_num_results: The max number of URLs to collect.

         Returns:
             A list of ranked URLs.
         """
-        max_results = max(num_results * 2, 6)
-        results = await self.search_engine.run(query, max_results=max_results, as_string=False)
+        max_results = max_num_results or max(num_results * 2, 6)
+        results = await self._search_urls(query, max_results=max_results)
         if len(results) == 0:
             return []
         _results = "\n".join(f"{i}: {j}" for i, j in zip(range(max_results), results))
-        prompt = COLLECT_AND_RANKURLS_PROMPT.format(topic=topic, query=query, results=_results)
+        time_stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+        prompt = COLLECT_AND_RANKURLS_PROMPT.format(topic=topic, query=query, results=_results, time_stamp=time_stamp)
         logger.debug(prompt)
         indices = await self._aask(prompt)
         try:
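The rank-by-indices flow this hunk modifies can be sketched standalone: number the raw search results, ask an LLM for a JSON list of indices, then select links in that order. The inlined template and the hard-coded indices reply below are assumptions standing in for the real prompt and `_aask` call.

```python
import json
from datetime import datetime

def build_rank_prompt(topic, query, results, template):
    # Number each result so the LLM can answer with indices, and inject the
    # current time so time-sensitive queries can drop stale results.
    numbered = "\n".join(f"{i}: {r}" for i, r in enumerate(results))
    time_stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return template.format(topic=topic, query=query, results=numbered, time_stamp=time_stamp)

def select_by_indices(results, indices_json, num_results):
    # Parse the LLM's JSON index list and pick links in ranked order.
    indices = json.loads(indices_json)
    ranked = [results[i] for i in indices if 0 <= i < len(results)]
    return [r["link"] for r in ranked[:num_results]]

results = [
    {"link": "https://a.example", "title": "A"},
    {"link": "https://b.example", "title": "B"},
    {"link": "https://c.example", "title": "C"},
]
# "[2, 0]" stands in for the model's ranked-indices reply.
links = select_by_indices(results, "[2, 0]", num_results=2)
print(links)  # ['https://c.example', 'https://a.example']
```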
@@ -178,6 +184,15 @@ class CollectLinks(Action):
             results = self.rank_func(results)
         return [i["link"] for i in results[:num_results]]

+    async def _search_urls(self, query: str, max_results: int) -> list[dict[str, str]]:
+        """Use search_engine to get urls.
+
+        Returns:
+            e.g. [{"title": "...", "link": "...", "snippet": "..."}]
+        """
+        return await self.search_engine.run(query, max_results=max_results, as_string=False)
+

 class WebBrowseAndSummarize(Action):
     """Action class to explore the web and provide summaries of articles and webpages."""
@@ -204,6 +219,8 @@ class WebBrowseAndSummarize(Action):
         *urls: str,
         query: str,
         system_text: str = RESEARCH_BASE_SYSTEM,
+        use_concurrent_summarization: bool = False,
+        per_page_timeout: Optional[float] = None,
     ) -> dict[str, str]:
         """Run the action to browse the web and provide summaries.

@@ -212,18 +229,41 @@ class WebBrowseAndSummarize(Action):
             urls: Additional URLs to browse.
             query: The research question.
             system_text: The system text.
+            use_concurrent_summarization: Whether to concurrently summarize the content of the webpage by LLM.
+            per_page_timeout: The maximum time for fetching a single page in seconds.

         Returns:
             A dictionary containing the URLs as keys and their summaries as values.
         """
-        contents = await self.web_browser_engine.run(url, *urls)
-        if not urls:
-            contents = [contents]
+        contents = await self._fetch_web_contents(url, *urls, per_page_timeout=per_page_timeout)
+
+        all_urls = [url] + list(urls)
+        summarize_tasks = [self._summarize_content(content, query, system_text) for content in contents]
+        summaries = await self._execute_summarize_tasks(summarize_tasks, use_concurrent_summarization)
+        result = {url: summary for url, summary in zip(all_urls, summaries) if summary}
+
+        return result
+
+    async def _fetch_web_contents(
+        self, url: str, *urls: str, per_page_timeout: Optional[float] = None
+    ) -> list[WebPage]:
+        """Fetch web contents from given URLs."""
+        contents = await self.web_browser_engine.run(url, *urls, per_page_timeout=per_page_timeout)
+        return [contents] if not urls else contents
+
+    async def _summarize_content(self, page: WebPage, query: str, system_text: str) -> str:
+        """Summarize web content."""
+        try:
+            prompt_template = WEB_BROWSE_AND_SUMMARIZE_PROMPT.format(query=query, content="{}")
+            content = page.inner_text
+
+            if self._is_content_invalid(content):
+                logger.warning(f"Invalid content detected for URL {page.url}: {content[:10]}...")
+                return None

-        summaries = {}
-        prompt_template = WEB_BROWSE_AND_SUMMARIZE_PROMPT.format(query=query, content="{}")
-        for u, content in zip([url, *urls], contents):
-            content = content.inner_text
             chunk_summaries = []
             for prompt in generate_prompt_chunk(content, prompt_template, self.llm.model, system_text, 4096):
                 logger.debug(prompt)
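The chunked summarization in `_summarize_content` follows a map-reduce shape: summarize each chunk, and if there is more than one chunk summary, summarize the joined summaries once more. A minimal sketch with a stubbed summarizer (the fixed chunk size and the truncating stub are assumptions; the real code chunks by token budget and calls an LLM):

```python
def summarize(content, summarize_chunk, chunk_size=20):
    # Map: summarize each fixed-size chunk of the content.
    chunks = [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]
    chunk_summaries = [summarize_chunk(c) for c in chunks]
    if not chunk_summaries:
        return None
    if len(chunk_summaries) == 1:
        return chunk_summaries[0]
    # Reduce: summarize the concatenated chunk summaries.
    return summarize_chunk("\n".join(chunk_summaries))

# Stub summarizer: keep the first 5 characters of whatever it is given.
result = summarize("x" * 50, lambda text: text[:5])
print(result)  # 'xxxxx'
```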
@@ -233,18 +273,33 @@ class WebBrowseAndSummarize(Action):
                 chunk_summaries.append(summary)

             if not chunk_summaries:
-                summaries[u] = None
-                continue
+                return None

             if len(chunk_summaries) == 1:
-                summaries[u] = chunk_summaries[0]
-                continue
+                return chunk_summaries[0]

             content = "\n".join(chunk_summaries)
             prompt = WEB_BROWSE_AND_SUMMARIZE_PROMPT.format(query=query, content=content)
             summary = await self._aask(prompt, [system_text])
-            summaries[u] = summary
-        return summaries
+            return summary
+        except Exception as e:
+            logger.error(f"Error summarizing content: {e}")
+            return None
+
+    def _is_content_invalid(self, content: str) -> bool:
+        """Check if the content is invalid based on specific starting phrases."""
+        invalid_starts = ["Fail to load page", "Access Denied"]
+        return any(content.strip().startswith(phrase) for phrase in invalid_starts)
+
+    async def _execute_summarize_tasks(self, tasks: list[Coroutine[Any, Any, str]], use_concurrent: bool) -> list[str]:
+        """Execute summarize tasks either concurrently or sequentially."""
+        if use_concurrent:
+            return await asyncio.gather(*tasks)
+        return [await task for task in tasks]


 class ConductResearch(Action):
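The concurrent/sequential switch in `_execute_summarize_tasks` can be exercised in isolation; the `fake_summary` coroutine below is a stand-in assumption for the real per-page LLM summarization tasks. Note that `asyncio.gather` preserves the order of its arguments, so both modes return results in the same order.

```python
import asyncio

async def run_tasks(tasks, use_concurrent):
    # Either fan out with gather() or await each coroutine in order.
    if use_concurrent:
        return await asyncio.gather(*tasks)
    return [await task for task in tasks]

async def fake_summary(i):
    await asyncio.sleep(0)  # stand-in for an LLM call
    return f"summary-{i}"

async def main():
    concurrent = await run_tasks([fake_summary(i) for i in range(3)], use_concurrent=True)
    sequential = await run_tasks([fake_summary(i) for i in range(3)], use_concurrent=False)
    return concurrent, sequential

concurrent, sequential = asyncio.run(main())
print(concurrent)  # ['summary-0', 'summary-1', 'summary-2']
```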
metagpt/actions/search_enhanced_qa.py (new file, 292 lines)
@@ -0,0 +1,292 @@
"""Enhancing question-answering capabilities through search engine augmentation."""

from __future__ import annotations

import json

from pydantic import Field, PrivateAttr, model_validator

from metagpt.actions import Action
from metagpt.actions.research import CollectLinks, WebBrowseAndSummarize
from metagpt.logs import logger
from metagpt.tools.tool_registry import register_tool
from metagpt.tools.web_browser_engine import WebBrowserEngine
from metagpt.utils.common import CodeParser
from metagpt.utils.parse_html import WebPage
from metagpt.utils.report import ThoughtReporter

REWRITE_QUERY_PROMPT = """
Role: You are a highly efficient assistant that provides a better search query for web search engine to answer the given question.

I will provide you with a question. Your task is to provide a better search query for web search engine.

## Context
### Question
{q}

## Format Example
```json
{{
    "query": "the better search query for web search engine."
}}
```

## Instructions
- Understand the question given by the user.
- Provide a better search query for web search engine to answer the given question; your answer must be written in the same language as the question.
- When rewriting, if you are unsure of the specific time, do not include the time.

## Constraint
Format: Just print the result in json format like **Format Example**.

## Action
Follow **Instructions**, generate output and make sure it follows the **Constraint**.
"""

SEARCH_ENHANCED_QA_SYSTEM_PROMPT = """
You are a large language AI assistant built by MGX. You are given a user question, and please write a clean, concise and accurate answer to the question. You will be given a set of related contexts to the question, each starting with a reference number like [[citation:x]], where x is a number. Please use the context.

Your answer must be correct, accurate and written by an expert using an unbiased and professional tone. Please limit to 1024 tokens. Do not give any information that is not related to the question, and do not repeat. Say "information is missing on" followed by the related topic, if the given context does not provide sufficient information.

Do not include [citation:x] in your answer, where x is a number. Other than code and specific names and citations, your answer must be written in the same language as the question.

Here are the set of contexts:

{context}

Remember, don't blindly repeat the contexts verbatim. And here is the user question:
"""


@register_tool(include_functions=["run"])
class SearchEnhancedQA(Action):
    """Question answering and info searching through search engine."""

    name: str = "SearchEnhancedQA"
    desc: str = "Integrating search engine results to answer the question."

    collect_links_action: CollectLinks = Field(
        default_factory=CollectLinks, description="Action to collect relevant links from a search engine."
    )
    web_browse_and_summarize_action: WebBrowseAndSummarize = Field(
        default=None,
        description="Action to explore the web and provide summaries of articles and webpages.",
    )
    per_page_timeout: float = Field(
        default=20, description="The maximum time for fetching a single page, in seconds. Defaults to 20s."
    )
    java_script_enabled: bool = Field(
        default=False, description="Whether or not to enable JavaScript in the web browser context. Defaults to False."
    )
    user_agent: str = Field(
        default="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.1938.81",
        description="Specific user agent to use in browser.",
    )
    extra_http_headers: dict = Field(
        default={"sec-ch-ua": 'Chromium";v="125", "Not.A/Brand";v="24'},
        description="An object containing additional HTTP headers to be sent with every request.",
    )
    max_chars_per_webpage_summary: int = Field(
        default=4000, description="Maximum summary length for each web page content."
    )
    max_search_results: int = Field(
        default=10,
        description="Maximum number of search results (links) to collect using the collect_links_action. This controls the number of potential sources for answering the question.",
    )

    _reporter: ThoughtReporter = PrivateAttr(ThoughtReporter())

    @model_validator(mode="after")
    def initialize(self):
        if self.web_browse_and_summarize_action is None:
            web_browser_engine = WebBrowserEngine.from_browser_config(
                self.config.browser,
                proxy=self.config.proxy,
                java_script_enabled=self.java_script_enabled,
                extra_http_headers=self.extra_http_headers,
                user_agent=self.user_agent,
            )

            self.web_browse_and_summarize_action = WebBrowseAndSummarize(web_browser_engine=web_browser_engine)

        return self

    async def run(self, query: str, rewrite_query: bool = True) -> str:
        """Answer a query by leveraging web search results.

        Args:
            query (str): The original user query.
            rewrite_query (bool): Whether to rewrite the query for better web search results. Defaults to True.

        Returns:
            str: A detailed answer based on web search results.

        Raises:
            ValueError: If the query is invalid.
        """
        async with self._reporter:
            await self._reporter.async_report({"type": "search", "stage": "init"})
            self._validate_query(query)

            processed_query = await self._process_query(query, rewrite_query)
            context = await self._build_context(processed_query)

            return await self._generate_answer(processed_query, context)

    def _validate_query(self, query: str) -> None:
        """Validate the input query.

        Args:
            query (str): The query to validate.

        Raises:
            ValueError: If the query is invalid.
        """
        if not query.strip():
            raise ValueError("Query cannot be empty or contain only whitespace.")

    async def _process_query(self, query: str, should_rewrite: bool) -> str:
        """Process the query, optionally rewriting it."""
        if should_rewrite:
            return await self._rewrite_query(query)

        return query

    async def _rewrite_query(self, query: str) -> str:
        """Write a better search query for web search engine.

        If the rewrite process fails, the original query is returned.

        Args:
            query (str): The original search query.

        Returns:
            str: The rewritten query if successful, otherwise the original query.
        """
        prompt = REWRITE_QUERY_PROMPT.format(q=query)

        try:
            resp = await self._aask(prompt)
            rewritten_query = self._extract_rewritten_query(resp)

            logger.info(f"Query rewritten: '{query}' -> '{rewritten_query}'")
            return rewritten_query
        except Exception as e:
            logger.warning(f"Query rewrite failed. Returning original query. Error: {e}")
            return query

    def _extract_rewritten_query(self, response: str) -> str:
        """Extract the rewritten query from the LLM's JSON response."""
        resp_json = json.loads(CodeParser.parse_code(response, lang="json"))
        return resp_json["query"]

    async def _build_context(self, query: str) -> str:
        """Construct a context string from web search citations.

        Args:
            query (str): The search query.

        Returns:
            str: Formatted context with numbered citations.
        """
        citations = await self._search_citations(query)
        context = "\n\n".join([f"[[citation:{i+1}]] {c}" for i, c in enumerate(citations)])

        return context

    async def _search_citations(self, query: str) -> list[str]:
        """Perform web search and summarize relevant content.

        Args:
            query (str): The search query.

        Returns:
            list[str]: Summaries of relevant web content.
        """
        relevant_urls = await self._collect_relevant_links(query)
        await self._reporter.async_report({"type": "search", "stage": "searching", "urls": relevant_urls})
        if not relevant_urls:
            logger.warning(f"No relevant URLs found for query: {query}")
            return []

        logger.info(f"The relevant links are: {relevant_urls}")

        web_summaries = await self._summarize_web_content(relevant_urls)
        if not web_summaries:
            logger.warning(f"No summaries generated for query: {query}")
            return []

        citations = list(web_summaries.values())

        return citations

    async def _collect_relevant_links(self, query: str) -> list[str]:
        """Search and rank URLs relevant to the query.

        Args:
            query (str): The search query.

        Returns:
            list[str]: Ranked list of relevant URLs.
        """
        return await self.collect_links_action._search_and_rank_urls(
            topic=query, query=query, max_num_results=self.max_search_results
        )

    async def _summarize_web_content(self, urls: list[str]) -> dict[str, str]:
        """Fetch and summarize content from given URLs.

        Args:
            urls (list[str]): List of URLs to summarize.

        Returns:
            dict[str, str]: Mapping of URLs to their summaries.
        """
        contents = await self._fetch_web_contents(urls)

        summaries = {}
        await self._reporter.async_report(
            {"type": "search", "stage": "browsing", "pages": [i.model_dump() for i in contents]}
        )
        for content in contents:
            url = content.url
            inner_text = content.inner_text.replace("\n", "")
            if self.web_browse_and_summarize_action._is_content_invalid(inner_text):
                logger.warning(f"Invalid content detected for URL {url}: {inner_text[:10]}...")
                continue

            summary = inner_text[: self.max_chars_per_webpage_summary]
            summaries[url] = summary

        return summaries

    async def _fetch_web_contents(self, urls: list[str]) -> list[WebPage]:
        return await self.web_browse_and_summarize_action._fetch_web_contents(
            *urls, per_page_timeout=self.per_page_timeout
        )

    async def _generate_answer(self, query: str, context: str) -> str:
        """Generate an answer using the query and context.

        Args:
            query (str): The user's question.
            context (str): Relevant information from web search.

        Returns:
            str: Generated answer based on the context.
        """
        system_prompt = SEARCH_ENHANCED_QA_SYSTEM_PROMPT.format(context=context)

        async with ThoughtReporter(uuid=self._reporter.uuid, enable_llm_stream=True) as reporter:
            await reporter.async_report({"type": "search", "stage": "answer"})
            rsp = await self._aask(query, [system_prompt])
        return rsp
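The citation-context construction in `_build_context` above is easy to exercise standalone: each web summary is numbered so the system prompt can reference it as `[[citation:x]]`.

```python
def build_context(citations):
    # Number each summary so the answer prompt can reference [[citation:x]].
    return "\n\n".join(f"[[citation:{i+1}]] {c}" for i, c in enumerate(citations))

context = build_context(["Paris is the capital of France.", "France is in Europe."])
print(context)
# [[citation:1]] Paris is the capital of France.
#
# [[citation:2]] France is in Europe.
```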
@@ -6,13 +6,16 @@
 @Modified By: mashenquan, 2023/12/5. Archive the summarization content of issue discovery for use in WriteCode.
 """
 from pathlib import Path
+from typing import Optional

-from pydantic import Field
+from pydantic import BaseModel, Field
 from tenacity import retry, stop_after_attempt, wait_random_exponential

 from metagpt.actions.action import Action
 from metagpt.logs import logger
 from metagpt.schema import CodeSummarizeContext
+from metagpt.utils.common import get_markdown_code_block_type
+from metagpt.utils.project_repo import ProjectRepo

 PROMPT_TEMPLATE = """
 NOTICE
@@ -90,6 +93,8 @@ flowchart TB
 class SummarizeCode(Action):
     name: str = "SummarizeCode"
     i_context: CodeSummarizeContext = Field(default_factory=CodeSummarizeContext)
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)

     @retry(stop=stop_after_attempt(2), wait=wait_random_exponential(min=1, max=60))
     async def summarize_code(self, prompt):
@@ -101,11 +106,10 @@ class SummarizeCode(Action):
         design_doc = await self.repo.docs.system_design.get(filename=design_pathname.name)
         task_pathname = Path(self.i_context.task_filename)
         task_doc = await self.repo.docs.task.get(filename=task_pathname.name)
-        src_file_repo = self.repo.with_src_path(self.context.src_workspace).srcs
         code_blocks = []
         for filename in self.i_context.codes_filenames:
-            code_doc = await src_file_repo.get(filename)
-            code_block = f"```python\n{code_doc.content}\n```\n-----"
+            code_doc = await self.repo.srcs.get(filename)
+            code_block = f"```{get_markdown_code_block_type(filename)}\n{code_doc.content}\n```\n---\n"
             code_blocks.append(code_block)
         format_example = FORMAT_EXAMPLE
         prompt = PROMPT_TEMPLATE.format(
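The change above swaps the hard-coded ` ```python ` fence for `get_markdown_code_block_type(filename)`, so non-Python files get the right fence language. A hypothetical stand-in for that helper (the extension map below is an assumption, not MetaGPT's actual table):

```python
def get_code_block_type(filename):
    # Hypothetical stand-in for metagpt.utils.common.get_markdown_code_block_type:
    # map a filename's extension to a Markdown fence language tag.
    ext_to_lang = {".py": "python", ".js": "javascript", ".ts": "typescript", ".json": "json", ".md": "markdown"}
    for ext, lang in ext_to_lang.items():
        if filename.endswith(ext):
            return lang
    return "text"

def render_code_block(filename, content):
    # Wrap file content in a fenced block, as the summarize_code loop does.
    return f"```{get_code_block_type(filename)}\n{content}\n```\n---\n"

block = render_code_block("game.js", "console.log('hi');")
print(block.startswith("```javascript"))  # True
```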
@@ -9,7 +9,6 @@
 from typing import Optional

 from metagpt.actions import Action
-from metagpt.config2 import config
 from metagpt.logs import logger
 from metagpt.schema import Message
@@ -26,7 +25,7 @@ class TalkAction(Action):

     @property
     def language(self):
-        return self.context.kwargs.language or config.language
+        return self.context.kwargs.language or self.config.language

     @property
     def prompt(self):
@@ -16,18 +16,20 @@
 """

 import json
 from pathlib import Path
 from typing import Optional

-from pydantic import Field
+from pydantic import BaseModel, Field
 from tenacity import retry, stop_after_attempt, wait_random_exponential

 from metagpt.actions.action import Action
 from metagpt.actions.project_management_an import REFINED_TASK_LIST, TASK_LIST
 from metagpt.actions.write_code_plan_and_change_an import REFINED_TEMPLATE
 from metagpt.const import BUGFIX_FILENAME, REQUIREMENT_FILENAME
 from metagpt.logs import logger
 from metagpt.schema import CodingContext, Document, RunCodeResult
-from metagpt.utils.common import CodeParser
+from metagpt.utils.common import CodeParser, get_markdown_code_block_type
 from metagpt.utils.project_repo import ProjectRepo
 from metagpt.utils.report import EditorReporter

 PROMPT_TEMPLATE = """
 NOTICE
@@ -43,9 +45,7 @@ ATTENTION: Use '##' to SPLIT SECTIONS, not '#'. Output format carefully referenc
 {task}

 ## Legacy Code
-```Code
 {code}
-```

 ## Debug logs
 ```text
@@ -60,9 +60,14 @@ ATTENTION: Use '##' to SPLIT SECTIONS, not '#'. Output format carefully referenc
 ```

 # Format example
-## Code: (unknown)
+## Code: {demo_filename}.py
 ```python
-## (unknown)
+## {demo_filename}.py
 ...
 ```
+## Code: {demo_filename}.js
+```javascript
+// {demo_filename}.js
+...
+```

@@ -83,18 +88,26 @@ ATTENTION: Use '##' to SPLIT SECTIONS, not '#'. Output format carefully referenc
 class WriteCode(Action):
     name: str = "WriteCode"
     i_context: Document = Field(default_factory=Document)
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)

     @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
     async def write_code(self, prompt) -> str:
         code_rsp = await self._aask(prompt)
-        code = CodeParser.parse_code(block="", text=code_rsp)
+        code = CodeParser.parse_code(text=code_rsp)
         return code

     async def run(self, *args, **kwargs) -> CodingContext:
-        bug_feedback = await self.repo.docs.get(filename=BUGFIX_FILENAME)
+        bug_feedback = None
+        if self.input_args and hasattr(self.input_args, "issue_filename"):
+            bug_feedback = await Document.load(self.input_args.issue_filename)
         coding_context = CodingContext.loads(self.i_context.content)
+        if not coding_context.code_plan_and_change_doc:
+            coding_context.code_plan_and_change_doc = await self.repo.docs.code_plan_and_change.get(
+                filename=coding_context.task_doc.filename
+            )
         test_doc = await self.repo.test_outputs.get(filename="test_" + coding_context.filename + ".json")
-        requirement_doc = await self.repo.docs.get(filename=REQUIREMENT_FILENAME)
+        requirement_doc = await Document.load(self.input_args.requirements_filename)
         summary_doc = None
         if coding_context.design_doc and coding_context.design_doc.filename:
             summary_doc = await self.repo.docs.code_summary.get(filename=coding_context.design_doc.filename)
@@ -103,29 +116,28 @@ class WriteCode(Action):
             test_detail = RunCodeResult.loads(test_doc.content)
             logs = test_detail.stderr

-        if bug_feedback:
-            code_context = coding_context.code_doc.content
-        elif self.config.inc:
+        if self.config.inc or bug_feedback:
             code_context = await self.get_codes(
                 coding_context.task_doc, exclude=self.i_context.filename, project_repo=self.repo, use_inc=True
             )
         else:
             code_context = await self.get_codes(
-                coding_context.task_doc,
-                exclude=self.i_context.filename,
-                project_repo=self.repo.with_src_path(self.context.src_workspace),
+                coding_context.task_doc, exclude=self.i_context.filename, project_repo=self.repo
             )

         if self.config.inc:
             prompt = REFINED_TEMPLATE.format(
                 user_requirement=requirement_doc.content if requirement_doc else "",
-                code_plan_and_change=str(coding_context.code_plan_and_change_doc),
+                code_plan_and_change=coding_context.code_plan_and_change_doc.content
+                if coding_context.code_plan_and_change_doc
+                else "",
                 design=coding_context.design_doc.content if coding_context.design_doc else "",
                 task=coding_context.task_doc.content if coding_context.task_doc else "",
                 code=code_context,
                 logs=logs,
                 feedback=bug_feedback.content if bug_feedback else "",
                 filename=self.i_context.filename,
+                demo_filename=Path(self.i_context.filename).stem,
                 summary_log=summary_doc.content if summary_doc else "",
             )
         else:
@@ -136,15 +148,20 @@ class WriteCode(Action):
                 logs=logs,
                 feedback=bug_feedback.content if bug_feedback else "",
                 filename=self.i_context.filename,
+                demo_filename=Path(self.i_context.filename).stem,
                 summary_log=summary_doc.content if summary_doc else "",
             )
         logger.info(f"Writing {coding_context.filename}..")
-        code = await self.write_code(prompt)
-        if not coding_context.code_doc:
-            # avoid root_path pydantic ValidationError if use WriteCode alone
-            root_path = self.context.src_workspace if self.context.src_workspace else ""
-            coding_context.code_doc = Document(filename=coding_context.filename, root_path=str(root_path))
-        coding_context.code_doc.content = code
+        async with EditorReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "code", "filename": coding_context.filename}, "meta")
+            code = await self.write_code(prompt)
+            if not coding_context.code_doc:
+                # avoid root_path pydantic ValidationError if use WriteCode alone
+                coding_context.code_doc = Document(
+                    filename=coding_context.filename, root_path=str(self.repo.src_relative_path)
+                )
+            coding_context.code_doc.content = code
+            await reporter.async_report(coding_context.code_doc, "document")
         return coding_context

     @staticmethod
@@ -169,35 +186,32 @@ class WriteCode(Action):
         code_filenames = m.get(TASK_LIST.key, []) if not use_inc else m.get(REFINED_TASK_LIST.key, [])
         codes = []
         src_file_repo = project_repo.srcs

         # Incremental development scenario
         if use_inc:
-            src_files = src_file_repo.all_files
-            # Get the old workspace contained the old codes and old workspace are created in previous CodePlanAndChange
-            old_file_repo = project_repo.git_repo.new_file_repository(relative_path=project_repo.old_workspace)
-            old_files = old_file_repo.all_files
-            # Get the union of the files in the src and old workspaces
-            union_files_list = list(set(src_files) | set(old_files))
-            for filename in union_files_list:
+            for filename in src_file_repo.all_files:
+                code_block_type = get_markdown_code_block_type(filename)
                 # Exclude the current file from the all code snippets
                 if filename == exclude:
-                    # If the file is in the old workspace, use the old code
-                    if filename in old_files and filename != "main.py":
+                    # Exclude unnecessary code to maintain a clean and focused main.py file, ensuring only relevant and
+                    # essential functionality is included for the project's requirements
+                    if filename != "main.py":
                         # Use old code
-                        doc = await old_file_repo.get(filename=filename)
+                        doc = await src_file_repo.get(filename=filename)
                     # If the file is in the src workspace, skip it
                     else:
                         continue
-                    codes.insert(0, f"-----Now, (unknown) to be rewritten\n```{doc.content}```\n=====")
+                    codes.insert(
+                        0, f"### The name of file to rewrite: `(unknown)`\n```{code_block_type}\n{doc.content}```\n"
+                    )
                     logger.info(f"Prepare to rewrite `(unknown)`")
                 # The code snippets are generated from the src workspace
                 else:
                     doc = await src_file_repo.get(filename=filename)
                     # If the file does not exist in the src workspace, skip it
                     if not doc:
                         continue
-                    codes.append(f"----- (unknown)\n```{doc.content}```")
+                    codes.append(f"### File Name: `(unknown)`\n```{code_block_type}\n{doc.content}```\n\n")

         # Normal scenario
         else:
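The assembly loop in `get_codes` can be sketched with an in-memory repo: every file except the one being rewritten is appended as a fenced snippet, while the file to rewrite is inserted first under a "rewrite" header. The dict-based repo and the simplified fence-type helper are assumptions standing in for `ProjectRepo` and `get_markdown_code_block_type`.

```python
def get_code_block_type(filename):
    # Simplified stand-in for get_markdown_code_block_type.
    return "python" if filename.endswith(".py") else "text"

def build_code_context(files, exclude):
    # files: {filename: content}; exclude: the file about to be rewritten.
    codes = []
    for filename, content in files.items():
        block_type = get_code_block_type(filename)
        if filename == exclude:
            # The rewrite target goes first so the LLM sees it up front.
            codes.insert(0, f"### The name of file to rewrite: `{filename}`\n```{block_type}\n{content}```\n")
        else:
            codes.append(f"### File Name: `{filename}`\n```{block_type}\n{content}```\n\n")
    return "\n".join(codes)

files = {"a.py": "A = 1\n", "b.py": "B = 2\n"}
context = build_code_context(files, exclude="b.py")
print(context.startswith("### The name of file to rewrite: `b.py`"))  # True
```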
@@ -208,6 +222,7 @@ class WriteCode(Action):
             doc = await src_file_repo.get(filename=filename)
             if not doc:
                 continue
-            codes.append(f"----- (unknown)\n```{doc.content}```")
+            code_block_type = get_markdown_code_block_type(filename)
+            codes.append(f"### File Name: `(unknown)`\n```{code_block_type}\n{doc.content}```\n\n")

         return "\n".join(codes)
@@ -578,7 +578,7 @@ class WriteCodeAN(Action):

     async def run(self, context):
         self.llm.system_prompt = "You are an outstanding engineer and can implement any code"
-        return await WRITE_MOVE_NODE.fill(context=context, llm=self.llm, schema="json")
+        return await WRITE_MOVE_NODE.fill(req=context, llm=self.llm, schema="json")


 async def main():
@@ -5,15 +5,16 @@
 @Author  : mannaandpoem
 @File    : write_code_plan_and_change_an.py
 """
 import os
-from typing import List
+from typing import List, Optional

-from pydantic import Field
+from pydantic import BaseModel, Field

 from metagpt.actions.action import Action
 from metagpt.actions.action_node import ActionNode
 from metagpt.logs import logger
-from metagpt.schema import CodePlanAndChangeContext
+from metagpt.schema import CodePlanAndChangeContext, Document
+from metagpt.utils.common import get_markdown_code_block_type
 from metagpt.utils.project_repo import ProjectRepo

 DEVELOPMENT_PLAN = ActionNode(
     key="Development Plan",
@@ -162,9 +163,8 @@ Role: You are a professional engineer; The main goal is to complete incremental
 {task}

 ## Legacy Code
-```Code
 {code}
-```
+

 ## Debug logs
 ```text
@@ -179,9 +179,14 @@ Role: You are a professional engineer; The main goal is to complete incremental
 ```

 # Format example
-## Code: {filename}
+## Code: {demo_filename}.py
 ```python
-## {filename}
+## {demo_filename}.py
 ...
 ```
+## Code: {demo_filename}.js
+```javascript
+// {demo_filename}.js
+...
+```

@@ -206,13 +211,15 @@ WRITE_CODE_PLAN_AND_CHANGE_NODE = ActionNode.from_children("WriteCodePlanAndChan
 class WriteCodePlanAndChange(Action):
     name: str = "WriteCodePlanAndChange"
     i_context: CodePlanAndChangeContext = Field(default_factory=CodePlanAndChangeContext)
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)

     async def run(self, *args, **kwargs):
         self.llm.system_prompt = "You are a professional software engineer, your primary responsibility is to "
         "meticulously craft comprehensive incremental development plan and deliver detailed incremental change"
-        prd_doc = await self.repo.docs.prd.get(filename=self.i_context.prd_filename)
-        design_doc = await self.repo.docs.system_design.get(filename=self.i_context.design_filename)
-        task_doc = await self.repo.docs.task.get(filename=self.i_context.task_filename)
+        prd_doc = await Document.load(filename=self.i_context.prd_filename)
+        design_doc = await Document.load(filename=self.i_context.design_filename)
+        task_doc = await Document.load(filename=self.i_context.task_filename)
         context = CODE_PLAN_AND_CHANGE_CONTEXT.format(
             requirement=f"```text\n{self.i_context.requirement}\n```",
             issue=f"```text\n{self.i_context.issue}\n```",
@@ -222,11 +229,12 @@ class WriteCodePlanAndChange(Action):
             code=await self.get_old_codes(),
         )
         logger.info("Writing code plan and change..")
-        return await WRITE_CODE_PLAN_AND_CHANGE_NODE.fill(context=context, llm=self.llm, schema="json")
+        return await WRITE_CODE_PLAN_AND_CHANGE_NODE.fill(req=context, llm=self.llm, schema="json")

     async def get_old_codes(self) -> str:
-        self.repo.old_workspace = self.repo.git_repo.workdir / os.path.basename(self.config.project_path)
-        old_file_repo = self.repo.git_repo.new_file_repository(relative_path=self.repo.old_workspace)
-        old_codes = await old_file_repo.get_all()
-        codes = [f"----- {code.filename}\n```{code.content}```" for code in old_codes]
+        old_codes = await self.repo.srcs.get_all()
+        codes = [
+            f"### File Name: `{code.filename}`\n```{get_markdown_code_block_type(code.filename)}\n{code.content}```\n"
+            for code in old_codes
+        ]
         return "\n".join(codes)
@@ -7,16 +7,22 @@
 @Modified By: mashenquan, 2023/11/27. Following the think-act principle, solidify the task parameters when creating the
         WriteCode object, rather than passing them in when calling the run function.
 """
+import asyncio
+import os
+from pathlib import Path
 from typing import Optional

-from pydantic import Field
+from pydantic import BaseModel, Field
 from tenacity import retry, stop_after_attempt, wait_random_exponential

 from metagpt.actions import WriteCode
 from metagpt.actions.action import Action
 from metagpt.const import REQUIREMENT_FILENAME
 from metagpt.logs import logger
-from metagpt.schema import CodingContext
-from metagpt.utils.common import CodeParser
+from metagpt.schema import CodingContext, Document
+from metagpt.tools.tool_registry import register_tool
+from metagpt.utils.common import CodeParser, aread, awrite
 from metagpt.utils.project_repo import ProjectRepo
+from metagpt.utils.report import EditorReporter

 PROMPT_TEMPLATE = """
 # System
@@ -119,34 +125,48 @@ LGTM

 REWRITE_CODE_TEMPLATE = """
 # Instruction: rewrite the `{filename}` based on the Code Review and Actions
-## Rewrite Code: CodeBlock. If it still has some bugs, rewrite {filename} with triple quotes. Do your utmost to optimize THIS SINGLE FILE. Return all completed codes and prohibit the return of unfinished codes.
-```Code
+## Rewrite Code: CodeBlock. If it still has some bugs, rewrite {filename} using a Markdown code block, with the filename docstring preceding the code block. Do your utmost to optimize THIS SINGLE FILE. Return all completed codes and prohibit the return of unfinished codes.
+```python
+## {filename}
 ...
 ```
+or
+```javascript
+// {filename}
+...
+```
 """


 class WriteCodeReview(Action):
     name: str = "WriteCodeReview"
     i_context: CodingContext = Field(default_factory=CodingContext)
     repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)

     @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
-    async def write_code_review_and_rewrite(self, context_prompt, cr_prompt, filename):
+    async def write_code_review_and_rewrite(self, context_prompt, cr_prompt, doc):
+        filename = doc.filename
         cr_rsp = await self._aask(context_prompt + cr_prompt)
         result = CodeParser.parse_block("Code Review Result", cr_rsp)
         if "LGTM" in result:
             return result, None

         # if LBTM, rewrite code
-        rewrite_prompt = f"{context_prompt}\n{cr_rsp}\n{REWRITE_CODE_TEMPLATE.format(filename=filename)}"
-        code_rsp = await self._aask(rewrite_prompt)
-        code = CodeParser.parse_code(block="", text=code_rsp)
+        async with EditorReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report(
+                {"type": "code", "filename": filename, "src_path": doc.root_relative_path}, "meta"
+            )
+            rewrite_prompt = f"{context_prompt}\n{cr_rsp}\n{REWRITE_CODE_TEMPLATE.format(filename=filename)}"
+            code_rsp = await self._aask(rewrite_prompt)
+            code = CodeParser.parse_code(text=code_rsp)
+            doc.content = code
+            await reporter.async_report(doc, "document")
         return result, code

     async def run(self, *args, **kwargs) -> CodingContext:
         iterative_code = self.i_context.code_doc.content
-        k = self.context.config.code_review_k_times or 1
+        k = self.context.config.code_validate_k_times or 1

         for i in range(k):
             format_example = FORMAT_EXAMPLE.format(filename=self.i_context.code_doc.filename)
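The review loop above keys off `LGTM`/`LBTM` markers parsed out of the model's reply: accept on `LGTM`, rewrite and retry on `LBTM`, for at most `k` rounds. A simplified sketch of that accept-or-rewrite control flow — `parse_block` is a stand-in for `CodeParser.parse_block`, and the `review`/`rewrite` callables stand in for the LLM calls:

```python
import re


def parse_block(title: str, text: str) -> str:
    """Pull the body of a '## <title>' section out of a markdown reply."""
    m = re.search(rf"## {re.escape(title)}\n(.*?)(?=\n## |\Z)", text, re.S)
    return m.group(1).strip() if m else ""


def review_loop(code: str, review, rewrite, k: int = 2) -> str:
    """Ask for up to k reviews; rewrite after each LBTM, stop early on LGTM."""
    for _ in range(k):
        result = parse_block("Code Review Result", review(code))
        if "LGTM" in result:
            break
        if "LBTM" in result:
            code = rewrite(code)
    return code
```

Capping the loop at `k` keeps a stubbornly "LBTM" reviewer from burning tokens forever, which is the same role `code_validate_k_times` plays in the diff.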
@@ -154,7 +174,7 @@ class WriteCodeReview(Action):
             code_context = await WriteCode.get_codes(
                 self.i_context.task_doc,
                 exclude=self.i_context.filename,
-                project_repo=self.repo.with_src_path(self.context.src_workspace),
+                project_repo=self.repo,
                 use_inc=self.config.inc,
             )

@@ -164,7 +184,7 @@ class WriteCodeReview(Action):
             "## Code Files\n" + code_context + "\n",
         ]
         if self.config.inc:
-            requirement_doc = await self.repo.docs.get(filename=REQUIREMENT_FILENAME)
+            requirement_doc = await Document.load(filename=self.input_args.requirements_filename)
             insert_ctx_list = [
                 "## User New Requirements\n" + str(requirement_doc) + "\n",
                 "## Code Plan And Change\n" + str(self.i_context.code_plan_and_change_doc) + "\n",
@@ -187,7 +207,7 @@ class WriteCodeReview(Action):
                 f"len(self.i_context.code_doc.content)={len2}"
             )
             result, rewrited_code = await self.write_code_review_and_rewrite(
-                context_prompt, cr_prompt, self.i_context.code_doc.filename
+                context_prompt, cr_prompt, self.i_context.code_doc
             )
             if "LBTM" in result:
                 iterative_code = rewrited_code
@@ -199,3 +219,97 @@ class WriteCodeReview(Action):
         # If rewrited_code is None (the original code was perfect), return the code directly
         self.i_context.code_doc.content = iterative_code
         return self.i_context
+
+
+@register_tool(include_functions=["run"])
+class ValidateAndRewriteCode(Action):
+    """According to the design and task documents, validate the code to ensure it is complete and correct."""
+
+    name: str = "ValidateAndRewriteCode"
+
+    async def run(
+        self,
+        code_path: str,
+        system_design_input: str = "",
+        project_schedule_input: str = "",
+        code_validate_k_times: int = 2,
+    ) -> str:
+        """Validates the provided code based on the accompanying system design and project schedule documentation, return the complete and correct code.
+
+        Read the code from code_path, and write the final code to code_path.
+        If both system_design_input and project_schedule_input are absent, it will return and do nothing.
+
+        Args:
+            code_path (str): The file path of the code snippet to be validated. This should be a string containing the path to the source code file.
+            system_design_input (str): Content or file path of the design document associated with the code. This should describe the system architecture, used in the code. It helps provide context for the validation process.
+            project_schedule_input (str): Content or file path of the task document describing what the code is intended to accomplish. This should outline the functional requirements or objectives of the code.
+            code_validate_k_times (int, optional): The number of iterations for validating and potentially rewriting the code. Defaults to 2.
+
+        Returns:
+            str: The potentially corrected or approved code after validation.
+
+        Example Usage:
+            # Example of how to call the run method with a code snippet and documentation
+            await ValidateAndRewriteCode().run(
+                code_path="/tmp/game.js",
+                system_design_input="/tmp/system_design.json",
+                project_schedule_input="/tmp/project_task_list.json"
+            )
+        """
+        if not system_design_input and not project_schedule_input:
+            logger.info(
+                "Both `system_design_input` and `project_schedule_input` are absent, ValidateAndRewriteCode will do nothing."
+            )
+            return
+
+        code, design_doc, task_doc = await asyncio.gather(
+            aread(code_path), self._try_aread(system_design_input), self._try_aread(project_schedule_input)
+        )
+        code_doc = self._create_code_doc(code_path=code_path, code=code)
+        review_action = WriteCodeReview(i_context=CodingContext(filename=code_doc.filename))
+
+        context = "\n".join(
+            [
+                "## System Design\n" + design_doc + "\n",
+                "## Task\n" + task_doc + "\n",
+            ]
+        )
+
+        for i in range(code_validate_k_times):
+            context_prompt = PROMPT_TEMPLATE.format(context=context, code=code, filename=code_path)
+            cr_prompt = EXAMPLE_AND_INSTRUCTION.format(
+                format_example=FORMAT_EXAMPLE.format(filename=code_path),
+            )
+            logger.info(f"The {i+1}th time to CodeReview: {code_path}.")
+            result, rewrited_code = await review_action.write_code_review_and_rewrite(
+                context_prompt, cr_prompt, doc=code_doc
+            )
+
+            if "LBTM" in result:
+                code = rewrited_code
+            elif "LGTM" in result:
+                break
+
+        await awrite(filename=code_path, data=code)
+
+        return (
+            f"The review and rewriting of the code in the file '{os.path.basename(code_path)}' has been completed."
+            + code
+        )
+
+    @staticmethod
+    async def _try_aread(input: str) -> str:
+        """Try to read from the path if it's a file; return input directly if not."""
+
+        if os.path.exists(input):
+            return await aread(input)
+
+        return input
+
+    @staticmethod
+    def _create_code_doc(code_path: str, code: str) -> Document:
+        """Create a Document to represent the code doc."""
+
+        path = Path(code_path)
+
+        return Document(root_path=str(path.parent), filename=path.name, content=code)
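The `_try_aread` helper above lets `system_design_input` and `project_schedule_input` be either a file path or the document text itself. A synchronous sketch of that path-or-content convention (the function name here is an assumption; the real helper is async and lives on `ValidateAndRewriteCode`):

```python
import os


def read_path_or_literal(value: str) -> str:
    """Return file contents if `value` is an existing path, else `value` itself.

    Mirrors the path-or-content convention of `_try_aread`, minus async I/O.
    """
    if os.path.exists(value):
        with open(value, encoding="utf-8") as f:
            return f.read()
    return value
```

This keeps the tool's API forgiving: callers can hand over a path from disk or paste the document inline, and both arrive as plain text.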
@@ -9,12 +9,16 @@
     2. According to the design in Section 2.2.3.5.2 of RFC 135, add incremental iteration functionality.
     3. Move the document storage operations related to WritePRD from the save operation of WriteDesign.
 @Modified By: mashenquan, 2023/12/5. Move the generation logic of the project name to WritePRD.
+@Modified By: mashenquan, 2024/5/31. Implement Chapter 3 of RFC 236.
 """

 from __future__ import annotations

 import json
 from pathlib import Path
+from typing import List, Optional, Union
+
+from pydantic import BaseModel, Field

 from metagpt.actions import Action, ActionOutput
 from metagpt.actions.action_node import ActionNode
@@ -33,10 +37,20 @@ from metagpt.const import (
     REQUIREMENT_FILENAME,
 )
 from metagpt.logs import logger
-from metagpt.schema import BugFixContext, Document, Documents, Message
-from metagpt.utils.common import CodeParser
+from metagpt.schema import AIMessage, Document, Documents, Message
+from metagpt.tools.tool_registry import register_tool
+from metagpt.utils.common import (
+    CodeParser,
+    aread,
+    awrite,
+    rectify_pathname,
+    save_json_to_markdown,
+    to_markdown_code_block,
+)
 from metagpt.utils.file_repository import FileRepository
 from metagpt.utils.mermaid import mermaid_to_file
+from metagpt.utils.project_repo import ProjectRepo
+from metagpt.utils.report import DocsReporter, GalleryReporter

 CONTEXT_TEMPLATE = """
 ### Project Name
@@ -58,6 +72,7 @@ NEW_REQ_TEMPLATE = """
 """


+@register_tool(include_functions=["run"])
 class WritePRD(Action):
     """WritePRD deal with the following situations:
     1. Bugfix: If the requirement is a bugfix, the bugfix document will be generated.
@@ -65,10 +80,79 @@ class WritePRD(Action):
     3. Requirement update: If the requirement is an update, the PRD document will be updated.
     """

-    async def run(self, with_messages, *args, **kwargs) -> ActionOutput | Message:
-        """Run the action."""
-        req: Document = await self.repo.requirement
-        docs: list[Document] = await self.repo.docs.prd.get_all()
+    repo: Optional[ProjectRepo] = Field(default=None, exclude=True)
+    input_args: Optional[BaseModel] = Field(default=None, exclude=True)
+
+    async def run(
+        self,
+        with_messages: List[Message] = None,
+        *,
+        user_requirement: str = "",
+        output_pathname: str = "",
+        legacy_prd_filename: str = "",
+        extra_info: str = "",
+        **kwargs,
+    ) -> Union[AIMessage, str]:
+        """
+        Write a Product Requirement Document.
+
+        Args:
+            user_requirement (str): A string detailing the user's requirements.
+            output_pathname (str, optional): The output file path of the document. Defaults to "".
+            legacy_prd_filename (str, optional): The file path of the legacy Product Requirement Document to use as a reference. Defaults to "".
+            extra_info (str, optional): Additional information to include in the document. Defaults to "".
+            **kwargs: Additional keyword arguments.
+
+        Returns:
+            str: The file path of the generated Product Requirement Document.
+
+        Example:
+            # Write a new PRD (Product Requirement Document)
+            >>> user_requirement = "Write a snake game"
+            >>> output_pathname = "snake_game/docs/prd.json"
+            >>> extra_info = "YOUR EXTRA INFO, if any"
+            >>> write_prd = WritePRD()
+            >>> result = await write_prd.run(user_requirement=user_requirement, output_pathname=output_pathname, extra_info=extra_info)
+            >>> print(result)
+            PRD filename: "/absolute/path/to/snake_game/docs/prd.json"
+
+            # Rewrite an existing PRD (Product Requirement Document) and save to a new path.
+            >>> user_requirement = "Write PRD for a snake game, include new features such as a web UI"
+            >>> legacy_prd_filename = "/absolute/path/to/snake_game/docs/prd.json"
+            >>> output_pathname = "/absolute/path/to/snake_game/docs/prd_new.json"
+            >>> extra_info = "YOUR EXTRA INFO, if any"
+            >>> write_prd = WritePRD()
+            >>> result = await write_prd.run(user_requirement=user_requirement, legacy_prd_filename=legacy_prd_filename, extra_info=extra_info)
+            >>> print(result)
+            PRD filename: "/absolute/path/to/snake_game/docs/prd_new.json"
+        """
+        if not with_messages:
+            return await self._execute_api(
+                user_requirement=user_requirement,
+                output_pathname=output_pathname,
+                legacy_prd_filename=legacy_prd_filename,
+                extra_info=extra_info,
+            )
+
+        self.input_args = with_messages[-1].instruct_content
+        if not self.input_args:
+            self.repo = ProjectRepo(self.context.kwargs.project_path)
+            await self.repo.docs.save(filename=REQUIREMENT_FILENAME, content=with_messages[-1].content)
+            self.input_args = AIMessage.create_instruct_value(
+                kvs={
+                    "project_path": self.context.kwargs.project_path,
+                    "requirements_filename": str(self.repo.docs.workdir / REQUIREMENT_FILENAME),
+                    "prd_filenames": [str(self.repo.docs.prd.workdir / i) for i in self.repo.docs.prd.all_files],
+                },
+                class_name="PrepareDocumentsOutput",
+            )
+        else:
+            self.repo = ProjectRepo(self.input_args.project_path)
+        req = await Document.load(filename=self.input_args.requirements_filename)
+        docs: list[Document] = [
+            await Document.load(filename=i, project_path=self.repo.workdir) for i in self.input_args.prd_filenames
+        ]

         if not req:
             raise FileNotFoundError("No requirement document found.")
@@ -81,49 +165,80 @@ class WritePRD(Action):
         # if requirement is related to other documents, update them, otherwise create a new one
         if related_docs := await self.get_related_docs(req, docs):
             logger.info(f"Requirement update detected: {req.content}")
-            return await self._handle_requirement_update(req, related_docs)
+            await self._handle_requirement_update(req=req, related_docs=related_docs)
         else:
             logger.info(f"New requirement detected: {req.content}")
-            return await self._handle_new_requirement(req)
+            await self._handle_new_requirement(req)

-    async def _handle_bugfix(self, req: Document) -> Message:
+        kvs = self.input_args.model_dump()
+        kvs["changed_prd_filenames"] = [
+            str(self.repo.docs.prd.workdir / i) for i in list(self.repo.docs.prd.changed_files.keys())
+        ]
+        kvs["project_path"] = str(self.repo.workdir)
+        kvs["requirements_filename"] = str(self.repo.docs.workdir / REQUIREMENT_FILENAME)
+        self.context.kwargs.project_path = str(self.repo.workdir)
+        return AIMessage(
+            content="PRD is completed. "
+            + "\n".join(
+                list(self.repo.docs.prd.changed_files.keys())
+                + list(self.repo.resources.prd.changed_files.keys())
+                + list(self.repo.resources.competitive_analysis.changed_files.keys())
+            ),
+            instruct_content=AIMessage.create_instruct_value(kvs=kvs, class_name="WritePRDOutput"),
+            cause_by=self,
+        )
+
+    async def _handle_bugfix(self, req: Document) -> AIMessage:
         # ... bugfix logic ...
         await self.repo.docs.save(filename=BUGFIX_FILENAME, content=req.content)
         await self.repo.docs.save(filename=REQUIREMENT_FILENAME, content="")
-        bug_fix = BugFixContext(filename=BUGFIX_FILENAME)
-        return Message(
-            content=bug_fix.model_dump_json(),
-            instruct_content=bug_fix,
-            role="",
+        return AIMessage(
+            content=f"A new issue is received: {BUGFIX_FILENAME}",
+            cause_by=FixBug,
+            sent_from=self,
+            instruct_content=AIMessage.create_instruct_value(
+                {
+                    "project_path": str(self.repo.workdir),
+                    "issue_filename": str(self.repo.docs.workdir / BUGFIX_FILENAME),
+                    "requirements_filename": str(self.repo.docs.workdir / REQUIREMENT_FILENAME),
+                },
+                class_name="IssueDetail",
+            ),
+            send_to="Alex",  # the name of Engineer
         )

+    async def _new_prd(self, requirement: str) -> ActionNode:
+        project_name = self.project_name
+        context = CONTEXT_TEMPLATE.format(requirements=requirement, project_name=project_name)
+        exclude = [PROJECT_NAME.key] if project_name else []
+        node = await WRITE_PRD_NODE.fill(
+            req=context, llm=self.llm, exclude=exclude, schema=self.prompt_schema
+        )  # schema=schema
+        return node
+
     async def _handle_new_requirement(self, req: Document) -> ActionOutput:
         """handle new requirement"""
-        project_name = self.project_name
-        context = CONTEXT_TEMPLATE.format(requirements=req, project_name=project_name)
-        exclude = [PROJECT_NAME.key] if project_name else []
-        node = await WRITE_PRD_NODE.fill(context=context, llm=self.llm, exclude=exclude)  # schema=schema
-        await self._rename_workspace(node)
-        new_prd_doc = await self.repo.docs.prd.save(
-            filename=FileRepository.new_filename() + ".json", content=node.instruct_content.model_dump_json()
-        )
-        await self._save_competitive_analysis(new_prd_doc)
-        await self.repo.resources.prd.save_pdf(doc=new_prd_doc)
-        return Documents.from_iterable(documents=[new_prd_doc]).to_action_output()
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "prd"}, "meta")
+            node = await self._new_prd(req.content)
+            await self._rename_workspace(node)
+            new_prd_doc = await self.repo.docs.prd.save(
+                filename=FileRepository.new_filename() + ".json", content=node.instruct_content.model_dump_json()
+            )
+            await self._save_competitive_analysis(new_prd_doc)
+            md = await self.repo.resources.prd.save_pdf(doc=new_prd_doc)
+            await reporter.async_report(self.repo.workdir / md.root_relative_path, "path")
+        return Documents.from_iterable(documents=[new_prd_doc]).to_action_output()

     async def _handle_requirement_update(self, req: Document, related_docs: list[Document]) -> ActionOutput:
         # ... requirement update logic ...
         for doc in related_docs:
-            await self._update_prd(req, doc)
+            await self._update_prd(req=req, prd_doc=doc)
         return Documents.from_iterable(documents=related_docs).to_action_output()

     async def _is_bugfix(self, context: str) -> bool:
         if not self.repo.code_files_exists():
             return False
-        node = await WP_ISSUE_TYPE_NODE.fill(context, self.llm)
+        node = await WP_ISSUE_TYPE_NODE.fill(req=context, llm=self.llm)
         return node.get("issue_type") == "BUG"

     async def get_related_docs(self, req: Document, docs: list[Document]) -> list[Document]:
@@ -133,33 +248,39 @@ class WritePRD(Action):

     async def _is_related(self, req: Document, old_prd: Document) -> bool:
         context = NEW_REQ_TEMPLATE.format(old_prd=old_prd.content, requirements=req.content)
-        node = await WP_IS_RELATIVE_NODE.fill(context, self.llm)
+        node = await WP_IS_RELATIVE_NODE.fill(req=context, llm=self.llm)
        return node.get("is_relative") == "YES"

     async def _merge(self, req: Document, related_doc: Document) -> Document:
         if not self.project_name:
             self.project_name = Path(self.project_path).name
         prompt = NEW_REQ_TEMPLATE.format(requirements=req.content, old_prd=related_doc.content)
-        node = await REFINED_PRD_NODE.fill(context=prompt, llm=self.llm, schema=self.prompt_schema)
+        node = await REFINED_PRD_NODE.fill(req=prompt, llm=self.llm, schema=self.prompt_schema)
         related_doc.content = node.instruct_content.model_dump_json()
         await self._rename_workspace(node)
         return related_doc

     async def _update_prd(self, req: Document, prd_doc: Document) -> Document:
-        new_prd_doc: Document = await self._merge(req, prd_doc)
-        await self.repo.docs.prd.save_doc(doc=new_prd_doc)
-        await self._save_competitive_analysis(new_prd_doc)
-        await self.repo.resources.prd.save_pdf(doc=new_prd_doc)
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "prd"}, "meta")
+            new_prd_doc: Document = await self._merge(req=req, related_doc=prd_doc)
+            await self.repo.docs.prd.save_doc(doc=new_prd_doc)
+            await self._save_competitive_analysis(new_prd_doc)
+            md = await self.repo.resources.prd.save_pdf(doc=new_prd_doc)
+            await reporter.async_report(self.repo.workdir / md.root_relative_path, "path")
         return new_prd_doc

-    async def _save_competitive_analysis(self, prd_doc: Document):
+    async def _save_competitive_analysis(self, prd_doc: Document, output_filename: Path = None):
         m = json.loads(prd_doc.content)
         quadrant_chart = m.get(COMPETITIVE_QUADRANT_CHART.key)
         if not quadrant_chart:
             return
-        pathname = self.repo.workdir / COMPETITIVE_ANALYSIS_FILE_REPO / Path(prd_doc.filename).stem
+        pathname = output_filename or self.repo.workdir / COMPETITIVE_ANALYSIS_FILE_REPO / Path(prd_doc.filename).stem
         pathname.parent.mkdir(parents=True, exist_ok=True)
         await mermaid_to_file(self.config.mermaid.engine, quadrant_chart, pathname)
+        image_path = pathname.parent / f"{pathname.name}.svg"
+        if image_path.exists():
+            await GalleryReporter().async_report(image_path, "path")

     async def _rename_workspace(self, prd):
         if not self.project_name:
@@ -169,4 +290,36 @@ class WritePRD(Action):
         ws_name = CodeParser.parse_str(block="Project Name", text=prd)
         if ws_name:
             self.project_name = ws_name
-        self.repo.git_repo.rename_root(self.project_name)
+        if self.repo:
+            self.repo.git_repo.rename_root(self.project_name)
+
+    async def _execute_api(
+        self, user_requirement: str, output_pathname: str, legacy_prd_filename: str, extra_info: str
+    ) -> str:
+        content = "#### User Requirements\n{user_requirement}\n#### Extra Info\n{extra_info}\n".format(
+            user_requirement=to_markdown_code_block(val=user_requirement),
+            extra_info=to_markdown_code_block(val=extra_info),
+        )
+        async with DocsReporter(enable_llm_stream=True) as reporter:
+            await reporter.async_report({"type": "prd"}, "meta")
+            req = Document(content=content)
+            if not legacy_prd_filename:
+                node = await self._new_prd(requirement=req.content)
+                new_prd = Document(content=node.instruct_content.model_dump_json())
+            else:
+                content = await aread(filename=legacy_prd_filename)
+                old_prd = Document(content=content)
+                new_prd = await self._merge(req=req, related_doc=old_prd)
+
+            if not output_pathname:
+                output_pathname = self.config.workspace.path / "docs" / "prd.json"
+            elif not Path(output_pathname).is_absolute():
+                output_pathname = self.config.workspace.path / output_pathname
+            output_pathname = rectify_pathname(path=output_pathname, default_filename="prd.json")
+            await awrite(filename=output_pathname, data=new_prd.content)
+            competitive_analysis_filename = output_pathname.parent / f"{output_pathname.stem}-competitive-analysis"
+            await self._save_competitive_analysis(prd_doc=new_prd, output_filename=Path(competitive_analysis_filename))
+            md_output_filename = output_pathname.with_suffix(".md")
+            await save_json_to_markdown(content=new_prd.content, output_filename=md_output_filename)
+            await reporter.async_report(md_output_filename, "path")
+        return f'PRD filename: "{str(output_pathname)}". The product requirement document (PRD) has been completed.'
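`_execute_api` above resolves `output_pathname` in three steps: empty means a workspace default, a relative path is anchored under the workspace, and the result is normalized by `rectify_pathname` with a default filename. A pure-`pathlib` sketch of that resolution — the directory heuristic here (no suffix means "treat as a directory") is an assumption about what `rectify_pathname` does, not its verified behavior:

```python
from pathlib import Path


def resolve_output_pathname(output_pathname: str, workspace: Path,
                            default_filename: str = "prd.json") -> Path:
    """Resolve a user-supplied output path the way _execute_api does.

    Empty -> workspace default; relative -> under the workspace;
    suffix-less (directory-like) -> append the default filename.
    """
    if not output_pathname:
        path = workspace / "docs" / default_filename
    elif not Path(output_pathname).is_absolute():
        path = workspace / output_pathname
    else:
        path = Path(output_pathname)
    if not path.suffix:  # looks like a directory, so add a filename
        path = path / default_filename
    return path
```

Normalizing early means every later step (`awrite`, the `-competitive-analysis` sibling, the `.md` twin via `with_suffix`) can assume a concrete file path.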
@@ -5,7 +5,7 @@
 @Author  : alexanderwu
 @File    : write_prd_an.py
 """
-from typing import List
+from typing import List, Union

 from metagpt.actions.action_node import ActionNode

@@ -19,8 +19,8 @@ LANGUAGE = ActionNode(
 PROGRAMMING_LANGUAGE = ActionNode(
     key="Programming Language",
     expected_type=str,
-    instruction="Python/JavaScript or other mainstream programming language.",
-    example="Python",
+    instruction="Mainstream programming language. If not specified in the requirements, use Vite, React, MUI, Tailwind CSS.",
+    example="Vite, React, MUI, Tailwind CSS",
 )

 ORIGINAL_REQUIREMENTS = ActionNode(
@@ -132,7 +132,7 @@ REQUIREMENT_ANALYSIS = ActionNode(

 REFINED_REQUIREMENT_ANALYSIS = ActionNode(
     key="Refined Requirement Analysis",
-    expected_type=List[str],
+    expected_type=Union[List[str], str],
     instruction="Review and refine the existing requirement analysis into a string list to align with the evolving needs of the project "
     "due to incremental development. Ensure the analysis comprehensively covers the new features and enhancements "
     "required for the refined project scope.",
@@ -165,7 +165,7 @@ ANYTHING_UNCLEAR = ActionNode(
     key="Anything UNCLEAR",
     expected_type=str,
     instruction="Mention any aspects of the project that are unclear and try to clarify them.",
-    example="",
+    example="Currently, all aspects of the project are clear.",
 )

 ISSUE_TYPE = ActionNode(
@@ -36,4 +36,4 @@ class WriteReview(Action):
     name: str = "WriteReview"

     async def run(self, context):
-        return await WRITE_REVIEW_NODE.fill(context=context, llm=self.llm, schema="json")
+        return await WRITE_REVIEW_NODE.fill(req=context, llm=self.llm, schema="json")
@@ -45,7 +45,7 @@ class WriteTest(Action):
         code_rsp = await self._aask(prompt)

         try:
-            code = CodeParser.parse_code(block="", text=code_rsp)
+            code = CodeParser.parse_code(text=code_rsp)
         except Exception:
             # Handle the exception if needed
             logger.error(f"Can't parse the code: {code_rsp}")
metagpt/base/__init__.py (new file, +8)
@@ -0,0 +1,8 @@
+from metagpt.base.base_env import BaseEnvironment
+from metagpt.base.base_role import BaseRole
+
+
+__all__ = [
+    "BaseEnvironment",
+    "BaseRole",
+]
metagpt/base/base_env.py (new file, 42 lines)
@@ -0,0 +1,42 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# @Desc   : base environment
+
+import typing
+from abc import abstractmethod
+from typing import Any, Optional
+
+from metagpt.base.base_env_space import BaseEnvAction, BaseEnvObsParams
+from metagpt.base.base_serialization import BaseSerialization
+
+if typing.TYPE_CHECKING:
+    from metagpt.schema import Message
+
+
+class BaseEnvironment(BaseSerialization):
+    """Base environment"""
+
+    @abstractmethod
+    def reset(
+        self,
+        *,
+        seed: Optional[int] = None,
+        options: Optional[dict[str, Any]] = None,
+    ) -> tuple[dict[str, Any], dict[str, Any]]:
+        """Implement this to get the initial observation"""
+
+    @abstractmethod
+    def observe(self, obs_params: Optional[BaseEnvObsParams] = None) -> Any:
+        """Implement this if you want to get a partial observation from the env"""
+
+    @abstractmethod
+    def step(self, action: BaseEnvAction) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
+        """Implement this to feed an action and then get a new observation from the env"""
+
+    @abstractmethod
+    def publish_message(self, message: "Message", peekable: bool = True) -> bool:
+        """Distribute the message to the recipients."""
+
+    @abstractmethod
+    async def run(self, k=1):
+        """Process all tasks at once"""
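To make the contract concrete, here is a hypothetical, stdlib-only toy implementation of the same interface. `EchoEnvironment` and its echo behavior are illustrative inventions, not MetaGPT code; the `BaseSerialization` base and the `Message`-related methods are omitted for brevity:

```python
from typing import Any, Optional


class EchoEnvironment:
    """Hypothetical toy env: each step echoes the action back as the observation."""

    def __init__(self) -> None:
        self.history: list[Any] = []

    def reset(
        self,
        *,
        seed: Optional[int] = None,
        options: Optional[dict[str, Any]] = None,
    ) -> tuple[dict[str, Any], dict[str, Any]]:
        # returns (observation, info), matching the BaseEnvironment.reset contract
        self.history.clear()
        return {}, {}

    def observe(self, obs_params: Any = None) -> Any:
        # full observation: the most recent action, if any
        return self.history[-1] if self.history else None

    def step(self, action: Any) -> tuple[dict[str, Any], float, bool, bool, dict[str, Any]]:
        # returns (observation, reward, terminated, truncated, info), as in step() above
        self.history.append(action)
        return {"echo": action}, 0.0, False, False, {}


env = EchoEnvironment()
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step("hello")
print(obs["echo"])  # hello
```

The five-tuple returned by `step` follows the Gymnasium-style convention the abstract signature suggests.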
metagpt/base/base_role.py (new file, 36 lines)
@@ -0,0 +1,36 @@
+from abc import abstractmethod
+from typing import Optional, Union
+
+from metagpt.base.base_serialization import BaseSerialization
+
+
+class BaseRole(BaseSerialization):
+    """Abstract base class for all roles."""
+
+    name: str
+
+    @property
+    def is_idle(self) -> bool:
+        raise NotImplementedError
+
+    @abstractmethod
+    def think(self):
+        """Consider what to do and decide on the next course of action."""
+        raise NotImplementedError
+
+    @abstractmethod
+    def act(self):
+        """Perform the current action."""
+        raise NotImplementedError
+
+    @abstractmethod
+    async def react(self) -> "Message":
+        """Entry to one of three strategies by which the Role reacts to the observed Message."""
+
+    @abstractmethod
+    async def run(self, with_message: Optional[Union[str, "Message", list[str]]] = None) -> Optional["Message"]:
+        """Observe, then think and act based on the results of the observation."""
+
+    @abstractmethod
+    def get_memories(self, k: int = 0) -> list["Message"]:
+        """Return the most recent k memories of this role."""
metagpt/base/base_serialization.py (new file, 67 lines)
@@ -0,0 +1,67 @@
+from __future__ import annotations
+
+from typing import Any
+
+from pydantic import BaseModel, model_serializer, model_validator
+
+
+class BaseSerialization(BaseModel, extra="forbid"):
+    """
+    Polymorphic subclass serialization / deserialization mixin.
+    - First of all, we need to know that pydantic is not designed for polymorphism.
+    - If Engineer is a subclass of Role, it would be serialized as Role. If we want to serialize it as Engineer,
+      we need to add the class name to Engineer, so Engineer must inherit this serialization mixin.
+
+    More details:
+    - https://docs.pydantic.dev/latest/concepts/serialization/
+    - https://github.com/pydantic/pydantic/discussions/7008 discusses how to avoid `__get_pydantic_core_schema__`
+    """
+
+    __is_polymorphic_base = False
+    __subclasses_map__ = {}
+
+    @model_serializer(mode="wrap")
+    def __serialize_with_class_type__(self, default_serializer) -> Any:
+        # run the default serializer, then append the `__module_class_name` field and return
+        ret = default_serializer(self)
+        ret["__module_class_name"] = f"{self.__class__.__module__}.{self.__class__.__qualname__}"
+        return ret
+
+    @model_validator(mode="wrap")
+    @classmethod
+    def __convert_to_real_type__(cls, value: Any, handler):
+        if isinstance(value, dict) is False:
+            return handler(value)
+
+        # it is a dict, so remove the __module_class_name key:
+        # extra keywords are forbidden, but e.g. Cat.model_validate(cat.model_dump()) must still work
+        class_full_name = value.pop("__module_class_name", None)
+
+        # if it's not the polymorphic base, construct via the default handler
+        if not cls.__is_polymorphic_base:
+            if class_full_name is None:
+                return handler(value)
+            elif str(cls) == f"<class '{class_full_name}'>":
+                return handler(value)
+            else:
+                # f"Trying to instantiate {class_full_name} but this is not the polymorphic base class")
+                pass
+
+        # otherwise look up the correct polymorphic type and construct that instead
+        if class_full_name is None:
+            raise ValueError("Missing __module_class_name field")
+
+        class_type = cls.__subclasses_map__.get(class_full_name, None)
+
+        if class_type is None:
+            # TODO could try dynamic import
+            raise TypeError(f"Trying to instantiate {class_full_name}, which has not yet been defined!")
+
+        return class_type(**value)
+
+    def __init_subclass__(cls, is_polymorphic_base: bool = False, **kwargs):
+        cls.__is_polymorphic_base = is_polymorphic_base
+        cls.__subclasses_map__[f"{cls.__module__}.{cls.__qualname__}"] = cls
+        super().__init_subclass__(**kwargs)
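The mixin above relies on pydantic's wrap-mode serializer and validator; the registry idea itself can be sketched without pydantic. Everything below (`Serializable`, `Role`, `Engineer`, and the `model_dump`/`model_validate` names borrowed for familiarity) is a hypothetical stdlib-only analogue, not the MetaGPT implementation:

```python
from typing import Any


class Serializable:
    """Stdlib-only sketch: subclasses self-register so a dict tagged with a
    class name round-trips back into the correct subclass, not the base."""

    _subclasses_map: dict[str, type] = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # register every subclass under its fully qualified name
        Serializable._subclasses_map[f"{cls.__module__}.{cls.__qualname__}"] = cls

    def model_dump(self) -> dict[str, Any]:
        ret = dict(vars(self))
        ret["__module_class_name"] = f"{self.__class__.__module__}.{self.__class__.__qualname__}"
        return ret

    @classmethod
    def model_validate(cls, value: dict[str, Any]) -> "Serializable":
        value = dict(value)
        class_full_name = value.pop("__module_class_name", None)
        if class_full_name is None:
            raise ValueError("Missing __module_class_name field")
        class_type = Serializable._subclasses_map.get(class_full_name)
        if class_type is None:
            raise TypeError(f"Trying to instantiate {class_full_name}, which has not yet been defined!")
        obj = class_type.__new__(class_type)
        obj.__dict__.update(value)
        return obj


class Role(Serializable):
    def __init__(self, name: str):
        self.name = name


class Engineer(Role):
    pass


data = Engineer("alex").model_dump()
restored = Role.model_validate(data)  # round-trips to Engineer, not Role
print(type(restored).__name__, restored.name)  # Engineer alex
```

The pydantic version additionally validates field types and forbids extra keys; this sketch only demonstrates the dispatch-by-registered-class-name mechanism.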
@@ -9,14 +9,17 @@ import os
 from pathlib import Path
 from typing import Dict, Iterable, List, Literal, Optional

-from pydantic import BaseModel, model_validator
+from pydantic import BaseModel, Field, model_validator

 from metagpt.configs.browser_config import BrowserConfig
 from metagpt.configs.embedding_config import EmbeddingConfig
-from metagpt.configs.file_parser_config import OmniParseConfig
+from metagpt.configs.exp_pool_config import ExperiencePoolConfig
 from metagpt.configs.llm_config import LLMConfig, LLMType
 from metagpt.configs.mermaid_config import MermaidConfig
+from metagpt.configs.omniparse_config import OmniParseConfig
 from metagpt.configs.redis_config import RedisConfig
+from metagpt.configs.role_custom_config import RoleCustomConfig
+from metagpt.configs.role_zero_config import RoleZeroConfig
 from metagpt.configs.s3_config import S3Config
 from metagpt.configs.search_config import SearchConfig
 from metagpt.configs.workspace_config import WorkspaceConfig

@@ -60,6 +63,7 @@ class Config(CLIParams, YamlModel):

     # Tool Parameters
     search: SearchConfig = SearchConfig()
+    enable_search: bool = False
     browser: BrowserConfig = BrowserConfig()
     mermaid: MermaidConfig = MermaidConfig()

@@ -70,10 +74,12 @@ class Config(CLIParams, YamlModel):
     # Misc Parameters
     repair_llm_output: bool = False
     prompt_schema: Literal["json", "markdown", "raw"] = "json"
-    workspace: WorkspaceConfig = WorkspaceConfig()
+    workspace: WorkspaceConfig = Field(default_factory=WorkspaceConfig)
     enable_longterm_memory: bool = False
     code_review_k_times: int = 2
     agentops_api_key: str = ""
+    code_validate_k_times: int = 2
+
+    # Experience Pool Parameters
+    exp_pool: ExperiencePoolConfig = Field(default_factory=ExperiencePoolConfig)

     # Will be removed in the future
     metagpt_tti_url: str = ""

@@ -86,6 +92,12 @@ class Config(CLIParams, YamlModel):
     azure_tts_region: str = ""
     _extra: dict = dict()  # extra config dict

+    # Role's custom configuration
+    roles: Optional[List[RoleCustomConfig]] = None
+
+    # RoleZero's configuration
+    role_zero: RoleZeroConfig = Field(default_factory=RoleZeroConfig)
+
     @classmethod
     def from_home(cls, path):
         """Load config from ~/.metagpt/config2.yaml"""

@@ -95,20 +107,20 @@ class Config(CLIParams, YamlModel):
         return Config.from_yaml_file(pathname)

     @classmethod
-    def default(cls):
+    def default(cls, reload: bool = False, **kwargs) -> "Config":
         """Load default config
         - Priority: env < default_config_paths
         - Inside default_config_paths, the latter one overwrites the former one
         """
-        default_config_paths: List[Path] = [
+        default_config_paths = (
             METAGPT_ROOT / "config/config2.yaml",
             CONFIG_ROOT / "config2.yaml",
-        ]
-
-        dicts = [dict(os.environ)]
-        dicts += [Config.read_yaml(path) for path in default_config_paths]
-        final = merge_dict(dicts)
-        return Config(**final)
+        )
+        if reload or default_config_paths not in _CONFIG_CACHE:
+            dicts = [dict(os.environ), *(Config.read_yaml(path) for path in default_config_paths), kwargs]
+            final = merge_dict(dicts)
+            _CONFIG_CACHE[default_config_paths] = Config(**final)
+        return _CONFIG_CACHE[default_config_paths]

     @classmethod
     def from_llm_config(cls, llm_config: dict):

@@ -166,4 +178,5 @@ def merge_dict(dicts: Iterable[Dict]) -> Dict:
     return result


+_CONFIG_CACHE = {}
 config = Config.default()
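The new `Config.default()` above builds its dict list so that later sources win: environment variables, then the config files in order, then explicit `kwargs`. A minimal sketch of that merge order (the sample keys and values are invented for illustration; `merge_dict` mirrors the helper the hunk header references):

```python
from typing import Dict, Iterable


def merge_dict(dicts: Iterable[Dict]) -> Dict:
    """Later dicts override earlier ones, matching the documented priority:
    env < config/config2.yaml < ~/.metagpt/config2.yaml < kwargs."""
    result = {}
    for d in dicts:
        result.update(d)
    return result


# Hypothetical sources, in the same order Config.default() assembles them
env_vars = {"llm_model": "from-env", "api_key": "env-key"}
repo_yaml = {"llm_model": "from-repo-config"}
home_yaml = {"llm_model": "from-home-config"}
kwargs = {"llm_model": "from-kwargs"}

final = merge_dict([env_vars, repo_yaml, home_yaml, kwargs])
print(final["llm_model"])  # from-kwargs: the last writer wins
print(final["api_key"])    # env-key: keys untouched later survive from earlier dicts
```

Switching `default_config_paths` from a list to a tuple is what makes it hashable, so it can serve as the `_CONFIG_CACHE` key.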
@@ -5,12 +5,23 @@
 @Author : alexanderwu
 @File   : browser_config.py
 """
+from enum import Enum
 from typing import Literal

-from metagpt.tools import WebBrowserEngineType
 from metagpt.utils.yaml_model import YamlModel


+class WebBrowserEngineType(Enum):
+    PLAYWRIGHT = "playwright"
+    SELENIUM = "selenium"
+    CUSTOM = "custom"
+
+    @classmethod
+    def __missing__(cls, key):
+        """Default type conversion"""
+        return cls.CUSTOM
+
+
 class BrowserConfig(YamlModel):
     """Config for Browser"""
metagpt/configs/compress_msg_config.py (new file, 32 lines)
@@ -0,0 +1,32 @@
+from enum import Enum
+
+
+class CompressType(Enum):
+    """
+    Compression type for messages, used to keep messages under the token limit.
+    - "": No compression. Default value.
+    - "post_cut_by_msg": Keep as many of the latest messages as possible.
+    - "post_cut_by_token": Keep as many of the latest messages as possible and truncate the earliest message that fits.
+    - "pre_cut_by_msg": Keep as many of the earliest messages as possible.
+    - "pre_cut_by_token": Keep as many of the earliest messages as possible and truncate the latest message that fits.
+    """
+
+    NO_COMPRESS = ""
+    POST_CUT_BY_MSG = "post_cut_by_msg"
+    POST_CUT_BY_TOKEN = "post_cut_by_token"
+    PRE_CUT_BY_MSG = "pre_cut_by_msg"
+    PRE_CUT_BY_TOKEN = "pre_cut_by_token"
+
+    def __missing__(self, key):
+        return self.NO_COMPRESS
+
+    @classmethod
+    def get_type(cls, type_name):
+        for member in cls:
+            if member.value == type_name:
+                return member
+        return cls.NO_COMPRESS
+
+    @classmethod
+    def cut_types(cls):
+        return [member for member in cls if "cut" in member.value]
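The lookup helpers on `CompressType` can be exercised standalone. The sketch below re-declares only the enum values and the two classmethods from the new file (the `__missing__` hook is omitted here), to show that unknown names fall back to `NO_COMPRESS` rather than raising:

```python
from enum import Enum


class CompressType(Enum):
    NO_COMPRESS = ""
    POST_CUT_BY_MSG = "post_cut_by_msg"
    POST_CUT_BY_TOKEN = "post_cut_by_token"
    PRE_CUT_BY_MSG = "pre_cut_by_msg"
    PRE_CUT_BY_TOKEN = "pre_cut_by_token"

    @classmethod
    def get_type(cls, type_name):
        # unknown names fall back to NO_COMPRESS instead of raising ValueError
        for member in cls:
            if member.value == type_name:
                return member
        return cls.NO_COMPRESS

    @classmethod
    def cut_types(cls):
        # every member whose value mentions "cut", i.e. all four truncating modes
        return [member for member in cls if "cut" in member.value]


print(CompressType.get_type("post_cut_by_msg").name)  # POST_CUT_BY_MSG
print(CompressType.get_type("bogus").name)            # NO_COMPRESS
print(len(CompressType.cut_types()))                  # 4
```

`CompressType("bogus")` would raise `ValueError` with the plain `Enum` constructor, which is why the config code routes lookups through `get_type`.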
metagpt/configs/exp_pool_config.py (new file, 25 lines)
@@ -0,0 +1,25 @@
+from enum import Enum
+
+from pydantic import Field
+
+from metagpt.utils.yaml_model import YamlModel
+
+
+class ExperiencePoolRetrievalType(Enum):
+    BM25 = "bm25"
+    CHROMA = "chroma"
+
+
+class ExperiencePoolConfig(YamlModel):
+    enabled: bool = Field(
+        default=False,
+        description="Flag to enable or disable the experience pool. When disabled, both reading and writing are ineffective.",
+    )
+    enable_read: bool = Field(default=False, description="Enable reading from the experience pool.")
+    enable_write: bool = Field(default=False, description="Enable writing to the experience pool.")
+    persist_path: str = Field(default=".chroma_exp_data", description="The persist path for the experience pool.")
+    retrieval_type: ExperiencePoolRetrievalType = Field(
+        default=ExperiencePoolRetrievalType.BM25, description="The retrieval type for the experience pool."
+    )
+    use_llm_ranker: bool = Field(default=True, description="Use an LLM reranker to get better results.")
+    collection_name: str = Field(default="experience_pool", description="The collection name in chromadb.")
@@ -11,6 +11,7 @@ from typing import Optional

 from pydantic import field_validator

+from metagpt.configs.compress_msg_config import CompressType
 from metagpt.const import CONFIG_ROOT, LLM_API_TIMEOUT, METAGPT_ROOT
 from metagpt.utils.yaml_model import YamlModel

@@ -35,6 +36,9 @@ class LLMType(Enum):
     MOONSHOT = "moonshot"
     MISTRAL = "mistral"
     YI = "yi"  # lingyiwanwu
     OPEN_ROUTER = "open_router"
     DEEPSEEK = "deepseek"
+    SILICONFLOW = "siliconflow"
+    OPENROUTER = "openrouter"
+    OPENROUTER_REASONING = "openrouter_reasoning"
     BEDROCK = "bedrock"

@@ -98,6 +102,9 @@ class LLMConfig(YamlModel):
     # Cost Control
     calc_usage: bool = True

+    # Compress request messages under token limit
+    compress_type: CompressType = CompressType.NO_COMPRESS
+
     # For Messages Control
     use_system_prompt: bool = True
@@ -4,3 +4,4 @@ from metagpt.utils.yaml_model import YamlModel
 class OmniParseConfig(YamlModel):
     api_key: str = ""
     base_url: str = ""
+    timeout: int = 600
metagpt/configs/role_custom_config.py (new file, 19 lines)
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+@Time    : 2024/4/22 16:33
+@Author  : Justin
+@File    : role_custom_config.py
+"""
+from metagpt.configs.llm_config import LLMConfig
+from metagpt.utils.yaml_model import YamlModel
+
+
+class RoleCustomConfig(YamlModel):
+    """Custom config for roles.
+    role: the role's className or the role's role_id.
+    To be expanded.
+    """
+
+    role: str = ""
+    llm: LLMConfig
metagpt/configs/role_zero_config.py (new file, 11 lines)
@@ -0,0 +1,11 @@
+from pydantic import Field
+
+from metagpt.utils.yaml_model import YamlModel
+
+
+class RoleZeroConfig(YamlModel):
+    enable_longterm_memory: bool = Field(default=False, description="Whether to use long-term memory.")
+    longterm_memory_persist_path: str = Field(default=".role_memory_data", description="The directory to save data.")
+    memory_k: int = Field(default=200, description="The capacity of short-term memory.")
+    similarity_top_k: int = Field(default=5, description="The number of long-term memories to retrieve.")
+    use_llm_ranker: bool = Field(default=False, description="Whether to use an LLM reranker to get better results.")
Some files were not shown because too many files have changed in this diff.