Mirror of https://github.com/FoundationAgents/MetaGPT.git, synced 2026-04-27 09:46:24 +02:00
1. SummarizeCode action: summarizes a given piece of code and reflects on bugs, logic, and TODOs
2. CodeReview action improved: answering the review questions is now mandatory, which raises the success rate
2. Data structures
  1. Document standardized: Env -> Repo -> Document; Document/Asset/Code all use Document only
    1. The Document previously used for retrieval is renamed IndexableDocument
  2. Repo structure introduced: loads Documents and their metadata
  3. RepoParser introduced: a simple AST parser (may be replaced by tree-sitter later) that produces repository-wide symbols
3. Configuration
  1. Default model switched to gpt-4-1106-preview for the best quality/cost trade-off
  2. ~/.metagpt is now the highest-priority configuration directory; config.yaml is read from there
  3. The workspace can now be set flexibly in the config
4. metagpt is now the default command line, replacing python startup.py
  1. METAGPT_ROOT is generated in a new way instead of searching for git, so the CLI can be installed
  2. The command line switched from fire to typer, which gives a somewhat better experience
  3. project_name can now be specified flexibly on the metagpt command line
5. Miscellaneous
  1. BossRequirement -> UserRequirement
  2. Many text errors corrected, improving readability
  3. A moderate amount of prompt optimization, slightly improving accuracy
  4. LongtermMemory logic temporarily disabled; it calls langchain's FAISS under the hood, which adds ~5 seconds of load time
  5. Several description errors in the installation package fixed
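The RepoParser idea above (a simple AST parser that yields repository-wide symbols) can be sketched with the standard-library ast module. The function name and scope here are illustrative assumptions, not MetaGPT's actual RepoParser API:

```python
import ast


def extract_symbols(source: str) -> list[str]:
    """Return top-level class and function names from a Python source string.

    A toy stand-in for an AST-based repo parser: walk the module body and
    collect definition names. A real parser would also record line spans,
    methods, and per-file paths.
    """
    tree = ast.parse(source)
    symbols = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            symbols.append(node.name)
    return symbols


print(extract_symbols("class Manager:\n    pass\n\nasync def handle(msg):\n    pass\n"))
# ['Manager', 'handle']
```

Applied over every .py file in a repository, this yields the kind of whole-repo symbol listing the changelog describes; tree-sitter would extend the same idea to other languages.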
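The ~/.metagpt-as-highest-priority rule for config.yaml implies a simple lookup order. A minimal sketch, assuming a user-level file overrides a project-level fallback (the fallback path and merge behavior are assumptions, not MetaGPT's verified logic):

```python
from pathlib import Path


def pick_config(home: Path, project_root: Path):
    """Return the highest-priority config.yaml path, or None if none exists.

    Assumed order: ~/.metagpt/config.yaml wins over <project>/config/config.yaml.
    """
    candidates = [
        home / ".metagpt" / "config.yaml",         # highest priority
        project_root / "config" / "config.yaml",   # project fallback
    ]
    for path in candidates:
        if path.exists():
            return path
    return None
```

With this order, a user can keep API keys in one place under ~/.metagpt while every checkout still works out of the box from its own defaults.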
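The new METAGPT_ROOT generation (no longer searching for a git directory) is what lets the metagpt CLI work when pip-installed, where no .git exists. A common pattern is to derive the root from the module's own file location; this sketch is illustrative, and the real METAGPT_ROOT may be computed differently:

```python
from pathlib import Path


def get_metagpt_root(package_file: str) -> Path:
    """Derive the package root from a module's own location, rather than
    walking up the directory tree looking for a .git folder.

    Works for both a source checkout and a pip-installed package, e.g.
    .../site-packages/metagpt/const.py -> .../site-packages
    """
    return Path(package_file).resolve().parent.parent
```

In real code the argument would be `__file__` of a module inside the package, so the same line works regardless of how the package was installed.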
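The ~5-second hit from LongtermMemory eagerly pulling in langchain's FAISS wrapper is the classic motivation for deferring heavy imports until first use. A generic lazy-import sketch (not MetaGPT's actual code; json stands in for the expensive module in the demo):

```python
import importlib


class LazyModule:
    """Defer an expensive import until an attribute is first accessed."""

    def __init__(self, name: str):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails, so _name/_module
        # (set in __init__) never recurse into here.
        if self._module is None:
            self._module = importlib.import_module(self._name)  # pay the cost once, on demand
        return getattr(self._module, attr)


json_mod = LazyModule("json")        # nothing imported yet, startup stays fast
print(json_mod.dumps({"ok": True}))  # the real import happens here
```

Wrapping the FAISS-backed memory behind such a proxy (or simply moving the import inside the method that needs it) would keep startup fast for the majority of runs that never touch long-term memory.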
manager.py · 66 lines · 2.3 KiB · Python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time    : 2023/5/11 14:42
@Author  : alexanderwu
@File    : manager.py
"""
from metagpt.llm import LLM
from metagpt.logs import logger
from metagpt.schema import Message


class Manager:
    def __init__(self, llm: LLM = None):
        # Avoid a shared mutable default argument: create the LLM per instance
        self.llm = llm or LLM()  # Large Language Model
        # Fixed hand-off order between role profiles
        self.role_directions = {
            "User": "Product Manager",
            "Product Manager": "Architect",
            "Architect": "Engineer",
            "Engineer": "QA Engineer",
            "QA Engineer": "Product Manager",
        }
        self.prompt_template = """
        Given the following message:
        {message}

        And the current status of roles:
        {roles}

        Which role should handle this message?
        """

    async def handle(self, message: Message, environment):
        """The manager processes the message; for now it simply passes it on to the next role.

        :param message: the message to route
        :param environment: the environment that holds all roles
        :return: the chosen role's handling result, or None if no role matches
        """
        # Get all roles from the environment
        roles = environment.get_roles()
        # logger.debug(f"{roles=}, {message=}")

        # Build a context for the LLM to understand the situation
        # context = {
        #     "message": str(message),
        #     "roles": {role.name: role.get_info() for role in roles},
        # }
        # Ask the LLM to decide which role should handle the message
        # chosen_role_name = self.llm.ask(self.prompt_template.format(context))

        # FIXME: the direction of flow is decided by a simple dictionary for now,
        #  but a real reasoning step should replace it later
        next_role_profile = self.role_directions[message.role]
        # logger.debug(f"{next_role_profile}")
        for _, role in roles.items():
            if next_role_profile == role.profile:
                next_role = role
                break
        else:
            logger.error(f"No available role can handle message: {message}.")
            return

        # Find the chosen role and handle the message
        return await next_role.handle(message)
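The dictionary-driven hand-off in Manager.handle can be exercised in isolation. The routing table below is copied from the source above; following a message around one full loop shows the fixed pipeline a requirement travels:

```python
# Same fixed successor table as Manager.role_directions in manager.py
ROLE_DIRECTIONS = {
    "User": "Product Manager",
    "Product Manager": "Architect",
    "Architect": "Engineer",
    "Engineer": "QA Engineer",
    "QA Engineer": "Product Manager",
}


def next_profile(sender: str) -> str:
    """Look up the fixed successor for the sending role's profile."""
    return ROLE_DIRECTIONS[sender]


# Follow a user requirement around one full loop of the pipeline
hop, trail = "User", []
for _ in range(5):
    hop = next_profile(hop)
    trail.append(hop)
print(" -> ".join(trail))
# Product Manager -> Architect -> Engineer -> QA Engineer -> Product Manager
```

Note the cycle QA Engineer -> Product Manager: once QA finishes, work flows back to the Product Manager rather than terminating, which is why the FIXME in the source asks for a real reasoning step to replace the static table.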