Merge branch 'dev' of https://github.com/geekan/MetaGPT into geekan/dev

This commit is contained in:
莘权 马 2024-01-11 23:16:39 +08:00
commit 2ed7c50822
26 changed files with 282 additions and 117 deletions

.github/workflows/build-package.yaml

@ -0,0 +1,34 @@
name: Build and upload python package
on:
release:
types: [created]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
cache: 'pip'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
pip install -e .
pip install setuptools wheel twine
- name: Set package version
run: |
export VERSION="${GITHUB_REF#refs/tags/v}"
sed -i "s/version=.*/version=\"${VERSION}\",/" setup.py
- name: Build and publish
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: |
python setup.py bdist_wheel sdist
twine upload dist/*
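The version step relies on shell parameter expansion: `${GITHUB_REF#refs/tags/v}` strips the leading `refs/tags/v` from the release tag ref, leaving the bare version number that gets written into `setup.py`. The same transformation, sketched in Python for clarity (function name is illustrative):

```python
def version_from_ref(github_ref: str) -> str:
    # Equivalent of the shell expansion ${GITHUB_REF#refs/tags/v}:
    # drop the "refs/tags/v" prefix to obtain the bare version string.
    prefix = "refs/tags/v"
    return github_ref[len(prefix):] if github_ref.startswith(prefix) else github_ref

assert version_from_ref("refs/tags/v0.6.0") == "0.6.0"
```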


@ -1,13 +1,10 @@
name: Python application test
name: Unit Tests
on:
workflow_dispatch:
pull_request_target:
push:
branches:
- 'main'
- 'dev'
- '*-release'
branches:
- '*-debugger'
jobs:
@ -79,3 +76,4 @@ jobs:
uses: codecov/codecov-action@v3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
if: ${{ always() }}


@ -9,24 +9,22 @@ ### Short-term Objective
1. Become the multi-agent framework with the highest ROI.
2. Support fully automatic implementation of medium-sized projects (around 2000 lines of code).
3. Implement most identified tasks, reaching version 0.5.
3. Implement most identified tasks, reaching version 1.0.
### Tasks
To reach version v0.5, approximately 70% of the following tasks need to be completed.
1. Usability
1. ~~Release v0.01 pip package to try to solve issues like npm installation (though not necessarily successfully)~~ (v0.3.0)
2. Support for overall save and recovery of software companies
2. ~~Support for overall save and recovery of software companies~~ (v0.6.0)
3. ~~Support human confirmation and modification during the process~~ (v0.3.0) New: Support human confirmation and modification with fewer constraints and a more user-friendly interface
4. Support process caching: Consider carefully whether to add server caching mechanism
5. ~~Resolve occasional failure to follow instruction under current prompts, causing code parsing errors, through stricter system prompts~~ (v0.4.0, with function call)
6. Write documentation, describing the current features and usage at all levels (ongoing, continuously adding contents to [documentation site](https://docs.deepwisdom.ai/main/en/guide/get_started/introduction.html))
7. ~~Support Docker~~
2. Features
1. Support a more standard and stable parser (need to analyze the format that the current LLM is better at)
2. ~~Establish a separate output queue, differentiated from the message queue~~
3. Attempt to atomize all role work, but this may significantly increase token overhead
1. ~~Support a more standard and stable parser (need to analyze the format that the current LLM is better at)~~ (v0.5.0)
2. ~~Establish a separate output queue, differentiated from the message queue~~ (v0.5.0)
3. ~~Attempt to atomize all role work, but this may significantly increase token overhead~~ (v0.5.0)
4. Complete the design and implementation of module breakdown
5. Support various modes of memory: clearly distinguish between long-term and short-term memory
6. Perfect the test role, and carry out necessary interactions with humans
@ -43,10 +41,10 @@ ### Tasks
4. Actions
1. ~~Implementation: Search~~ (v0.2.1)
2. Implementation: Knowledge search, supporting 10+ data formats
3. Implementation: Data EDA (expected v0.6.0)
4. Implementation: Review
5. ~~Implementation~~: Add Document (v0.5.0)
6. ~~Implementation~~: Delete Document (v0.5.0)
3. Implementation: Data EDA (expected v0.7.0)
4. Implementation: Review & Revise (expected v0.7.0)
5. ~~Implementation: Add Document~~ (v0.5.0)
6. ~~Implementation: Delete Document~~ (v0.5.0)
7. Implementation: Self-training
8. ~~Implementation: DebugError~~ (v0.2.1)
9. Implementation: Generate reliable unit tests based on YAPI
@ -64,23 +62,23 @@ ### Tasks
3. ~~Support Playwright apis~~
7. Roles
1. Perfect the action pool/skill pool for each role
2. Red Book blogger
3. E-commerce seller
4. Data analyst (expected v0.6.0)
5. News observer
6. ~~Institutional researcher~~ (v0.2.1)
2. E-commerce seller
3. Data analyst (expected v0.7.0)
4. News observer
5. ~~Institutional researcher~~ (v0.2.1)
8. Evaluation
1. Support an evaluation on a game dataset (experimentation done with game agents)
2. Reproduce papers, implement full skill acquisition for a single game role, achieving SOTA results (experimentation done with game agents)
3. Support an evaluation on a math dataset (expected v0.6.0)
3. Support an evaluation on a math dataset (expected v0.7.0)
4. Reproduce papers, achieving SOTA results for current mathematical problem solving process
9. LLM
1. Support Claude underlying API
2. ~~Support Azure asynchronous API~~
3. Support streaming version of all APIs
4. ~~Make gpt-3.5-turbo available (HARD)~~
5. Support
10. Other
1. Clean up existing unused code
1. ~~Clean up existing unused code~~
2. Unify all code styles and establish contribution standards
3. Multi-language support
4. Multi-programming-language support
3. ~~Multi-language support~~
4. ~~Multi-programming-language support~~


@ -13,7 +13,9 @@ from metagpt.roles import Role
from metagpt.team import Team
action1 = Action(name="AlexSay", instruction="Express your opinion with emotion and don't repeat it")
action1.llm.model = "gpt-4-1106-preview"
action2 = Action(name="BobSay", instruction="Express your opinion with emotion and don't repeat it")
action2.llm.model = "gpt-3.5-turbo-1106"
alex = Role(name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action1], watch=[action2])
bob = Role(name="Bob", profile="Republican candidate", goal="Win the election", actions=[action2], watch=[action1])
env = Environment(desc="US election live broadcast")

Binary file not shown.


@ -25,7 +25,7 @@ from metagpt.utils.project_repo import ProjectRepo
class Action(SerializationMixin, ContextMixin, BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True, exclude=["llm"])
model_config = ConfigDict(arbitrary_types_allowed=True)
name: str = ""
i_context: Union[dict, CodingContext, CodeSummarizeContext, TestingContext, RunCodeContext, str, None] = ""
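Pydantic v2's `ConfigDict` does not accept an `exclude` key, which is presumably why it was dropped here; per-field exclusion from serialization is done with `Field(exclude=True)` instead. A minimal sketch (the `llm: object` annotation is illustrative, not the real field type):

```python
from pydantic import BaseModel, ConfigDict, Field

class Action(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    name: str = ""
    # Per-field exclusion lives on the field, not in model_config.
    llm: object = Field(default=None, exclude=True)

a = Action(name="Say")
assert "llm" not in a.model_dump()
```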


@ -18,7 +18,7 @@ from metagpt.configs.redis_config import RedisConfig
from metagpt.configs.s3_config import S3Config
from metagpt.configs.search_config import SearchConfig
from metagpt.configs.workspace_config import WorkspaceConfig
from metagpt.const import METAGPT_ROOT
from metagpt.const import CONFIG_ROOT, METAGPT_ROOT
from metagpt.utils.yaml_model import YamlModel
@ -81,6 +81,11 @@ class Config(CLIParams, YamlModel):
AZURE_TTS_REGION: str = ""
mermaid_engine: str = "nodejs"
@classmethod
def from_home(cls, path):
"""Load config from ~/.metagpt/config.yaml"""
return Config.model_validate_yaml(CONFIG_ROOT / path)
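`from_home` resolves a file name against `CONFIG_ROOT` (`~/.metagpt`) before validating it as YAML; the lookup itself is plain path concatenation. A sketch of just that resolution (helper name hypothetical):

```python
from pathlib import Path

CONFIG_ROOT = Path.home() / ".metagpt"  # matches metagpt.const

def resolve_home_config(path: str) -> Path:
    # Sketch of the path resolution from_home performs before
    # handing the file to model_validate_yaml.
    return CONFIG_ROOT / path

assert resolve_home_config("spark.yaml") == Path.home() / ".metagpt" / "spark.yaml"
```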
@classmethod
def default(cls):
"""Load default config


@ -47,7 +47,7 @@ def get_metagpt_root():
# METAGPT PROJECT ROOT AND VARS
CONFIG_ROOT = Path.home() / ".metagpt"
METAGPT_ROOT = get_metagpt_root() # Dependent on METAGPT_PROJECT_ROOT
DEFAULT_WORKSPACE_ROOT = METAGPT_ROOT / "workspace"


@ -19,6 +19,11 @@ class ContextMixin(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
# Pydantic has a bug with _private_attr when using inheritance, so we use private_* instead
# - https://github.com/pydantic/pydantic/issues/7142
# - https://github.com/pydantic/pydantic/issues/7083
# - https://github.com/pydantic/pydantic/issues/7091
# Env/Role/Action will use this context as private context, or use self.context as public context
private_context: Optional[Context] = Field(default=None, exclude=True)
# Env/Role/Action will use this config as private config, or use self.context.config as public config
@ -52,6 +57,8 @@ class ContextMixin(BaseModel):
def set_config(self, config: Config, override=False):
"""Set config"""
self.set("private_config", config, override)
if config is not None:
_ = self.llm # init llm
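A stripped-down sketch of the private/public fallback this mixin implements, including the new eager llm initialization in `set_config` (class and attribute names simplified; not the real `ContextMixin`):

```python
from typing import Optional

class MiniMixin:
    def __init__(self, public_config: dict):
        self.private_config: Optional[dict] = None
        self.public_config = public_config
        self.llm_inited = False

    def set(self, name: str, value, override: bool = False) -> None:
        # Assign only when overriding or when nothing is set yet.
        if override or getattr(self, name) is None:
            setattr(self, name, value)

    def set_config(self, config: Optional[dict], override: bool = False) -> None:
        self.set("private_config", config, override)
        if config is not None:
            self.llm_inited = True  # stands in for `_ = self.llm`

    @property
    def config(self) -> dict:
        # Private config wins; otherwise fall back to the public one.
        return self.private_config or self.public_config

m = MiniMixin(public_config={"llm": "public"})
assert m.config == {"llm": "public"}
m.set_config({"llm": "private"})
assert m.config == {"llm": "private"}
assert m.llm_inited
```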
def set_llm(self, llm: BaseLLM, override=False):
"""Set llm"""


@ -5,13 +5,15 @@
@Author : alexanderwu
@File : llm.py
"""
from typing import Optional
from metagpt.configs.llm_config import LLMConfig
from metagpt.context import CONTEXT
from metagpt.provider.base_llm import BaseLLM
def LLM() -> BaseLLM:
def LLM(llm_config: Optional[LLMConfig] = None) -> BaseLLM:
"""get the default llm provider if name is None"""
# context.use_llm(name=name, provider=provider)
if llm_config is not None:
CONTEXT.llm_with_cost_manager_from_llm_config(llm_config)
return CONTEXT.llm()
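`LLM()` now optionally takes an `LLMConfig`: an explicit config reconfigures the shared context before the provider is returned, otherwise the current default is handed out. A toy sketch of that dispatch (class and names illustrative, not the real CONTEXT):

```python
from typing import Optional

class ToyContext:
    """Tiny stand-in for metagpt's shared CONTEXT (illustrative only)."""
    def __init__(self):
        self._llm = "default-provider"

    def configure(self, llm_config: str) -> None:
        # Mirrors llm_with_cost_manager_from_llm_config: rebuild the
        # provider from an explicit config before handing it out.
        self._llm = f"provider-for-{llm_config}"

    def llm(self) -> str:
        return self._llm

def get_llm(ctx: ToyContext, llm_config: Optional[str] = None) -> str:
    # Sketch of LLM(): an explicit config reconfigures the context,
    # otherwise the current default is returned.
    if llm_config is not None:
        ctx.configure(llm_config)
    return ctx.llm()

ctx = ToyContext()
assert get_llm(ctx) == "default-provider"
assert get_llm(ctx, "spark") == "provider-for-spark"
```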


@ -15,6 +15,7 @@ from metagpt.provider.zhipuai_api import ZhiPuAILLM
from metagpt.provider.azure_openai_api import AzureOpenAILLM
from metagpt.provider.metagpt_api import MetaGPTLLM
from metagpt.provider.human_provider import HumanProvider
from metagpt.provider.spark_api import SparkLLM
__all__ = [
"FireworksLLM",
@ -26,4 +27,5 @@ __all__ = [
"MetaGPTLLM",
"OllamaLLM",
"HumanProvider",
"SparkLLM",
]


@ -59,7 +59,9 @@ class BaseLLM(ABC):
if system_msgs:
message = self._system_msgs(system_msgs)
else:
message = [self._default_system_msg()] if self.use_system_prompt else []
message = [self._default_system_msg()]
if not self.use_system_prompt:
message = []
if format_msgs:
message.extend(format_msgs)
message.append(self._user_msg(msg))
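The refactor above replaces a one-line ternary with a two-step assignment; the overall assembly order (system, then format, then user messages) is unchanged. A self-contained sketch of that assembly (message dicts simplified; the default prompt string is an assumption):

```python
def build_messages(msg: str, system_msgs=None, format_msgs=None,
                   use_system_prompt: bool = True,
                   default_system: str = "You are a helpful assistant."):
    """Sketch of the system/format/user message assembly in BaseLLM."""
    if system_msgs:
        messages = [{"role": "system", "content": m} for m in system_msgs]
    else:
        messages = [{"role": "system", "content": default_system}]
        if not use_system_prompt:
            messages = []
    if format_msgs:
        messages.extend(format_msgs)
    messages.append({"role": "user", "content": msg})
    return messages

assert build_messages("hi")[0]["role"] == "system"
assert build_messages("hi", use_system_prompt=False)[0]["role"] == "user"
```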


@ -220,10 +220,12 @@ class OpenAILLM(BaseLLM):
@handle_exception
def _update_costs(self, usage: CompletionUsage):
if self.config.calc_usage and usage:
if self.config.calc_usage and usage and self.cost_manager:
self.cost_manager.update_cost(usage.prompt_tokens, usage.completion_tokens, self.model)
def get_costs(self) -> Costs:
if not self.cost_manager:
return Costs()
return self.cost_manager.get_costs()
def _get_max_tokens(self, messages: list[dict]):
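Both changes guard against a missing `cost_manager`: `_update_costs` becomes a no-op and `get_costs` returns an empty `Costs` instead of raising `AttributeError` on `None`. A standalone sketch of the guard (`UsageTracker` and the `Costs` fields are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Costs:
    total_prompt_tokens: int = 0
    total_completion_tokens: int = 0

class UsageTracker:
    """Sketch of the optional-cost-manager guard added above."""
    def __init__(self, cost_manager: Optional[object] = None):
        self.cost_manager = cost_manager

    def get_costs(self) -> Costs:
        # Without a manager, report zero costs instead of crashing.
        if not self.cost_manager:
            return Costs()
        return self.cost_manager.get_costs()

assert UsageTracker().get_costs() == Costs()
```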


@ -26,14 +26,14 @@ from metagpt.provider.llm_provider_registry import register_provider
class SparkLLM(BaseLLM):
def __init__(self, config: LLMConfig):
self.config = config
logger.warning("当前方法无法支持异步运行。当你使用acompletion时并不能并行访问。")
logger.warning("SparkLLM当前方法无法支持异步运行。当你使用acompletion时并不能并行访问。")
def get_choice_text(self, rsp: dict) -> str:
return rsp["payload"]["choices"]["text"][-1]["content"]
async def acompletion_text(self, messages: list[dict], stream=False, timeout: int = 3) -> str:
# Not supported
logger.warning("The current method does not support asynchronous execution; calls made through acompletion will not run in parallel.")
# logger.warning("The current method does not support asynchronous execution; calls made through acompletion will not run in parallel.")
w = GetMessageFromWeb(messages, self.config)
return w.run()


@ -104,7 +104,7 @@ class Engineer(Role):
# Code review
if review:
action = WriteCodeReview(i_context=coding_context, context=self.context, llm=self.llm)
self._init_action_system_message(action)
self._init_action(action)
coding_context = await action.run()
await self.project_repo.srcs.save(
filename=coding_context.filename,


@ -132,6 +132,13 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
role_id: str = ""
states: list[str] = []
# Scenarios that set an action's system_prompt:
# 1. in `__init__`, when using Role(actions=[...])
# 2. when an action is added to the role via `role.set_action(action)`
# 3. when a todo is set via `role.set_todo(action)`
# 4. when role.system_prompt is updated (e.g. by `role.system_prompt = "..."`)
# Additionally, if an action's llm is not set, the role's llm will be used
actions: list[SerializeAsAny[Action]] = Field(default=[], validate_default=True)
rc: RoleContext = Field(default_factory=RoleContext)
addresses: set[str] = set()
@ -146,6 +153,10 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
self.pydantic_rebuild_model()
super().__init__(**data)
if self.is_human:
self.llm = HumanProvider(None)
self._check_actions()
self.llm.system_prompt = self._get_prefix()
self._watch(data.get("watch") or [UserRequirement])
@ -225,7 +236,16 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
def _setting(self):
return f"{self.name}({self.profile})"
def _init_action_system_message(self, action: Action):
def _check_actions(self):
"""Check actions and set llm and prefix for each action."""
self.set_actions(self.actions)
return self
def _init_action(self, action: Action):
if not action.private_config:
action.set_llm(self.llm, override=True)
else:
action.set_llm(self.llm, override=False)
action.set_prefix(self._get_prefix())
def set_action(self, action: Action):
@ -241,7 +261,7 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
self._reset()
for action in actions:
if not isinstance(action, Action):
i = action(name="", llm=self.llm)
i = action()
else:
if self.is_human and not isinstance(action.llm, HumanProvider):
logger.warning(
@ -250,7 +270,7 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
f"try passing in Action classes instead of initialized instances"
)
i = action
self._init_action_system_message(i)
self._init_action(i)
self.actions.append(i)
self.states.append(f"{len(self.actions)}. {action}")
@ -308,6 +328,7 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
if env:
env.set_addresses(self, self.addresses)
self.llm.system_prompt = self._get_prefix()
self.set_actions(self.actions) # reset actions to update llm and prefix
def _get_prefix(self):
"""Get the role prefix"""
@ -320,7 +341,8 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
prefix += CONSTRAINT_TEMPLATE.format(**{"constraints": self.constraints})
if self.rc.env and self.rc.env.desc:
other_role_names = ", ".join(self.rc.env.role_names())
all_roles = self.rc.env.role_names()
other_role_names = ", ".join([r for r in all_roles if r != self.name])
env_desc = f"You are in {self.rc.env.desc} with roles({other_role_names})."
prefix += env_desc
return prefix
@ -480,7 +502,6 @@ class Role(SerializationMixin, ContextMixin, BaseModel):
if not msg.cause_by:
msg.cause_by = UserRequirement
self.put_message(msg)
if not await self._observe():
# If there is no new information, suspend and wait
logger.debug(f"{self._setting}: no news. waiting.")
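The `_init_action` change above encodes a priority rule: the role's llm overrides the action's llm only when the action carries no private config. A standalone sketch of that rule (toy classes, not the real `Role`/`Action`):

```python
class FakeAction:
    def __init__(self, private_config=None):
        self.private_config = private_config
        # An action with its own config already has its own llm.
        self.llm = "action-llm" if private_config else None

    def set_llm(self, llm, override):
        if override or self.llm is None:
            self.llm = llm

def init_action(action, role_llm):
    # No private config: the role's llm always wins (override=True).
    # Private config present: keep the action's own llm (override=False).
    action.set_llm(role_llm, override=not action.private_config)

a1 = FakeAction(private_config={"model": "gpt-4"})
a2 = FakeAction()
init_action(a1, "role-llm")
init_action(a2, "role-llm")
assert a1.llm == "action-llm"
assert a2.llm == "role-llm"
```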


@ -7,7 +7,7 @@ from pathlib import Path
import typer
from metagpt.config2 import config
from metagpt.const import METAGPT_ROOT
from metagpt.const import CONFIG_ROOT, METAGPT_ROOT
app = typer.Typer(add_completion=False, pretty_exceptions_show_locals=False)
@ -118,7 +118,7 @@ def startup(
def copy_config_to(config_path=METAGPT_ROOT / "config" / "config2.yaml"):
"""Initialize the configuration file for MetaGPT."""
target_path = Path.home() / ".metagpt" / "config2.yaml"
target_path = CONFIG_ROOT / "config2.yaml"
# Create the target directory if it doesn't exist
target_path.parent.mkdir(parents=True, exist_ok=True)

File diff suppressed because one or more lines are too long


@ -23,14 +23,12 @@ from metagpt.team import Team
async def test_debate_two_roles():
action1 = Action(name="AlexSay", instruction="Express your opinion with emotion and don't repeat it")
action2 = Action(name="BobSay", instruction="Express your opinion with emotion and don't repeat it")
biden = Role(
alex = Role(
name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action1], watch=[action2]
)
trump = Role(
name="Bob", profile="Republican candidate", goal="Win the election", actions=[action2], watch=[action1]
)
bob = Role(name="Bob", profile="Republican candidate", goal="Win the election", actions=[action2], watch=[action1])
env = Environment(desc="US election live broadcast")
team = Team(investment=10.0, env=env, roles=[biden, trump])
team = Team(investment=10.0, env=env, roles=[alex, bob])
history = await team.run(idea="Topic: climate change. Under 80 words per message.", send_to="Alex", n_round=3)
assert "Alex" in history
@ -39,9 +37,9 @@ async def test_debate_two_roles():
@pytest.mark.asyncio
async def test_debate_one_role_in_env():
action = Action(name="Debate", instruction="Express your opinion with emotion and don't repeat it")
biden = Role(name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action])
alex = Role(name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action])
env = Environment(desc="US election live broadcast")
team = Team(investment=10.0, env=env, roles=[biden])
team = Team(investment=10.0, env=env, roles=[alex])
history = await team.run(idea="Topic: climate change. Under 80 words per message.", send_to="Alex", n_round=3)
assert "Alex" in history
@ -49,8 +47,8 @@ async def test_debate_one_role_in_env():
@pytest.mark.asyncio
async def test_debate_one_role():
action = Action(name="Debate", instruction="Express your opinion with emotion and don't repeat it")
biden = Role(name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action])
msg: Message = await biden.run("Topic: climate change. Under 80 words per message.")
alex = Role(name="Alex", profile="Democratic candidate", goal="Win the election", actions=[action])
msg: Message = await alex.run("Topic: climate change. Under 80 words per message.")
assert len(msg.content) > 10
assert msg.sent_from == "metagpt.roles.role.Role"


@ -39,5 +39,6 @@ mock_llm_config_zhipu = LLMConfig(
llm_type="zhipu",
api_key="mock_api_key.zhipu",
base_url="mock_base_url",
model="mock_zhipu_model",
proxy="http://localhost:8080",
)


@ -4,6 +4,7 @@
import pytest
from metagpt.config2 import Config
from metagpt.provider.spark_api import GetMessageFromWeb, SparkLLM
from tests.metagpt.provider.mock_llm_config import mock_llm_config
@ -33,6 +34,14 @@ def mock_spark_get_msg_from_web_run(self) -> str:
return resp_content
@pytest.mark.asyncio
async def test_spark_aask():
llm = SparkLLM(Config.from_home("spark.yaml").llm)
resp = await llm.aask("Hello!")
print(resp)
@pytest.mark.asyncio
async def test_spark_acompletion(mocker):
mocker.patch("metagpt.provider.spark_api.GetMessageFromWeb.run", mock_spark_get_msg_from_web_run)


@ -3,6 +3,7 @@
# @Desc : unittest of Role
import pytest
from metagpt.llm import HumanProvider
from metagpt.roles.role import Role
@ -12,5 +13,10 @@ def test_role_desc():
assert role.desc == "Best Seller"
def test_role_human():
role = Role(is_human=True)
assert isinstance(role.llm, HumanProvider)
if __name__ == "__main__":
pytest.main([__file__, "-s"])


@ -5,15 +5,10 @@
@Author : alexanderwu
@File : test_config.py
"""
from pydantic import BaseModel
from metagpt.config2 import Config
from metagpt.configs.llm_config import LLMType
from metagpt.context_mixin import ContextMixin
from tests.metagpt.provider.mock_llm_config import (
mock_llm_config,
mock_llm_config_proxy,
)
from tests.metagpt.provider.mock_llm_config import mock_llm_config
def test_config_1():
@ -27,57 +22,3 @@ def test_config_from_dict():
cfg = Config(llm=mock_llm_config)
assert cfg
assert cfg.llm.api_key == "mock_api_key"
class ModelX(ContextMixin, BaseModel):
a: str = "a"
b: str = "b"
class WTFMixin(BaseModel):
c: str = "c"
d: str = "d"
class ModelY(WTFMixin, ModelX):
pass
def test_config_mixin_1():
new_model = ModelX()
assert new_model.a == "a"
assert new_model.b == "b"
def test_config_mixin_2():
i = Config(llm=mock_llm_config)
j = Config(llm=mock_llm_config_proxy)
obj = ModelX(config=i)
assert obj.config == i
assert obj.config.llm == mock_llm_config
obj.set_config(j)
# obj already has a config, so it will not be set
assert obj.config == i
def test_config_mixin_3():
"""Test config mixin with multiple inheritance"""
i = Config(llm=mock_llm_config)
j = Config(llm=mock_llm_config_proxy)
obj = ModelY(config=i)
assert obj.private_config == i
assert obj.private_config.llm == mock_llm_config
obj.set_config(j)
# obj already has a config, so it will not be set
assert obj.private_config == i
assert obj.private_config.llm == mock_llm_config
assert obj.a == "a"
assert obj.b == "b"
assert obj.c == "c"
assert obj.d == "d"
print(obj.__dict__.keys())
assert "_config" in obj.__dict__.keys()


@ -0,0 +1,128 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@Time : 2024/1/11 19:24
@Author : alexanderwu
@File : test_context_mixin.py
"""
import pytest
from pydantic import BaseModel
from metagpt.actions import Action
from metagpt.config2 import Config
from metagpt.context_mixin import ContextMixin
from metagpt.environment import Environment
from metagpt.roles import Role
from metagpt.team import Team
from tests.metagpt.provider.mock_llm_config import (
mock_llm_config,
mock_llm_config_proxy,
mock_llm_config_zhipu,
)
class ModelX(ContextMixin, BaseModel):
a: str = "a"
b: str = "b"
class WTFMixin(BaseModel):
c: str = "c"
d: str = "d"
class ModelY(WTFMixin, ModelX):
pass
def test_config_mixin_1():
new_model = ModelX()
assert new_model.a == "a"
assert new_model.b == "b"
def test_config_mixin_2():
i = Config(llm=mock_llm_config)
j = Config(llm=mock_llm_config_proxy)
obj = ModelX(config=i)
assert obj.config == i
assert obj.config.llm == mock_llm_config
obj.set_config(j)
# obj already has a config, so it will not be set
assert obj.config == i
def test_config_mixin_3_multi_inheritance_not_override_config():
"""Test config mixin with multiple inheritance"""
i = Config(llm=mock_llm_config)
j = Config(llm=mock_llm_config_proxy)
obj = ModelY(config=i)
assert obj.config == i
assert obj.config.llm == mock_llm_config
obj.set_config(j)
# obj already has a config, so it will not be set
assert obj.config == i
assert obj.config.llm == mock_llm_config
assert obj.a == "a"
assert obj.b == "b"
assert obj.c == "c"
assert obj.d == "d"
print(obj.__dict__.keys())
assert "private_config" in obj.__dict__.keys()
def test_config_mixin_4_multi_inheritance_override_config():
"""Test config mixin with multiple inheritance"""
i = Config(llm=mock_llm_config)
j = Config(llm=mock_llm_config_zhipu)
obj = ModelY(config=i)
assert obj.config == i
assert obj.config.llm == mock_llm_config
obj.set_config(j, override=True)
# override obj.config
assert obj.config == j
assert obj.config.llm == mock_llm_config_zhipu
assert obj.a == "a"
assert obj.b == "b"
assert obj.c == "c"
assert obj.d == "d"
print(obj.__dict__.keys())
assert "private_config" in obj.__dict__.keys()
assert obj.llm.model == "mock_zhipu_model"
@pytest.mark.asyncio
async def test_config_priority():
"""If action's config is set, then its llm will be set, otherwise, it will use the role's llm"""
gpt4t = Config.from_home("gpt-4-1106-preview.yaml")
gpt35 = Config.default()
gpt4 = Config.default()
gpt4.llm.model = "gpt-4-0613"
a1 = Action(config=gpt4t, name="Say", instruction="Say your opinion with emotion and don't repeat it")
a2 = Action(name="Say", instruction="Say your opinion with emotion and don't repeat it")
a3 = Action(name="Vote", instruction="Vote for the candidate, and say why you vote for him/her")
# it will not work for a1 because the config is already set
A = Role(name="A", profile="Democratic candidate", goal="Win the election", actions=[a1], watch=[a2], config=gpt4)
# it will work for a2 because the config is not set
B = Role(name="B", profile="Republican candidate", goal="Win the election", actions=[a2], watch=[a1], config=gpt4)
# ditto
C = Role(name="C", profile="Voter", goal="Vote for the candidate", actions=[a3], watch=[a1, a2], config=gpt35)
env = Environment(desc="US election live broadcast")
Team(investment=10.0, env=env, roles=[A, B, C])
assert a1.llm.model == "gpt-4-1106-preview"
assert a2.llm.model == "gpt-4-0613"
assert a3.llm.model == "gpt-3.5-turbo-1106"
# history = await team.run(idea="Topic: climate change. Under 80 words per message.", send_to="a1", n_round=3)
# assert "Alex" in history


@ -5,25 +5,26 @@
@Author : mashenquan
@File : test_redis.py
"""
from unittest.mock import AsyncMock
import mock
import pytest
from pytest_mock import mocker
from metagpt.config2 import Config
from metagpt.utils.redis import Redis
async def async_mock_from_url(*args, **kwargs):
mock_client = mock.AsyncMock()
mock_client = AsyncMock()
mock_client.set.return_value = None
mock_client.get.side_effect = [b"test", b""]
return mock_client
@pytest.mark.asyncio
@mock.patch("aioredis.from_url", return_value=async_mock_from_url())
async def test_redis(i):
redis = Config.default().redis
mocker.patch("aioredis.from_url", return_value=async_mock_from_url())
conn = Redis(redis)
await conn.set("test", "test", timeout_sec=0)


@ -42,7 +42,9 @@ class MockLLM(OpenAILLM):
if system_msgs:
message = self._system_msgs(system_msgs)
else:
message = [self._default_system_msg()] if self.use_system_prompt else []
message = [self._default_system_msg()]
if not self.use_system_prompt:
message = []
if format_msgs:
message.extend(format_msgs)
message.append(self._user_msg(msg))