Mirror of https://github.com/FoundationAgents/MetaGPT.git, synced 2026-05-01 20:03:28 +02:00

Commit a6f31bf3e6: Merge branch 'fixbug/issues/1016' into HEAD

16 changed files with 178 additions and 93 deletions.

README.md (76 changes)
@@ -26,7 +26,7 @@ # MetaGPT: The Multi-Agent Framework
 </p>

 ## News

-🚀 Mar. 14, 2024: Our Data Interpreter paper is on [arxiv](https://arxiv.org/abs/2402.18679). Check the [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/) and [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)!
+🚀 Mar. 14, 2024: Our **Data Interpreter** paper is on [arxiv](https://arxiv.org/abs/2402.18679). Check the [example](https://docs.deepwisdom.ai/main/en/DataInterpreter/) and [code](https://github.com/geekan/MetaGPT/tree/main/examples/di)!

 🚀 Feb. 08, 2024: [v0.7.0](https://github.com/geekan/MetaGPT/releases/tag/v0.7.0) released, supporting assigning different LLMs to different Roles. We also introduced [Data Interpreter](https://github.com/geekan/MetaGPT/blob/main/examples/di/README.md), a powerful agent capable of solving a wide range of real-world problems.
@@ -55,21 +55,30 @@ ## Software Company as Multi-Agent System

 <p align="center">Software Company Multi-Agent Schematic (Gradually Implementing)</p>

-## Install
+## Get Started

-### Pip installation
+### Installation

 > Ensure that Python 3.9+ is installed on your system. You can check this by using: `python --version`.
 > You can use conda like this: `conda create -n metagpt python=3.9 && conda activate metagpt`

 ```bash
-pip install metagpt
-# https://docs.deepwisdom.ai/main/en/guide/get_started/configuration.html
-metagpt --init-config # it will create ~/.metagpt/config2.yaml, just modify it to your needs
+pip install --upgrade metagpt
+# or `pip install --upgrade git+https://github.com/geekan/MetaGPT.git`
+# or `git clone https://github.com/geekan/MetaGPT && cd MetaGPT && pip install --upgrade -e .`
 ```

+For detailed installation guidance, please refer to [cli_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-stable-version)
+or [docker_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-with-docker)

 ### Configuration

+You can init the config of MetaGPT by running the following command, or manually create the `~/.metagpt/config2.yaml` file:
+
+```bash
+# Check https://docs.deepwisdom.ai/main/en/guide/get_started/configuration.html for more details
+metagpt --init-config # it will create ~/.metagpt/config2.yaml, just modify it to your needs
+```

 You can configure `~/.metagpt/config2.yaml` according to the [example](https://github.com/geekan/MetaGPT/blob/main/config/config2.example.yaml) and [doc](https://docs.deepwisdom.ai/main/en/guide/get_started/configuration.html):

 ```yaml
@@ -82,13 +91,13 @@ ### Configuration

 ### Usage

-After installation, you can use it as CLI
+After installation, you can use MetaGPT as a CLI:

 ```bash
 metagpt "Create a 2048 game" # this will create a repo in ./workspace
 ```

-or you can use it as library
+or use it as a library:

 ```python
 from metagpt.software_company import generate_repo, ProjectRepo
@@ -96,47 +105,19 @@ ### Usage
 print(repo) # it will print the repo structure with files
 ```

-detail installation please refer to [cli_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-stable-version)
-or [docker_install](https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-with-docker)
+You can also use its [Data Interpreter](https://github.com/geekan/MetaGPT/tree/main/examples/di):

-### Docker installation
-
-<details><summary><strong>⏬ Step 1: Download metagpt image and prepare config2.yaml </strong><i>:: click to expand ::</i></summary>
-<div>
-
-```bash
-docker pull metagpt/metagpt:latest
-mkdir -p /opt/metagpt/{config,workspace}
-docker run --rm metagpt/metagpt:latest cat /app/metagpt/config/config2.yaml > /opt/metagpt/config/config2.yaml
-vim /opt/metagpt/config/config2.yaml # Change the config
-```
-
-</div>
-</details>
-
-<details><summary><strong>⏬ Step 2: Run metagpt container </strong><i>:: click to expand ::</i></summary>
-<div>
-
-```bash
-docker run --name metagpt -d \
-    --privileged \
-    -v /opt/metagpt/config/config2.yaml:/app/metagpt/config/config2.yaml \
-    -v /opt/metagpt/workspace:/app/metagpt/workspace \
-    metagpt/metagpt:latest
-```
-
-</div>
-</details>
-
-<details><summary><strong>⏬ Step 3: Use metagpt </strong><i>:: click to expand ::</i></summary>
-<div>
-
-```bash
-docker exec -it metagpt /bin/bash
-$ metagpt "Create a 2048 game" # this will create a repo in ./workspace
-```
-
-</div>
-</details>
+```python
+import asyncio
+from metagpt.roles.di.data_interpreter import DataInterpreter
+
+
+async def main():
+    di = DataInterpreter()
+    await di.run("Run data analysis on sklearn Iris dataset, include a plot")
+
+
+asyncio.run(main())  # or await main() in a jupyter notebook setting
+```

 ### QuickStart & Demo Video
 - Try it on [MetaGPT Huggingface Space](https://huggingface.co/spaces/deepwisdom/MetaGPT)
@@ -156,6 +137,7 @@ ## Tutorial
 - 🧑‍💻 Contribution
   - [Develop Roadmap](docs/ROADMAP.md)
 - 🔖 Use Cases
+  - [Data Interpreter](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/interpreter/intro.html)
   - [Debate](https://docs.deepwisdom.ai/main/en/guide/use_cases/multi_agent/debate.html)
   - [Researcher](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/researcher.html)
   - [Receipt Assistant](https://docs.deepwisdom.ai/main/en/guide/use_cases/agent/receipt_assistant.html)
@@ -179,7 +161,9 @@ ### Contact Information

 ## Citation

-If you use MetaGPT or Data Interpreter in a research paper, please cite our work as follows:
+To stay updated with the latest research and development, follow [@MetaGPT_](https://twitter.com/MetaGPT_) on Twitter.
+
+To cite [MetaGPT](https://arxiv.org/abs/2308.00352) or [Data Interpreter](https://arxiv.org/abs/2402.18679) in publications, please use the following BibTeX entries.

 ```bibtex
 @misc{hong2023metagpt,
examples/di/arxiv_reader.py (new file, 26 lines)

@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+@Time    : 2024/01/15
+@Author  : mannaandpoem
+@File    : arxiv_reader.py
+"""
+from metagpt.roles.di.data_interpreter import DataInterpreter
+
+
+async def main():
+    template = "https://arxiv.org/list/{tag}/pastweek?skip=0&show=300"
+    tags = ["cs.ai", "cs.cl", "cs.lg", "cs.se"]
+    urls = [template.format(tag=tag) for tag in tags]
+    prompt = f"""This is a collection of arxiv urls: '{urls}' .
+Record each article, remove duplicates by title (they may have multiple tags), filter out papers related to
+large language model / agent / llm, print top 100 and visualize the word count of the titles"""
+    di = DataInterpreter(react_mode="react", tools=["scrape_web_playwright"])
+
+    await di.run(prompt)
+
+
+if __name__ == "__main__":
+    import asyncio
+
+    asyncio.run(main())
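A note on running this example (not from the commit): the script presumably needs a configured `~/.metagpt/config2.yaml`, and the `scrape_web_playwright` tool relies on Playwright, so something like `pip install playwright && playwright install` is likely required before `python examples/di/arxiv_reader.py` will work from the repo root.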
metagpt/actions/di/write_analysis_code.py

@@ -18,7 +18,7 @@ from metagpt.prompts.di.write_analysis_code import (
     STRUCTUAL_PROMPT,
 )
 from metagpt.schema import Message, Plan
-from metagpt.utils.common import CodeParser, process_message, remove_comments
+from metagpt.utils.common import CodeParser, remove_comments


 class WriteAnalysisCode(Action):

@@ -50,7 +50,7 @@ class WriteAnalysisCode(Action):
         )

         working_memory = working_memory or []
-        context = process_message([Message(content=structual_prompt, role="user")] + working_memory)
+        context = self.llm.format_msg([Message(content=structual_prompt, role="user")] + working_memory)

         # LLM call
         if use_reflection:
metagpt/provider/base_llm.py

@@ -73,6 +73,28 @@ class BaseLLM(ABC):
     def _system_msg(self, msg: str) -> dict[str, str]:
         return {"role": "system", "content": msg}

+    def format_msg(self, messages: Union[str, Message, list[dict], list[Message], list[str]]) -> list[dict]:
+        """Convert messages to list[dict]."""
+        from metagpt.schema import Message
+
+        if not isinstance(messages, list):
+            messages = [messages]
+
+        processed_messages = []
+        for msg in messages:
+            if isinstance(msg, str):
+                processed_messages.append({"role": "user", "content": msg})
+            elif isinstance(msg, dict):
+                assert set(msg.keys()) == set(["role", "content"])
+                processed_messages.append(msg)
+            elif isinstance(msg, Message):
+                processed_messages.append(msg.to_dict())
+            else:
+                raise ValueError(
+                    f"Only supported message types are: str, Message, dict, but got {type(messages).__name__}!"
+                )
+        return processed_messages
+
     def _system_msgs(self, msgs: list[str]) -> list[dict[str, str]]:
         return [self._system_msg(msg) for msg in msgs]
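For orientation, a minimal self-contained sketch of the normalization this new `format_msg` performs; the `Message` class below is a stand-in for `metagpt.schema.Message`, reduced to just what the sketch needs:

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for metagpt.schema.Message; only what this sketch needs."""

    role: str
    content: str

    def to_dict(self) -> dict:
        return {"role": self.role, "content": self.content}


def format_msg(messages) -> list[dict]:
    """Normalize str | dict | Message, or a list of them, into list[dict]."""
    if not isinstance(messages, list):
        messages = [messages]
    processed = []
    for msg in messages:
        if isinstance(msg, str):
            processed.append({"role": "user", "content": msg})  # bare strings become user turns
        elif isinstance(msg, dict):
            assert set(msg.keys()) == {"role", "content"}
            processed.append(msg)
        elif isinstance(msg, Message):
            processed.append(msg.to_dict())
        else:
            raise ValueError(f"unsupported message type: {type(msg).__name__}")
    return processed


# Mixed inputs, one uniform output:
print(format_msg(["hi", {"role": "system", "content": "be brief"}, Message("assistant", "ok")]))
# [{'role': 'user', 'content': 'hi'}, {'role': 'system', 'content': 'be brief'}, {'role': 'assistant', 'content': 'ok'}]
```

Hanging this off `BaseLLM` instead of the old module-level `process_message` in `metagpt.utils.common` is what lets a provider with a different wire format, like the Gemini provider below, override it.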
metagpt/provider/google_gemini_api.py

@@ -18,6 +18,7 @@ from metagpt.configs.llm_config import LLMConfig, LLMType
 from metagpt.logs import log_llm_stream
 from metagpt.provider.base_llm import BaseLLM
 from metagpt.provider.llm_provider_registry import register_provider
+from metagpt.schema import Message


 class GeminiGenerativeModel(GenerativeModel):

@@ -61,6 +62,35 @@ class GeminiLLM(BaseLLM):
     def _assistant_msg(self, msg: str) -> dict[str, str]:
         return {"role": "model", "parts": [msg]}

     def _system_msg(self, msg: str) -> dict[str, str]:
         return {"role": "user", "parts": [msg]}

+    def format_msg(self, messages: Union[str, Message, list[dict], list[Message], list[str]]) -> list[dict]:
+        """Convert messages to list[dict]."""
+        from metagpt.schema import Message
+
+        if not isinstance(messages, list):
+            messages = [messages]
+
+        # REF: https://ai.google.dev/tutorials/python_quickstart
+        # As a dictionary, the message requires `role` and `parts` keys.
+        # The role in a conversation can either be the `user`, which provides the prompts,
+        # or `model`, which provides the responses.
+        processed_messages = []
+        for msg in messages:
+            if isinstance(msg, str):
+                processed_messages.append({"role": "user", "parts": [msg]})
+            elif isinstance(msg, dict):
+                assert set(msg.keys()) == set(["role", "parts"])
+                processed_messages.append(msg)
+            elif isinstance(msg, Message):
+                processed_messages.append({"role": "user" if msg.role == "user" else "model", "parts": [msg.content]})
+            else:
+                raise ValueError(
+                    f"Only supported message types are: str, Message, dict, but got {type(messages).__name__}!"
+                )
+        return processed_messages
+
     def _const_kwargs(self, messages: list[dict], stream: bool = False) -> dict:
         kwargs = {"contents": messages, "generation_config": GenerationConfig(temperature=0.3), "stream": stream}
         return kwargs
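The override matters because Gemini's chat format differs from the OpenAI-style one: message dicts carry `parts` instead of `content`, and the only roles are `user` and `model`. A small sketch of that mapping, again with a stand-in `Message`:

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for metagpt.schema.Message."""

    role: str
    content: str


def gemini_format_msg(messages) -> list[dict]:
    """Normalize inputs into Gemini-style dicts: `parts`, not `content`."""
    if not isinstance(messages, list):
        messages = [messages]
    processed = []
    for msg in messages:
        if isinstance(msg, str):
            processed.append({"role": "user", "parts": [msg]})
        elif isinstance(msg, dict):
            assert set(msg.keys()) == {"role", "parts"}
            processed.append(msg)
        elif isinstance(msg, Message):
            # Everything that is not a user turn is folded onto Gemini's `model` role.
            processed.append({"role": "user" if msg.role == "user" else "model", "parts": [msg.content]})
        else:
            raise ValueError(f"unsupported message type: {type(msg).__name__}")
    return processed


print(gemini_format_msg([Message("system", "be brief"), "hello"]))
# [{'role': 'model', 'parts': ['be brief']}, {'role': 'user', 'parts': ['hello']}]
```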
metagpt/provider/openai_api.py

@@ -29,12 +29,7 @@ from metagpt.logs import log_llm_stream, logger
 from metagpt.provider.base_llm import BaseLLM
 from metagpt.provider.constant import GENERAL_FUNCTION_SCHEMA
 from metagpt.provider.llm_provider_registry import register_provider
-from metagpt.utils.common import (
-    CodeParser,
-    decode_image,
-    log_and_reraise,
-    process_message,
-)
+from metagpt.utils.common import CodeParser, decode_image, log_and_reraise
 from metagpt.utils.cost_manager import CostManager
 from metagpt.utils.exceptions import handle_exception
 from metagpt.utils.token_counter import (

@@ -150,7 +145,7 @@ class OpenAILLM(BaseLLM):
     async def _achat_completion_function(
         self, messages: list[dict], timeout: int = 3, **chat_configs
     ) -> ChatCompletion:
-        messages = process_message(messages)
+        messages = self.format_msg(messages)
        kwargs = self._cons_kwargs(messages=messages, timeout=timeout, **chat_configs)
         rsp: ChatCompletion = await self.aclient.chat.completions.create(**kwargs)
         self._update_costs(rsp.usage)
metagpt/roles/engineer.py

@@ -240,8 +240,8 @@ class Engineer(Role):
     async def _think(self) -> Action | None:
         if not self.src_workspace:
             self.src_workspace = self.git_repo.workdir / self.git_repo.workdir.name
-        write_plan_and_change_filters = any_to_str_set([WriteTasks])
-        write_code_filters = any_to_str_set([WriteTasks, WriteCodePlanAndChange, SummarizeCode, FixBug])
+        write_plan_and_change_filters = any_to_str_set([WriteTasks, FixBug])
+        write_code_filters = any_to_str_set([WriteTasks, WriteCodePlanAndChange, SummarizeCode])
         summarize_code_filters = any_to_str_set([WriteCode, WriteCodeReview])
         if not self.rc.news:
             return None
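For context on the fix itself: `_think` routes incoming messages by the action that caused them, so moving `FixBug` from the write-code filter into the plan-and-change filter makes a bug-fix request trigger `WriteCodePlanAndChange` before any code is written. An illustrative, self-contained sketch; the `any_to_str_set` stand-in only assumes the real helper maps each action class to a unique string key:

```python
# Stand-ins for the real action classes, just to make the routing concrete.
class WriteTasks: ...
class FixBug: ...
class WriteCodePlanAndChange: ...
class SummarizeCode: ...


def any_to_str_set(items) -> set[str]:
    # Stand-in assuming the real helper yields fully qualified class-name strings.
    return {f"{i.__module__}.{i.__qualname__}" for i in items}


# After the fix: a message caused by FixBug matches the plan-and-change filter...
write_plan_and_change_filters = any_to_str_set([WriteTasks, FixBug])
# ...and no longer matches the write-code filter.
write_code_filters = any_to_str_set([WriteTasks, WriteCodePlanAndChange, SummarizeCode])

fixbug_cause = any_to_str_set([FixBug]).pop()
assert fixbug_cause in write_plan_and_change_filters
assert fixbug_cause not in write_code_filters
```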
metagpt/strategy/planner.py

@@ -164,8 +164,9 @@ class Planner(BaseModel):
         code_written = "\n\n".join(code_written)
         task_results = [task.result for task in finished_tasks]
         task_results = "\n\n".join(task_results)
-        task_type_name = self.current_task.task_type.upper()
-        guidance = TaskType[task_type_name].value.guidance if hasattr(TaskType, task_type_name) else ""
+        task_type_name = self.current_task.task_type
+        task_type = TaskType.get_type(task_type_name)
+        guidance = task_type.guidance if task_type else ""

         # combine components in a prompt
         prompt = PLAN_STATUS.format(
metagpt/strategy/task_type.py

@@ -71,3 +71,10 @@ class TaskType(Enum):
     @property
     def type_name(self):
         return self.value.name
+
+    @classmethod
+    def get_type(cls, type_name):
+        for member in cls:
+            if member.type_name == type_name:
+                return member.value
+        return None
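Why `get_type` helps here: it looks members up by the value's `name` field rather than the enum member name, and returns `None` for unknown types instead of raising, whereas the old `TaskType[task_type_name.upper()]` lookup in `Planner.get_plan_status` presumably broke for type names that are not valid member names (e.g. names containing spaces). A self-contained toy version; the member names and guidance strings below are placeholders, not the repo's real values:

```python
from dataclasses import dataclass
from enum import Enum


@dataclass(frozen=True)
class TaskTypeDef:
    name: str
    guidance: str = ""


class TaskType(Enum):
    # Two representative members; the real enum defines more task types.
    EDA = TaskTypeDef(name="eda", guidance="Do exploratory data analysis first.")
    DATA_PREPROCESS = TaskTypeDef(name="data preprocess", guidance="Clean and transform the data.")

    @property
    def type_name(self):
        return self.value.name

    @classmethod
    def get_type(cls, type_name):
        for member in cls:
            if member.type_name == type_name:
                return member.value
        return None


# Lookup by value name succeeds even when the name is not a valid member name...
assert TaskType.get_type("data preprocess").guidance == "Clean and transform the data."
# ...and unknown types yield None, so callers can fall back to "" as Planner does.
assert TaskType.get_type("no such type") is None
```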
metagpt/utils/common.py

@@ -802,29 +802,6 @@ def decode_image(img_url_or_b64: str) -> Image:
     return img


-def process_message(messages: Union[str, Message, list[dict], list[Message], list[str]]) -> list[dict]:
-    """convert messages to list[dict]."""
-    from metagpt.schema import Message
-
-    # convert everything to a list
-    if not isinstance(messages, list):
-        messages = [messages]
-
-    # convert to list[dict]
-    processed_messages = []
-    for msg in messages:
-        if isinstance(msg, str):
-            processed_messages.append({"role": "user", "content": msg})
-        elif isinstance(msg, dict):
-            assert set(msg.keys()) == set(["role", "content"])
-            processed_messages.append(msg)
-        elif isinstance(msg, Message):
-            processed_messages.append(msg.to_dict())
-        else:
-            raise ValueError(f"Only support message type are: str, Message, dict, but got {type(messages).__name__}!")
-    return processed_messages
-
-
 def log_and_reraise(retry_state: RetryCallState):
     logger.error(f"Retry attempts exhausted. Last exception: {retry_state.outcome.exception()}")
     logger.warning(
metagpt/utils/token_counter.py

@@ -229,7 +229,7 @@ def count_message_tokens(messages, model="gpt-3.5-turbo-0125"):
     else:
         raise NotImplementedError(
             f"num_tokens_from_messages() is not implemented for model {model}. "
-            f"See https://github.com/openai/openai-python/blob/main/chatml.md "
+            f"See https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken "
             f"for information on how messages are converted to tokens."
         )
     num_tokens = 0
File diff suppressed because one or more lines are too long
tests/metagpt/roles/di/test_data_interpreter.py

@@ -25,7 +25,6 @@ async def test_interpreter(mocker, auto_run):
 @pytest.mark.asyncio
 async def test_interpreter_react_mode(mocker):
     mocker.patch("metagpt.actions.di.execute_nb_code.ExecuteNbCode.run", return_value=("a successful run", True))
-    mocker.patch("builtins.input", return_value="confirm")

     requirement = "Run data analysis on sklearn Wine recognition dataset, include a plot, and train a model to predict wine class (20% as validation), and show validation accuracy."
tests/metagpt/strategy/test_planner.py (new file, 37 lines)

@@ -0,0 +1,37 @@
+from metagpt.schema import Plan, Task
+from metagpt.strategy.planner import Planner
+from metagpt.strategy.task_type import TaskType
+
+MOCK_TASK_MAP = {
+    "1": Task(
+        task_id="1",
+        instruction="test instruction for finished task",
+        task_type=TaskType.EDA.type_name,
+        dependent_task_ids=[],
+        code="some finished test code",
+        result="some finished test result",
+        is_finished=True,
+    ),
+    "2": Task(
+        task_id="2",
+        instruction="test instruction for current task",
+        task_type=TaskType.DATA_PREPROCESS.type_name,
+        dependent_task_ids=["1"],
+    ),
+}
+MOCK_PLAN = Plan(
+    goal="test goal",
+    tasks=list(MOCK_TASK_MAP.values()),
+    task_map=MOCK_TASK_MAP,
+    current_task_id="2",
+)
+
+
+def test_planner_get_plan_status():
+    planner = Planner(plan=MOCK_PLAN)
+    status = planner.get_plan_status()
+
+    assert "some finished test code" in status
+    assert "some finished test result" in status
+    assert "test instruction for current task" in status
+    assert TaskType.DATA_PREPROCESS.value.guidance in status  # current task guidance
tests/metagpt/utils/test_token_counter.py

@@ -22,7 +22,7 @@ def _paragraphs(n):
 @pytest.mark.parametrize(
     "msgs, model_name, system_text, reserved, expected",
     [
         (_msgs(), "gpt-3.5-turbo", "System", 1500, 1),
         (_msgs(), "gpt-3.5-turbo-0613", "System", 1500, 1),
         (_msgs(), "gpt-3.5-turbo-16k", "System", 3000, 6),
         (_msgs(), "gpt-3.5-turbo-16k", "Hello," * 1000, 3000, 5),
         (_msgs(), "gpt-4", "System", 2000, 3),

@@ -32,22 +32,23 @@ def _paragraphs(n):
     ],
 )
 def test_reduce_message_length(msgs, model_name, system_text, reserved, expected):
-    assert len(reduce_message_length(msgs, model_name, system_text, reserved)) / (len("Hello,")) / 1000 == expected
+    length = len(reduce_message_length(msgs, model_name, system_text, reserved)) / (len("Hello,")) / 1000
+    assert length == expected


 @pytest.mark.parametrize(
     "text, prompt_template, model_name, system_text, reserved, expected",
     [
         (" ".join("Hello World." for _ in range(1000)), "Prompt: {}", "gpt-3.5-turbo", "System", 1500, 2),
         (" ".join("Hello World." for _ in range(1000)), "Prompt: {}", "gpt-3.5-turbo-0613", "System", 1500, 2),
         (" ".join("Hello World." for _ in range(1000)), "Prompt: {}", "gpt-3.5-turbo-16k", "System", 3000, 1),
         (" ".join("Hello World." for _ in range(4000)), "Prompt: {}", "gpt-4", "System", 2000, 2),
         (" ".join("Hello World." for _ in range(8000)), "Prompt: {}", "gpt-4-32k", "System", 4000, 1),
         (" ".join("Hello World" for _ in range(8000)), "Prompt: {}", "gpt-3.5-turbo", "System", 1000, 8),
         (" ".join("Hello World" for _ in range(8000)), "Prompt: {}", "gpt-3.5-turbo-0613", "System", 1000, 8),
     ],
 )
 def test_generate_prompt_chunk(text, prompt_template, model_name, system_text, reserved, expected):
-    ret = list(generate_prompt_chunk(text, prompt_template, model_name, system_text, reserved))
-    assert len(ret) == expected
+    chunk = len(list(generate_prompt_chunk(text, prompt_template, model_name, system_text, reserved)))
+    assert chunk == expected


 @pytest.mark.parametrize(
tests/mock/mock_llm.py

@@ -8,7 +8,6 @@ from metagpt.provider.azure_openai_api import AzureOpenAILLM
 from metagpt.provider.constant import GENERAL_FUNCTION_SCHEMA
 from metagpt.provider.openai_api import OpenAILLM
 from metagpt.schema import Message
-from metagpt.utils.common import process_message

 OriginalLLM = OpenAILLM if config.llm.api_type == LLMType.OPENAI else AzureOpenAILLM

@@ -105,7 +104,7 @@ class MockLLM(OriginalLLM):
         return rsp

     async def aask_code(self, messages: Union[str, Message, list[dict]], **kwargs) -> dict:
-        msg_key = json.dumps(process_message(messages), ensure_ascii=False)
+        msg_key = json.dumps(self.format_msg(messages), ensure_ascii=False)
         rsp = await self._mock_rsp(msg_key, self.original_aask_code, messages, **kwargs)
         return rsp