geekan | 9531dbf3ff | fix bug in test | 2023-12-26 19:19:32 +08:00
geekan | 0435b1321f | refine code | 2023-12-26 17:54:52 +08:00
geekan | ba8bf01870 | remove code | 2023-12-26 16:39:19 +08:00
geekan | 4007fc87d6 | remove sync api in openai | 2023-12-26 16:33:15 +08:00
geekan | bb1b9823d0 | remove sync api in openai | 2023-12-26 15:59:11 +08:00
geekan | e15de55368 | refactor openai api and brain memory | 2023-12-26 15:09:37 +08:00
geekan | 8351c8ec35 | remove generator para in acompletion_text | 2023-12-26 14:31:26 +08:00
geekan | 25de7f9a74 | fix conflicts | 2023-12-26 01:40:53 +08:00
geekan | d300acbac8 | fix all conflicts. | 2023-12-26 01:39:16 +08:00
shenchucheng | b766550a4f | update log_llm_stream in log_llm_stream.py/ollama_api.py | 2023-12-26 01:05:47 +08:00
shenchucheng | 1f311aa408 | not call LLM in global | 2023-12-26 01:05:47 +08:00
shenchucheng | 0eef8a8607 | add llm stream log | 2023-12-26 01:05:44 +08:00
shenchucheng | 7671935741 | Lazy Loading WEB_BROWSER_ENGINE | 2023-12-26 01:01:31 +08:00
shenchucheng | 118ab8ac82 | add options to disable llm provider check | 2023-12-26 01:01:24 +08:00
geekan | 59586f30d6 | Merge pull request #628 from iorisa/fixbug/role/assistant (fixbug: fix the general agent role and its related TalkAction and SkillAction) | 2023-12-25 23:14:21 +08:00
geekan | d577597ede | refine code | 2023-12-25 23:13:25 +08:00
geekan | a903cfa8ef | fix code bugs | 2023-12-25 23:13:25 +08:00
geekan | bd1014e19a | add sales test | 2023-12-25 23:13:25 +08:00
geekan | 9f653ea60b | tuning example and config | 2023-12-25 23:13:25 +08:00
莘权 马 | ef1bc01c99 | Merge branch 'dev' of https://github.com/geekan/MetaGPT into geekan/dev | 2023-12-25 22:39:17 +08:00
莘权 马 | 0fdb552468 | fixbug: fix the general agent role and its related TalkAction and SkillAction | 2023-12-25 22:39:03 +08:00
geekan | f93ccaf0d1 | Merge pull request #625 from better629/feat_devut (Feat update unittest of provider/examples) | 2023-12-25 19:10:52 +08:00
better629 | 454e6164fb | update provider unittests | 2023-12-25 18:00:51 +08:00
geekan | 8a5f8b7ee0 | add #TOTEST flag | 2023-12-25 18:00:49 +08:00
geekan | fa70a70f53 | add json mock | 2023-12-25 18:00:48 +08:00
geekan | 4f52b47610 | Merge pull request #623 from femto/fix_sk_agent (fix sk agent) | 2023-12-25 17:53:44 +08:00
femto | 90bbf72ae8 | fix sk agent | 2023-12-25 17:35:13 +08:00
shenchucheng | b113aa246f | update log_llm_stream in log_llm_stream.py/ollama_api.py | 2023-12-25 17:22:30 +08:00
莘权 马 | fe1d60f111 | Merge branch 'dev' of https://github.com/geekan/MetaGPT into geekan/dev | 2023-12-25 16:15:04 +08:00
莘权 马 | e162fd36fc | fixbug: adjust the Teacher role and its related structures | 2023-12-25 16:14:50 +08:00
geekan | 331958a63c | Merge pull request #617 from iorisa/fixbug/geekan/dev_recover (fixbug: workflow exception recovery based on full-memory data storage) | 2023-12-25 15:22:43 +08:00
better629 | 6a65639cd7 | update ltm unittest | 2023-12-25 14:38:20 +08:00
better629 | 94a0699ec4 | add memory unittest | 2023-12-25 13:50:47 +08:00
莘权 马 | 29bbe5752d | fixbug: WriteTest failed | 2023-12-25 13:18:45 +08:00
莘权 马 | 780caf011d | fixbug: workflow exception recovery based on full-memory data storage | 2023-12-25 12:42:23 +08:00
geekan | 0fca7b3b1f | fix prompt_schema | 2023-12-24 15:41:35 +08:00
geekan | 3b9e20d056 | Merge pull request #616 from better629/new_dev_updatellm (update fireworks/open_llm api due to new openai sdk) | 2023-12-24 15:37:47 +08:00
better629 | 7e0c62a7a9 | update fireworks/open_llm api due to new openai sdk | 2023-12-24 15:34:32 +08:00
莘权 马 | e6a5e8e4ad | feat: merge geekan:dev | 2023-12-24 12:52:30 +08:00
莘权 马 | ad639b9906 | feat: merge geekan:dev | 2023-12-24 12:49:08 +08:00
莘权 马 | f441c88156 | fixbug: timeout & prompt_format | 2023-12-24 12:30:08 +08:00
geekan | a1f39d1269 | fix main process | 2023-12-24 11:48:30 +08:00
geekan | 6c278bcfd6 | fix main process | 2023-12-24 11:46:05 +08:00
geekan | 6465b2eaa9 | fix pep8 | 2023-12-24 10:44:33 +08:00
shenchucheng | ddc0d3faa4 | not call LLM in global | 2023-12-24 03:52:29 +08:00
shenchucheng | b4552938e6 | add llm stream log | 2023-12-23 22:45:20 +08:00
shenchucheng | 011ae46c09 | Lazy Loading WEB_BROWSER_ENGINE | 2023-12-23 21:58:54 +08:00
shenchucheng | 4b120a932f | add options to disable llm provider check | 2023-12-23 21:56:19 +08:00
geekan | e043694461 | add test | 2023-12-23 19:49:09 +08:00
geekan | f136e7bd3d | add test case for action node | 2023-12-23 19:49:08 +08:00