Commit graph

2480 commits

Author | SHA1 | Message | Date
geekan | 3be990ea3f | use pathlib to refine setup.py | 2023-12-24 20:06:43 +08:00
geekan | 618b86ab6a | refine pre-commit config | 2023-12-24 19:41:44 +08:00
geekan | 984ea4dbed | add auto --fix | 2023-12-24 19:40:56 +08:00
geekan | 311d351799 | refine setup process | 2023-12-24 18:00:57 +08:00
geekan | 4fa2b32046 | refine setup process | 2023-12-24 17:23:59 +08:00
geekan | 0fca7b3b1f | fix prompt_schema | 2023-12-24 15:41:35 +08:00
geekan | 3b9e20d056 | Merge pull request #616 from better629/new_dev_updatellm (update fireworks/open_llm api due to new openai sdk) | 2023-12-24 15:37:47 +08:00
geekan | 8d1a3ce171 | Merge pull request #615 from iorisa/fixbug/geekan/dev (fixbug: timeout & prompt_format) | 2023-12-24 15:35:28 +08:00
better629 | 7e0c62a7a9 | update fireworks/open_llm api due to new openai sdk | 2023-12-24 15:34:32 +08:00
莘权 马 | e6a5e8e4ad | feat: merge geekan:dev | 2023-12-24 12:52:30 +08:00
莘权 马 | ad639b9906 | feat: merge geekan:dev | 2023-12-24 12:49:08 +08:00
莘权 马 | f441c88156 | fixbug: timeout & prompt_format | 2023-12-24 12:30:08 +08:00
geekan | a1f39d1269 | fix main process | 2023-12-24 11:48:30 +08:00
geekan | 6c278bcfd6 | fix main process | 2023-12-24 11:46:05 +08:00
geekan | 6465b2eaa9 | fix pep8 | 2023-12-24 10:44:33 +08:00
shenchucheng | ddc0d3faa4 | not call LLM in global | 2023-12-24 03:52:29 +08:00
shenchucheng | b4552938e6 | add llm stream log | 2023-12-23 22:45:20 +08:00
shenchucheng | 011ae46c09 | Lazy Loading WEB_BROWSER_ENGINE | 2023-12-23 21:58:54 +08:00
shenchucheng | 4b120a932f | add options to disable llm provider check | 2023-12-23 21:56:19 +08:00
geekan | e043694461 | add test | 2023-12-23 19:49:09 +08:00
geekan | f136e7bd3d | add test case for action node | 2023-12-23 19:49:08 +08:00
geekan | 5223c4afa9 | refine code | 2023-12-23 19:49:08 +08:00
geekan | a5b6d0817d | fix conflict | 2023-12-23 19:49:05 +08:00
better629 | bd119de2c1 | format general_api_requestor params type | 2023-12-23 19:48:13 +08:00
geekan | 2502dd3651 | fix conflict | 2023-12-23 19:48:01 +08:00
Ikko Eltociear Ashimine | 0aac525b29 | Update README.md (exisiting -> existing) | 2023-12-23 19:43:16 +08:00
seehi | 74b0a5f725 | typing of store | 2023-12-23 19:43:16 +08:00
seehi | 6f3cc203b1 | upgrade langchain and simplify faiss load/save | 2023-12-23 19:43:16 +08:00
geekan | 53d333ffa9 | fix conflict | 2023-12-23 19:43:12 +08:00
geekan | c68f882e14 | tuning performance | 2023-12-23 19:39:16 +08:00
geekan | a7a1195a31 | fix bugs and make it perform better | 2023-12-23 19:39:16 +08:00
better629 | 0d1c0f89cc | fix | 2023-12-23 19:39:16 +08:00
better629 | 9607059392 | fix invoice_ocr | 2023-12-23 19:39:16 +08:00
better629 | 7a1252e356 | fix invoice_ocr | 2023-12-23 19:39:16 +08:00
better629 | dc2a87ce12 | fix invoice_ocr | 2023-12-23 19:39:16 +08:00
better629 | d02692e945 | fix invoice_ocr_assistant | 2023-12-23 19:39:16 +08:00
geekan | 9d8cdd19ac | fix conflict | 2023-12-23 19:39:11 +08:00
better629 | 48b484dec8 | update | 2023-12-23 19:37:35 +08:00
geekan | a44a46ad29 | solve conflict | 2023-12-23 19:37:23 +08:00
better629 | 1df49b82e4 | fix | 2023-12-23 19:36:42 +08:00
better629 | c97b54e0ea | add non-software role/action BaseModel | 2023-12-23 19:36:42 +08:00
geekan | c7f47e80ad | add test | 2023-12-23 19:35:07 +08:00
geekan | 8d20af119c | Merge pull request #610 from iorisa/feature/geekan/dev (fixbug: Fix the confusion caused by the merging of _client, client, and async_client in openai_api.py) | 2023-12-23 18:12:47 +08:00
莘权 马 | a90f52d4b6 | fixbug: Fix the confusion caused by the merging of _client, client, and async_client in the openai_api.py; Split Azure LLM and MetaGPT LLM from OpenAI LLM to reduce the number of variables defined in the Config class for compatibility. | 2023-12-23 18:07:42 +08:00
geekan | 6624819feb | add test case for action node | 2023-12-23 17:38:47 +08:00
geekan | 336350eba9 | refine code | 2023-12-23 02:41:55 +08:00
geekan | 3feee73492 | refine debate example | 2023-12-23 02:41:55 +08:00
geekan | 41a174399e | Merge pull request #606 from better629/feat_basemodel (Feat add basemodel of other roles/actions and fix examples) | 2023-12-22 22:59:58 +08:00
better629 | 19d33110bf | Merge branch 'main' into feat_basemodel | 2023-12-22 22:33:06 +08:00
better629 | 571063069e | fix | 2023-12-22 22:22:01 +08:00