mirror of
https://github.com/FoundationAgents/MetaGPT.git
synced 2026-05-02 04:12:45 +02:00
fix cv2.imshow() bug in manual parse action; update readme & readme_CN with manual screenshot
This commit is contained in:
parent
5df1c9fed6
commit
51d52e5a52
3 changed files with 7 additions and 2 deletions
@@ -28,7 +28,9 @@ #### Learning Based on Human Demonstration
```
After running this command, you will first see a screenshot of an Android screen that has been marked at various interactive locations, as shown in the figure below:
##### TODO Add Image
<img src="./resources/manual_example.png" width="30%">
After noting the location where you want to operate, a request similar to the one below will be printed in the terminal. Reply to it to direct the Android assistant to learn your demonstrated action:
```bash
@@ -30,7 +30,9 @@ #### Learning Based on Human Demonstration
```
After running this command, you will first see a screenshot of an Android screen marked at its various interactive locations, as shown in the figure below:
###### TODO Add Image
<img src="./resources/manual_example.png" width="30%">
After noting the location where you want to operate, a request similar to the one below will be printed in the terminal. Reply to it to direct the Android assistant to learn your demonstrated action:
```bash
@@ -73,6 +73,7 @@ class ManualRecord(Action):
screenshot_labeled_path = Path(self.screenshot_after_path).joinpath(f"{step}_labeled.png")
labeled_img = draw_bbox_multi(screenshot_path, screenshot_labeled_path, elem_list)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", labeled_img)
cv2.waitKey(0)
cv2.destroyAllWindows()