diff --git a/.gitignore b/.gitignore
index 46c9b0dd4..0d6be14ad 100644
--- a/.gitignore
+++ b/.gitignore
@@ -29,7 +29,6 @@ share/python-wheels/
 MANIFEST
 metagpt/tools/schemas/
 examples/data/search_kb/*.json
-metagpt/ext/sela/AutogluonModels
 
 # PyInstaller
 # Usually these files are written by a python scripts from a template
diff --git a/metagpt/ext/sela/README.md b/metagpt/ext/sela/README.md
index a942fdb7d..6fb47b42c 100644
--- a/metagpt/ext/sela/README.md
+++ b/metagpt/ext/sela/README.md
@@ -1,9 +1,15 @@
 # SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning
+
+Official implementation for the paper [SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning](https://arxiv.org/abs/2410.17238).
+
+
+SELA is an innovative system that enhances Automated Machine Learning (AutoML) by integrating Monte Carlo Tree Search (MCTS) with LLM-based agents. Traditional AutoML methods often generate low-diversity, suboptimal code, limiting their effectiveness in model selection and ensembling. SELA addresses these challenges by representing pipeline configurations as trees, enabling agents to intelligently explore the solution space and iteratively refine their strategies based on experimental feedback.
+
 ## 1. Data Preparation
 
 You can either download the datasets from the link or prepare the datasets from scratch.
-- **Download Datasets:** [Dataset Link](https://deepwisdom.feishu.cn/drive/folder/RVyofv9cvlvtxKdddt2cyn3BnTc?from=from_copylink)
+- **Download Datasets:** [Dataset Link](https://drive.google.com/drive/folders/151FIZoLygkRfeJgSI9fNMiLsixh1mK0r?usp=sharing)
 - **Download and prepare datasets from scratch:**
 ```bash
 cd data
@@ -82,4 +88,19 @@ ### Ablation Study
 - **Use a set of insights:**
 ```bash
 python run_experiment.py --exp_mode rs --task titanic --rs_mode set
-```
\ No newline at end of file
+```
+
+## 4. Citation
+Please cite our paper if you use SELA or find it cool or useful!
+
+```bibtex
+@misc{chi2024selatreesearchenhancedllm,
+      title={SELA: Tree-Search Enhanced LLM Agents for Automated Machine Learning},
+      author={Yizhou Chi and Yizhang Lin and Sirui Hong and Duyi Pan and Yaying Fei and Guanghao Mei and Bangbang Liu and Tianqi Pang and Jacky Kwok and Ceyao Zhang and Bang Liu and Chenglin Wu},
+      year={2024},
+      eprint={2410.17238},
+      archivePrefix={arXiv},
+      primaryClass={cs.AI},
+      url={https://arxiv.org/abs/2410.17238},
+}
+```
diff --git a/metagpt/ext/sela/runner/README.md b/metagpt/ext/sela/runner/README.md
index 7c031f1ee..4867aa4f0 100644
--- a/metagpt/ext/sela/runner/README.md
+++ b/metagpt/ext/sela/runner/README.md
@@ -165,34 +165,4 @@ ### 5. Custom Baselines
 
 To run additional baselines:
 - Each baseline must produce `dev_predictions.csv` and `test_predictions.csv` with a `target` column.
-- Use the `evaluate_score` function for evaluation.
-
----
-
-## MLE-Bench
-
-**Note:** MLE-Bench requires Python 3.11 or higher.
-
-#### Setup
-
-Clone the repository and install:
-
-```bash
-git clone https://github.com/openai/mle-bench.git
-cd mle-bench
-pip install -e .
-```
-
-Prepare the data:
-
-```bash
-mlebench prepare -c <competition-id> --data-dir <dataset-dir>
-```
-
-#### Run the MLE-Bench Experiment
-
-Run the following command to execute the experiment:
-
-```bash
-python run_experiment.py --exp_mode mcts --custom_dataset_dir <dataset_dir> --rollouts 10 --from_scratch --role_timeout 3600
-```
\ No newline at end of file
+- Use the `evaluate_score` function for evaluation.
\ No newline at end of file