LLaMEA: Large Language Model Evolutionary Algorithm

Note

The fully open alternative to Google DeepMind's AlphaEvolve. First released 📅 Nov 2024 • MIT License • 100% reproducible.

LLaMEA couples large-language-model reasoning with an evolutionary loop to invent, mutate, and benchmark algorithms fully autonomously.


LLaMEA (Large Language Model Evolutionary Algorithm) is a framework that uses large language models (LLMs) such as GPT-4 to automatically generate and refine metaheuristic optimization algorithms. It evolves candidate algorithms iteratively, guided by performance metrics and runtime evaluations, without requiring extensive prior algorithmic knowledge. This makes LLaMEA a useful tool for both research and practical applications in fields where optimization is crucial.
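
To make the loop concrete, below is a minimal, self-contained sketch of the idea in plain Python. It is not the LLaMEA API: ask_llm is a hypothetical stand-in for a real LLM query, and the benchmark is a toy sphere function.

   import random

   def ask_llm(prompt: str) -> str:
       """Hypothetical stand-in for an LLM call (a real setup would query
       e.g. GPT-4). Returns Python source for a candidate optimizer; the
       random step size mimics the LLM proposing a mutated variant."""
       step = random.uniform(0.01, 1.0)
       return (
           "def optimize(f, dim, budget):\n"
           "    import random\n"
           "    best_x = [random.uniform(-5, 5) for _ in range(dim)]\n"
           "    best_y = f(best_x)\n"
           "    for _ in range(budget - 1):\n"
           f"        x = [xi + random.gauss(0, {step}) for xi in best_x]\n"
           "        y = f(x)\n"
           "        if y < best_y:\n"
           "            best_x, best_y = x, y\n"
           "    return best_y\n"
       )

   def evaluate(code: str) -> float:
       """Execute the generated optimizer on a benchmark problem (here the
       sphere function) and return its best objective value as fitness."""
       namespace: dict = {}
       exec(code, namespace)
       sphere = lambda x: sum(xi * xi for xi in x)
       return namespace["optimize"](sphere, dim=5, budget=200)

   # A (1+1)-style evolutionary loop: keep the best candidate so far, ask
   # the LLM to mutate it, and accept the child only if it scores better.
   best_code = ask_llm("Write a metaheuristic for box-constrained minimization.")
   best_fitness = evaluate(best_code)
   for generation in range(10):
       child_code = ask_llm(f"Improve this optimizer:\n{best_code}")
       child_fitness = evaluate(child_code)
       if child_fitness < best_fitness:
           best_code, best_fitness = child_code, child_fitness
       print(f"generation {generation}: best fitness {best_fitness:.4g}")

In the actual framework, the evaluation step benchmarks generated algorithms on real black-box optimization problems, and the measured performance is fed back into the LLM's next prompt.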

🤖 Contributing

Contributions to LLaMEA are welcome! Here are a few ways you can help:

  • Report Bugs: Use GitHub Issues to report bugs.

  • Feature Requests: Suggest new features or improvements.

  • Pull Requests: Submit PRs for bug fixes or feature additions.

Please refer to CONTRIBUTING.md for more details on contributing guidelines.

License

Distributed under the MIT License. See LICENSE for more information.

Cite us

If you use LLaMEA in your research, please consider citing the associated paper:

@article{van2024llamea,
   author={Stein, Niki van and Bäck, Thomas},
   journal={IEEE Transactions on Evolutionary Computation},
   title={LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics},
   year={2025},
   volume={29},
   number={2},
   pages={331-345},
   keywords={Benchmark testing;Evolutionary computation;Metaheuristics;Codes;Large language models;Closed box;Heuristic algorithms;Mathematical models;Vectors;Systematics;Automated code generation;evolutionary computation (EC);large language models (LLMs);metaheuristics;optimization},
   doi={10.1109/TEVC.2024.3497793}
}

To cite the LLaMEA-HPO variant specifically, use the following:

@article{van2024loop,
   author = {van Stein, Niki and Vermetten, Diederick and B\"{a}ck, Thomas},
   title = {In-the-loop Hyper-Parameter Optimization for LLM-Based Automated Design of Heuristics},
   year = {2025},
   publisher = {Association for Computing Machinery},
   address = {New York, NY, USA},
   url = {https://doi.org/10.1145/3731567},
   doi = {10.1145/3731567},
   note = {Just Accepted},
   journal = {ACM Trans. Evol. Learn. Optim.},
   month = apr,
}
