Comparing LLaMEA with RandomSearch

This tutorial shows how to create a small experiment and compare LLaMEA with RandomSearch.

Setup

Make sure an LLM is available. Here we use an Ollama_LLM instance.

  • Install iohblade: pip install iohblade

[1]:
from iohblade.experiment import Experiment
from iohblade.llm import Ollama_LLM
from iohblade.methods import LLaMEA, RandomSearch
from iohblade.problems import BBOB_SBOX
from iohblade.loggers import ExperimentLogger

Tip: Make sure Ollama is running and the model is downloaded before executing the next cell. When using Colab, you might need to set up port forwarding to connect to your local Ollama instance, or use Gemini/OpenAI instead.

[4]:
# We compare LLaMEA with default parameters to Random Search on the BBOB Sphere function in 5D.
llm = Ollama_LLM('qwen2.5-coder:14b')
budget = 30
RS = RandomSearch(llm, budget=budget)
LLA = LLaMEA(llm, budget=budget, name='LLaMEA')
methods = [RS, LLA]
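Before launching anything, it can help to estimate the LLM cost up front. A minimal sketch, under the assumption that budget counts LLM-generated candidate solutions per run (check the iohblade documentation for the exact semantics):

```python
# Rough cost estimate. Assumption (not stated in this tutorial):
# `budget` is the number of LLM-generated candidates per run.
budget = 30    # per-run method budget, as configured above
runs = 5       # independent repetitions per method (set below)
n_methods = 2  # RandomSearch and LLaMEA

llm_calls = budget * runs * n_methods
print(llm_calls)  # 300 candidate generations in total
```

With a local 14B model this is the main driver of wall-clock time, which is why the warning below mentions hours.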

[5]:
problems = [BBOB_SBOX(training_instances=[(1,1)], dims=[5], budget_factor=100, name='BBOB-f1')]
logger = ExperimentLogger('simple_exp')
experiment = Experiment(methods=methods, problems=problems, runs=5, show_stdout=True, exp_logger=logger)
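The budget_factor controls how many objective-function evaluations each generated algorithm gets on the BBOB problem. A sketch assuming the per-candidate evaluation budget scales as budget_factor times the dimensionality (this scaling is an assumption; confirm it against the BBOB_SBOX documentation):

```python
# Assumed relationship (not confirmed by this tutorial):
# evaluations per candidate = budget_factor * dimensionality.
budget_factor = 100  # as passed to BBOB_SBOX above
dim = 5              # dims=[5] above

evals_per_candidate = budget_factor * dim
print(evals_per_candidate)  # 500 evaluations under this assumption
```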

Warning: The next step might take several hours to run, depending on the budget and the number of runs.

[ ]:
experiment()  # run the experiment

After running the experiment, use the built-in plotting functions to inspect the results.

[ ]:
from iohblade import plot_convergence
plot_convergence(logger)
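If you want to keep the figure for a report, the active matplotlib figure can be written to disk after plotting. A sketch assuming plot_convergence draws onto matplotlib's current figure (if it returns a figure object instead, prefer calling savefig on that object); the plt.plot line is a stand-in for the plotting call above:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, safe for scripts/servers
import matplotlib.pyplot as plt

# Stand-in for the plot produced by plot_convergence(logger).
plt.plot([0, 10, 20, 30], [3.0, 1.2, 0.4, 0.1])
plt.xlabel('evaluations')
plt.ylabel('best fitness')

# Save the currently active figure to a file.
plt.savefig('convergence.png', dpi=150, bbox_inches='tight')
```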