Implementing a custom LLM-EC algorithm

In this tutorial we implement a simple LLM-driven optimizer by subclassing Method, and compare it with LLaMEA and EoH.

[2]:
!pip install swig
!pip install iohblade
[3]:
from iohblade.method import Method
from iohblade.experiment import Experiment
from iohblade.llm import Ollama_LLM
from iohblade.methods import LLaMEA, EoH
from iohblade.problems import BBOB_SBOX
from iohblade.loggers import ExperimentLogger
[7]:
class MyLLMOptimizer(Method):
    """A minimal LLM-driven optimizer: sample an initial algorithm from the LLM,
    then repeatedly ask it to improve on the best solution found so far."""

    def __init__(self, llm, budget=20, name='MyCustomOptimizer'):
        super().__init__(llm, budget, name)

    def __call__(self, problem):
        # Sample an initial algorithm from the LLM and evaluate it on the problem.
        msg = [{'role': 'user', 'content': problem.get_prompt()}]
        best = problem(self.llm.sample_solution(msg))
        # Spend the remaining budget on simple hill-climbing: show the LLM the
        # best code so far, ask for an improved version, and keep the candidate
        # only if it achieves a higher fitness.
        for _ in range(self.budget - 1):
            msg = [
                {'role': 'user', 'content': problem.get_prompt()},
                {'role': 'assistant', 'content': best.code},
                {'role': 'user', 'content': 'Improve the algorithm.'}
            ]
            cand = problem(self.llm.sample_solution(msg))
            if cand.fitness > best.fitness:
                best = cand
        return best

    def to_dict(self):
        # Used by the experiment logger to serialize the method's configuration.
        return {'method_name': self.name, 'budget': self.budget}

Tip: Make sure Ollama is running and the model is downloaded before executing the next cell. When using Colab, you may need to set up port forwarding to reach a local Ollama instance, or use Gemini/OpenAI instead.
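
If you want to use a hosted model instead, iohblade also provides LLM wrappers other than Ollama_LLM. Below is a minimal sketch assuming a Gemini_LLM wrapper is available in iohblade.llm (an OpenAI_LLM wrapper would be used the same way); the exact class names and constructor arguments may differ in your installed iohblade version.

[ ]:
import os
from iohblade.llm import Gemini_LLM  # assumed wrapper, analogous to Ollama_LLM

# Hypothetical sketch: the constructor arguments (API key, model name) are
# assumptions and may differ depending on the iohblade version.
llm = Gemini_LLM(os.environ['GEMINI_API_KEY'], 'gemini-2.0-flash')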

[ ]:
llm = Ollama_LLM('qwen2.5-coder:14b') # Make sure Ollama is running and the model is downloaded.
budget = 10
methods = [
    LLaMEA(llm, budget=budget, name='LLaMEA'),
    EoH(llm, budget=budget, name='EoH'),
    MyLLMOptimizer(llm, budget=budget, name='MyCustomOptimizer'),
]
problems = [BBOB_SBOX(training_instances=[(1,1)], dims=[5], budget_factor=200, name='BBOB')]
logger = ExperimentLogger('custom_method')
experiment = Experiment(methods=methods, problems=problems, runs=3, show_stdout=True, exp_logger=logger)

Warning: The next step may take several hours to run, depending on the budget and number of runs.

[ ]:
experiment() # This step might take several hours to run.
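
Once the experiment finishes, the ExperimentLogger writes all runs to disk. Below is a minimal sketch for listing the logged files, assuming the logger creates a results directory based on the name passed above ('custom_method'); the exact directory name and layout depend on your iohblade version.

[ ]:
import os

# Assumption: ExperimentLogger creates a results directory whose name starts
# with 'custom_method'; adjust the pattern if your version uses another layout.
for d in sorted(os.listdir('.')):
    if d.startswith('custom_method') and os.path.isdir(d):
        for root, _, files in os.walk(d):
            for f in files:
                print(os.path.join(root, f))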