LLaMEA¶
The LLaMEA
class implements the evolutionary loop
around a large language model. Its behaviour is governed by many
hyper-parameters controlling population sizes, mutation style, diversity and
evaluation.
Recent features include:
- Niching – enable niching="sharing" or niching="clearing" to maintain diversity. distance_metric, niche_radius, adaptive_niche_radius and clearing_interval further tune the niches (these options appear in the construction sketch after this list).
- Diff mode – set diff_mode=True to request SEARCH/REPLACE patches instead of entire source files from the LLM. This is more token-efficient for large code bases.
- Population evaluation – with evaluate_population=True the evaluation function f operates on lists of solutions, allowing batch evaluations.
- Warm start – with every iteration, LLaMEA archives its latest run in <experiment_log_directory>/llamea_config.pkl. The framework provides a warm_start class method that lets you resume from a previously saved state. This method accepts the path to the <experiment_log_directory>, restores the most recent object from the archive, and reinitialises the program in warm-start mode. After restoring the object, you can call restored_object.run() to continue execution from the point where the program was last terminated, while updating the same experiment directory.
- Initial population – after a cold start (a fresh initialisation of the LLaMEA object), use .run(<experiment_log_directory>) to start with the latest individual from the run logged in that directory. Make sure to use the same initialisation criteria as were used in the logged experiment.
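To show how these options fit together, here is a minimal, hypothetical construction sketch. my_eval and llm are placeholders for your own evaluation function and language-model wrapper, and the niche_radius value is an arbitrary example; only the keyword arguments themselves come from the LLaMEA signature.

```python
from llamea import LLaMEA

# Hypothetical setup: `my_eval` and `llm` are placeholders for your own
# evaluation function and an already-constructed LLM wrapper instance.
es = LLaMEA(
    f=my_eval,
    llm=llm,
    niching="sharing",          # or "clearing"
    niche_radius=0.5,           # example value; tune per problem
    adaptive_niche_radius=True,
    diff_mode=True,             # request SEARCH/REPLACE patches
    evaluate_population=True,   # f now receives lists of Solutions
)
```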
Initialization Parameters¶
The most important keyword arguments of LLaMEA
are summarised below.
Parameter | Meaning
---|---
f | Evaluation function returning feedback, fitness and error.
llm | Language model wrapper used for generation.
n_parents, n_offspring | Number of parents and offspring per generation.
elitism, parent_selection, tournament_size | Selection strategy controls.
role_prompt, task_prompt, example_prompt, output_format_prompt | Prompt engineering controls.
mutation_prompts, adaptive_mutation, adaptive_prompt | Mutation and prompt adaptation settings.
budget, eval_timeout, max_workers, parallel_backend | Runtime and parallelisation controls.
log, experiment_name | Logging configuration.
HPO, minimization, _random | Special operation modes.
niching, distance_metric, niche_radius, adaptive_niche_radius, clearing_interval | Diversity management.
evaluate_population | Use population-level evaluation.
diff_mode | Request SEARCH/REPLACE patches instead of full source files.
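The contract of f deserves a concrete sketch. Below is a minimal, hypothetical single-solution evaluation function (for evaluate_population=False); the Solution attribute names used here (code, fitness, feedback, error) are assumptions for illustration, so consult the Solution class for the authoritative interface.

```python
import random

def my_eval(solution):
    """Toy evaluation; replace the random score with a real benchmark run."""
    try:
        scope = {}
        exec(solution.code, scope)     # assumed: the generated source lives on the solution
        fitness = random.random()      # placeholder; e.g. aggregate BBOB performance instead
        solution.fitness = fitness
        solution.feedback = f"Scored {fitness:.3f} on the benchmark."
    except Exception as err:
        solution.error = str(err)      # the error string is fed back to the LLM
    return solution
```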
LLaMEA - LLM powered Evolutionary Algorithm for code optimization. This module integrates OpenAI's language models to generate and evolve algorithms that are evaluated automatically (for example, metaheuristics evaluated on BBOB).
- class llamea.llamea.LLaMEA(f, llm, n_parents=5, n_offspring=5, role_prompt='', task_prompt='', example_prompt=None, output_format_prompt=None, experiment_name='', elitism=True, HPO=False, mutation_prompts=None, adaptive_mutation=False, adaptive_prompt=False, budget=100, eval_timeout=3600, max_workers=10, parallel_backend='loky', log=True, minimization=False, _random=False, niching: str | None = None, distance_metric: Callable[[Solution, Solution], float] | None = None, niche_radius: float | None = None, adaptive_niche_radius: bool = False, clearing_interval: int | None = None, evaluate_population=False, diff_mode: bool = False, parent_selection: str = 'random', tournament_size: int = 3)¶
Bases:
object
A class that represents the Language Model powered Evolutionary Algorithm (LLaMEA). This class handles the initialization, evolution, and interaction with a language model to generate and refine algorithms.
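A typical end-to-end invocation might then look like the sketch below, reusing my_eval from the previous section and assuming an LLM wrapper instance llm; the keyword values are arbitrary examples, not recommended settings.

```python
from llamea import LLaMEA

es = LLaMEA(
    f=my_eval,
    llm=llm,                  # placeholder LLM wrapper instance
    n_parents=5,
    n_offspring=5,
    budget=100,
    experiment_name="demo",
)
best_solution, best_fitness = es.run()   # evolves until the budget is exhausted
```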
- adapt_niche_radius(population)¶
Adapt the niche radius based on the current population.
- apply_niching(population)¶
Apply the configured niching strategy to population.
- construct_prompt(individual: Solution)¶
Constructs a new session prompt for the language model based on a selected individual.
- Args:
individual (Solution): The individual to mutate.
- Returns:
list: A list of dictionaries simulating a conversation with the language model for the next evolutionary step.
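For intuition, the returned value is a list of chat messages. The sketch below shows a plausible shape only; the actual roles and wording are internal details built from the role/task prompts and may differ.

```python
# Illustrative only; the real prompts are assembled from the configured
# role/task prompts plus the selected individual's code and feedback.
messages = [
    {"role": "system", "content": "You are an expert algorithm designer ..."},
    {"role": "user", "content": "Task description, current code, feedback, and a mutation instruction."},
]
```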
- evaluate_fitness(individual)¶
Evaluates the fitness of the provided individual by invoking the evaluation function f. This method handles error reporting and logs the feedback, fitness, and errors encountered.
- Args:
individual (Solution): The solution instance to evaluate.
- Returns:
Solution: The updated solution with feedback, fitness and error information filled in.
- evaluate_population_fitness(new_population)¶
Evaluate a full population of solutions.
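When evaluate_population=True, the user-supplied f is expected to handle a whole batch at once. A hypothetical batch wrapper around the single-solution my_eval sketched earlier could look like this:

```python
def my_batch_eval(solutions):
    # A real implementation could submit all candidates to a job queue
    # and collect the results in a single round trip.
    return [my_eval(s) for s in solutions]
```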
- evolve_solution(individual)¶
Evolves a single solution by constructing a new prompt, querying the LLM, and evaluating the fitness.
- get_population_from(archive_path)¶
Finds the population log in archive_path/log.jsonl and loads it into the current population. If the population size in the log file is insufficient, initialize() is run for the rest of the population. Used to run a cold-started algorithm with a known population. Note: make sure the initialisation goal of the current LLaMEA instance matches the population being loaded.
- Args:
archive_path: A directory from a previous run from which to load the known population.
- initialize()¶
Initializes the evolutionary process by generating the first parent population.
- initialize_single()¶
Initializes a single solution.
- logevent(event)¶
- optimize_task_prompt(individual)¶
Use the LLM to improve the task prompt for a given individual.
- pickle_archive()¶
Store the LLaMEA object into a file using pickle, to support warm starts.
- run(archive_path=None)¶
Main loop to evolve the solutions until the evolutionary budget is exhausted. The method iteratively refines solutions through interaction with the language model, evaluates their fitness, and updates the best solution found.
- Args:
archive_path: Optional directory of a previous run; when given, the algorithm starts from the population logged there (see get_population_from) before continuing the evolutionary loop.
- Returns:
tuple: A tuple containing the best solution and its fitness at the end of the evolutionary process.
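Typical usage, including the cold-start variant with a logged population (the directory name below is a placeholder):

```python
best_solution, best_fitness = es.run()

# Cold start from a previous run's logged population; make sure the
# initialisation settings match those of the logged experiment.
best_solution, best_fitness = es.run(archive_path="results/previous_experiment")
```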
- selection(parents, offspring)¶
Select the new population from the parents and the offspring according to the current strategy.
- Args:
parents (list): List of solutions.
offspring (list): List of new solutions.
- Returns:
list: List of new selected population.
- update_best()¶
Update the best individual in the new population.
- classmethod warm_start(path_to_archive_dir)¶
Class method for warm starts: takes an archive directory, finds the pickle archive stored at path_to_archive_dir/llamea_config.pkl, reconstructs the object from it, and returns it for a warm start.
- Args:
path_to_archive_dir: Directory of the instance for which the warm start is to be executed.
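A short usage sketch (the directory is a placeholder and must contain llamea_config.pkl):

```python
from llamea import LLaMEA

restored = LLaMEA.warm_start("results/previous_experiment")
restored.run()   # continues from where the run was last terminated,
                 # updating the same experiment directory
```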