LLM

LLM modules to connect to different LLM providers. These also extract the code, name and description from model responses.

class llamea.llm.Gemini_LLM(api_key, model='gemini-2.0-flash', **kwargs)

Bases: LLM

A manager class for handling requests to Google’s Gemini models.

query(session_messages)

Sends a conversation history to the configured model and returns the response text.

Args:

session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).

Returns:

str: The text content of the LLM’s response.
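The session_messages format expected by query() can be sketched as follows. The actual call requires a valid Google API key, so it is shown commented out; the message structure itself follows the Args description above:

```python
# Build a conversation history in the format query() expects:
# a list of dicts with "role" and "content" keys.
session_messages = [
    {"role": "user", "content": "Write a random-search optimizer in Python."},
    {"role": "assistant", "content": "Here is a first attempt: ..."},
    {"role": "user", "content": "Refine it to use a budget of 1000 evaluations."},
]

# Hypothetical usage (requires valid credentials):
# llm = Gemini_LLM(api_key="...", model="gemini-2.0-flash")
# response_text = llm.query(session_messages)
```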

class llamea.llm.LLM(api_key, model='', base_url='', code_pattern=None, name_pattern=None, desc_pattern=None, cs_pattern=None, logger=None)

Bases: ABC

extract_algorithm_code(message)

Extracts algorithm code from a given message string using regular expressions.

Args:

message (str): The message string containing the algorithm code.

Returns:

str: Extracted algorithm code.

Raises:

NoCodeException: If no code block is found within the message.
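The extraction step can be sketched with plain re. Note that the actual pattern used by the library is configurable via the code_pattern constructor argument, so the fenced-block regex below is only an assumption, and a stand-in ValueError replaces NoCodeException to keep the sketch self-contained:

```python
import re

def extract_code_block(message: str) -> str:
    """Return the contents of the first fenced Python code block.

    Raises ValueError if no block is found (the library raises
    NoCodeException instead).
    """
    match = re.search(r"```(?:python)?\n(.*?)```", message, re.DOTALL)
    if match is None:
        raise ValueError("no code block found")
    return match.group(1).strip()

reply = "Here is the algorithm:\n```python\ndef solve(x):\n    return x + 1\n```"
code = extract_code_block(reply)
```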

extract_algorithm_description(message)

Extracts the algorithm description from a given message string using regular expressions.

Args:

message (str): The message string containing the algorithm description.

Returns:

str: Extracted algorithm description, or an empty string if none is found.

extract_configspace(message)

Extracts the configuration space definition in JSON format from a given message string using regular expressions.

Args:

message (str): The message string containing the algorithm code.

Returns:

ConfigSpace: Extracted configuration space object.
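A minimal sketch of the idea, using only the standard library: pull a fenced JSON block out of the reply and parse it. The real method goes one step further and builds a ConfigSpace object from the parsed definition; the hyperparameter names and JSON layout below are illustrative assumptions, not the library's required schema:

```python
import json
import re

def extract_json_block(message: str) -> dict:
    """Parse the first fenced JSON block in a message into a dict."""
    match = re.search(r"```json\n(.*?)```", message, re.DOTALL)
    if match is None:
        raise ValueError("no JSON block found")
    return json.loads(match.group(1))

reply = (
    "Suggested hyperparameters:\n"
    '```json\n{"learning_rate": {"type": "float", "lower": 0.001, "upper": 0.1}}\n```'
)
space = extract_json_block(reply)
```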

abstract query(session: list)

Sends a conversation history to the configured model and returns the response text.

Args:

session (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).

Returns:

str: The text content of the LLM’s response.
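Since query is abstract, support for a new provider is added by subclassing LLM and implementing it. The sketch below uses a self-contained stand-in class that mirrors the abstract interface (rather than importing the real llamea.llm.LLM), so nothing here touches the network:

```python
from abc import ABC, abstractmethod

class LLMStub(ABC):
    """Illustrative stand-in for llamea.llm.LLM's abstract interface."""

    @abstractmethod
    def query(self, session: list) -> str:
        ...

class EchoLLM(LLMStub):
    """A fake provider that simply echoes the last user message."""

    def query(self, session: list) -> str:
        return session[-1]["content"]

llm = EchoLLM()
answer = llm.query([{"role": "user", "content": "hello"}])
```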

sample_solution(session_messages: list, parent_ids=[], HPO=False)

Interacts with a language model to generate or mutate solutions based on the provided session messages.

Args:

session_messages (list): A list of dictionaries with keys ‘role’ and ‘content’ to simulate a conversation with the language model.

parent_ids (list, optional): The ids of the parents the next sample will be generated from (if any).

HPO (bool, optional): If HPO is enabled, a configuration space will also be extracted (if possible).

Returns:

tuple: A tuple containing the new algorithm code, its class name, its full descriptive name and an optional configuration space object.

Raises:

NoCodeException: If the language model fails to return any code.

Exception: Captures and logs any other exceptions that occur during the interaction.

set_logger(logger)

Sets the logger object to log the conversation.

Args:

logger (Logger): A logger object to log the conversation.

class llamea.llm.Ollama_LLM(model='llama3.2', **kwargs)

Bases: LLM

A manager class for handling requests to locally hosted models served via Ollama.

query(session_messages)

Sends a conversation history to the configured model and returns the response text.

Args:

session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).

Returns:

str: The text content of the LLM’s response.

class llamea.llm.OpenAI_LLM(api_key, model='gpt-4-turbo', **kwargs)

Bases: LLM

A manager class for handling requests to OpenAI’s GPT models.

query(session_messages)

Sends a conversation history to the configured model and returns the response text.

Args:

session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).

Returns:

str: The text content of the LLM’s response.