LLM¶
LLM wrapper classes for connecting to different LLM providers, plus utilities to extract code, algorithm names, and descriptions from model responses.
- class llamea.llm.DeepSeek_LLM(api_key, model='deepseek-chat', temperature=0.8, **kwargs)¶
Bases: OpenAI_LLM
A manager class for the DeepSeek chat models.
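A minimal usage sketch (the key is a placeholder; query() is inherited from OpenAI_LLM, since DeepSeek exposes an OpenAI-compatible endpoint):

```python
from llamea.llm import DeepSeek_LLM

llm = DeepSeek_LLM(api_key="YOUR_DEEPSEEK_KEY", model="deepseek-chat", temperature=0.8)
reply = llm.query([{"role": "user", "content": "Suggest a crossover operator for permutations."}])
print(reply)
```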
- class llamea.llm.Dummy_LLM(model='DUMMY', **kwargs)¶
Bases: LLM
A dummy manager class that returns random response text, useful for testing without API access.
- query(session_messages)¶
Sends a conversation history to the DUMMY model and returns a random response text.
- Args:
- session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).
- Returns:
str: The text content of the LLM’s response.
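Because Dummy_LLM needs no API key, it is handy for smoke tests. A minimal sketch:

```python
from llamea.llm import Dummy_LLM

llm = Dummy_LLM()
messages = [{"role": "user", "content": "Propose a new optimization algorithm."}]
print(llm.query(messages))  # random placeholder text, per the docstring above
```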
- class llamea.llm.Gemini_LLM(api_key, model='gemini-2.0-flash', **kwargs)¶
Bases: LLM
A manager class for handling requests to Google’s Gemini models.
- query(session_messages, max_retries: int = 5, default_delay: int = 10)¶
Sends the conversation history to Gemini, retrying on 429 ResourceExhausted exceptions.
- Args:
session_messages (list[dict]): A list of message dictionaries, [{“role”: str, “content”: str}, …].
max_retries (int): How many times to retry before giving up.
default_delay (int): Fallback sleep (seconds) when the error carries no retry_delay.
- Returns:
str: The text content of the model’s reply.
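A usage sketch (the key is a placeholder; the retry parameters shown are the documented defaults):

```python
from llamea.llm import Gemini_LLM

llm = Gemini_LLM(api_key="YOUR_GOOGLE_API_KEY", model="gemini-2.0-flash")
reply = llm.query(
    [{"role": "user", "content": "Summarize simulated annealing in two sentences."}],
    max_retries=5,     # retries on 429 ResourceExhausted
    default_delay=10,  # fallback sleep (seconds) when the error has no retry_delay
)
```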
- class llamea.llm.LLM(api_key, model='', base_url='', code_pattern=None, name_pattern=None, desc_pattern=None, cs_pattern=None, logger=None)¶
Bases: ABC
- extract_algorithm_code(message)¶
Extracts algorithm code from a given message string using regular expressions.
- Args:
message (str): The message string containing the algorithm code.
- Returns:
str: Extracted algorithm code.
- Raises:
NoCodeException: If no code block is found within the message.
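A sketch of guarded extraction; the import path for NoCodeException is an assumption (adjust it to wherever your install exports the exception):

```python
from llamea.llm import Dummy_LLM
from llamea.utils import NoCodeException  # assumed location

llm = Dummy_LLM()
message = llm.query([{"role": "user", "content": "Write a Python class RandomSearch."}])
try:
    code = llm.extract_algorithm_code(message)
except NoCodeException:
    code = None  # the reply contained no recognizable code block
```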
- extract_algorithm_description(message)¶
Extracts algorithm description from a given message string using regular expressions.
- Args:
message (str): The message string containing the algorithm description.
- Returns:
str: Extracted algorithm description, or an empty string if none is found.
- extract_configspace(message)¶
Extracts the configuration space definition, in JSON, from a given message string using regular expressions.
- Args:
message (str): The message string containing the algorithm code.
- Returns:
ConfigSpace: Extracted configuration space object.
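Continuing the sketch above, and assuming the reply embeds a JSON configuration space block matching the (configurable) cs_pattern:

```python
# `llm` and `message` as in the previous sketch
description = llm.extract_algorithm_description(message)  # "" if absent
config_space = llm.extract_configspace(message)           # ConfigSpace object
```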
- abstract query(session: list)¶
Sends a conversation history to the configured model and returns the response text.
- Args:
- session (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).
- Returns:
str: The text content of the LLM’s response.
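To add a new provider, subclass LLM and implement query. A hypothetical sketch (Echo_LLM is illustrative, not part of the library):

```python
from llamea.llm import LLM

class Echo_LLM(LLM):
    """Hypothetical backend: echoes the last user message."""

    def __init__(self, **kwargs):
        super().__init__(api_key="", **kwargs)

    def query(self, session: list) -> str:
        # A real backend would call its provider's chat API here.
        return session[-1]["content"]
```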
- sample_solution(session_messages: list, parent_ids: list | None = None, HPO: bool = False, base_code: str | None = None, diff_mode: bool = False)¶
Generate or mutate a solution using the language model.
- Args:
session_messages (list): Conversation history for the LLM.
parent_ids (list, optional): Identifier(s) of parent solutions.
HPO (bool): If True, attempt to extract a configuration space.
base_code (str, optional): Existing code to patch when diff_mode is True.
diff_mode (bool): When True, interpret the LLM response as a unified diff patch to apply to base_code rather than full source code.
- Returns:
tuple: A tuple containing the new algorithm code, its class name, its full descriptive name and an optional configuration space object.
- Raises:
NoCodeException: If the language model fails to return any code.
Exception: Captures and logs any other exceptions that occur during the interaction.
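A sketch, assuming the returned tuple unpacks in the documented order (code, class name, descriptive name, optional configuration space):

```python
from llamea.llm import Ollama_LLM  # any concrete LLM subclass works

llm = Ollama_LLM(model="llama3.2")
code, class_name, full_name, config_space = llm.sample_solution(
    [{"role": "user", "content": "Design a novel metaheuristic as a Python class."}],
    HPO=True,  # also try to extract a configuration space
)
```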
- set_logger(logger)¶
Sets the logger object to log the conversation.
- Args:
logger (Logger): A logger object to log the conversation.
- class llamea.llm.Multi_LLM(llms: list[LLM])¶
Bases: LLM
A manager class that wraps a list of LLM instances.
- query(session_messages: list)¶
Sends a conversation history to the configured model and returns the response text.
- Args:
- session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).
- Returns:
str: The text content of the LLM’s response.
- sample_solution(*args, **kwargs)¶
Generate or mutate a solution using the language model.
- Args:
session_messages (list): Conversation history for the LLM.
parent_ids (list, optional): Identifier(s) of parent solutions.
HPO (bool): If True, attempt to extract a configuration space.
base_code (str, optional): Existing code to patch when diff_mode is True.
diff_mode (bool): When True, interpret the LLM response as a unified diff patch to apply to base_code rather than full source code.
- Returns:
tuple: A tuple containing the new algorithm code, its class name, its full descriptive name and an optional configuration space object.
- Raises:
NoCodeException: If the language model fails to return any code.
Exception: Captures and logs any other exceptions that occur during the interaction.
- set_logger(logger)¶
Sets the logger object to log the conversation.
- Args:
logger (Logger): A logger object to log the conversation.
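A construction sketch (the keys are placeholders; how Multi_LLM routes requests among the wrapped models is not specified here):

```python
from llamea.llm import Gemini_LLM, Multi_LLM, OpenAI_LLM

llm = Multi_LLM([
    OpenAI_LLM(api_key="YOUR_OPENAI_KEY", model="gpt-4-turbo"),
    Gemini_LLM(api_key="YOUR_GOOGLE_KEY", model="gemini-2.0-flash"),
])
reply = llm.query([{"role": "user", "content": "Hello"}])
```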
- class llamea.llm.Ollama_LLM(model='llama3.2', **kwargs)¶
Bases: LLM
A manager class for handling requests to locally hosted models served via Ollama.
- query(session_messages, max_retries: int = 5, default_delay: int = 10)¶
Sends a conversation history to the configured model and returns the response text.
- Args:
- session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).
- Returns:
str: The text content of the LLM’s response.
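A minimal sketch; this assumes an Ollama server is running locally with the model already pulled:

```python
from llamea.llm import Ollama_LLM

llm = Ollama_LLM(model="llama3.2")  # no API key needed for a local server
print(llm.query([{"role": "user", "content": "Name three mutation operators."}]))
```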
- class llamea.llm.OpenAI_LLM(api_key, model='gpt-4-turbo', temperature=0.8, **kwargs)¶
Bases: LLM
A manager class for handling requests to OpenAI’s GPT models.
- query(session_messages, max_retries: int = 5, default_delay: int = 10)¶
Sends a conversation history to the configured model and returns the response text.
- Args:
- session_messages (list of dict): A list of message dictionaries with keys “role” (e.g. “user”, “assistant”) and “content” (the message text).
- Returns:
str: The text content of the LLM’s response.
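A usage sketch with the documented defaults (the key is a placeholder):

```python
from llamea.llm import OpenAI_LLM

llm = OpenAI_LLM(api_key="YOUR_OPENAI_KEY", model="gpt-4-turbo", temperature=0.8)
reply = llm.query([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a tiny hill climber in Python."},
])
```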