llm_replicate_api#

class besser.agent.nlp.llm.llm_replicate_api.LLMReplicate(agent, name, parameters, num_previous_messages=1, global_context=None)[source]#

Bases: LLM

An LLM wrapper for Replicate-hosted LLMs, accessed through the Replicate API.

Parameters:
  • agent (Agent) – the agent the LLM belongs to

  • name (str) – the LLM name

  • parameters (dict) – the LLM parameters

  • num_previous_messages (int) – for the chat functionality, the number of previous conversation messages to include in the prompt context (must be > 0)

  • global_context (str) – the global context to be provided to the LLM for each request
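For illustration, a parameters dict for a Replicate-hosted model might look like the following. The keys shown (temperature, max_new_tokens, top_p) are typical Replicate model inputs, not values mandated by this class; the exact keys depend on the chosen model:

```python
# Hypothetical example: typical generation parameters for a Replicate model.
# Which keys are accepted depends on the model, not on LLMReplicate itself.
parameters = {
    "temperature": 0.7,     # sampling temperature
    "max_new_tokens": 512,  # cap on the number of generated tokens
    "top_p": 0.9,           # nucleus sampling threshold
}
```

Such a dict would be passed as the parameters argument when constructing the LLM wrapper.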

_nlp_engine#

the NLPEngine that handles the NLP processes of the agent the LLM belongs to

Type:

NLPEngine

name#

the LLM name

Type:

str

parameters#

the LLM parameters

Type:

dict

num_previous_messages#

for the chat functionality, the number of previous conversation messages to include in the prompt context (must be > 0)

Type:

int

_global_context#

the global context to be provided to the LLM for each request

Type:

str

_user_context#

user-specific context to be provided to the LLM for each request

Type:

dict
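How the global and user-specific contexts are combined is not specified here; the following is a minimal sketch of how a global context string and a per-user context dict could be merged into a single prompt prefix. The function name and merge order are assumptions, not the class's actual implementation:

```python
def build_context(global_context, user_context, user_id):
    """Concatenate the global context with the user's specific context, if any.

    Hypothetical helper: the real class keeps these as private attributes
    and may combine them differently.
    """
    parts = []
    if global_context:
        parts.append(global_context)
    if user_id in user_context:
        parts.append(user_context[user_id])
    return "\n".join(parts)

prefix = build_context("You are a helpful agent.", {"alice": "Answer in French."}, "alice")
```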

initialize()[source]#

Initialize the LLM. This function is called during the agent training.

intent_classification(intent_classifier, message, parameters=None)[source]#

Predict the intent of a given message.

Instead of returning only the intent with the highest likelihood, return all predictions. Predictions include not only the intent scores but other information extracted from the message.

Parameters:
  • intent_classifier (LLMIntentClassifier) – the intent classifier that is running the intent classification process

  • message (str) – the message whose intent is to be predicted

  • parameters (dict) – the LLM parameters. If none are provided, the default LLM parameters will be used

Returns:

the list of predictions made by the LLM.

Return type:

list[IntentClassifierPrediction]
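The predictions themselves come from the LLM's textual response. As a purely illustrative sketch, here is one way a hypothetical JSON response could be parsed into (intent, score) pairs, best first; the response format is an assumption, not the library's actual contract:

```python
import json

def parse_intent_response(llm_output):
    """Parse a hypothetical JSON response into (intent, score) pairs, best first.

    Assumes the LLM was prompted to answer with a JSON object mapping
    intent names to likelihood scores, e.g. '{"greeting": 0.9, "goodbye": 0.1}'.
    """
    scores = json.loads(llm_output)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

predictions = parse_intent_response('{"greeting": 0.9, "goodbye": 0.1}')
```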

predict(message, parameters=None, session=None, system_message=None)[source]#

Make a prediction, i.e., generate an output.

Parameters:
  • message (Any) – the LLM input text

  • session (Session) – the ongoing session; can be None if no context needs to be applied

  • parameters (dict) – the LLM parameters to use in the prediction. If none is provided, the default LLM parameters will be used

  • system_message (str) – system message to give high priority context to the LLM

Returns:

the LLM output

Return type:

str
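The parameter fallback described above can be sketched as a simple resolution step (assumed behavior: per-call parameters take precedence over the defaults stored at construction time; the helper name is hypothetical):

```python
def resolve_parameters(default_parameters, parameters=None):
    """Use the per-call parameters if given, otherwise fall back to the defaults.

    Hypothetical helper illustrating the documented fallback, not the
    class's actual implementation.
    """
    if parameters is None:
        return dict(default_parameters)  # copy, so callers cannot mutate the defaults
    return parameters

merged = resolve_parameters({"temperature": 0.7}, None)
```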

set_model(name)[source]#

Set the LLM model name.

Parameters:

name (str) – the new LLM name

set_num_previous_messages(num_previous_messages)[source]#

Set the number of previous messages to use in the chat functionality.

Parameters:

num_previous_messages (int) – the new number of previous messages
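The effect of this setting can be sketched as slicing the conversation history before building the prompt. Both helpers below are hypothetical illustrations of the documented behavior, including the "must be > 0" constraint:

```python
def validate_num_previous_messages(num_previous_messages):
    """Enforce the documented constraint: the value must be > 0."""
    if num_previous_messages <= 0:
        raise ValueError("num_previous_messages must be > 0")
    return num_previous_messages

def context_window(history, num_previous_messages):
    """Keep only the last N messages of the conversation for the prompt context."""
    return history[-num_previous_messages:]

window = context_window(["hi", "hello", "how are you?"], 2)
```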