llm#

class besser.agent.nlp.llm.llm.LLM(nlp_engine, name, parameters, global_context=None)[source]#

Bases: ABC

The LLM abstract class.

An LLM (Large Language Model) receives a text input and generates an output. Depending on the LLM, several tasks can be performed, such as question answering, translation, text classification, etc.

This class serves as a template to be implemented for specific LLM providers.

Parameters:
  • nlp_engine (NLPEngine) – the NLPEngine that handles the NLP processes of the agent the LLM belongs to

  • name (str) – the LLM name

  • parameters (dict) – the LLM parameters

  • global_context (str) – the global context to be provided to the LLM for each request
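To illustrate the template pattern, here is a minimal, self-contained sketch of a provider subclass. It uses a stand-in base class rather than the real besser.agent.nlp.llm.llm.LLM (which also takes an NLPEngine), and a toy echo model in place of a real provider:

```python
from abc import ABC, abstractmethod

class LLMBase(ABC):
    """Stand-in for the LLM abstract class (illustrative, not the real API)."""

    def __init__(self, name: str, parameters: dict, global_context: str = None):
        self.name = name
        self.parameters = parameters
        self._global_context = global_context

    @abstractmethod
    def initialize(self) -> None:
        """Called during agent training to set up the underlying model."""

    @abstractmethod
    def predict(self, message: str, parameters: dict = None,
                system_message: str = None) -> str:
        """Generate an output for the given input text."""

class EchoLLM(LLMBase):
    """A provider-specific subclass; here it simply echoes its input."""

    def initialize(self) -> None:
        self._ready = True

    def predict(self, message, parameters=None, system_message=None):
        # Fall back to the stored parameters when none are given;
        # a real provider would forward them to its API call.
        params = parameters if parameters is not None else self.parameters
        prefix = f"[{system_message}] " if system_message else ""
        return prefix + message

llm = EchoLLM(name="echo", parameters={"temperature": 0.0})
llm.initialize()
print(llm.predict("Hello"))  # → Hello
```

A real subclass would wrap a concrete provider's client in predict and load credentials or model weights in initialize.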

_nlp_engine#

the NLPEngine that handles the NLP processes of the agent the LLM belongs to

Type:

NLPEngine

name#

the LLM name

Type:

str

parameters#

the LLM parameters

Type:

dict

_global_context#

the global context to be provided to the LLM for each request

Type:

str

_user_context#

aggregation of user-specific contexts to be provided to the LLM for each request

Type:

dict

_user_contexts#

dictionary containing the different context elements making up each user’s specific context to be provided to the LLM for each request

Type:

dict

add_user_context(session, context, context_name)[source]#

Add user-specific context.

Parameters:
  • session (Session) – the ongoing session

  • context (str) – the user-specific context

  • context_name (str) – the key given to the specific user context
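The user-context bookkeeping described by the _user_context and _user_contexts attributes can be pictured with plain dictionaries. This is a sketch of the pattern only, not the library's implementation; the session id and key names are made up:

```python
# user_contexts maps session id -> {context_name: context}; the aggregated
# per-session context is rebuilt whenever an element is added or removed.
user_contexts: dict = {}
user_context: dict = {}

def add_user_context(session_id: str, context: str, context_name: str) -> None:
    user_contexts.setdefault(session_id, {})[context_name] = context
    user_context[session_id] = "\n".join(user_contexts[session_id].values())

def remove_user_context(session_id: str, context_name: str) -> None:
    user_contexts.get(session_id, {}).pop(context_name, None)
    user_context[session_id] = "\n".join(user_contexts.get(session_id, {}).values())

add_user_context("s1", "The user prefers metric units.", "units")
add_user_context("s1", "The user speaks French.", "language")
remove_user_context("s1", "language")
print(user_context["s1"])  # → The user prefers metric units.
```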

chat(session, parameters=None, system_message=None)[source]#

Make a prediction, i.e., generate an output.

This function can provide the chat history to the LLM when generating the output, simulating a conversation so that previous messages are remembered.

Parameters:
  • session (Session) – the user session

  • parameters (dict) – the LLM parameters. If none are provided, the default LLM parameters will be used

  • system_message (str) – system message to give high priority context to the LLM

Returns:

the LLM output

Return type:

str
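How a chat call might assemble its prompt from the history, contexts, and system message can be sketched as follows. The helper and the role/content message format are assumptions (a common chat-API convention), not the library's code:

```python
def build_chat_messages(history, user_input, system_message=None,
                        global_context=None, user_context=None):
    """Assemble the message list passed to the underlying chat model."""
    messages = []
    # High-priority context goes first, in a single system message
    parts = [p for p in (system_message, global_context, user_context) if p]
    if parts:
        messages.append({"role": "system", "content": " ".join(parts)})
    messages.extend(history)  # previous {"role": ..., "content": ...} turns
    messages.append({"role": "user", "content": user_input})
    return messages

history = [{"role": "user", "content": "Hi"},
           {"role": "assistant", "content": "Hello!"}]
msgs = build_chat_messages(history, "What did I just say?",
                           system_message="Be concise.")
print(msgs[0]["content"])  # → Be concise.
```

Including the history in every call is what lets a stateless model appear to remember previous messages.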

abstract initialize()[source]#

Initialize the LLM. This function is called during agent training.

intent_classification(intent_classifier, message, parameters=None)[source]#

Predict the intent of a given message.

Instead of returning only the intent with the highest likelihood, this function returns all predictions. Predictions include not only the intent scores but also other information extracted from the message.

Parameters:
  • intent_classifier (LLMIntentClassifier) – the intent classifier that is running the intent classification process

  • message (str) – the message whose intent is to be predicted

  • parameters (dict) – the LLM parameters. If none are provided, the default LLM parameters will be used

Returns:

the list of predictions made by the LLM.

Return type:

list[IntentClassifierPrediction]
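The idea of returning all scored predictions rather than a single winner can be sketched with a toy scorer standing in for the model call. The class name, keyword scoring, and return shape are illustrative only, not the LLMIntentClassifier implementation:

```python
from dataclasses import dataclass

@dataclass
class IntentPrediction:
    """Toy counterpart of IntentClassifierPrediction: one score per intent."""
    intent: str
    score: float

def classify(message: str, intents: dict) -> list:
    # A real implementation would prompt the LLM and parse its reply;
    # here we score by keyword overlap so the sketch is runnable.
    words = set(message.lower().split())
    preds = [IntentPrediction(name, len(words & set(kw)) / max(len(kw), 1))
             for name, kw in intents.items()]
    # All predictions are returned, not just the most likely intent
    return sorted(preds, key=lambda p: p.score, reverse=True)

intents = {"greeting": ["hello", "hi"], "farewell": ["bye", "goodbye"]}
preds = classify("hello there", intents)
print(preds[0].intent)  # → greeting
```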

abstract predict(message, parameters=None, session=None, system_message=None)[source]#

Make a prediction, i.e., generate an output.

Parameters:
  • message (Any) – the LLM input text

  • session (Session) – the ongoing session; can be None if no context needs to be applied

  • parameters (dict) – the LLM parameters to use in the prediction. If none is provided, the default LLM parameters will be used

  • system_message (str) – system message to give high priority context to the LLM

Returns:

the LLM output

Return type:

str

remove_user_context(session, context_name)[source]#

Remove user-specific context.

Parameters:
  • session (Session) – the ongoing session

  • context_name (str) – the key given to the specific user context

set_parameters(parameters)[source]#

Set the LLM parameters.

Parameters:

parameters (dict) – the new LLM parameters
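The parameter handling that recurs throughout this class (set_parameters replacing the stored defaults, and per-call parameters taking precedence for a single prediction) can be sketched as follows; the class and parameter names are made up for illustration:

```python
class ParameterHolder:
    """Sketch of the parameter handling, not the real LLM class."""

    def __init__(self, parameters: dict):
        self.parameters = parameters

    def set_parameters(self, parameters: dict) -> None:
        self.parameters = parameters  # replaces the stored defaults

    def effective_parameters(self, call_parameters: dict = None) -> dict:
        # If none are provided, the stored defaults are used
        return call_parameters if call_parameters is not None else self.parameters

llm = ParameterHolder({"temperature": 0.7})
llm.set_parameters({"temperature": 0.2, "max_tokens": 128})
print(llm.effective_parameters())  # → {'temperature': 0.2, 'max_tokens': 128}
```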