MLLLaMa2
Current version v0.0.1
Last updated
This Action is only available in certain Tenants.
This Action enriches events based on the output of the LLaMa2 Chat model. The model offers a flexible, advanced prompt system capable of understanding and generating responses across a broad range of use cases for text logs.
By integrating LLaMA 2, Onum not only enhances its data processing and analysis capabilities but also becomes more adaptable and capable of offering customized and advanced solutions for the specific challenges faced by users across different industries.
Find MlLlama in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
To configure this Action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
Token - this will be the API token of the model. See here for where to find these values.
Model - the name of the model to connect to. Choose between the three available Llama2 models: Llama2-7b-Chat, Llama2-13b-Chat, and Llama2-70b-Chat.
Prompt - the input field whose content is sent to the model.
Temperature - controls the randomness of the responses. A low temperature produces more specific and condensed answers, whereas a high temperature produces more diverse but less precise ones.
System Prompt - describe in detail the task you wish the AI assistant to carry out.
Max Length - the maximum number of characters for the result.
Output - specify a name for the output field.
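The parameters above can be pictured as the pieces of a chat-completion request. The following is a minimal sketch, not Onum's actual wire format: the endpoint shape, field names, and helper function are all illustrative assumptions.

```python
import json

def build_llama2_request(token, model, prompt, system_prompt,
                         temperature=0.5, max_length=512):
    """Assemble a hypothetical request from the Action's settings.

    All field names here are illustrative assumptions, not a
    documented Onum or Llama2 API schema.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature should be between 0.0 and 1.0")
    # The API token authenticates the call (the Token setting above).
    headers = {"Authorization": f"Bearer {token}"}
    # Each key mirrors one configuration field of the Action.
    payload = {
        "model": model,                  # one of the three Llama2 chat models
        "system_prompt": system_prompt,  # the task description for the assistant
        "prompt": prompt,                # the input text to evaluate
        "temperature": temperature,      # randomness of the response
        "max_length": max_length,        # cap on the result's length
    }
    return headers, json.dumps(payload)

headers, body = build_llama2_request(
    token="YOUR_API_TOKEN",
    model="Llama2-13b-Chat",
    prompt="Summarise this log line: connection refused on port 443",
    system_prompt="You are a log-analysis assistant. Reply in one sentence.",
    temperature=0.2,
    max_length=256,
)
```

A lower temperature, as used here, keeps the enrichment output terse and repeatable, which is usually what you want when the result is written into a log field.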
Click Save to complete.