In order to use Onum, there are certain system requirements.
The installation process creates the Distributor and all Workers for each data source in the cluster.
Onum supports the following browsers:
Google Chrome
Once you have acquired an Onum account, there are a few steps required for Onum Installation:
Onum’s Operations team will prepare the infrastructure on Onum’s SaaS based on the estimated volumetrics.
The infrastructure requirements are shared with the Operations team of the client. Further steps cannot be conducted without the required infrastructure.
Access to the infrastructure is granted to Onum’s team with the right permissions for conducting the installation.
Onum runs a validation script to check that all requirements described in the Annex are met and that all required connectivity is open.
If the validation is successful, an installation slot is scheduled and agreed upon.
Installation is conducted by Onum engineers using Docker, and a post-installation validation script is run.
You can now access your Tenant, ingest data, invite users, and use all of the Onum capabilities.
Dependencies:
Docker
Packages:
gpg
curl
ipvsadm
ca-certificates
SIEM access
Access to sources
Hardware (per Virtual Machine):
Distribution: Linux (Debian or Red Hat)
Server Hardware: 16GB RAM and 8 CPU
Disk Storage: 500GB
In case of upcoming system maintenance, we kindly seek permission to access customer infrastructure. Our aim is to ensure seamless operations and address any potential issues promptly.
Get to grips with these key concepts to better understand how Onum works and use it to its full potential.
A unit of work performing a given operation on an event.
Application Programming Interface. A set of defined methods of communication among various components.
Various distributors and workers can be grouped and contained within a cluster. You can have as many clusters as required per Tenant.
Where the data is routed after being processed by Onum.
Where the data is generated before ingesting it into Onum, e.g. application server logs, firewall logs, S3 bucket, Kafka Topic, etc.
This service receives and processes the Listener data before sending it on to workers within a cluster.
An event represents semi-structured data such as a log entry. Events can be parsed so that structured data can be generated and eventually processed by the engine. Events are composed of fields (referred to as Field); a field produced by an Action is referred to as an outputField.
Used to sort events coming from Listeners into categories or sets that meet given filters to be used in a Pipeline.
A Listener retrieves events in a given IP address and a port, routing the data to the Pipelines so that it can be processed.
A lookup refers to searching for and retrieving information from a specific source or dataset, typically based on a key or reference.
Multitenancy is an architecture in which tenants share the same underlying infrastructure, including databases and application code, but their data and configurations are kept separate to ensure privacy and security.
A sequence of Actions connected through inputs/outputs to process a stream of data. Data comes from the Listener and eventually is routed to a Datasink.
A role is assigned to a user in order to control the access they have to certain or all Onum features. This way, we can personalise the experience for each user.
Tags can be assigned to Listeners, Pipelines or Data sinks in order to classify them or make them easier to find. This is particularly useful if you have a large number of resources and want to avoid lengthy searches for the ones you wish to use.
A Tenant is a domain that contains a set of data in your organization. You can use one or various tenants and grant access to as many as required.
This service runs the Pipelines, receiving data from its distributor and contained within a Cluster.
Get to grips with the important concepts & best practices of the Onum application.
These articles contain information on functionalities across the entire platform.
Observability & Orchestration in real time. Any format. Any source.
The exponential growth of data ingestion volumes can lead to reduced performance, slow response times, and increased costs. With this comes the need to implement optimization strategies & volume reduction control. We help you cut the noise of large data streams and reduce infrastructure by up to 80%.
Gain deep insights from any type of data, using any format, from any source.
All of this...
By collecting and observing that data at the edge, as close as possible to where it’s being generated, gain real-time observations and take decisive action to prevent network downtime, payment system failures, malware infections, and more.
Unlike most tools that provide data observation and orchestration, Onum is not a data analytics space, which is already served well by security information and event management (SIEM) vendors and other analytics tools. Instead, Onum sits as close as possible to where the data is generated, and well in front of your analytics platforms, to collect and observe data across every aspect of your hybrid network.
Welcome to Onum! This guide will help you start working with Onum, a powerful tool designed to enhance your data analysis and processing capabilities.
A Tenant is a domain that contains a set of data in your organization. You can use one or various Tenants and grant access to as many as required.
You can access the rest of the areas in Onum using the left panel.
Onum receives any data through Listeners.
These are logical entities created within a Distributor, acting as the gateway to the Onum system. Configuring a Listener involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings depending on the type of Listener specialized in the data it will receive.
Onum outputs data via Data sinks. Use them to define where and how to forward the results of your streamlined data.
Use Pipelines to start transforming your data and build a data flow. Pipelines are made of the following components:
Easily identify data types using the color legend
Since Onum is able to process any data type, you may be wondering how to identify which is which. See the color legend below:
When opening Onum, the Home area is the default view. Here you can see an overview of all the activity in your Tenant.
Use this view to analyze the flow of data and the change from stage to stage of the process. Here you can locate the most important contributions to your workflow at a glance.
All data shown is analyzed compared to the previously selected time range. Use the time range selector at the top of this area to specify the periods to examine.
For example, if the time range is 1 hour ago (the default period), the differences are calculated using the hour immediately before the current selection:
Range selected: 10:00-11:00
Comparison: 09:00-10:00
The Home view shows various infographics that provide insights into your data flow. Some Listeners or Data Sinks may be excluded from these metrics if they are duplicates or reused.
In those cases, you can hover over the icon to check the total metrics including all the Data sinks.
Each column of the Sankey diagram provides information and metrics on the key steps of your flow.
You can see how the data flows between:
Hover over a part of the diagram to see specific savings.
You can narrow down your analysis even further by selecting a specific node and selecting Show metrics.
This option is not available for all columns.
Click a node and select View details to open a panel with in-depth details of the selected piece.
From here, you can go on to edit the selected element.
This option is not available for all columns.
You can choose which columns to view or hide using the eye icon next to its name.
You can add a new Listener, Label, Pipeline or Data sink using the plus button next to its name.
You can also create all of the aforementioned elements using the Create new button at the top-right:
Once you get your Onum credentials, you only have to go to and enter them to access your Tenant.
Learn more about working with Tenants .
When you access the Onum app, you'll see the Home view, where you can see an overview of the activity in your Tenant.
Access the Listeners area to start working with them. Learn how to create your first Listener .
Access the Data sinks area to start working with them. Learn how to create your first Data sink .
Learn more about Pipelines .
Do you want to check the essential steps in Onum through specific Pipelines? Explore the most common use cases .
To learn more about time ranges, go to
The Net Saved/Increased and Estimation graphs will show an info tooltip if some Data sinks are excluded from these metrics. You may decide this during the Data sink creation.
each Listener in your Tenant.
the Distributor/Worker group receives the Listener data and forwards it to the Pipelines.
the operations and criteria used to filter out the data to be sent on to Pipelines.
the Pipelines used to obtain desired data and results.
the end destination for data having passed through Listener › Cluster › Label › Pipeline.
Discover Pipelines to manage and customize your data
Add the final piece of the puzzle for simpler data
Learn about how to set up and use Listeners
A sequence of characters employed primarily for textual data representation.
Used to represent whole numbers without any fractional or decimal component. Integers can be positive, negative, or zero.
Sequence of characters or encoded information that identifies the precise time at which an event occurred. Format: 2024-05-17T14:30:00Z
Used to represent real numbers with fractional parts, allowing for the representation of a wide range of values, including decimals. Format: 1.23456
Fundamental data type in computer programming that represents one of two possible values: true or false.
Characters that separate individual fields or columns of data. The delimiter ensures that each piece of data within a row is correctly identified and separated from the others.
In a JSON, fields are represented by keys within objects, and the corresponding values can be of any JSON data type. This flexibility allows a JSON to represent structured data in a concise and readable manner, making it suitable for various applications, especially in web development and API communication.
A simple and widely used file format for storing tabular data, such as a spreadsheet or database. In a CSV file, each line of the file represents a single row of data, and fields within each row are separated by a delimiter, usually a comma.
A key-value pair is a data structure commonly used in various contexts, including dictionaries, hash tables, and associative arrays. It consists of two components: a key and its corresponding value.
A literal data type, often referred to simply as a literal, represents a fixed value written directly into the source code of a program.
Everything starts with a good Listener
Essentially, Onum receives any data through Listeners. These are logical entities created within a Distributor, acting as the gateway to the Onum system. Due to this, configuring a Listener involves defining an IP address, a listening port, and a transport layer protocol, along with additional settings depending on the type of Listener specialized in the data it will receive.
Click the Listeners tab on the left menu for a general overview of the Listeners configured in your Tenant and the events generated.
The graph at the top plots the volume ingested by your listeners. The line graph represents the events in, and the bar graph represents bytes in. Learn more about this graph in this article.
At the bottom, you have a list of all the Listeners in your Tenant. You can switch between the Cards view, which shows each Listener in a card, and the Table view, which displays Listeners listed in a table. Learn more about the cards and table views in this article.
There are various ways to narrow down what you see in this view:
Add filters to narrow down the Listeners you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:
Name: Select a Condition (Contains, Equals, or Matches) and a Value to filter Listeners by their names.
Type: Choose the Listener type(s) you want to see in the list.
Version: Filter Listeners by their version.
Created by: Selecting this option opens a User drop-down where you can filter by creator.
Updated by: Selecting this option opens a User drop-down where you can filter by the last user to update a Listener.
The filters applied will appear as tags at the top of the view.
Note that you can only add one filter of each type.
If you wish to see data for a specific time period, this is the place to click. Go to this article to dive into the specifics of how the time range works.
You can choose to view only those Listeners that have been assigned the desired tags. You can create these tags in the Listener settings or from the cards view. Press the Enter key to confirm the tag, then Save.
To filter by tags, click the + Tags button, select the required tag(s) and click Save.
Depending on your permissions, you can create a new Listener from this view. To do it, simply click the New listener button at the top right corner.
This will open the Listener configuration.
Configuring your Listener involves various steps. You can open the configuration pane by creating a new Listener or by clicking a Listener in the Listener tab or the Pipeline view and selecting Edit Listener in the pane that opens.
Alternatively, click the ellipses in the card or table view and select Edit.
The first step is to define the Listener Type. Select the desired type in this window and select Configuration.
The configuration is different for each Listener type. Check the different Listener types and how to configure them in this section.
If your Listener is deployed in the Cloud, you will see an extra step for the network properties.
Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize events sent on and processed in your Pipelines.
Learn more about labels in this article.
This article outlines the more complex calculations that go on behind the graphs you see.
In the Listeners, Pipelines, and Data sinks views, you will see detailed metrics on your events and bytes in/out, represented in a graph at the top of these areas.
The line graph represents the events in/out, and the bar graph represents bytes in/out. Hover over a point on the chart to show a tooltip containing the events and bytes in for the selected time, as well as the percentage increase or decrease compared to the previous period of the same length.
The values on the left-hand side represent the events in/out for the selected period.
AVG EPS
The average events per second ingested or sent by all listeners/Data sinks in your Tenant.
MAX EPS
The maximum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.
MIN EPS
The minimum number of events per second ingested or sent by all Listeners/Data sinks in your Tenant.
The values on the right-hand side represent the bytes in/out for the selected period.
AVG Bytes
The average kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.
MAX Bytes
The maximum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.
MIN Bytes
The minimum kilobytes per second ingested or sent by all Listeners/Data sinks in your Tenant.
Viewing and modifying elements in the table.
In the Listeners, Pipelines, and Data sinks areas, you can view all the resources in your Tenant as cards or in a table.
In both views, you can:
Click the magnifying glass icon to look for specific elements in the list. You can search by name, status, or tag.
Display all the elements individually in a list or grouped by Status or Type. These grouping options vary depending on the area you are in.
In the Table view, you can click the cog icon to begin customizing the table settings. You can reorder the columns in the table, hide or display the required ones or pin them.
Changes will be automatically applied. Click the Reset button to recover the original configuration.
Use the buttons at the top right part of the table to expand or collapse each row in the table. This will change the level of detail of each element.
Click the ellipsis button on each row to edit the element, copy its ID, or remove it.
In this view, each element is displayed as a card that shows details about it.
Click the ellipsis button on each card to edit the element, copy its ID, or remove it.
Click the Add tag button and add the required tags to an element. For each tag you enter in the box, hit the Enter key. Click Save to add the tags.
Current version v0.0.1
See the changelog of this Listener type .
Onum supports integration with Google Pub/Sub. Select Google Sub from the list of Listener types and click Configuration to start.
Now you need to specify how and where to collect the data, and establish a connection with your Google account.
Enter the basic information for the new Listener.
Name* - Enter a name for the new Listener.
Description - Optionally, enter a description for the Listener.
Tags - Add tags to easily identify your Listener. Hit the Enter key after you define each tag.
Now add the configuration to establish the connection.
Project ID* - This is a unique string found in the Manage all projects area of the projects list.
Subscription Name* - Find your subscription in the Google Cloud console, Pub/Sub Subscriptions page, Metrics tab.
Credentials File* - The Google Pub/Sub connector uses OAuth 2.0 credentials for authentication and authorization. Create a secret containing these credentials or select one already created.
Enabled* - Decide whether or not to activate the bulk message option.
Message Format - Choose the required message format.
Delimiter Char Codes - Enter the characters you want to use as delimiters.
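Independently of Onum, you can sanity-check the Project ID, Subscription Name, and credentials file with Google's official client library before creating the Listener. This is a minimal sketch only; the project, subscription, and file names below are placeholders.

```python
# Sketch: verify the Pub/Sub subscription and credentials outside Onum
# (pip install google-cloud-pubsub). All names below are placeholders.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient.from_service_account_file(
    "credentials.json"  # the same credentials you will store as a secret
)
subscription_path = subscriber.subscription_path(
    "my-project-id",          # Project ID
    "my-subscription-name",   # Subscription Name
)

# Pull a few messages (without acknowledging them) to confirm connectivity.
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 5},
    timeout=10,
)
for received in response.received_messages:
    print(received.message.data)
```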
Click Create labels to move on to the next step and define the required Labels if needed.
Designed for the Edge, created in the Cloud
Easy, flexible deployment in any environment, keeping components as close as possible to where the data is produced, delivers unparalleled speed and efficiency, enabling you to cut the infrastructure dedicated to orchestration by up to 80%.
The Onum infrastructure consists of:
Distributor: this is the service that hosts the Listeners and forwards the data they receive to the Workers.
Worker: this is the service that runs the Pipelines, receiving data from its Distributor and contained within a Cluster.
Cluster: a container grouping Distributors and Workers. You can have as many clusters as required per Tenant.
Listeners are hosted within Distributors and are placed as close as possible to where data is generated. The Distributor pulls tasks from the data queue passing through the pipeline and distributes them to the next available Worker in a Cluster. As soon as a Worker completes a task, it becomes available again, and the Distributor in turn will assign it the next task from the queue.
The installation process creates the Distributor and all Workers for each data source in the cluster.
The Onum Platform supports any deployment type ― including on-premises, the Onum public cloud, or your own private cloud.
In a typical SaaS-based deployment, most processing activities are conducted in the Cloud.
Client-side components can be deployed on a Linux machine or on a Kubernetes cluster for easy, flexible deployment in any environment. Onum supports all major cloud environments, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
Learn more about Deployment requirements here.
Onum supports all major standards such as Netflow, Syslog, and Kafka to orchestrate data streams to any desired destination, including popular data analytics tools such as Splunk and Devo, as well as storage environments such as S3.
Throughout the entire Onum platform, you can set a period to either narrow down or extend the data shown. You can either select a predefined period or apply a custom time range.
The related graph and resources will be automatically updated to display data from the chosen period. To remove a selected period, simply click the bin icon that appears next to the period to go back to the default time range (1 hour ago).
The intervals will be calculated according to the Timezone of your browser. Keep an eye out for future implementations, where you can manually select a timezone.
As well as predefined time intervals, you can also define a custom time range. To do it, simply select the required starting and ending dates in the calendar.
The interesting thing about Onum is that you can directly see how much volume you have saved compared to past ingestions, telling you what is going well and what requires further streamlining.
The comparison is direct and equivalent, meaning all data shown is compared against the equivalent time range immediately before the selected one.
For example, if the time range is 1 hour, the differences are calculated using the hour immediately before the current selection:
Range selected: 10:00-11:00
Comparison: 09:00-10:00
Again, let's say you now wish to view data over the last 7 days. The percentages will be calculated by comparing the selected week with the week immediately before it.
Onum is compatible with any data source, regardless of technology and architecture. A Listener Type is not necessarily limited to one integration and can be used to connect to various integrations.
Although there are only a limited number of types available for use, the integration possibilities are endless. Alternatively, you can contact us to request a Listener type.
Click a Listener to see how to configure it.
Current version v0.1.0
See the changelog of this Listener type .
Onum supports integration with Cisco System NetFlow. Select Flow from the list of Listener types and click Configuration to start.
Now you need to specify how and where to collect the data, and how to establish a connection with Cisco NetFlow.
Enter the basic information for the new Listener.
Name* - Enter a name for the new Listener.
Description - Optionally, enter a description for the Listener.
Tags - Add tags to easily identify your Listener. Hit the Enter key after you define each tag.
Now add the configuration to establish the connection.
Transport protocol* - Currently, Onum supports the UDP protocol.
Port* - Enter the required IP port number.
Protocols to process* - Select the required protocol(s) from the list.
Fields to include* - Select all the fields you wish to include in the output data.
Access control type* - Choose between None, Whitelist, or Blacklist.
IPs - Enter the IPs you wish to apply the access control to. Click Add element to add as many as required.
Click Create labels to move on to the next step and define the required Labels if needed.
Current version v0.1.0
See the changelog of this Listener type .
Onum supports integration with Transmission Control Protocol. Select TCP from the list of types.
Now you need to specify how and where to collect the data, and how to establish a connection with TCP.
Enter the basic information for the new Listener.
Name* - Enter a name for the new Listener.
Description - Optionally, enter a description for the Listener.
Tags - Add tags to easily identify your Listener. Hit the Enter key after you define each tag.
Port* - Enter the IP port number.
Trailer Character* - Choose between LF, CR+LF, or NULL.
TLS configuration
Certificate* - This is the predefined TLS certificate.
Private Key* - The private key of the corresponding certificate.
CA chain - The path containing the CA certificates.
Client Authentication Method* - Choose between No, Request, Require, Verify, and Require & Verify.
Minimum TLS version* - Select a version from the menu.
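Once the Listener is running, you can send it a quick test event from any machine that can reach it. A minimal sketch, assuming a placeholder host and port and an LF trailer; wrap the socket with TLS if you configured a certificate above.

```python
# Sketch: send one test event to a TCP Listener. Host, port, and payload
# are placeholders; match TRAILER to the Trailer Character configured above.
import socket, ssl

HOST, PORT = "listener.example.com", 4000
TRAILER = b"\n"  # LF; use b"\r\n" for CR+LF or b"\x00" for NULL

sock = socket.create_connection((HOST, PORT), timeout=5)
# If the Listener requires TLS, wrap the socket first:
# sock = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
sock.sendall(b"test event from the TCP client" + TRAILER)
sock.close()
```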
The default tab that opens when in the Pipeline area is the Listeners tab, which shows all Listeners in your Tenant, as well as their labels.
Use the search bar to find a specific Listener or Label.
You can edit a label from the list by clicking the ellipses next to its name and selecting Edit.
If the Listener you wish to use in the Pipeline does not already exist, you can create it directly from this view using the Create Listener button in the bottom right of the tab. This will open the Listener Configuration window.
The Pipeline canvas provides infinite possibilities to use your data.
This pane shows the general properties of your Pipeline. Click the ellipses next to its name to Copy ID.
Depending on your permissions, you can view or modify:
Name: New Pipelines are created with a default name, and it is recommended to change it. You can modify the name at any time by clicking the pencil icon next to the Pipeline's name.
Tags: Click the tag icon to open the menu.
Clusters: Here you can see how many clusters your Pipeline is running in, as well as update them.
Versions: View and run multiple versions of the Pipeline.
Stop/Start Pipeline: Stop and start the Pipeline in some or all of the clusters it is running in.
Publish
When you modify your Pipeline, you will be creating a new version. When your modifications are complete, you can Publish this new version using this button in the top right.
If the Pipeline is running, the Metrics bar provides a visual, graphical overview of the data being processed in your Pipeline.
Events In: View the total events in per second for the selected period, compared to the previous range (in %).
Bytes In: The total bytes in per second for the selected time range, compared to the previous (in %).
Events Out: View the total events out per second for the selected period, compared to the previous range (in %).
Bytes Out: The total bytes out per second for the selected time range, compared to the previous (in %).
Latency: The time (in milliseconds) it takes for data to travel from one point to another, compared to the previous (in %).
You can set a time range to view the metrics for a specific period of time. This will be used to calculate the percentages, compared to the previous period of the same length.
Use the Hide metrics/Show metrics button to hide/show the metrics pane.
Simply drag and drop an element from the left-hand side onto the canvas to add it to your Pipeline.
The canvas is where you will build your Pipeline. Drag and drop an element from the left pane to add it to your Pipeline.
If you have enough permissions to modify this Pipeline, click the node in the canvas and select the Remove icon.
Zoom in/out, Center, undo, and redo changes using the buttons on the right.
Use the window in the bottom-right to move around the Canvas.
Connect the separate nodes of the canvas to form a Pipeline from start to finish.
Simply click the port you wish to link from and drag to the port you wish to link to. When you let go, you will see a link form between the two.
To Unlink, click anywhere on the link and select unlink in the menu.
Notice the ports of each element in the canvas. Ports are used as connectors to other nodes of the Pipeline, linking either incoming or outgoing data.
Listener: As a Listener is used to send information on, there are no in ports, and one out port.
Action: Actions generally have one in port, which injects them with data. When information is output, it is sent via the default port. If there are problems sending on the data, it will not be lost, but rather output via the error port.
Data sink: A Data sink is the end stop for your data, so there is only one in port, which receives your processed data.
Click one to read more about how to configure them:
Use Onum's labels to cut out the noise with filters and search criteria based on specific metadata. This way, you can categorize the events that Listeners receive before being processed in your Pipelines.
As different log formats are being ingested in real-time, the same Listener may ingest different technologies. Labels are useful for categorizing events based on specific criteria.
When creating or editing a Listener, use Labels to categorize and assign filters to your data.
For most Listeners, you will see two main event categories on this screen:
All Data - Events that follow the structure defined by the specified protocol, for example, Syslog events containing the standard fields (or most of them).
Unparsed - These are events that do not follow the structure defined in the selected protocol.
You can define filters and rules for each of these main categories.
Once you've defined your labels to filter specific events, you can use them in your Pipelines.
Instead of using the whole set of events that come into your Listeners, you can use your defined labels to use only specific sets of data filtered by specific rules.
When you create a new Listener, you'll be taken to the Labels screen after configuring your Listener data.
Click the + button under the set of data you want to filter (All Data or Unparsed). You'll see your first label. Click the pencil icon and give it a name that describes the data it will filter out.
In this example, we want to filter only events whose version is 2.x, so we named our label accordingly:
Below, see the Add filter button. This is where you add the criteria to categorize the content under that label. Choose the field you want to filter by.
In this example, we're choosing Version.
Now, define the filter criteria:
Condition - Choose between:
Contains - Checks when the indicated value appears anywhere in the log.
Equals - Filters for exact matches of the value in the log.
Matches - Filters for exact matches of the value in the log, allowing for regular expressions.
Value - Enter the value to filter by.
In this example, we are setting the Condition to Contains and the Value to 2.
Click Save and see the header appear for your first label.
From here, you have various options:
To create a new subset of data, select the + sign that extends directly from the All data or Unparsed bars. Be aware that if you select the + sign extending from the header bar, you will create a subheader.
You can create a branch from your primary header by clicking the plus button that extends from the main header. There is no limit to the number you can add.
Notice that the subheader shows a filter icon with a number next to it to indicate the string of filters applied to it already.
To duplicate a label, simply select the duplicate button in its row.
To delete a label, simply select the delete button in its row.
If you attempt to delete a Label that is being used in a Pipeline, you will be asked to confirm where to remove it from.
Once you have completed your chain, click Save.
Any data that has not been assigned a label will be automatically categorized as unlabeled. This allows you to see the data that is not being processed by any Pipeline, but has not been lost.
This label will appear in the list of Labels for use in your Pipeline so that you can process the data in its unfiltered form.
Your Listener is now ready to use and will appear in the list.
Current version v0.1.1
See the changelog of this Listener type .
Onum receives data from Syslog, supporting TCP and UDP protocols. Select Syslog from the list of types.
Now you need to specify how and where to collect the data, and how to establish a connection with Syslog.
Enter the basic information for the new Listener.
Name* - Enter a name for the new Listener.
Description - Optionally, enter a description for the Listener.
Tags - Add tags to easily identify your Listener. Hit the Enter key after you define each tag.
Port* - Enter the IP port number.
Protocol* - Onum supports TCP and UDP protocols.
Framing Method* - Choose the required framing method between: Auto-Detect, Non-Transparent (newline), Non-Transparent (zero), or Octet Counting (message length).
Certificate* - This is the predefined TLS certificate.
Private key for this listener* - The private key of the corresponding certificate.
CA chain - The path containing the CA certificates.
Client authentication method* - Choose between No, Request, Require, Verify, and Require & Verify.
Minimum TLS version* - Select the required version from the menu.
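To check the Listener end to end, you can emit a test message yourself. A minimal sketch using UDP and an RFC 5424-style line; the host, port, and message content are placeholders, and you would use a TCP socket with the configured framing instead if the Listener is set to TCP.

```python
# Sketch: send a test RFC 5424-style Syslog message over UDP.
# Host and port are placeholders.
import socket
from datetime import datetime, timezone

HOST, PORT = "listener.example.com", 514
timestamp = datetime.now(timezone.utc).isoformat()
message = f"<134>1 {timestamp} myhost myapp 1234 - - hello from a syslog test"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(message.encode(), (HOST, PORT))
```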
A Pipeline is Onum's way of streamlining your data
Use Pipelines to transform your data and build a data flow linking Listeners to Data sinks.
Select the Pipelines tab at the left menu to visualize all your Pipelines in one place. Here's what you will find and the actions you can perform in this area:
There are various ways to narrow down what you see in this view, both the Pipeline list and the informative graphs. To do it, use the options at the top of this view:
Add filters to narrow down the Pipelines you see in the list. Click the + Add filter button and select the required filter type(s). You can filter by:
Name: Select a Condition (Contains, Equals, or Matches) and a Value to filter Pipelines by their names.
Status: Choose the status(es) you want to filter by: Draft, Running, and/or Stopped. You'll only see Pipelines with the selected status(es).
Created by: Filter for the creator of the Pipeline in the window that appears.
Updated by: Filter for users to see the Pipeline they last updated.
The filters applied will appear as tags at the top of the view.
Note that you can only add one filter of each type.
You can choose to view only those Pipelines that have been assigned the desired tags. You can create these tags in the Pipeline settings or from the cards view. Press the Enter key to confirm the tag, then Save.
To filter by tags, click the + Tags button and select the required tag(s).
Just below the filters, you will see 3 metrics informing you about various components in your Pipelines.
View the events per second (EPS) ingested by all Listeners in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous lapse.
View the events per second (EPS) sent by all Data Sinks in your Pipelines for the selected time range, as well as the difference in percentage compared to the previous.
See the overall data volume processed by all Pipelines for the selected time range, and the difference in percentage with the previous.
Select between In and Out to see the volume received or sent by your Pipelines for the selected time range. The line graph represents the Events and the bar graph represents Bytes.
Hover over a point on the chart to show a tooltip containing the Events and Bytes in/out for the selected time, as well as the percentage increase or decrease compared to the previous period of the same length.
You can also analyze a different time range directly on the graph. To do it, click a starting point on the graph and drag the frame that appears to the required ending point. The time range above will be also updated.
At the bottom, you have a list of all the Pipelines in your tenant.
Use the Group by drop-down menu at the right area to select a criterion to organize your Pipelines in different groups (Status or None). You can also use the search icon to look for specific Pipelines by name.
Use the buttons at the left of this area to display the Pipelines as Cards or listed in a Table:
In this view, Pipelines are displayed as cards that display useful information. Click a card to open the Pipeline detail view, or double-click it to access it.
This is the information you can check on each card:
The percentage at the top left corner indicates the amount of data that goes out of the Pipeline compared to the total incoming events, so you can check how data is optimized at a glance. Hover over it to see the in/out data in bytes and the estimation over the next 24 hours.
You can also see the status of the Pipeline (Running, Draft, or Stopped).
Next to the status, you can check the Pipeline current version.
Click the Add tag button to define tags for the Pipeline. To assign a new tag, simply type the name you wish to assign, make sure to press Enter, and then select the Save button. If the Pipeline has tags defined already, you'll see the number of tags next to the tag icon.
Click the ellipses in the right-hand corner of the card to reveal the options to Edit, Copy ID, or Remove it.
In this view, Pipelines are displayed in a table, where each row represents a Pipeline. Click a row to open the Pipeline detail view, or double-click it to access it.
Click the cog icon at the top left corner to rearrange the column order, hide columns, or pin them. You can click Reset to recover the default configuration.
The details pane is split into three tabs showing the Pipeline at different statuses:
Running: select the drop-down to see which clusters the Pipeline is currently running in.
Draft
Stopped
In each one, you can see the various versions of this Pipeline at this stage. Open one to see a preview of the pipeline, creation data, data metrics, and the Listeners and Data sinks it contains.
Once you have located the Pipeline to work with, click Edit Pipeline to open it.
If you wish to use a Pipeline just like the one you are currently working on, click the ellipses in the Card or Table view and select Duplicate, or from the Configuration pane.
Depending on your permissions, you can create a new Pipeline from this view. There are several ways to create a new Pipeline:
This will open the new Pipeline, ready to be built.
Keep reading to learn how to build a Pipeline from this view.
Current version v0.1.1
See the changelog of this Listener type .
Onum supports integration with HTTP. Select HTTP from the list of Listener types and click Configuration to start.
Now you need to specify how and where to collect the data, and how to establish an HTTP connection.
Enter the basic information for the new Listener.
Name* - Enter a name for the new Listener.
Description - Optionally, enter a description for the Listener.
Tags - Add tags to easily identify your Listener. Hit the Enter key after you define each tag.
Port* - Enter the IP port number.
TLS Configuration
Certificate* - This is the predefined TLS certificate.
Private key for this listener* - The private key of the corresponding certificate.
CA chain - The path containing the CA certificates.
Client authentication method* - Choose between No, Request, Require, Verify, and Require & Verify.
Minimum TLS version* - Select the required version from the menu.
HTTP Method* - Choose GET, POST, or PUT method.
Request path* - Enter the RegEx used to request access.
Strategy* - Choose what and how to extract.
Extraction info - Any additional information on the strategy to use.
Propagate headers strategy - Choose between None or Allow.
Header keys - Enter the required header keys in this field. Click Add element for each one.
Exported headers format - Choose the required format for your headers.
Maximum message length - The maximum number of characters allowed per message.
Response code - Specify the response code to show when successful.
Response Content-Type - Choose the text or application type.
Response Text - The text that will show in case of success.
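After saving, you can exercise the Listener with a simple request. A minimal sketch assuming a POST method, a placeholder host, port, and request path, and HTTPS if TLS is configured.

```python
# Sketch: post one test event to an HTTP Listener. URL and payload are
# placeholders; use the method and request path configured above.
import requests

url = "https://listener.example.com:8080/ingest"
event = {"message": "test event", "severity": "info"}

response = requests.post(url, json=event, timeout=5)
print(response.status_code, response.text)  # should match the configured response code/text
```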
Perform operations on your events
The Actions tab shows all available actions to be assigned and used in your Pipeline.
Use the search bar at the top to find a specific action.
Hover over an action in the list to see a tooltip, as well as the option to View details.
To add an action to a Pipeline, drag it onto the canvas.
Onum supports action versioning, so be aware that the configuration may show either the latest version (if you are adding a new Action) or the current version (if you are editing an existing one).
We are constantly updating and improving Actions, therefore you may come across old or even discontinued actions.
If there is an updated version of the Action available, it will show update available in its Definition, above the node when added to a Pipeline, and Details pane.
If you have added an Action to a Pipeline that is now discontinued, it will show as deactivated in the Canvas.
Current version v0.0.1
Redis is a powerful in-memory data structure store that can be used as a database, cache, and message broker. It provides high performance, scalability, and versatility, making it a popular choice for real-time applications and data processing.
This action stores data in its cache and makes it available on command.
Find Redis in the Actions tab and drag it onto the canvas to use it.
To use this action, you need access to a Redis service, either on-premises or in the cloud.
Redis Configuration
Redis endpoint* - enter the endpoint used to establish the connection to the Redis server.
Commands - the command to run on the server. Choose between SET and GET.
Redis Key* - the key used to store or retrieve the value.
Event in field - the event field whose content is sent to Redis.
Output - enter a name for the output event.
Rate Limit
Maximum requests - set a limit on the number of requests per second to launch on the server.
Click Save to complete.
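For reference, this is roughly what the SET and GET commands do against the same endpoint, sketched with the redis-py client; the host, port, key, and value below are placeholders, not Onum settings.

```python
# Sketch: the SET/GET semantics behind this action, using redis-py
# (pip install redis). Endpoint, key, and value are placeholders.
import redis

client = redis.Redis(host="redis.example.com", port=6379)

client.set("onum:last_event", "event payload")  # SET stores the value under a key
value = client.get("onum:last_event")           # GET reads it back on demand
print(value)                                    # b'event payload'
```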
Current version v0.0.1
Evaluate any AI model built with the Cog library and deployed as an endpoint anywhere: SageMaker, HuggingFace, Replicate…
The HttpCog Action seamlessly integrates with models deployed on any platform, including the client's own infrastructure. It enables not only the utilization of state-of-the-art machine learning models but also the execution of code specific to a client's unique use case.
Find Http Enrichment in the Actions tab and drag it onto the canvas to use it.
To use this Action, the model must be available through an endpoint based on the Cog API.
To open the configuration, click the Action in the canvas and select Configuration.
Endpoint* - enter the endpoint used to establish connection to the model.
Token - if the model has a token, use it here.
Version - if the model has a version, enter it here.
Input* - the message to input to the model.
Output* - enter a name for the output event.
Click Save to complete.
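To see what the Endpoint, Token, and Input fields map to, here is a hedged sketch of calling a model served through the standard Cog HTTP API directly; the URL, token, and input schema are placeholders and depend on how the model was deployed.

```python
# Sketch: direct call to a Cog-style prediction endpoint. URL, token, and
# input payload are placeholders for whatever your deployed model expects.
import requests

endpoint = "https://models.example.com/predictions"
headers = {"Authorization": "Bearer <token>"}  # only if the model requires a token

payload = {"input": {"text": "classify this log line"}}
response = requests.post(endpoint, json=payload, headers=headers, timeout=30)
print(response.json().get("output"))
```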
These Actions involve improving raw data using advanced procedures.
Click an Action to learn more about how to use it.
Current version v0.0.1
See the changelog of this Action type .
This action adds new fields to your events using a given operation. You can select one or more operations to execute, and their resulting values will be set in user-defined event fields.
Find Field Generator in the Actions tab (under the Advanced group) and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
You can perform the following operations to define new fields in your events:
Click Save to complete.
Current version v0.0.1
For each element in the input list, the For Each action generates an output event that adds the element's value, and the index it occupies in the list, to the input event.
For example, an input list containing [a,b,c]
will generate three outputs, with these fields added to the event:
elementValueOutField: a; elementIndexOutField: 0
elementValueOutField: b; elementIndexOutField: 1
elementValueOutField: c; elementIndexOutField: 2
This operation only accepts List input types.
Find For Each in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
Input* - the field that contains the input list. Valid types are String, Integer, Float, Boolean, and Timestamp.
Output Field* - the field where each iterated element will be stored. This will be the same type as the input list field.
Index Field* - where to store the index of the returned element within the input list.
Click Save to complete.
127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4,192.168.0.1.
In the input field, select the list type field.
Assign the value and the index fields a name.
The action will create a separate event for each element of the string, each event containing two fields (value and index).
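Conceptually, the expansion works as in the following sketch (illustrative Python, not Onum code); the field names mirror the value and index output fields described above.

```python
# Sketch (illustrative only): how For Each expands a list field into one
# output event per element, keeping the original event fields.
event = {"ips": ["127.0.0.1", "127.0.0.2", "127.0.0.3", "127.0.0.4", "192.168.0.1"]}

output_events = []
for index, value in enumerate(event["ips"]):
    new_event = dict(event)                  # each output keeps the input event
    new_event["elementValueOutField"] = value
    new_event["elementIndexOutField"] = index
    output_events.append(new_event)

print(len(output_events))                    # 5 events, one per element
```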
Current version v0.1.0
The Lookup action allows you to retrieve information from your uploaded lookup. To learn more about how to upload data, go to .
Find Lookup in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
Select the table you wish to retrieve data from. The tables that show here in the list will be those you previously uploaded in the Enrichment view.
The Key column you selected during the upload will automatically appear. Select the field to search for this key column.
In the Outputs, choose the field that will be injected from the drop-down and assign it a name. Add as many as required.
Click Save to complete.
Current version v0.0.2
This Action is only available in certain Tenants.
This Action accumulates events before sending them on.
Find Accumulator in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
Field list
Fields - choose the Listener event fields you would like to accumulate. You can select as many fields as required using the Add element button.
Accumulate type* - choose how to accumulate the events, by period or number of events.
Accumulate period - if you select by period, define the number of seconds to accumulate for.
Number of events - if you select by number of events, define how many.
Output* - enter a name for the output event.
Click Save to complete.
Current version v0.0.1
This action allows you to configure and execute HTTP requests with custom settings for methods, headers, authentication, TLS, and more.
Find Http Request in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
HTTP Method* - The HTTP method to use for the request (e.g., GET, POST, PUT, DELETE, PATCH).
Server URL* - The target URL for the HTTP request.
Payload field - JSON field to include as the request body.
Output field - Field to store the HTTP response.
HTTP headers - Key-value pairs for HTTP headers.
Timeout - Timeout for the request (in seconds, minimum 1).
Disable redirects - Select true to disable HTTP redirects or false to follow them.
Content type - Set the request content type (default is application/json).
Authentication Configuration: if you require authentication.
Authentication type* - None, Basic, Bearer, or API Key.
Authentication credentials - Credentials required based on the authentication type.
Basic Authentication - Username and password for Basic authentication.
Bearer token - Token for Bearer authentication.
API Key - API key configuration with API Key Name and API Key Value.
Bulk Configuration
Events per batch - Number of events per batch.
Store as - Store response with options (Delimited, Without Delimiter, JSON Array).
Delimiter - Custom delimiter (default is newline).
Rate Limit
Maximum requests* - set a limit on the number of requests per second to launch on the server.
TLS Configuration
Certificate - This is the predefined TLS certificate.
Private Key - The private key of the corresponding certificate.
CA Chain - The path containing the CA certificates.
Minimum TLS version - choose the TLS version to use.
Proxy Configuration: if your organization uses proxy servers, establish the connection here.
Proxy Scheme
Username
Password
URL
Retry Configuration
Max attempts - Maximum retry attempts.
Wait between - Wait time (in milliseconds) between attempts.
Backoff interval - Backoff interval for retries.
Click Save to complete.
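As a point of reference only (not Onum's implementation), the settings above map roughly onto an ordinary HTTP client call. A sketch with the Python requests library; the URL, headers, key, and retry values are placeholders.

```python
# Sketch: roughly the request described by the settings above, expressed
# with the requests library. All concrete values are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5)          # Max attempts / Backoff interval
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.post(                               # HTTP Method
    "https://api.example.com/ingest",                  # Server URL
    json={"message": "payload field contents"},        # Payload field
    headers={                                          # HTTP headers / Authentication
        "X-Api-Key": "<api-key-value>",
        "Content-Type": "application/json",            # Content type
    },
    timeout=10,                                        # Timeout (seconds)
    allow_redirects=False,                             # Disable redirects: true
)
print(response.status_code, response.text)             # stored in the Output field
```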
Click Create labels to move on to the next step and define the required Labels if needed.
This will open the Listener and for you to modify.
Go to to learn step by step.
Go to to learn more.
You can carry out all these actions in if you wish to modify more than one Pipeline at a time.
Go to to learn more about the specifics of how this works.
For Listeners, you can drag the specific Label down to the required level. Once in the Pipeline, you can see which Listener the label belongs to by hovering over it, or in the Metrics area of the configuration pane.
Click it in the canvas to open its configuration pane.
Create links between your nodes to create a flow of data between them. Learn more about linking below.
Click Create labels to move on to the next step and define the required Labels if needed.
The graph at the top plots the data volume going through your Pipelines. The line graph represents the events in/out, and the bar graph represents bytes in/out. Learn more about this graph .
At the bottom, you have a list of all the Listeners in your Tenant. You can switch between the Cards view, which shows each Listener in a card, and the Table view, which displays Listeners listed in a table. Learn more about the cards and table views .
If you wish to see data for a specific time period, this is the place to click. Go to to dive into the specifics of how the time range works.
Note that these metrics are affected by the selected time range.
Click a Pipeline to open its settings in the right-hand pane. Here you can see and Edit the Pipeline. Click the ellipses in the top right to Copy ID or Remove it.
From the view
From the page
Give your Pipeline a name and add optional Tags to identify it. You can also assign a in the top-right.
See to learn step by step.
Click Create labels to move on to the next step and define the required Labels if needed.
See the complete version history of each Action .
You'll soon be able to see all the Actions with updates available in the
Go to to learn step by step.
For more help and in-depth detail, see
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
In order to configure this action, you must first link it to a Listener or another Action. Go to to learn how this works.
You receive a field containing a string of five IPs
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
In order to configure this action, you must first link it to a Listener. Go to to learn how to link.
Now
Select true to create a new field with the current date and time according to the selected time unit*
Give a name to the new field with the current date and time.
Today
Select true to create a new field with the current date (today) according to the selected time unit*
Give a name to the new field with the current day.
Yesterday
Select true to create a new field with the date from the previous day (yesterday) according to the selected time unit*
Give a name to the new field with yesterday's date.
Random
Select true to create a new field with a random value.
Give a name to the new field with the random value.
Custom field
Select true to create a new field with a custom value. Enter the value and the data type in the corresponding fields.
Give a name to the new field with the custom value.
A comprehensive list of the operations available in the Field Transformation Action.
Converts a size in bytes to a human-readable string.
Input data - 134367
Output data - 131.22 KiB
Converts values from one unit of measurement to another.
Input data - 5000
Input units - Square foot (sq ft)
Output units - Square metre (sq m)
Output data - 464.515215
Converts a unit of data to another format.
Input data - 2
Input units - Megabits (Mb)
Output units - Kilobytes (KB)
Output data - 250
Converts values from one unit of length to another.
Input data - 100
Input units - Metres (m)
Output units - Yards (yd)
Output data - 109.3613298
Converts values from one unit of mass to another.
Input data - 100
Input units - Kilogram (kg)
Output units - Pound (lb)
Output data - 220.4622622
Converts values from one unit of speed to another.
Input data - 200
Input units - Kilometres per hour (km/h)
Output units - Miles per hour (mph)
Output data - 124.2841804
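The conversions above follow standard conversion factors; as a quick check (not Onum internals), the arithmetic behind some of these examples is:

```python
# Sketch: arithmetic behind some of the conversion examples, using
# standard conversion factors.
print(5000 * 0.09290304)         # square feet -> square metres ≈ 464.5152
print(2 * 1_000_000 / 8 / 1000)  # megabits    -> kilobytes      = 250.0
print(100 / 0.9144)              # metres      -> yards          ≈ 109.3613
print(100 / 0.45359237)          # kilograms   -> pounds         ≈ 220.4623
```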
Counts the amount of times a given string occurs in your input data.
Input data - This is a sample test
Search - test
Search Type - simple
Output data - 1
Calculates an 8-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - C7
Calculates a 16-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 57D4
Calculates a 24-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 3B6473
Calculates a 32-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 7ED8D648
Obfuscates all digits of a credit card number except for the last 4 digits.
Input data - 1111222233334444
Output data - ************4444
Converts a CSV file to JSON format.
Input data -
First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid
Cell delimiter - ,
Format - Array of dictionaries
Output data -
[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]
Defangs an IP address to prevent it from being recognized.
Input data - 192.168.1.1
Output data - 192[.]168[.]1[.]1
Defangs a URL to prevent it from being recognized as a clickable link.
Input data - https://example.com
Escape Dots - true
Escape HTTP - true
Escape ://* - false
Process Type - Everything
Output data - hxxps://example[.]com
Divides a list of numbers provided in the input string, separated by a specific delimiter.
Input data - 26:2:4
Delimiter - Colon
Output data - 3.25
Analyzes a URI into its individual components.
Input data -
https://user:pass@example.com:8080/path/to/resource?key=value#fragment
Output data -
Scheme: https
Host: example.com:8080
Path: /path/to/resource
Arguments: map[key:[value]]
User: user
Password: pass
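The same breakdown can be checked with Python's urllib, for reference:

```python
# Sketch: parsing the example URI with the standard library.
from urllib.parse import urlsplit, parse_qs

parts = urlsplit("https://user:pass@example.com:8080/path/to/resource?key=value#fragment")
print(parts.scheme)                     # https
print(parts.hostname, parts.port)       # example.com 8080
print(parts.path)                       # /path/to/resource
print(parse_qs(parts.query))            # {'key': ['value']}
print(parts.username, parts.password)   # user pass
```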
Escapes specific characters in a string
Input data - She said, "Hello, world!"
Escape Level - Special chars
Escape Quote - "
JSON compatible -false
Output data - She said, \"Hello, world!\"
Extracts all the IPv4 and IPv6 addresses from a block of text or data.
Input data -
User logged in from 192.168.1.1. Another login detected from 10.0.0.5.
Output data -
192.168.1.1
10.0.0.5
Makes defanged IP addresses valid.
Input data - 192[.]168[.]1[.]1
Output data - 192.168.1.1
Makes defanged URLs valid.
Input data - hxxps://example[.]com
Escape Dots - true
Escape HTTP - true
Escape ://* - false
Process Type - Everything
Output data - https://example.com
Splits the input string using a specified delimiter and filters.
Input data -
Error: File not found
Warning: Low memory
Info: Operation completed
Error: Disk full
Delimiter - Line feed
Regex - ^Error
Invert - false
Output data -
Error: File not found
Error: Disk full
Finds values in a string and replaces them with others.
Input data - The server encountered an error while processing your request.
Substring to find - error
Replacement - issue
Output data - The server encountered an issue while processing your request.
Decodes data from a Base64 string back into its raw format.
Input data - SGVsbG8sIE9udW0h
Strict Mode - true
Output data - Hello, Onum!
Converts hexadecimal-encoded data back into its original form.
Input data - 48 65 6c 6c 6f 20 57 6f 72 6c 64
Delimiter - Space
Output data - Hello World
Converts a timestamp into a human-readable date string.
Input data - 978346800
Time Unit - Seconds
Timezone Output - UTC
Format Output - Mon 2 January 2006 15:04:05 UTC
Output data - Mon 1 January 2001 11:00:00 UTC
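You can verify the example above with a few lines of Python:

```python
# Sketch: checking the timestamp conversion above.
from datetime import datetime, timezone

dt = datetime.fromtimestamp(978346800, tz=timezone.utc)
print(dt)  # 2001-01-01 11:00:00+00:00, i.e. Mon 1 January 2001 11:00:00 UTC
```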
Converts an IP address (either IPv4 or IPv6) to its hexadecimal representation.
Input data - 192.168.1.1
Output data - c0a80101
Reduces the size of a JSON file by removing unnecessary characters from it.
Input data -
{ "name": "John Doe", "age": 30, "isActive": true, "address": { "city": "New York", "zip": "10001" } }
Output data -
{"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}
Converts a JSON file to CSV format.
Input data -
[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]
Cell delimiter - ,
Row delimiter - \n
Output data -
First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid
Decodes the payload in a JSON Web Token string.
Input data - eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
Output data - {"sub":"1234567890","name":"John Doe","iat":1516239022}
Generates a Keccak cryptographic hash function from a given input.
Input data - Hello World !
Size - 256
Output data - 3ea2f1d0abf3fc66cf29eebb70cbd4e7fe762ef8a09bcc06c8edf641230afec0
Produces a MD2 hash string from a given input.
Input data - Hello World!
Output data - 315f7c67223f01fb7cab4b95100e872e
Produces a MD4 hash string from a given input.
Input data - Hello World!
Output data - b2a5cc34fc21a764ae2fad94d56fadf6
Produces a MD5 hash string from a given input.
Input data - Hello World!
Output data - ed076287532e86365e841e92bfc50d8c
Calculates the median of given values.
Input data - 10, 5, 20, 15, 25
Delimiter - ,
Output data - 15
Calculates the result of the multiplication of given values.
Input data - 2, 3, 5
Delimiter - ,
Output data - 30
Pads each input line with a specified number of characters.
Input data - Apple Banana Cherry
Pad position - Start
Pad line - 7
Character - >>>
Output data -
>>> >>>Apple >>> >>>Banana >>> >>>Cherry
Parses a string and returns an integer of the specified base.
Input data - 100
Base - 2
Output data - 4
Takes UNIX file permission strings and converts them to code format or vice versa.
Input data - -rwxr-xr--
Output data -
Textual representation: -rwxr-xr--
Octal representation: 0754
+---------+-------+-------+-------+
|         | User  | Group | Other |
+---------+-------+-------+-------+
| Read    |   X   |   X   |   X   |
+---------+-------+-------+-------+
| Write   |   X   |       |       |
+---------+-------+-------+-------+
| Execute |   X   |   X   |       |
+---------+-------+-------+-------+
Extracts or manipulates parts of your input strings that match a specific regular expression pattern.
Input data - 100
Base - 2
Output data -4
Removes whitespace and other characters from a string.
Input data -
Hello World!
This is a test.
Spaces - true
Carriage returns - false
Line feeds - true
Tabs - false
Form feeds - false
Full stops - true
Output data -
HelloWorld!Thisisatest
Reverses the order of the characters in a string.
Input data - Hello World!
Reverse mode - Character
Output data - !dlroW olleH
Returns the SHA0 hash of a given string.
Input data - Hello World!
Output data - 1261178ff9a732aacfece0d8b8bd113255a57960
Returns the SHA1 hash of a given string.
Input data - Hello World!
Output data - 2ef7bde608ce5404e97d5f042f95f89f1c232871
Returns the SHA2 hash of a given string.
Input data - Hello World!
Size - 512
Output data - f4d54d32e3523357ff023903eaba2721e8c8cfc7702663782cb3e52faf2c56c002cc3096b5f2b6df870be665d0040e9963590eb02d03d166e52999cd1c430db1
Returns the SHA3 hash of a given string.
Input data - Hello World!
Size - 512
Output data - 32400b5e89822de254e8d5d94252c52bdcb27a3562ca593e980364d9848b8041b98eabe16c1a6797484941d2376864a1b0e248b0f7af8b1555a778c336a5bf48
Returns the SHAKE hash of a given string.
Input data - Hello World!
Capacity - 256
Size - 512
Output data - 35259d2903a1303d3115c669e2008510fc79acb50679b727ccb567cc3f786de3553052e47d4dd715cc705ce212a92908f4df9e653fa3653e8a7855724d366137
Shuffles the characters of a given string.
Input data - Hello World!
Delimiter - Nothing (separate chars)
Output data - rH Wl!odolle
Returns the SM3 cryptographic hash function of a given string.
Input data - Hello World!
Length - 64
Output data - 0ac0a9fef0d212aa
Sorts a list of strings separated by a specified delimiter according to the provided sorting order.
Input data - banana,apple,orange,grape
Delimiter - Comma
Order - Alphabetical (case sensitive)
Reverse - false
Output data - apple,banana,grape,orange
Extracts characters from a given string.
Input data - +34678987678
Start Index - 3
Length - 9
Output data - 678987678
Calculates the result of the subtraction of given values.
Input data - 10, 5, 2
Delimiter - Comma
Output data - 3
Calculates the total of given values.
Input data - 10, 5, 2
Delimiter - Comma
Output data - 17
Swaps the case of a given string.
Input data - Hello World!
Output data - hELLO wORLD!
Encodes raw data into an ASCII Base64 string.
Input data - Hello, Onum!
Output data - SGVsbG8sIE9udW0h
Converts a string to its corresponding hexadecimal code.
Input data - Hello World
Delimiter - Space
Output data - 48 65 6c 6c 6f 20 57 6f 72 6c 64
Converts the characters of a string to lower case.
Input data - Hello World!
Output data - hello world!
Transforms a string representing a date into a timestamp.
Input data - 2006-01-02
Format - DateOnly
Output data - 2006-01-02T00:00:00Z
Parses a datetime string in UTC and returns the corresponding UNIX timestamp.
Input data - Mon 1 January 2001 11:00:00
Unit - Seconds
Output data - 978346800
Converts the characters of a string to upper case.
Input data - Hello World!
Output data - HELLO WORLD!
Converts a date and time from one format to another.
Input data - 2024-10-24T14:11:13Z
Input Format - 2006-01-02T15:04:05Z
Input Timezone - UTC+1
Output Format - 02/01/2006 15:04:05
Output Timezone - UTC+8
Output data - 24/10/2024 21:11:13
Removes escape characters from a given string.
Input data - She said, \"Hello, world!\"
Output data - She said, "Hello, world!"
Decodes a URL and returns its corresponding URL-decoded string.
Input data - https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21
Output data - https://example.com/search?q=Hello World!
Encodes a URL-decoded string back to its original URL format.
Input data - https://example.com/search?q=Hello World!
Output data - https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21
Current version v0.0.1
This Action is only available in certain Tenants.
This action enriches based on the evaluation of the LLaMa2 Chat model. This model offers a flexible, advanced prompt system capable of understanding and generating responses across a broad spectrum of use cases for text logs.
By integrating LLaMA 2, Onum not only enhances its data processing and analysis capabilities but also becomes more adaptable and capable of offering customized and advanced solutions for the specific challenges faced by users across different industries.
Find MlLlama in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
Token - this will be the API token of the model. See here for where to find these values.
Model - the name of the model to connect to. It’s possible to select between the three available Llama2 models: Llama2-7b-Chat, Llama2-13b-Chat and Llama2-70b-Chat.
Prompt - this will be the input field to call the model.
Temperature - this is the randomness of the responses. If the temperature is low, the data sampled will be more specific and condensed, whereas setting a high temperature will acquire more diverse but less precise answers.
System Prompt - describe in detail the task you wish the AI assistant to carry out.
Max Length - the maximum number of characters for the result.
Output - specify a name for the output field.
Click Save to complete.
These Actions involve enriching Onum data with external or additional data.
Click an Action to learn more about how to use it.
Aggregation Actions involve summarizing or grouping data points based on certain criteria.
Click an Action to learn more about how to use it.
These actions involve processing or transforming data in some way. This can be to remove unwanted data, to reformat, to extract, etc.
Click an Action to learn more about how to use it.
Current version v1.0.0
Summarize data by performing aggregations using keys and temporal keys (minute, hour, or day).
Find Aggregator in the Actions tab and drag it onto the canvas to use it.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
The Fields to Group option lists the fields coming from the linked Listener or Action for you to choose from. Choose one or more fields to group by.
Having defined which fields to group by, choose or create a Grouping Time. You can write the amount and unit (seconds, minutes, hours, days), or select a common amount.
Click Save.
Now you can add aggregation(s) to your grouping using the following operations:
average: calculates the average of the values of each grouping.
count: calculates the total occurrences for each grouping.
countnotnull: calculates the total occurrences for each grouping, excluding null values.
first: finds the first value found for each grouping. The first value will be the first in the workers' queue.
firstnotnull: finds the first not-null value found for each grouping. The first value will be the first in the workers' queue.
ifthenelse: the operation will only be executed if the given conditions are met.
last: finds the last value found for each grouping. The last value will be the last in the workers' queue.
lastnotnull: finds the last not-null value found for each grouping. The last value will be the last in the workers' queue.
max: finds the highest value found.
min: finds the lowest value found.
sum: calculates the total of the values for each grouping.
To add another aggregation, use the Add item option.
You can also use the arrow keys on your keyboard to navigate up and down the list.
You can also carry out an advanced configuration by Grouping By Conditionals.
Use the Add Condition option to add conditions to your Aggregation.
Click Save when complete.
In this example, we will use the Group By action to summarize a large amount of data, grouping by IP address every 5 minutes and aggregating the number of requests by type per IP address.
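As an illustration of the grouping logic only (not the Aggregator's implementation), here is a minimal Go sketch that buckets events by IP address and request type into 5-minute windows and counts them. The event fields and values are made up for the example.

```go
package main

import (
	"fmt"
	"time"
)

type event struct {
	ts      time.Time
	ip      string
	reqType string
}

func main() {
	// Hypothetical sample events (timestamps are epoch seconds).
	events := []event{
		{time.Unix(978307200, 0), "10.0.0.1", "GET"},
		{time.Unix(978307260, 0), "10.0.0.1", "GET"},
		{time.Unix(978307320, 0), "10.0.0.1", "POST"},
		{time.Unix(978307500, 0), "10.0.0.2", "GET"},
	}

	// Key = (5-minute bucket, IP, request type); value = number of requests.
	counts := map[string]int{}
	for _, e := range events {
		bucket := e.ts.Truncate(5 * time.Minute).UTC().Format("15:04")
		key := fmt.Sprintf("%s %s %s", bucket, e.ip, e.reqType)
		counts[key]++
	}

	for k, c := range counts {
		fmt.Printf("%s -> count=%d\n", k, c)
	}
}
```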
Current version v1.0.0
This action evaluates a list of conditions for an event. If an event meets a given condition, it will be sent through an output port specific to that condition. If the event does not meet any condition, it will be sent through the default output.
Set any number of conditions on your data for filtering and alerting.
Find Conditional in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
In the panel that appears, you can begin to add conditions, each with their own port.
This action supports multiple output ports. You can add string conditions (contains, does not contain, equals, does not equal) or integer conditions (greater than, less than, equal, not equal), combined using OR and AND.
There will be as many output ports as there are conditions, as well as the default and error ports.
The Field option lets you choose not only the field to filter for, but the specific Action to take it from.
Choose between the following conditions for the filter (or use the arrow keys on your keyboard to navigate up and down the list):
The options you see here will differ depending on the data type of the field you have selected.
Number
(<) Less than
(≤) Less than or equal to
(>) Greater than
(≥) Greater than or equal to
(=) Equal to
(!=) Not equal to
String
Contains
Doesn't contain
Equal
Not equal
Boolean
Equal
Not equal
Contains
Does not contain
Equal to
Not equal to
Enter the value to filter on (remember to press Enter if you're typing one) and you have your condition.
Now you can add AND/OR clauses to your condition, or add a new condition entirely using the Add Condition option.
You can define conditions in List mode, or change to Code mode and write the syntax yourself.
Click Save when complete.
Let's say you have data on error and threat detection methods in storage devices and you wish to detect threats and errors using the Cyclic Redundancy Check methods crc8, crc16 and crc24.
Current version 0.0.1
The Sampling action allows only M (Allowed Events) out of N (Total Events) events to go through it.
To add it to your Pipeline, drag and drop it onto the canvas.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
Allowed events* - the number of allowed events.
Total events* - the total number of events entering the action.
Click Save to complete.
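For intuition, the M-out-of-N behaviour can be pictured as a counter that lets the first M events of every window of N events through. The Go sketch below is only an illustration of that idea under that assumption; the action's actual selection strategy is not described on this page.

```go
package main

import "fmt"

// sampler lets through the first `allowed` events out of every `total` events.
// Illustrative sketch of the M-out-of-N idea only, not Onum code.
type sampler struct {
	allowed, total, seen int
}

func (s *sampler) pass() bool {
	ok := s.seen%s.total < s.allowed
	s.seen++
	return ok
}

func main() {
	s := &sampler{allowed: 2, total: 5} // 2 out of every 5 events go through
	for i := 1; i <= 10; i++ {
		fmt.Printf("event %2d -> passed=%v\n", i, s.pass())
	}
}
```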
Current version v0.2.2
The Field Transformation action acts as a container that enables users to perform a wide range of operations on data, including encoding and decoding various types of encryption, format conversion, file compression and decompression, data structure analysis, and much more. The results are stored in new event fields.
Find it in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener or other Action. Go to Building a Pipeline to learn how to link.
Choose a field from the linked Listener/Action to transform in your Action using the drop-down.
Add as many fields as required using the Add New Field button.
See a comprehensive list of all the available operations for this Action.
Please bear in mind that the options available in this window will depend on the field to transform.
Add as many Operations as required using Add Operation.
You can also use the arrow keys on your keyboard to navigate up and down the list.
If you have added more than one operation, you can reorder them by dragging and dropping them into position.
Before saving your action, you can test it to see the outcome.
Type a message in the Input field and see it transformed in the Output field after passing through the selected operation(s).
Give a name to the transformed field and click Save to complete.
Here is an example of a data set on the Bytes in/out from IP addresses.
We can use the field transformation operations to reduce the quantity of data sent.
We have a Syslog Listener, connected to a Parser.
Link the Parser to the Field Transformation action and open its configuration.
We will use the To IP Hex and CRC32 operations.
DESTINATION_IP_ADDRESS: 192.168.70.210
DestinationIPAddressHex: c0.a8.46.d2
DESTINATION_HOST: server.example.com
DestinationHostCRC32:
0876633F
Transform the Destination IP to hexadecimal to reduce the number of characters.
192.168.70.210
c0.a8.46.d2
Field>Parser: DESTINATION_IP_ADDRESS
Operation: To IP Hex
Output Field: DestinationIPAddressHex
Add a new field for Destination Host to CRC32
Encode the Destination Host with CRC32 to transform the machine name into 8 characters.
server.example.com
0876633F
Field>Parser: DESTINATION_HOST
Operation: Crc32
Output field: DestinationHostCrc32
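The same two transformations can be reproduced with standard library calls, which helps estimate the savings. This Go sketch is illustrative only; the exact output format of the Onum operations (for example dotted versus undotted hex, or the checksum casing) may differ.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"net"
)

func main() {
	// To IP Hex: 192.168.70.210 -> c0a846d2 (4 hex bytes instead of up to 15 characters).
	ip := net.ParseIP("192.168.70.210").To4()
	fmt.Printf("DestinationIPAddressHex: %x\n", []byte(ip))

	// CRC32 (IEEE polynomial): server.example.com -> an 8-character checksum.
	sum := crc32.ChecksumIEEE([]byte("server.example.com"))
	fmt.Printf("DestinationHostCRC32: %08X\n", sum)
}
```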
These Actions involve reformatting and modifying the event fields based on certain conditions.
Click an Action to learn more about how to use it.
These Actions involve reformatting data to fit given conditions or criteria.
Click an Action to learn more about how to use it.
This operation is used to calculate the median value of a set of numbers. The median is a statistical measure representing the middle value of a sorted dataset. It divides the data into two halves, with 50% of the data points below and 50% above the median.
These are the input/output expected data types for this operation:
- List of numbers separated by a specified delimiter.
- The result of the median.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to calculate the median of a series of numbers in your input strings. They are separated by commas (,). To do it:
In the Operation field, choose Median.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the median of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
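If you want to verify the result outside Onum, the calculation is straightforward. A minimal Go sketch of the same idea (split on the delimiter, sort, take the middle value), not the operation's implementation:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// median splits the input on the delimiter, sorts the numbers and returns the
// middle value (or the mean of the two middle values for an even count).
func median(input, delimiter string) float64 {
	parts := strings.Split(input, delimiter)
	nums := make([]float64, 0, len(parts))
	for _, p := range parts {
		if n, err := strconv.ParseFloat(strings.TrimSpace(p), 64); err == nil {
			nums = append(nums, n)
		}
	}
	sort.Float64s(nums)
	mid := len(nums) / 2
	if len(nums)%2 == 1 {
		return nums[mid]
	}
	return (nums[mid-1] + nums[mid]) / 2
}

func main() {
	fmt.Println(median("10, 5, 20, 15, 25", ",")) // 15
}
```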
This operation performs arithmetic subtraction between numbers. This operation is useful for calculations, data manipulation, and analyzing numerical differences.
These are the input/output expected data types for this operation:
- Input string containing numbers to subtract.
- The result of the subtraction.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the subtraction of a series of numbers in your input strings. They are separated by commas (,). To do it:
In the Operation field, choose Subtract Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the subtraction of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation compresses JSON data by removing unnecessary whitespace, line breaks, and formatting while retaining the full structure and functionality of the JSON. It is handy for reducing the size of JSON files or strings when storage or transfer efficiency is required.
These are the input/output expected data types for this operation:
- Strings representing the JSON data you want to optimize.
- Optimized versions of the JSON data in your input strings.
Suppose you want to minify the JSON data in your input strings. To do it:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Json Minify.
Give your Output field a name and click Save. Your JSON data will be optimized and formatted properly.
For example, the following JSON:
will be formatted like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
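The effect is the same as compacting JSON with a standard library. A hedged Go sketch using encoding/json (not Onum's implementation):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	pretty := []byte(`{
	  "name": "John Doe",
	  "age": 30,
	  "isActive": true,
	  "address": { "city": "New York", "zip": "10001" }
	}`)

	// json.Compact removes insignificant whitespace while keeping the structure intact.
	var buf bytes.Buffer
	if err := json.Compact(&buf, pretty); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
	// {"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}
}
```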
This operation allows you to multiply numbers in a dataset by a specified value. It processes numerical input and applies the multiplication operation to each number individually. This is useful for scaling data, performing simple arithmetic, or manipulating numerical datasets.
These are the input/output expected data types for this operation:
- Input string containing numbers to multiply.
- The result of the multiplication.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to multiply a series of numbers in your input strings. They are separated by commas (,). To do it:
In the Operation field, choose Multiply Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the multiplication of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation divides a list of numbers provided in the input string, using the specified delimiter to separate the numbers.
These are the input/output expected data types for this operation:
- List of numbers you want to divide, separated by a specified delimiter.
- Result of the division of the numbers in your input string.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to divide a series of numbers in your input strings. They are separated by colons (:). To do it:
In the Operation field, choose Divide Operation.
Set Delimiter to Colon.
Give your Output field a name and click Save. You'll get the division results. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation calculates the total sum of a series of numbers provided as input. It is a simple yet powerful tool for numerical data analysis, enabling quick summation of datasets or values.
These are the input/output expected data types for this operation:
- Input string containing numbers to sum.
- The result of the total sum.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the sum of a series of numbers in your input strings. They are separated by commas (,). To do it:
In the Operation field, choose Sum Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the sum of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts values between different units of digital data, such as bits, bytes, kilobytes, megabytes, and so on. It’s especially useful when you’re dealing with data storage or transfer rates, and you need to switch between binary (base 2) and decimal (base 10) units.
These are the input/output expected data types for this operation:
- Values whose unit of data you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from megabits into kilobytes:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert data units.
Set Input units to Megabits (Mb).
Set Output units to Kilobytes (KB).
Give your Output field a name and click Save. The data type of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
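As a sanity check for the conversion, here is a small Go sketch assuming decimal (base-10) units; Onum's operation may also offer binary (base-2) units, which would give different results:

```go
package main

import "fmt"

// megabitsToKilobytes assumes decimal units: 1 Mb = 1,000,000 bits, 1 KB = 1,000 bytes.
func megabitsToKilobytes(mb float64) float64 {
	bits := mb * 1_000_000
	bytes := bits / 8
	return bytes / 1_000
}

func main() {
	fmt.Println(megabitsToKilobytes(8)) // 8 Mb = 1000 KB
}
```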
This operation converts values between different units of length or distance.
These are the input/output expected data types for this operation:
- Values whose unit of length you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of length.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from meters into yards:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert distance.
Set Input units to Metres (m).
Set Output units to Yards (yd).
Give your Output field a name and click Save. The unit of length of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to encode or "escape" characters in a string so that they can be safely used in different contexts, such as URLs, JSON, HTML, or code. This operation is helpful when you need to format text with special characters in a way that won’t break syntax or cause unintended effects in various data formats.
These are the input/output expected data types for this operation:
- Strings with the characters you want to escape.
- Strings with the required escaped characters.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to escape the double quote characters (") in a series of input strings. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Escape String.
Set Escape Level to Special chars.
Set Escape Quote to " (double quote).
Set JSON compatible to false.
Give your Output field a name and click Save. Matching characters will be escaped. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
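A rough equivalent of this configuration (backslash-escape backslashes and double quotes) can be written with strings.NewReplacer in Go. This is only a sketch of the idea, not the full set of options the operation supports:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	in := `She said, "Hello, world!"`

	// Escape every backslash and double quote with a leading backslash.
	r := strings.NewReplacer(`\`, `\\`, `"`, `\"`)
	fmt.Println(r.Replace(in)) // She said, \"Hello, world!\"
}
```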
This operation is used to decode escape sequences in a string back to their original characters. Escaped strings are often used in programming, web development, or data transmission to represent special characters that cannot be directly included in text.
These are the input/output expected data types for this operation:
- String with escape characters.
- Resulting unescaped string.
Suppose you want to unescape characters in a series of input strings. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Unescape string.
Give your Output field a name and click Save. All the escape characters will be removed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts values between different units of mass.
These are the input/output expected data types for this operation:
- Values whose unit of mass you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of mass.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from kilograms into pounds:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert mass.
Set Input units to Kilogram (kg).
Set Output units to Pound (lb).
Give your Output field a name and click Save. The unit of mass of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts values between different units of speed.
These are the input/output expected data types for this operation:
- Values whose unit of speed you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of speed.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from kilometers per hour into miles per hour:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert speed.
Set Input units to Kilometres per hour (km/h).
Set Output units to Miles per hour (mph).
Give your Output field a name and click Save. The unit of speed of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts values from one unit of measurement to another, such as square feet, acres, square meters, and even smaller or less common units used in physics (like barns or nanobarns).
These are the input/output expected data types for this operation:
- Values whose unit of measurement you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of measurement.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from square feet into square meters:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert area.
Set Input units to Square foot (sq ft).
Set Output units to Square metre (sq m).
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to decode data from a Base64 string back into its raw format. Base64 is a binary-to-text encoding method commonly used to encode binary data (like images or files) into text that can be easily transmitted over text-based protocols such as email, JSON, or XML. It’s also used for data storage, ensuring the data remains ASCII-safe.
These are the input/output expected data types for this operation:
- The Base64 strings you want to decode.
- Decoded strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to decode a series of events in the Base64 encoding scheme:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Base64.
Set Strict Mode to true.
Give your Output field a name and click Save. The values in your input field will be decoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
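Outside Onum, the same decoding can be done with a standard Base64 decoder. A Go sketch, assuming Strict Mode corresponds to the standard padded alphabet that rejects malformed input:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Decode a standard, padded Base64 string back to its raw text.
	out, err := base64.StdEncoding.DecodeString("SGVsbG8sIE9udW0h")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // Hello, Onum!
}
```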
This operation is used to encode data into a Base64 string. Base64 is a binary-to-text encoding method commonly used to encode binary data (like images or files) into text that can be easily transmitted over text-based protocols such as email, JSON, or XML. It’s also used for data storage, ensuring the data remains ASCII-safe.
These are the input/output expected data types for this operation:
- The string you want to encode.
- Resulting Base64 string.
Suppose you want to encode a series of events into Base64:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Base64.
Give your Output field a name and click Save. The values in your input field will be encoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts a date and time into a Unix timestamp. A Unix timestamp is the number of seconds (or milliseconds) that have elapsed since January 1, 1970, 00:00:00 UTC (commonly referred to as the "Epoch").
These are the input/output expected data types for this operation:
- Strings representing the dates you want to transform.
- Integers representing the resulting Unix timestamps.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of dates into Unix timestamps:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Unix Timestamp.
Set Time Unit to Seconds.
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to convert a string to a hexadecimal code. Hexadecimal encoding is often used to represent binary data in a readable, ASCII-compatible format.
These are the input/output expected data types for this operation:
- Strings you want to encode.
- Resulting hexadecimal codes.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to encode a series of events into hexadecimal-encoded data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Hex.
Set Delimiter to Space.
Give your Output field a name and click Save. The values in your input field will be encoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts Unix timestamps into human-readable date and time formats. Unix timestamps represent the number of seconds (or milliseconds) that have elapsed since the Unix epoch, which began at 00:00:00 UTC on January 1, 1970.
These are the input/output expected data types for this operation:
- Integer values representing the Unix timestamps to be converted.
- A string representing the formatted time.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of timestamps into human-readable date strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Unix Timestamp.
Set Time Unit to Seconds.
Set Timezone Output to UTC.
Set Format Output to Mon 2 January 2006 15:04:05 UTC.
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
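The format strings in the example look like Go reference-time layouts, so the conversion can be sketched with the standard time package. Whether Onum uses exactly these layouts internally is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 978346800 seconds since the epoch, rendered in UTC with the layout from the example.
	t := time.Unix(978346800, 0).UTC()
	fmt.Println(t.Format("Mon 2 January 2006 15:04:05 UTC")) // Mon 1 January 2001 11:00:00 UTC
}
```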
This operation is used to convert hexadecimal-encoded data back into its original form, whether it’s plain text, binary data, or another format. Hexadecimal encoding is often used to represent binary data in a readable, ASCII-compatible format.
These are the input/output expected data types for this operation:
- The hexadecimal-encoded data you want to decode.
- Decoded string.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to decode a series of events including hexadecimal-encoded data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Hex.
Set Delimiter to Space.
Give your Output field a name and click Save. The values in your input field will be decoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts strings representing a date into a RFC 3339 timestamp.
These are the input/output expected data types for this operation:
- Strings representing the dates you want to transform in the format specified.
- Resulting RFC 3339 timestamps.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of strings into timestamps:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Timestamp.
Set Format to DateOnly.
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to break down and analyze a Uniform Resource Identifier (URI) into its individual components, making it easier to understand and work with the data in a URI.
These are the input/output expected data types for this operation:
- URLs you want to analyze.
- Breakdown of the input URLs.
Suppose you want to analyze a series of URLs in your input data:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Encode URI.
Give your Output field a name and click Save. The URLs in your input field will be analyzed.
For example, for the following URL:
you will get the following analysis:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
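The breakdown shown above matches what a standard URL parser produces. A Go sketch with net/url (illustrative, not the operation's implementation):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, err := url.Parse("https://user:pass@example.com:8080/path/to/resource?key=value#fragment")
	if err != nil {
		panic(err)
	}
	pass, _ := u.User.Password()
	fmt.Println("Scheme:", u.Scheme)        // https
	fmt.Println("Host:", u.Host)            // example.com:8080
	fmt.Println("Path:", u.Path)            // /path/to/resource
	fmt.Println("Arguments:", u.Query())    // map[key:[value]]
	fmt.Println("User:", u.User.Username()) // user
	fmt.Println("Password:", pass)          // pass
}
```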
This operation is used to interpret and analyze standard UNIX file permission strings (e.g., -rwxr-xr--) and provide a detailed breakdown of the permissions, including their binary and octal representations. You can also enter the code formats to get the UNIX file permission strings.
These are the input/output expected data types for this operation:
- UNIX-style file permission strings or codes you want to analyze.
- Details of the provided UNIX file permission strings/codes.
Suppose you want to analyze a series of UNIX file permission strings in your input data:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Parse UNIX file permissions.
Give your Output field a name and click Save. The values in your input field will be decoded.
For example, for the following UNIX file permission string:
you'll get the following breakdown:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
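The octal representation can be derived from the symbolic string by setting one bit per granted permission. A minimal Go sketch of that mapping (it only covers the nine permission bits and ignores special bits such as setuid or sticky):

```go
package main

import (
	"fmt"
	"strings"
)

// permToOctal turns a symbolic permission string such as -rwxr-xr-- into its
// octal form (0754).
func permToOctal(perm string) int {
	bits := strings.TrimPrefix(perm, "-") // drop the file-type character
	octal := 0
	for i, c := range bits[:9] {
		if c != '-' {
			octal |= 1 << (8 - i)
		}
	}
	return octal
}

func main() {
	fmt.Printf("%04o\n", permToOctal("-rwxr-xr--")) // 0754
}
```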
This operation converts JSON data into a CSV file.
These are the input/output expected data types for this operation:
- JSON data you want to transform into CSV. They must be strings formatted as JSON data.
- Resulting CSV files.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events in JSON format into CSV:
In the Operation field, choose JSON to CSV.
Set Cell delimiter to , (comma).
Set Row delimiter to \n (new line).
Give your Output field a name and click Save. The JSON-formatted strings in your input field will be transformed into CSV.
For example, the following JSON:
will be transformed into this CSV:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation converts a CSV file to JSON format.
These are the input/output expected data types for this operation:
- CSV-formatted strings you want to transform into JSON.
- Resulting JSON-formatted strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events in CSV format into JSON:
In the Operation field, choose CSV to JSON.
Set Cell delimiter to , (comma).
Set Format to Array of dictionaries.
Give your Output field a name and click Save. The CSV strings in your input field will be transformed into JSON.
For example, the following CSV:
will be transformed into this JSON:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
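A rough equivalent using Go's encoding/csv and encoding/json packages is shown below; note that Go marshals map keys alphabetically, so the key order may differ from the example output:

```go
package main

import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	raw := "First name,Last name,Age,City\nJohn,Wick,20,New-York\nTony,Stark,30,Madrid"

	records, err := csv.NewReader(strings.NewReader(raw)).ReadAll()
	if err != nil {
		panic(err)
	}

	// Build one dictionary per data row, keyed by the header cells.
	header := records[0]
	rows := make([]map[string]string, 0, len(records)-1)
	for _, rec := range records[1:] {
		row := map[string]string{}
		for i, cell := range rec {
			row[header[i]] = cell
		}
		rows = append(rows, row)
	}

	out, err := json.Marshal(rows) // array of dictionaries
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```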
This operation allows you to convert dates and times from one format to another. This is useful for standardizing timestamps, converting between systems with different date/time formats, or simply making a date more readable.
These are the input/output expected data types for this operation:
- Strings representing the dates you want to convert.
- Output formatted date strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of dates in the following format:
MM-DD-YYYY HH:mm:ss
into this one:
ddd, D MMM YYYY HH:mm:ss ZZ
In the Operation field, choose Translate Datetime Format.
Set Input Format to 01-02-2006 15:04:05
Set Input Timezone to UTC+1
Set Output Format to Mon, 2 Jan 2006 15:04:05 +0000
Set Output Timezone to UTC+1
Give your Output field a name and click Save. The format of the dates in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
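Using the quick-reference example for this operation (input interpreted in UTC+1, output rendered in UTC+8), the conversion can be sketched with Go's time package. Treating the format strings as Go layouts is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Interpret the input in UTC+1, then render it in UTC+8 with another layout.
	in := time.FixedZone("UTC+1", 1*3600)
	out := time.FixedZone("UTC+8", 8*3600)

	t, err := time.ParseInLocation("2006-01-02T15:04:05Z", "2024-10-24T14:11:13Z", in)
	if err != nil {
		panic(err)
	}
	fmt.Println(t.In(out).Format("02/01/2006 15:04:05")) // 24/10/2024 21:11:13
}
```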
This operation allows you to hash data using the MD2 (Message Digest 2) algorithm. MD2 is a cryptographic hash function primarily intended for use in systems based on 8-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD2 hash values.
Suppose you want to hash your input strings using the MD2 algorithm:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose MD2.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD2 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the MD4 (Message Digest 4) algorithm. MD4 is a cryptographic hash function primarily intended for use in systems based on 32-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD4 hash values.
Suppose you want to hash your input strings using the MD4 algorithm:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose MD4.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD4 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to decode JSON Web Tokens (JWTs). JWTs are commonly used for authentication and data exchange in web applications, and they consist of three parts:
Header - Encoded metadata about the token.
Payload - Encoded claims or data being transmitted.
Signature - A cryptographic signature to verify the token’s integrity.
This operation helps decode and inspect the header and payload of a JWT without verifying the signature.
These are the input/output expected data types for this operation:
- JWT string you want to decode.
- Decoded JWT strings.
Suppose you want to decode a series of JWT strings in your input data:
In the Operation field, choose JWT Decode.
Give your Output field a name and click Save. The values in your input field will be decoded.
For example, the following JWT:
will be decoded as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
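Decoding the payload without signature verification only requires splitting the token and Base64url-decoding the middle part. A Go sketch of that idea:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	token := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9." +
		"eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ." +
		"SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"

	// A JWT is header.payload.signature; the payload is base64url-encoded JSON.
	parts := strings.Split(token, ".")
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload)) // {"sub":"1234567890","name":"John Doe","iat":1516239022}
}
```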
This operation is used to compute the SHA-0 hash of an input string. SHA-0 is a cryptographic hash function and a predecessor to the more widely known SHA-1.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-0 hash of the input data.
Suppose you want to get the SHA0 hashes of a series of strings in your input data:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose SHA0.
Give your Output field a name and click Save. You'll get the SHA0 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the MD5 (Message Digest 5) algorithm. MD5 is a cryptographic hash function primarily intended for use in systems based on 32-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD5 hash values.
Suppose you want to hash your input strings using the MD5 algorithm:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose MD5.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD5 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute the SHA-1 hash of a given input. SHA-1 (Secure Hash Algorithm 1) is a cryptographic hash function that produces a 160-bit (20-byte) hash value, typically represented as a 40-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-1 hash of the input data.
Suppose you want to get the SHA1 hashes of a series of strings in your input data:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose SHA1.
Give your Output field a name and click Save. You'll get the SHA1 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the Keccak cryptographic hash algorithm. Keccak is the original algorithm that was standardized as SHA-3 by the National Institute of Standards and Technology (NIST). It is widely used in cryptographic applications, such as blockchain technologies (e.g., Ethereum).
These are the input/output expected data types for this operation:
- Data you want to hash. This could be text, binary, or hexadecimal data.
- Keccak hash value in hexadecimal format.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to hash your input strings using the Keccak algorithm:
In the Operation field, choose Keccak.
Set Size to 256.
Give your Output field a name and click Save. The strings in your input field will be hashed using the Keccak algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
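If you want to reproduce the hash outside Onum, note that "Keccak" can refer either to the original (pre-NIST) padding or to the standardized SHA-3 padding, and this page does not say which one the operation uses. A Go sketch assuming the original Keccak-256 padding, via the external golang.org/x/crypto/sha3 package:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

func main() {
	// Keccak-256 with the original padding (as used by e.g. Ethereum),
	// as opposed to the NIST-standardised SHA3-256.
	h := sha3.NewLegacyKeccak256()
	h.Write([]byte("Hello World !"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```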
This operation is used to compute a cryptographic hash using the SM3 algorithm.
These are the input/output expected data types for this operation:
- Data you want to process.
- SM3 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SM3 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SM3.
Set Length to 64.
Give your Output field a name and click Save. You'll get the SM3 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute cryptographic hashes using the SHA-2 family of hash functions. SHA-2 is a widely used and more secure successor to SHA-1.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-2 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHA2 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA2.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHA2 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute the cryptographic hash of an input using the SHAKE (Secure Hash Algorithm with Keccak) family. SHAKE is a customizable hash function based on the Keccak sponge construction, which allows you to specify the length of the output hash.
SHAKE is part of the SHA-3 family, but it differs from other SHA-3 variants in that it is an Extendable Output Function (XOF). This means you can generate a hash of any length, rather than being restricted to fixed-length outputs like SHA3-256 or SHA3-512.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHAKE hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHAKE hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Shake.
Set Capacity to 256.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHAKE hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation takes an IPv4 or IPv6 address and 'defangs' it, meaning the IP becomes invalid, removing the risk of accidentally using it as an IP address. The operation replaces certain characters with alternatives, making the IP non-functional.
These are the input/output expected data types for this operation:
- IP addresses you want to defang.
- Defanged IP addresses.
Suppose you want to defang a series of events that represent IP addresses:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Defang IP Address.
Give your Output field a name and click Save. The IP addresses in your input field will be defanged. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation makes URLs safe to share by preventing accidental clicks or access. This is especially useful in cybersecurity contexts, where you might need to share potentially malicious URLs without making them active links.
These are the input/output expected data types for this operation:
- URLs you want to defang.
- Defanged URLs.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to defang a series of events that represent URLs:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Defang URL.
Set Escape Dots to true.
Set Escape HTTP to true.
Set Escape ://* to false.
Set Process Type to Everything.
Give your Output field a name and click Save. The URLs in your input field will be defanged. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute cryptographic hashes using the SHA-3 family of hash functions. SHA-3 offers enhanced security and flexibility compared to its predecessors, including the SHA-2 family.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-3 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHA3 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA3.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHA3 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation extracts all the IPv4 and IPv6 addresses from a block of text or data.
These are the input/output expected data types for this operation:
- Strings with a block of IP addresses you want to extract.
- List of IP addresses.
Suppose you want to extract a list of IP addresses from your input strings. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Extract IP Address.
Give your Output field a name and click Save.
For example, in this input text:
this will be the output list of IP addresses detected:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation takes a 'defanged' URL and 'fangs' it, meaning it removes the alterations that render it useless so that it can be used again.
These are the input/output expected data types for this operation:
- URLs you want to fang.
- Valid URLs.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to fang a series of events that represent URLs:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Fang URL.
Set Escape Dots to true.
Set Escape HTTP to true.
Set Escape ://* to false.
Set Process Type to Everything.
Give your Output field a name and click Save. The URLs in your input field will be made valid. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation takes an invalid IPv4 or IPv6 address and 'fangs' it, meaning the IP becomes valid.
These are the input/output expected data types for this operation:
- IP addresses you want to fang.
Input format
The input IP addresses must follow the format given in the output results of the Defang IP Address operation, that is, dots replaced by brackets (for example, 192[.]168[.]1[.]1).
- Valid IP addresses.
Suppose you want to fang a series of events that represent IP addresses:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Fang IP Address.
Give your Output field a name and click Save. The IP addresses in your input field will be made valid. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
Current version v0.0.1
This Action is only available in certain Tenants.
This action integrates with the advanced AI model Blip 2 (Bootstrapped Language-Image Pre-training). This multi-modal AI offers improved performance and versatility for tasks that require the simultaneous understanding of images and text.
Integrating Blip 2 into Onum can transform how you interact with and derive value from data, particularly by leveraging the power of visual content and analysis.
Find MLBlip in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
Token - the API token of the model you wish to ask. See here for where to find these values.
URL - specify the incoming field that contains the URL value.
Context - add an optional description for your event.
Question - this is the question you wish to ask the AI model.
Temperature - this controls the randomness of the responses. A low temperature gives more specific and focused answers, whereas a high temperature gives more diverse but less precise answers.
Output - specify a name for the output event.
Click Save to complete.
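For context only, the configuration fields above map naturally onto a visual question answering request against a hosted Blip 2 model. The sketch below is not the action's internal implementation; the endpoint, model version, and input field names (image, question, context, temperature) are assumptions based on typical Blip 2 deployments on Replicate.

```python
import requests

TOKEN = "r8_..."                                   # Token: API token of the hosted model
IMAGE_URL = "https://example.com/photo.png"        # URL: incoming field holding the image URL
CONTEXT = "A photo attached to a support ticket"   # Context: optional event description
QUESTION = "What product is shown in the image?"   # Question: what to ask the model
TEMPERATURE = 0.2                                  # Temperature: low = focused, high = diverse

# Illustrative request only; the version id and input keys depend on the deployment.
response = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers={"Authorization": f"Token {TOKEN}"},
    json={
        "version": "<blip-2-version-id>",
        "input": {
            "image": IMAGE_URL,
            "question": QUESTION,
            "context": CONTEXT,
            "temperature": TEMPERATURE,
        },
    },
    timeout=30,
)
print(response.json())   # the answer would populate the configured Output field
```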
Current version v0.0.1
This Action is only available in certain Tenants.
This action offers automatic integration with models available on the Replicate platform, whether publicly accessible or privately deployed. This component simplifies accessing and utilizing a wide array of models without manual integration efforts.
Integrating Onum with replicate.com can offer several benefits, enhancing the platform's capabilities and the value it delivers:
Access to a Broad Range of Models
Ease of Model Deployment
Scalability
Continuous Model Updates
Cost-Effective AI Integration
Rapid Prototyping and Experimentation
Enhanced Data Privacy and Security
Find MlReplicate in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
Token - this is the replicate API token.
Version - the model version. It can usually be located by running a command against the Replicate API (see the sketch below).
Input - the input value to send to the model.
Output - give a name to the outgoing event.
Click Save to complete.
To fill in these values, copy and paste the user and model information from Replicate.com. The following image illustrates how to locate the required parameters on the Replicate.com model page.
If the version does not appear, choose the Cog tag and copy the version as shown in the following picture.
Every model is identified by a version and requires a set of input parameters.
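If you prefer to look the version up programmatically instead of copying it from the website, the Replicate HTTP API exposes it on the model resource. A minimal sketch, using the public replicate/hello-world model as a stand-in for your own:

```python
import requests

TOKEN = "r8_..."                              # Token: your Replicate API token
OWNER, MODEL = "replicate", "hello-world"     # placeholder model; substitute your own
HEADERS = {"Authorization": f"Token {TOKEN}"}

# 1. Fetch the model and read its latest version id - this is the Version value
#    the MlReplicate action expects.
model = requests.get(
    f"https://api.replicate.com/v1/models/{OWNER}/{MODEL}", headers=HEADERS, timeout=30
).json()
version = model["latest_version"]["id"]
print("Version:", version)

# 2. Create a prediction with that version and an input value. The input keys
#    depend on the model (hello-world takes a single "text" field).
prediction = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=HEADERS,
    json={"version": version, "input": {"text": "Alice"}},
    timeout=30,
)
print(prediction.json())
```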
Current version v0.2.0
Define what data you require and which parameters need to be prioritized.
Find it in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
Fields beginning with _ are internal fields.
To include a field in your message, drag it from the Fields area and drop it into the Message area.
The expressions should be strings that, optionally, may contain field names. For example:
where ${myField} will be replaced with the actual value in the event.
Optionally, the action provides the following behaviors, depending on the Delimiter behavior argument and the given delimiter and replacement values:
REPLACE: replaces delimiter with replacement on each event field.
DELETE: deletes delimiter on each event field.
QUOTE: adds double quotes surrounding an event field if it contains delimiter.
ESCAPE: adds a backslash (\) before each delimiter on each event field.
To select more than one field at once, click a field in the Fields area, select the checkboxes next to the required names, then select Add fields.
Give your message an identifying name in the Destination Field Name field.
You can add a Delimiter to separate the fields in your message string.
Click Save when complete.
Let's say you have received raw data in JSON format and wish to extract the fields and format them as a CSV.
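To make the example concrete, here is a rough Python sketch of what the action does conceptually: ${field} references in the message are substituted with values from the JSON event, and the QUOTE delimiter behavior wraps any field that contains the delimiter. The event and field names are made up.

```python
import json
from string import Template

# A made-up raw JSON event and the delimiter used to join fields into a CSV line.
event = json.loads('{"user": "alice", "action": "login, retry", "status": "ok"}')
delimiter = ","

def quote_if_needed(value: str) -> str:
    # QUOTE behavior: surround the field with double quotes if it contains the delimiter.
    return f'"{value}"' if delimiter in value else value

# Message template using ${field} references, as in the Message area described above.
template = Template("${user},${action},${status}")
print(template.substitute({k: quote_if_needed(v) for k, v in event.items()}))
# alice,"login, retry",ok
```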
It is possible to use all the public models from related to process natural language.
In order to configure this action, you must first link it to a Listener. Go to Building a Pipeline to learn how to link.
This is where you specify the fields you wish to include in your message, by type.
Net Saved/Increased
Here you can see the difference (in %) of volume saved/increased in comparison to the previous period. Hover over the circle icons to see the input/output volumes and the total GB saved.
Listeners
View the total amount of data ingested by the Listeners in the selected time range compared to the previous period, as well as the increase/decrease in volume (in %).
Data Sink
You can see at a glance the total amount of data sent out of your Tenant, as well as the difference (in %) with the previous time range selected.
Data Volume
This shows the total volume of ingested data for the selected period. Notice it is the same as the input volume shown in the Net saved/increased metric. You can also see the difference (in %) with the previous time range selected.
Estimation
The estimated volumes ingested and sent over the next 24 hours. This is calculated using the data volume of the time period.
Any format. Any source.
Collect data from anywhere it’s generated, across every aspect of the network.
All data is aggregated, observed, and seamlessly routed to any destination.
Edge observability
Listeners are placed right on the edge to collect all data as close as possible to where it’s generated.
Centralized management
Onum receives data from Listeners and observes and optimizes the data from all nodes. All data is then sent to the proper data sink.
Amazon CloudFront
Amazon CloudWatch Logs
Amazon ELB
Amazon Route 53
Apache Flume
AWS CloudTrail
AWS Lambda
Cisco Umbrella
Cloudflare
Confluent
Crowdstrike
Fastly
Fluent Bit
Juniper
Kafka
Splunk
Zeek/Bro
Zoom