Current version v0.2.2
The Field Transformation action acts as a container that lets you perform a wide range of operations on data, including encoding and decoding, format conversion, file compression and decompression, data structure analysis, and much more. The results are stored in new event fields.
Find it in the Actions tab and drag it onto the canvas to use it.
To open the configuration, click the Action in the canvas and select Configuration.
In order to configure this action, you must first link it to a Listener or other Action. Go to Building a Pipeline to learn how to link.
Choose a field from the linked Listener/Action to transform in your Action using the drop-down.
Add as many fields as required using the Add New Field button.
See a comprehensive list of all the available operations for this Action.
Please bear in mind that the options available in this window will depend on the field to transform.
Add as many Operations as required using Add Operation.
You can also use the arrow keys on your keyboard to navigate up and down the list.
If you have added more than one operation, you can reorder them by dragging and dropping them into position.
Before saving your action, you can test it to see the outcome.
Type a message in the Input field and see it transformed in the Output field after passing through the selected operation(s).
Give a name to the transformed field and click Save to complete.
Here is an example of a data set on the Bytes in/out from IP addresses.
We can use the field transformation operations to reduce the quantity of data sent.
We have a Syslog Listener, connected to a Parser.
Link the Parser to the Field Transformation action and open its configuration.
We will use the To IP Hex and CRC32 operations.
DESTINATION_IP_ADDRESS: 192.168.70.210
DestinationIPAddressHex: c0.a8.46.d2
DESTINATION_HOST: server.example.com
DestinationHostCRC32:
0876633F
Transform the Destination IP to hexadecimal to reduce the number of characters.
192.168.70.210
c0.a8.46.d2
Field>Parser: DESTINATION_IP_ADDRESS
Operation: To IP Hex
Output Field: DestinationIPAddressHex
Add a new field for Destination Host to CRC32
Codify the Destination Host as crc32 to transform the machine name into 8 characters.
server.example.com
0876633F
Field>Parser: DESTINATION_HOST
Operation: CRC32
Output field: DestinationHostCRC32
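The two transformations in this worked example can be sketched in Python. This is an illustration only: it assumes dotted-hex output for the IP address and a standard CRC-32, and the product's exact CRC parameters may differ.

```python
import zlib

def ip_to_hex(ip: str) -> str:
    # Convert each IPv4 octet to two hex digits, keeping the dots
    return ".".join(f"{int(octet):02x}" for octet in ip.split("."))

def crc32_hex(text: str) -> str:
    # Standard CRC-32 rendered as an 8-character uppercase hex string
    return f"{zlib.crc32(text.encode()):08X}"

print(ip_to_hex("192.168.70.210"))    # c0.a8.46.d2
print(crc32_hex("server.example.com"))
```

Both outputs are shorter than the originals, which is what reduces the quantity of data sent downstream.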
This operation divides a list of numbers provided in the input string, using the specified delimiter to separate the numbers.
These are the input/output expected data types for this operation:
- List of numbers you want to divide, separated by a specified delimiter.
- Result of the division of the numbers in your input string.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to divide a series of numbers in your input strings. They are separated by colons (:). To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Divide Operation.
Set Delimiter to Colon.
Give your Output field a name and click Save. You'll get the division results. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
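The divide logic above can be sketched in Python, assuming the numbers are divided left to right:

```python
from functools import reduce

def divide(values: str, delimiter: str = ":") -> float:
    # Split on the delimiter and divide left to right: 26:2:4 -> 26 / 2 / 4
    numbers = [float(v) for v in values.split(delimiter)]
    return reduce(lambda acc, n: acc / n, numbers)

print(divide("26:2:4"))  # 3.25
```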
This operation is used to encode or "escape" characters in a string so that they can be safely used in different contexts, such as URLs, JSON, HTML, or code. This operation is helpful when you need to format text with special characters in a way that won’t break syntax or cause unintended effects in various data formats.
These are the input/output expected data types for this operation:
- Strings with the characters you want to escape.
- Strings with the required escaped characters.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to escape the " characters in a series of input strings. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Escape String.
Set Escape Level to Special chars.
Set Escape Quote to ".
Set JSON compatible to false.
Give your Output field a name and click Save. Matching characters will be escaped. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
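The escaping shown above can be sketched in Python. This is a simplified stand-in for the product's Escape String operation, covering only backslashes and the chosen quote character:

```python
def escape_string(text: str, quote: str = '"') -> str:
    # Backslash-escape existing backslashes first, then the quote character
    return text.replace("\\", "\\\\").replace(quote, "\\" + quote)

print(escape_string('She said, "Hello, world!"'))
# She said, \"Hello, world!\"
```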
This operation is used to decode escape sequences in a string back to their original characters. Escaped strings are often used in programming, web development, or data transmission to represent special characters that cannot be directly included in text.
These are the input/output expected data types for this operation:
- String with escape characters.
- Resulting unescaped string.
Suppose you want to unescape characters in a series of input strings. To do it:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Unescape string.
Give your Output field a name and click Save. All the escape characters will be removed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
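The reverse transformation can be sketched as follows; again a simplified stand-in that only undoes escaped quotes and backslashes:

```python
def unescape_string(text: str) -> str:
    # Reverse common escapes: \" -> " and \\ -> \
    return text.replace('\\"', '"').replace("\\\\", "\\")

print(unescape_string('She said, \\"Hello, world!\\"'))
# She said, "Hello, world!"
```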
This operation allows you to multiply numbers in a dataset by a specified value. It processes numerical input and applies the multiplication operation to each number individually. This is useful for scaling data, performing simple arithmetic, or manipulating numerical datasets.
These are the input/output expected data types for this operation:
- Input string containing numbers to multiply.
- The result of the multiplication.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to multiply a series of numbers in your input strings. They are separated by commas (,). To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Multiply Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the multiplication of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
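The multiplication logic can be sketched in Python:

```python
import math

def multiply(values: str, delimiter: str = ",") -> float:
    # Multiply every number in the delimited list together
    return math.prod(float(v) for v in values.split(delimiter))

print(multiply("2, 3, 5"))  # 30.0
```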
A comprehensive list of the operations available in the Field Transformation Action.
Converts a size in bytes to a human-readable string.
Input data - 134367
Output data - 131.22 KiB
Converts values from one unit of measurement to another.
Input data - 5000
Input units - Square foot (sq ft)
Output units - Square metre (sq m)
Output data - 464.515215
Converts a unit of data to another format.
Input data - 2
Input units - Megabits (Mb)
Output units - Kilobytes (KB)
Output data - 250
Converts values from one unit of length to another.
Input data - 100
Input units - Metres (m)
Output units - Yards (yd)
Output data - 109.3613298
Converts values from one unit of mass to another.
Input data - 100
Input units - Kilogram (kg)
Output units - Pound (lb)
Output data - 220.4622622
Converts values from one unit of speed to another.
Input data - 200
Input units - Kilometres per hour (km/h)
Output units - Miles per hour (mph)
Output data - 124.2841804
Counts the number of times a given string occurs in your input data.
Input data - This is a sample test
Search - test
Search Type - simple
Output data - 1
Calculates an 8-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - C7
Calculates a 16-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 57D4
Calculates a 24-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 3B6473
Calculates a 32-bit Cyclic Redundancy Check (CRC) value for a given input.
Input data - hello 1234
Output data - 7ED8D648
Obfuscates all digits of a credit card number except for the last 4 digits.
Input data - 1111222233334444
Output data - ************4444
Converts a CSV file to JSON format.
Input data -
First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid
Cell delimiter - ,
Format - Array of dictionaries
Output data -
[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]
Defangs an IP address to prevent it from being recognized.
Input data - 192.168.1.1
Output data - 192[.]168[.]1[.]1
Defangs a URL to prevent it from being recognized as a clickable link.
Input data - https://example.com
Escape Dots - true
Escape HTTP - true
Escape ://* - false
Process Type - Everything
Output data - hxxps://example[.]com
Divides a list of numbers provided in the input string, separated by a specific delimiter.
Input data - 26:2:4
Delimiter - Colon
Output data - 3.25
Analyzes a URI into its individual components.
Input data -
https://user:pass@example.com:8080/path/to/resource?key=value#fragment
Output data -
Scheme: https
Host: example.com:8080
Path: /path/to/resource
Arguments: map[key:[value]]
User: user
Password: pass
Escapes specific characters in a string
Input data - She said, "Hello, world!"
Escape Level - Special chars
Escape Quote - "
JSON compatible - false
Output data - She said, \"Hello, world!\"
Extracts all the IPv4 and IPv6 addresses from a block of text or data.
Input data -
User logged in from 192.168.1.1. Another login detected from 10.0.0.5.
Output data -
192.168.1.1
10.0.0.5
Makes defanged IP addresses valid.
Input data - 192[.]168[.]1[.]1
Output data - 192.168.1.1
Makes defanged URLs valid.
Input data - hxxps://example[.]com
Escape Dots - true
Escape HTTP - true
Escape ://* - false
Process Type - Everything
Output data - https://example.com
Splits the input string using a specified delimiter and filters the resulting parts with a regular expression.
Input data -
Error: File not found
Warning: Low memory
Info: Operation completed
Error: Disk full
Delimiter - Line feed
Regex - ^Error
Invert - false
Output data -
Error: File not found
Error: Disk full
Finds values in a string and replaces them with others.
Input data - The server encountered an error while processing your request.
Substring to find - error
Replacement - issue
Output data - The server encountered an issue while processing your request.
Decodes data from a Base64 string back into its raw format.
Input data - SGVsbG8sIE9udW0h
Strict Mode - true
Output data - Hello, Onum!
Converts hexadecimal-encoded data back into its original form.
Input data - 48 65 6c 6c 6f 20 57 6f 72 6c 64
Delimiter - Space
Output data - Hello World
Converts a timestamp into a human-readable date string.
Input data - 978346800
Time Unit - Seconds
Timezone Output - UTC
Format Output - Mon 2 January 2006 15:04:05 UTC
Output data - Mon 1 January 2001 11:00:00 UTC
Converts an IP address (either IPv4 or IPv6) to its hexadecimal representation.
Input data - 192.168.1.1
Output data - c0a80101
Reduces the size of a JSON file by removing unnecessary characters from it.
Input data -
{
  "name": "John Doe",
  "age": 30,
  "isActive": true,
  "address": {
    "city": "New York",
    "zip": "10001"
  }
}
Output data -
{"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}
Converts a JSON file to CSV format.
Input data -
[ { "First name": "John", "Last name": "Wick", "Age": "20", "City": "New-York" }, { "First name": "Tony", "Last name": "Stark", "Age": "30", "City": "Madrid" } ]
Cell delimiter - ,
Row delimiter - \n
Output data -
First name,Last name,Age,City
John,Wick,20,New-York
Tony,Stark,30,Madrid
Decodes the payload in a JSON Web Token string.
Input data - eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
Output data - {"sub":"1234567890","name":"John Doe","iat":1516239022}
Generates a Keccak cryptographic hash function from a given input.
Input data - Hello World !
Size - 256
Output data - 3ea2f1d0abf3fc66cf29eebb70cbd4e7fe762ef8a09bcc06c8edf641230afec0
Produces a MD2 hash string from a given input.
Input data - Hello World!
Output data - 315f7c67223f01fb7cab4b95100e872e
Produces a MD4 hash string from a given input.
Input data - Hello World!
Output data - b2a5cc34fc21a764ae2fad94d56fadf6
Produces a MD5 hash string from a given input.
Input data - Hello World!
Output data - ed076287532e86365e841e92bfc50d8c
Calculates the median of given values.
Input data - 10, 5, 20, 15, 25
Delimiter - ,
Output data - 15
Calculates the result of the multiplication of given values.
Input data - 2, 3, 5
Delimiter - ,
Output data - 30
Pads each input line with a specified number of characters.
Input data -
Apple
Banana
Cherry
Pad position - Start
Pad line - 7
Character - >>>
Output data -
>>>Apple
>>>Banana
>>>Cherry
Parses a string and returns an integer of the specified base.
Input data - 100
Base - 2
Output data - 4
Takes UNIX file permission strings and converts them to code format or vice versa.
Input data - -rwxr-xr--
Output data -
Textual representation: -rwxr-xr--
Octal representation: 0754
+---------+-------+-------+-------+
|         | User  | Group | Other |
+---------+-------+-------+-------+
| Read    | X     | X     | X     |
+---------+-------+-------+-------+
| Write   | X     |       |       |
+---------+-------+-------+-------+
| Execute | X     | X     |       |
+---------+-------+-------+-------+
Extracts or manipulates parts of your input strings that match a specific regular expression pattern.
Input data - Error 404: resource not found
Regex - [0-9]+
Output data - 404
Removes whitespace and other characters from a string.
Input data -
Hello World!
This is a test.
Spaces - true
Carriage returns - false
Line feeds - true
Tabs - false
Form feeds - false
Full stops - true
Output data -
HelloWorld!Thisisatest
Reverses the order of the characters in a string.
Input data - Hello World!
Reverse mode - Character
Output data - !dlroW olleH
Returns the SHA0 hash of a given string.
Input data - Hello World!
Output data - 1261178ff9a732aacfece0d8b8bd113255a57960
Returns the SHA1 hash of a given string.
Input data - Hello World!
Output data - 2ef7bde608ce5404e97d5f042f95f89f1c232871
Returns the SHA2 hash of a given string.
Input data - Hello World!
Size - 512
Output data - f4d54d32e3523357ff023903eaba2721e8c8cfc7702663782cb3e52faf2c56c002cc3096b5f2b6df870be665d0040e9963590eb02d03d166e52999cd1c430db1
Returns the SHA3 hash of a given string.
Input data - Hello World!
Size - 512
Output data - 32400b5e89822de254e8d5d94252c52bdcb27a3562ca593e980364d9848b8041b98eabe16c1a6797484941d2376864a1b0e248b0f7af8b1555a778c336a5bf48
Returns the SHAKE hash of a given string.
Input data - Hello World!
Capacity - 256
Size - 512
Output data - 35259d2903a1303d3115c669e2008510fc79acb50679b727ccb567cc3f786de3553052e47d4dd715cc705ce212a92908f4df9e653fa3653e8a7855724d366137
Shuffles the characters of a given string.
Input data - Hello World!
Delimiter - Nothing (separate chars)
Output data - rH Wl!odolle
Returns the SM3 cryptographic hash function of a given string.
Input data - Hello World!
Length - 64
Output data - 0ac0a9fef0d212aa
Sorts a list of strings separated by a specified delimiter according to the provided sorting order.
Input data - banana,apple,orange,grape
Delimiter - Comma
Order - Alphabetical (case sensitive)
Reverse - false
Output data - apple,banana,grape,orange
Extracts characters from a given string.
Input data - +34678987678
Start Index - 3
Length - 9
Output data - 678987678
Calculates the result of the subtraction of given values.
Input data - 10, 5, 2
Delimiter - Comma
Output data - 3
Calculates the total of given values.
Input data - 10, 5, 2
Delimiter - Comma
Output data - 17
Swaps the case of a given string.
Input data - Hello World!
Output data - hELLO wORLD!
Encodes raw data into an ASCII Base64 string.
Input data - Hello, Onum!
Output data - SGVsbG8sIE9udW0h
Converts a string to its corresponding hexadecimal representation.
Input data - Hello World
Delimiter - Space
Output data - 48 65 6c 6c 6f 20 57 6f 72 6c 64
Converts the characters of a string to lower case.
Input data - Hello World!
Output data - hello world!
Transforms a string representing a date into a timestamp.
Input data - 2006-01-02
Format - DateOnly
Output data - 2006-01-02T00:00:00Z
Parses a datetime string in UTC and returns the corresponding UNIX timestamp.
Input data - Mon 1 January 2001 11:00:00
Unit - Seconds
Output data - 978346800
Converts the characters of a string to upper case.
Input data - Hello World!
Output data - HELLO WORLD!
Converts a date and time from one format to another.
Input data - 2024-10-24T14:11:13Z
Input Format - 2006-01-02T15:04:05Z
Input Timezone - UTC+1
Output Format - 02/01/2006 15:04:05
Output Timezone - UTC+8
Output data - 24/10/2024 21:11:13
Removes escape characters from a given string.
Input data - She said, \"Hello, world!\"
Output data - She said, "Hello, world!"
Decodes a URL and returns its corresponding URL-decoded string.
Input data - https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21
Output data - https://example.com/search?q=Hello World!
Encodes a URL-decoded string back to its original URL format.
Input data - https://example.com/search?q=Hello World!
Output data - https%3A%2F%2Fexample.com%2Fsearch%3Fq%3DHello+World%21
This operation calculates the total sum of a series of numbers provided as input. It is a simple yet powerful tool for numerical data analysis, enabling quick summation of datasets or values.
These are the input/output expected data types for this operation:
- Input string containing numbers to sum.
- The result of the total sum.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the sum of a series of numbers in your input strings. They are separated by commas (,). To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Sum Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the sum of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
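The summation logic can be sketched in Python:

```python
def sum_values(values: str, delimiter: str = ",") -> float:
    # Add every number in the delimited list
    return sum(float(v) for v in values.split(delimiter))

print(sum_values("10, 5, 2"))  # 17.0
```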
This operation converts values between different units of length or distance.
These are the input/output expected data types for this operation:
- Values whose unit of length you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of length.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from meters into yards:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert distance.
Set Input units to Metres (m).
Set Output units to Yards (yd).
Give your Output field a name and click Save. The unit of length of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
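The conversion can be sketched in Python using the exact definition of the international yard (1 yd = 0.9144 m); the product may use a slightly different constant, so trailing decimals can vary:

```python
def metres_to_yards(metres: float) -> float:
    # 1 yard is defined as exactly 0.9144 metres
    return metres / 0.9144

print(round(metres_to_yards(100), 7))  # 109.3613298
```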
This operation converts values between different units of digital data, such as bits, bytes, kilobytes, megabytes, and so on. It’s especially useful when you’re dealing with data storage or transfer rates, and you need to switch between binary (base 2) and decimal (base 10) units.
These are the input/output expected data types for this operation:
- Values whose unit of data you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from megabits into kilobytes:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert data units.
Set Input units to Megabits (Mb).
Set Output units to Kilobytes (KB).
Give your Output field a name and click Save. The data type of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
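The conversion can be sketched in Python using decimal (base 10) units, matching the example above:

```python
def megabits_to_kilobytes(mb: float) -> float:
    # Decimal units: 1 Mb = 1,000,000 bits; 1 KB = 8,000 bits
    return mb * 1_000_000 / 8 / 1_000

print(megabits_to_kilobytes(2))  # 250.0
```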
This operation converts values from one unit of measurement to another, such as square feet, acres, square meters, and even smaller or less common units used in physics (like barns or nanobarns).
These are the input/output expected data types for this operation:
- Values whose unit of measurement you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of measurement.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from square feet into square meters:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert area.
Set Input units to Square foot (sq ft).
Set Output units to Square metre (sq m).
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
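The conversion can be sketched in Python using the exact constant 1 sq ft = 0.09290304 sq m; the product appears to use a slightly different constant, so the last decimals of its output can differ:

```python
def sqft_to_sqm(sqft: float) -> float:
    # 1 square foot = 0.09290304 square metres (exact)
    return sqft * 0.09290304

print(round(sqft_to_sqm(5000), 4))  # 464.5152
```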
This operation converts values between different units of speed.
These are the input/output expected data types for this operation:
- Values whose unit of speed you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of speed.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from kilometers per hour into miles per hour:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert speed.
Set Input units to Kilometres per hour (km/h).
Set Output units to Miles per hour (mph).
Give your Output field a name and click Save. The unit of speed of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
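The conversion can be sketched in Python using the exact international mile (1 mile = 1.609344 km); the product appears to use a slightly different constant, so its output can differ in the later decimal places:

```python
def kmh_to_mph(kmh: float) -> float:
    # 1 mile = 1.609344 km (exact international mile)
    return kmh / 1.609344

print(round(kmh_to_mph(200), 4))  # 124.2742
```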
This operation performs arithmetic subtraction between numbers. This operation is useful for calculations, data manipulation, and analyzing numerical differences.
These are the input/output expected data types for this operation:
- Input string containing numbers to subtract.
- The result of the subtraction.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the subtraction of a series of numbers in your input strings. They are separated by commas (,). To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Subtract Operation.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the subtraction of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
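The subtraction logic can be sketched in Python, assuming the numbers are subtracted left to right:

```python
from functools import reduce

def subtract(values: str, delimiter: str = ",") -> float:
    # Subtract left to right: 10, 5, 2 -> 10 - 5 - 2
    numbers = [float(v) for v in values.split(delimiter)]
    return reduce(lambda acc, n: acc - n, numbers)

print(subtract("10, 5, 2"))  # 3.0
```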
This operation is used to calculate the median value of a set of numbers. The median is a statistical measure representing the middle value of a sorted dataset. It divides the data into two halves, with 50% of the data points below and 50% above the median.
These are the input/output expected data types for this operation:
- List of numbers separated by a specified delimiter.
- The result of the median.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to calculate the median of a series of numbers in your input strings. They are separated by commas (,). To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Median.
Set Delimiter to Comma.
Give your Output field a name and click Save. You'll get the median of the numbers in your input data. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
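The median calculation can be sketched with Python's standard library:

```python
import statistics

def median_of(values: str, delimiter: str = ",") -> float:
    # Sort the delimited numbers and take the middle value
    return statistics.median(float(v) for v in values.split(delimiter))

print(median_of("10, 5, 20, 15, 25"))  # 15.0
```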
This operation compresses JSON data by removing unnecessary whitespace, line breaks, and formatting while retaining the full structure and functionality of the JSON. It is handy for reducing the size of JSON files or strings when storage or transfer efficiency is required.
These are the input/output expected data types for this operation:
- Strings representing the JSON data you want to optimize.
- Optimized versions of the JSON data in your input strings.
Suppose you want to minify the JSON data in your input strings. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Json Minify.
Give your Output field a name and click Save. Your JSON data will be optimized and formatted properly.
For example, the following JSON:
will be formatted like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
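The minification can be sketched in Python: re-serializing the parsed JSON without any whitespace produces the compact form shown above.

```python
import json

pretty = '''{
  "name": "John Doe",
  "age": 30,
  "isActive": true,
  "address": { "city": "New York", "zip": "10001" }
}'''

# Re-serialize with no spaces after separators to minify
minified = json.dumps(json.loads(pretty), separators=(",", ":"))
print(minified)
# {"name":"John Doe","age":30,"isActive":true,"address":{"city":"New York","zip":"10001"}}
```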
This operation converts values between different units of mass.
These are the input/output expected data types for this operation:
- Values whose unit of mass you want to transform. They must be strings representing numbers.
- Resulting values after transforming them to the selected unit of mass.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events from kilograms into pounds:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Convert mass.
Set Input units to Kilogram (kg).
Set Output units to Pound (lb).
Give your Output field a name and click Save. The unit of mass of the values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
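The conversion can be sketched in Python using the exact definition of the pound (1 lb = 0.45359237 kg), which reproduces the output in the operation table above:

```python
def kg_to_lb(kg: float) -> float:
    # 1 pound = 0.45359237 kg (exact)
    return kg / 0.45359237

print(round(kg_to_lb(100), 7))  # 220.4622622
```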
This operation converts strings representing a date into an RFC 3339 timestamp.
These are the input/output expected data types for this operation:
- Strings representing the dates you want to transform in the format specified.
- Resulting RFC 3339 timestamps.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of strings into timestamps:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Timestamp.
Set Format to DateOnly.
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
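The conversion can be sketched in Python, assuming a date-only input is interpreted as midnight UTC as in the example above:

```python
from datetime import datetime

def to_timestamp(date_string: str) -> str:
    # Parse a date-only string and emit an RFC 3339 timestamp in UTC
    dt = datetime.strptime(date_string, "%Y-%m-%d")
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

print(to_timestamp("2006-01-02"))  # 2006-01-02T00:00:00Z
```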
This operation is used to encode data into a Base64 string. Base64 is a binary-to-text encoding method commonly used to encode binary data (like images or files) into text that can be easily transmitted over text-based protocols such as email, JSON, or XML. It’s also used for data storage, ensuring the data remains ASCII-safe.
These are the input/output expected data types for this operation:
- The string you want to encode.
- Resulting Base64 string.
Suppose you want to encode a series of events into Base64:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose To Base64.
Give your Output field a name and click Save. The values in your input field will be encoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
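The encoding can be sketched with Python's standard library, which reproduces the example output:

```python
import base64

def to_base64(text: str) -> str:
    # Encode the UTF-8 bytes of the string as Base64 text
    return base64.b64encode(text.encode()).decode("ascii")

print(to_base64("Hello, Onum!"))  # SGVsbG8sIE9udW0h
```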
This operation allows you to convert dates and times from one format to another. This is useful for standardizing timestamps, converting between systems with different date/time formats, or simply making a date more readable.
These are the input/output expected data types for this operation:
- Strings representing the dates you want to convert.
- Output formatted date strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of dates in the following format:
MM-DD-YYYY HH:mm:ss
into this one:
ddd, D MMM YYYY HH:mm:ss ZZ
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Translate Datetime Format.
Set Input Format to 01-02-2006 15:04:05
Set Input Timezone to UTC+1
Set Output Format to Mon, 2 Jan 2006 15:04:05 +0000
Set Output Timezone to UTC+1
Give your Output field a name and click Save. The format of the dates in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
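The earlier Convert datetime example (UTC+1 input rendered in UTC+8) can be sketched in Python. The strptime/strftime format strings here are my Python equivalents of the product's Go-style layouts, not the product's own syntax:

```python
from datetime import datetime, timedelta, timezone

def translate(dt_string: str, in_fmt: str, in_tz_hours: int,
              out_fmt: str, out_tz_hours: int) -> str:
    # Parse in the input time zone, then re-render in the output time zone
    dt = datetime.strptime(dt_string, in_fmt)
    dt = dt.replace(tzinfo=timezone(timedelta(hours=in_tz_hours)))
    return dt.astimezone(timezone(timedelta(hours=out_tz_hours))).strftime(out_fmt)

print(translate("2024-10-24T14:11:13Z", "%Y-%m-%dT%H:%M:%SZ", 1,
                "%d/%m/%Y %H:%M:%S", 8))
# 24/10/2024 21:11:13
```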
This operation is used to interpret and analyze standard UNIX file permission strings (e.g., -rwxr-xr--) and provide a detailed breakdown of the permissions, including their binary and octal representations. You can also enter the octal codes to get the corresponding UNIX file permission strings.
These are the input/output expected data types for this operation:
- UNIX-style file permission strings or codes you want to analyze.
- Details of the provided UNIX file permission strings/codes.
Suppose you want to analyze a series of UNIX file permission strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Parse UNIX file permissions.
Give your Output field a name and click Save. The values in your input field will be decoded.
For example, for the following UNIX file permission string:
you'll get the following breakdown:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
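The string-to-octal part of the breakdown can be sketched in Python, treating each r/w/x position as one bit:

```python
def permissions_to_octal(perm: str) -> str:
    # Skip the file-type character, then read the r/w/x triads as bits:
    # rwxr-xr-- -> 111 101 100 -> octal 754
    bits = "".join("0" if c == "-" else "1" for c in perm[1:])
    return f"{int(bits, 2):04o}"

print(permissions_to_octal("-rwxr-xr--"))  # 0754
```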
This operation converts a CSV file to JSON format.
These are the input/output expected data types for this operation:
- CSV-formatted strings you want to transform into JSON.
- Resulting JSON-formatted strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events in CSV format into JSON:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose CSV to JSON.
Set Cell delimiter to , (comma).
Set Format to Array of dictionaries.
Give your Output field a name and click Save. The CSV strings in your input field will be transformed into JSON.
For example, the following CSV:
will be transformed into this JSON:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
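The "Array of dictionaries" conversion can be sketched with Python's standard csv and json modules (csv_to_json is an illustrative name, not the product's implementation):

```python
import csv
import io
import json

def csv_to_json(text: str) -> str:
    # One JSON object per CSV row, keyed by the header line,
    # using "," as the cell delimiter.
    rows = list(csv.DictReader(io.StringIO(text), delimiter=","))
    return json.dumps(rows)

print(csv_to_json("name,age\nAda,36\nLin,29"))
# [{"name": "Ada", "age": "36"}, {"name": "Lin", "age": "29"}]
```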
This operation takes a 'defanged' URL and 'fangs' it; that is, it removes the deliberate alterations that render the URL unusable so that it becomes a working URL again.
These are the input/output expected data types for this operation:
- URLs you want to fang.
- Valid URLs.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to fang a series of events that represent URLs:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Fang URL.
Set Escape Dots to true.
Set Escape HTTP to true.
Set Escape ://* to false.
Set Process Type to Everything.
Give your Output field a name and click Save. The URLs in your input field will be made valid. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
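A sketch of the fanging step in Python, covering the most common defanging patterns (real data may use others; fang_url is a hypothetical helper):

```python
def fang_url(url: str) -> str:
    # Reverse typical defanging: bracketed dots and "hxxp"/"hxxps" schemes.
    return (url.replace("[.]", ".")
               .replace("(.)", ".")
               .replace("hxxp", "http"))

print(fang_url("hxxps://example[.]com/path"))
# https://example.com/path
```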
This operation allows you to hash data using the MD2 (Message Digest 2) algorithm. MD2 is a cryptographic hash function primarily intended for use in systems based on 8-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD2 hash values.
Suppose you want to hash your input strings using the MD2 algorithm:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose MD2.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD2 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the MD4 (Message Digest 4) algorithm. MD4 is a cryptographic hash function primarily intended for use in systems based on 32-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD4 hash values.
Suppose you want to hash your input strings using the MD4 algorithm:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose MD4.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD4 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute cryptographic hashes using the SHA-3 family of hash functions. SHA-3 offers enhanced security and flexibility compared to its predecessors, including the SHA-2 family.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-3 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHA3 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA3.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHA3 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
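Python's hashlib exposes the SHA-3 family directly, so the operation can be sketched like this (sha3_hex is an illustrative name; Size selects the variant):

```python
import hashlib

def sha3_hex(value: str, size: int = 512) -> str:
    # Size selects the SHA-3 variant: 224, 256, 384 or 512 bits.
    return hashlib.new(f"sha3_{size}", value.encode("utf-8")).hexdigest()

digest = sha3_hex("hello", size=512)
print(len(digest))  # 128 hex characters (512 bits)
```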
This operation converts JSON data into a CSV file.
These are the input/output expected data types for this operation:
- JSON data you want to transform into CSV. They must be strings formatted as JSON data.
- Resulting CSV files.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of events in JSON format into CSV:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose JSON to CSV.
Set Cell delimiter to , (comma).
Set Row delimiter to \n (new line).
Give your Output field a name and click Save. The JSON-formatted strings in your input field will be transformed into CSV.
For example, the following JSON:
will be transformed into this CSV:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
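The reverse conversion can be sketched the same way, assuming the input is a JSON array of flat objects (json_to_csv is an illustrative helper):

```python
import csv
import io
import json

def json_to_csv(text: str) -> str:
    # Headers come from the first object; "," cells, "\n" rows.
    rows = json.loads(text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]),
                            delimiter=",", lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(json_to_csv('[{"name": "Ada", "age": 36}, {"name": "Lin", "age": 29}]'))
# name,age
# Ada,36
# Lin,29
```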
This operation is used to compute the SHA-0 hash of an input string. SHA-0 is a cryptographic hash function and a predecessor to the more widely known SHA-1.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-0 hash of the input data.
Suppose you want to get the SHA0 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA0.
Give your Output field a name and click Save. You'll get the SHA0 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the MD5 (Message Digest 5) algorithm. MD5 is a cryptographic hash function primarily intended for use in systems based on 32-bit computers. It produces a 128-bit hash value (16 bytes), typically represented as a 32-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to hash.
- MD5 hash values.
Suppose you want to hash your input strings using the MD5 algorithm:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose MD5.
Give your Output field a name and click Save. The strings in your input field will be hashed using the MD5 algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
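For reference, the MD5 step can be reproduced with Python's hashlib (md5_hex is an illustrative name):

```python
import hashlib

def md5_hex(value: str) -> str:
    # 128-bit digest rendered as a 32-character hexadecimal string.
    return hashlib.md5(value.encode("utf-8")).hexdigest()

print(md5_hex("hello"))
# 5d41402abc4b2a76b9719d911017c592
```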
This operation is used to compute a cryptographic hash using the SM3 algorithm.
These are the input/output expected data types for this operation:
- Data you want to process.
- SM3 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SM3 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SM3.
Set Length to 64.
Give your Output field a name and click Save. You'll get the SM3 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to compute cryptographic hashes using the SHA-2 family of hash functions. SHA-2 is a widely used and more secure successor to SHA-1.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-2 hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHA2 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA2.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHA2 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
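A sketch of the SHA-2 step with hashlib, where the Size parameter picks the variant (sha2_hex is an illustrative name):

```python
import hashlib

def sha2_hex(value: str, size: int = 512) -> str:
    # Size selects the SHA-2 variant: 224, 256, 384 or 512 bits.
    return hashlib.new(f"sha{size}", value.encode("utf-8")).hexdigest()

print(sha2_hex("hello", size=256))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```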
This operation is used to compute the SHA-1 hash of a given input. SHA-1 (Secure Hash Algorithm 1) is a cryptographic hash function that produces a 160-bit (20-byte) hash value, typically represented as a 40-character hexadecimal string.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHA-1 hash of the input data.
Suppose you want to get the SHA1 hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose SHA1.
Give your Output field a name and click Save. You'll get the SHA1 hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
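The SHA-1 step can likewise be reproduced with hashlib (sha1_hex is an illustrative name):

```python
import hashlib

def sha1_hex(value: str) -> str:
    # 160-bit digest rendered as a 40-character hexadecimal string.
    return hashlib.sha1(value.encode("utf-8")).hexdigest()

print(sha1_hex("hello"))
# aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```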
This operation parses a string and returns an integer of the specified base (or radix).
In a positional numeral system, the base is the number of unique digits, including the digit zero, used to represent numbers. For example, for the decimal system, the radix is ten, because it uses the ten digits from 0 through 9.
In any standard positional numeral system, a number is conventionally written as (x)_y, with x as the string of digits and y as its base. However, for base ten the subscript is usually assumed (and omitted, together with the pair of parentheses), as it is the most common way to express value. For example, (100)_10 is equivalent to 100 (the decimal system is implied in the latter) and represents the number one hundred, while (100)_2 (in the binary system with base 2) represents the number four.
Commonly used numeral systems include:
2 - Binary numeral system
8 - Octal system
10 - Decimal system
12 - Duodecimal (dozenal) system
16 - Hexadecimal system
20 - Vigesimal system
36 - Base36
60 - Sexagesimal system
These are the input/output expected data types for this operation:
- Strings you want to parse.
- Integers after applying the specified base.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to parse a series of strings and convert them to integers according to a specific base:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Parse int.
Set Base to 2.
Give your Output field a name and click Save. The strings in your input field will be parsed according to the specified base. In this example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
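In Python this parsing is a one-liner, since the built-in int() accepts any base from 2 to 36 (parse_int is an illustrative wrapper):

```python
def parse_int(value: str, base: int = 2) -> int:
    # int() interprets the digit string according to the given radix.
    return int(value, base)

print(parse_int("100", base=2))   # 4
print(parse_int("ff", base=16))   # 255
```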
This operation is used to break down and analyze a Uniform Resource Identifier (URI) into its individual components, making it easier to understand and work with the data in a URI.
These are the input/output expected data types for this operation:
- URLs you want to analyze.
- Breakdown of the input URLs.
Suppose you want to analyze a series of URLs in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Parse URI.
Give your Output field a name and click Save. The URLs in your input field will be analyzed.
For example, for the following URL:
you will get the following analysis:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation is used to decode data from a Base64 string back into its raw format. Base64 is a binary-to-text encoding method commonly used to encode binary data (like images or files) into text that can be easily transmitted over text-based protocols such as email, JSON, or XML. It’s also used for data storage, ensuring the data remains ASCII-safe.
These are the input/output expected data types for this operation:
- The Base64 strings you want to decode.
- Decoded strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to decode a series of events in the Base64 encoding scheme:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Base64.
Set Strict Mode to true.
Give your Output field a name and click Save. The values in your input field will be decoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
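A sketch of the decoding step with Python's base64 module; the mapping of the operation's Strict Mode onto b64decode's validate flag is an assumption:

```python
import base64

def from_base64(value: str, strict: bool = True) -> str:
    # With validate=True, characters outside the Base64 alphabet are rejected
    # instead of being silently discarded.
    return base64.b64decode(value, validate=strict).decode("utf-8")

print(from_base64("aGVsbG8gd29ybGQ="))
# hello world
```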
This operation is used to convert hexadecimal-encoded data back into its original form, whether it’s plain text, binary data, or another format. Hexadecimal encoding is often used to represent binary data in a readable, ASCII-compatible format.
These are the input/output expected data types for this operation:
- The hexadecimal-encoded data you want to decode.
- Decoded string.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to decode a series of events including hexadecimal-encoded data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Hex.
Set Delimiter to Space.
Give your Output field a name and click Save. The values in your input field will be decoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
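The hex decoding can be sketched with bytes.fromhex, stripping the configured delimiter first (from_hex is an illustrative helper):

```python
def from_hex(value: str, delimiter: str = " ") -> str:
    # Remove the delimiter between byte pairs, then decode the raw bytes.
    return bytes.fromhex(value.replace(delimiter, "")).decode("utf-8")

print(from_hex("68 65 6c 6c 6f"))
# hello
```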
This operation is used to compute the cryptographic hash of an input using the SHAKE (Secure Hash Algorithm with Keccak) family. SHAKE is a customizable hash function based on the Keccak sponge construction, which allows you to specify the length of the output hash.
SHAKE is part of the SHA-3 family, but it differs from other SHA-3 variants in that it is an Extendable Output Function (XOF). This means you can generate a hash of any length, rather than being restricted to fixed-length outputs like SHA3-256 or SHA3-512.
These are the input/output expected data types for this operation:
- Data you want to process.
- SHAKE hash of the input data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to get the SHAKE hashes of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Shake.
Set Capacity to 256.
Set Size to 512.
Give your Output field a name and click Save. You'll get the SHAKE hashes of your input strings.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
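hashlib also supports SHAKE; this sketch assumes Capacity selects the SHAKE128/SHAKE256 variant and Size is the output length in bits (shake_hex is an illustrative name):

```python
import hashlib

def shake_hex(value: str, capacity: int = 256, size: int = 512) -> str:
    # SHAKE is an extendable-output function (XOF): the digest length
    # is chosen by the caller rather than fixed by the algorithm.
    h = hashlib.new(f"shake_{capacity}", value.encode("utf-8"))
    return h.hexdigest(size // 8)

print(len(shake_hex("hello", capacity=256, size=512)))  # 128 hex characters
```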
This operation converts a URL into an encoded string. URL encoding is used to encode special characters in URLs to ensure safe and proper transmission over the internet.
These are the input/output expected data types for this operation:
- URLs to be encoded
- Resulting encoded strings of your URLs.
Suppose you want to encode a series of URLs in your input data. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose URL encode.
Give your Output field a name and click Save. Your URLs will be converted.
For example, the following URL:
will be encoded like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
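The encoding step resembles urllib.parse.quote; the set of characters left unescaped here is an assumption chosen to preserve the URL's structure:

```python
from urllib.parse import quote

def url_encode(url: str) -> str:
    # Keep the structural characters of a URL; percent-encode the rest.
    return quote(url, safe=":/?=&")

print(url_encode("https://example.com/a b?q=hello world"))
# https://example.com/a%20b?q=hello%20world
```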
This operation converts a URL-encoded string back to its original format. URL encoding is used to encode special characters in URLs to ensure safe and proper transmission over the internet. When decoding, these encoded characters are translated back to their human-readable form.
These are the input/output expected data types for this operation:
- URL-encoded strings to be decoded.
- Decoded versions of your URLs.
Suppose you want to decode a series of URL-encoded strings in your input data. To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose URL decode.
Give your Output field a name and click Save. Your URL-encoded strings will be converted.
For example, the following URL-encoded string:
will be decoded like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
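Decoding is the mirror image, as in urllib.parse.unquote (url_decode is an illustrative wrapper):

```python
from urllib.parse import unquote

def url_decode(value: str) -> str:
    # Translate %XX escapes back to their characters.
    return unquote(value)

print(url_decode("https://example.com/a%20b?q=hello%20world"))
# https://example.com/a b?q=hello world
```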
This operation is used to convert a string to a hexadecimal code. Hexadecimal encoding is often used to represent binary data in a readable, ASCII-compatible format.
These are the input/output expected data types for this operation:
- Strings you want to encode.
- Resulting hexadecimal codes.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to encode a series of events into hexadecimal-encoded data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Hex.
Set Delimiter to Space.
Give your Output field a name and click Save. The values in your input field will be encoded. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
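A sketch of the encoding with a space delimiter between byte pairs (to_hex is an illustrative helper):

```python
def to_hex(value: str, delimiter: str = " ") -> str:
    # Render each UTF-8 byte as a two-digit lowercase hex pair.
    return delimiter.join(f"{b:02x}" for b in value.encode("utf-8"))

print(to_hex("hello"))
# 68 65 6c 6c 6f
```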
This operation converts a date and time into a Unix timestamp. A Unix timestamp is the number of seconds (or milliseconds) that have elapsed since January 1, 1970, 00:00:00 UTC (commonly referred to as the "Epoch").
These are the input/output expected data types for this operation:
- Strings representing the dates you want to transform.
- Integers representing the resulting Unix timestamps.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of dates into Unix timestamps:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To Unix Timestamp.
Set Time Unit to Seconds.
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
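With Time Unit set to Seconds, the conversion can be sketched as follows; the input date format and UTC assumption are illustrative (to_unix_timestamp is not the product's implementation):

```python
from datetime import datetime, timezone

def to_unix_timestamp(value: str, fmt: str = "%Y-%m-%d %H:%M:%S") -> int:
    # Parse the date string, treat it as UTC, and count seconds since epoch.
    dt = datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

print(to_unix_timestamp("1970-01-02 00:00:00"))
# 86400
```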
This operation is used to decode JSON Web Tokens (JWTs). JWTs are commonly used for authentication and data exchange in web applications, and they consist of three parts:
Header - Encoded metadata about the token.
Payload - Encoded claims or data being transmitted.
Signature - A cryptographic signature to verify the token’s integrity.
This operation helps decode and inspect the header and payload of a JWT without verifying the signature.
These are the input/output expected data types for this operation:
- JWT string you want to decode.
- Decoded JWT strings
Suppose you want to decode a series of JWTs in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose JWT Decode.
Give your Output field a name and click Save. The values in your input field will be decoded.
For example, the following JWT:
will be decoded as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
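Since a JWT is three Base64url segments joined by dots, decoding the header and payload (without signature verification, as this operation does) can be sketched like this; jwt_decode and the sample token are illustrative:

```python
import base64
import json

def jwt_decode(token: str) -> dict:
    # Decode header and payload only; the signature is NOT verified.
    def decode_part(seg: str) -> dict:
        seg += "=" * (-len(seg) % 4)  # restore stripped Base64 padding
        return json.loads(base64.urlsafe_b64decode(seg))
    header, payload, _signature = token.split(".")
    return {"header": decode_part(header), "payload": decode_part(payload)}

# Build a sample (unsigned) token for demonstration.
token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"HS256","typ":"JWT"}').decode().rstrip("="),
    base64.urlsafe_b64encode(b'{"sub":"1234567890"}').decode().rstrip("="),
    "signature",
])
print(jwt_decode(token)["payload"])
# {'sub': '1234567890'}
```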
This operation calculates an 8-bit Cyclic Redundancy Check (CRC) value for a given input. A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data.
The algorithm uses a fixed 8-bit polynomial to calculate a short "fingerprint" of the input data, resulting in an 8-bit (one-byte) checksum. This checksum can detect certain types of errors, like single-bit errors or burst errors up to 8 bits, by recalculating the checksum and comparing it with the original.
These are the input/output expected data types for this operation:
- Data you want to compute the CRC-8 checksum for.
- CRC-8 values of your input data.
Suppose you want to calculate the CRC-8 of a specific set of data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose CRC8 Checksum.
Give your Output field a name and click Save. The operation will calculate the CRC8 codes for the strings in your input field. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
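Python's standard library has no CRC-8, but the algorithm is short. This sketch uses the common 0x07 polynomial with zero initial value; the exact parameters used by the operation are not stated here, so treat this as an assumption:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    # Plain MSB-first CRC-8 with init 0x00 and no final XOR.
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

print(f"{crc8(b'123456789'):02x}")
# f4  (the standard CRC-8 check value)
```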
This operation calculates a 16-bit Cyclic Redundancy Check (CRC) value for a given input. A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data.
The algorithm uses a fixed 16-bit polynomial to calculate a short "fingerprint" of the input data, resulting in a 16-bit (two-byte) checksum. This checksum can detect certain types of errors, like single-bit errors or burst errors up to 16 bits, by recalculating the checksum and comparing it with the original.
These are the input/output expected data types for this operation:
- Data you want to compute the CRC-16 checksum for.
- CRC-16 values of your input data.
Suppose you want to calculate the CRC-16 of a specific set of data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose CRC16 Checksum.
Give your Output field a name and click Save. The operation will calculate the CRC16 codes for the strings in your input field. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation counts the number of times a provided string (character, word, or phrase) appears in the given input data.
These are the input/output expected data types for this operation:
- Data you want to analyze.
- Count of the specified character, word, or pattern you searched for.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
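The counting itself maps directly onto Python's str.count (count_occurrences is an illustrative helper; matches are non-overlapping):

```python
def count_occurrences(text: str, needle: str) -> int:
    # Count non-overlapping occurrences of a character, word or phrase.
    return text.count(needle)

print(count_occurrences("error: disk error detected", "error"))
# 2
```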
This operation converts Unix timestamps into human-readable date and time formats. Unix timestamps represent the number of seconds (or milliseconds) that have elapsed since the Unix epoch, which began at 00:00:00 UTC on January 1, 1970.
These are the input/output expected data types for this operation:
- Integer values representing the Unix timestamps to be converted.
- A string representing the formatted time.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to convert a series of timestamps into human-readable date strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose From Unix Timestamp.
Set Time Unit to Seconds.
Set Timezone Output to UTC.
Set Format Output to Mon 2 January 2006 15:04:05 UTC
Give your Output field a name and click Save. The values in your input field will be transformed. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
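With Time Unit set to Seconds and Timezone Output set to UTC, the conversion can be sketched as follows (the strftime layout only approximates the Go-style format in the steps above; from_unix_timestamp is an illustrative helper):

```python
from datetime import datetime, timezone

def from_unix_timestamp(ts: int) -> str:
    # Interpret the integer as seconds since the epoch, in UTC.
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    return dt.strftime("%a %d %B %Y %H:%M:%S UTC")

print(from_unix_timestamp(86400))
# Fri 02 January 1970 00:00:00 UTC
```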
This operation converts a size in bytes into a more human-friendly format, such as kilobytes (KB), megabytes (MB), gigabytes (GB), and so on. This operation is especially useful for interpreting file sizes or data storage values.
These are the input/output expected data types for this operation:
- Sizes in bytes to be transformed. They must be strings that represent integer numbers.
- Resulting human-readable strings.
Suppose you want to convert a series of events that represent sizes in bytes into human-readable strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Byte to Human Readable.
Give your Output field a name and click Save. The byte sizes in your input field will be transformed into human-readable strings. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
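The rescaling logic can be sketched in a few lines; the 1024-based units and the formatting are assumptions (bytes_to_human is not the product's implementation):

```python
def bytes_to_human(size: float) -> str:
    # Divide by 1024 until the value fits the unit.
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if size < 1024:
            return f"{size:g} {unit}"
        size /= 1024
    return f"{size:g} PB"

print(bytes_to_human(1048576))  # 1 MB
print(bytes_to_human(1536))     # 1.5 KB
```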
This operation allows you to add padding characters to the beginning or end of each line of text. This operation is useful for formatting text, aligning data, or preparing output for specific requirements such as indentation or prefixing/suffixing lines.
These are the input/output expected data types for this operation:
- Input text to be padded. The operation processes multiline text where each line is treated as a separate unit.
- Each line in the input will be padded based on the selected options.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to add padding to a series of events:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Pad lines.
Set Pad position to Start.
Set Pad length to 7.
Set Characters to >>>
Give your Output field a name and click Save. The strings in your input field will be modified with the specified padding.
For example, the following lines:
will be transformed into:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
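A sketch of line padding in Python, assuming Pad length means the number of padding characters added per line, built by repeating and truncating the configured characters (pad_lines is an illustrative helper):

```python
def pad_lines(text: str, position: str = "Start",
              length: int = 7, chars: str = ">>>") -> str:
    # Build the fill string by repeating the characters up to the pad length.
    fill = (chars * length)[:length]
    pad = (lambda line: fill + line) if position == "Start" else (lambda line: line + fill)
    return "\n".join(pad(line) for line in text.splitlines())

print(pad_lines("first\nsecond"))
# >>>>>>>first
# >>>>>>>second
```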
This operation calculates a 24-bit Cyclic Redundancy Check (CRC) value for a given input. A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data.
The algorithm uses a fixed 24-bit polynomial to calculate a short "fingerprint" of the input data, resulting in a 24-bit (three-byte) checksum. This checksum can detect certain types of errors, like single-bit errors or burst errors up to 24 bits, by recalculating the checksum and comparing it with the original.
These are the input/output expected data types for this operation:
- Data you want to compute the CRC-24 checksum for.
- CRC-24 values of your input data.
Suppose you want to calculate the CRC-24 of a specific set of data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose CRC24 Checksum.
Give your Output field a name and click Save. The operation will calculate the CRC24 codes for the strings in your input field. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation allows you to hash data using the Keccak cryptographic hash algorithm. Keccak is the original algorithm that was standardized as SHA-3 by the National Institute of Standards and Technology (NIST). It is widely used in cryptographic applications, such as blockchain technologies (e.g., Ethereum).
These are the input/output expected data types for this operation:
- Data you want to hash. This could be text, binary, or hexadecimal data.
- Keccak hash value in hexadecimal format.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to hash your input strings using the Keccak algorithm:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Keccak.
Set Size to 256.
Give your Output field a name and click Save. The strings in your input field will be hashed using the Keccak algorithm.
For example, the following string:
will be hashed as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
This operation obfuscates all digits of a credit card number except for the last 4 digits.
These are the input/output expected data types for this operation:
- The input strings to be obfuscated.
- Resulting credit card numbers with all but the last four digits obfuscated.
Suppose you want to obfuscate a series of credit card numbers in your input strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Credit card obfuscator.
Give your Output field a name and click Save. The credit card numbers in your input field will be obfuscated. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
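The masking rule can be sketched as follows, keeping separators intact and leaving only the last four digits visible (obfuscate_card is an illustrative helper; the product's mask character may differ):

```python
def obfuscate_card(text: str) -> str:
    # Mask every digit except the last four; non-digits pass through.
    total = sum(c.isdigit() for c in text)
    out, seen = [], 0
    for c in text:
        if c.isdigit():
            seen += 1
            out.append(c if seen > total - 4 else "*")
        else:
            out.append(c)
    return "".join(out)

print(obfuscate_card("4111-1111-1111-1234"))
# ****-****-****-1234
```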
This operation calculates a 32-bit Cyclic Redundancy Check (CRC) value for a given input. A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data.
The algorithm uses a fixed 32-bit polynomial to calculate a short "fingerprint" of the input data, resulting in a 32-bit (four-byte) checksum. This checksum can detect certain types of errors, like single-bit errors or burst errors up to 32 bits, by recalculating the checksum and comparing it with the original.
These are the input/output expected data types for this operation:
- Data you want to compute the CRC-32 checksum for.
- CRC-32 values of your input data.
Suppose you want to calculate the CRC-32 of a specific set of data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose CRC32 Checksum.
Give your Output field a name and click Save. The operation will calculate the CRC32 codes for the strings in your input field. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
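For CRC-32, Python's zlib module implements the standard algorithm, so the operation can be sketched directly (crc32_hex is an illustrative wrapper):

```python
import zlib

def crc32_hex(value: str) -> str:
    # zlib.crc32 implements the standard CRC-32 (polynomial 0x04C11DB7).
    return f"{zlib.crc32(value.encode('utf-8')):08x}"

print(crc32_hex("123456789"))
# cbf43926  (the standard CRC-32 check value)
```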
This operation allows you to reverse the order of characters in a given string. This operation is handy for text manipulation, encoding challenges, and debugging scenarios where reversing the text is required.
These are the input/output expected data types for this operation:
- The strings of text that you want to reverse.
- Reversed strings.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to reverse a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Reverse string.
Set Reverse mode to Character.
Give your Output field a name and click Save.
With the parameters set above, the following text:
will be transformed like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
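In Character mode, the transformation is equivalent to a simple character-by-character reversal, which can be sketched in Python (an illustration, not the product's implementation):

```python
# Reverse a string character by character using slice notation
text = "2024-01-15 ERROR timeout"
reversed_text = text[::-1]
```

Reversing twice returns the original string, which makes the operation easy to verify.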
This operation allows you to search for specific patterns within your input data and replace them with new text. This operation is versatile, offering options for simple text replacements, complex pattern-based replacements using regular expressions (regex), and even case-sensitive or case-insensitive matching.
These are the input/output expected data types for this operation:
- Text or data where you want to perform find-and-replace operations.
- Output strings after the find-and-replace operations.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to replace all the occurrences of the word "error" with "issue". To do it:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Find and replace.
Set Substring to find to error.
Set Replacement to issue.
Give your Output field a name and click Save. The text with the replacements applied will be displayed in the output field.
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
In this example, given the following string:
you'll get this output:
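The same find-and-replace behavior can be sketched in Python, both as a plain substring replacement and as a case-insensitive regex variant (an illustration, not the product's implementation):

```python
import re

log_line = "Connection error: disk error detected"

# Plain, case-sensitive substring replacement
replaced = log_line.replace("error", "issue")

# Case-insensitive variant using a regular expression
replaced_ci = re.sub(r"error", "issue", "ERROR and error", flags=re.IGNORECASE)
```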
This operation filters lines of text based on specific conditions or patterns. It is useful when you want to keep only the lines that match a certain pattern or, using the Invert option, exclude the lines that do.
These are the input/output expected data types for this operation:
- The input strings to be split and filtered.
- The filtered results after applying the regular expression and delimiter.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to filter a series of strings to extract only segments that start with Error:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Filter.
Set Delimiter to Line feed
Set Regex to ^Error
Set Invert to false.
Give your Output field a name and click Save. The strings in your input field will be filtered with the specified conditions.
For example, this text:
will be filtered as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
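The parameter choices above (split on line feed, keep lines matching `^Error`, Invert false) correspond to logic like the following Python sketch (an illustration, not the product's implementation):

```python
import re

text = "Error: disk full\ninfo: started\nError: timeout\nwarning: slow"

lines = text.split("\n")          # Delimiter: Line feed
pattern = re.compile(r"^Error")   # Regex: ^Error

# Invert = false keeps only the lines that match the pattern;
# inverting would keep the lines that do NOT match instead
kept = [line for line in lines if pattern.search(line)]
```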
This operation alters the case of each letter in a given input, swapping uppercase letters to lowercase and lowercase letters to uppercase. Non-alphabetic characters remain unchanged.
These are the input/output expected data types for this operation:
- String of text containing letters (upper or lower case), numbers, symbols, or other characters.
- The resulting string with swapped case.
Suppose you want to swap the case of the characters in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Swap case.
Give your Output field a name and click Save.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
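Python's built-in `str.swapcase` performs the same transformation, which makes the behavior easy to demonstrate (an illustration, not the product's implementation):

```python
# Uppercase letters become lowercase and vice versa;
# digits and symbols are left unchanged
text = "Hello World 123!"
swapped = text.swapcase()
```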
This operation rearranges the elements of an input string, list, or dataset into a random order. It’s a useful operation for tasks like randomization, testing, or introducing entropy into datasets.
These are the input/output expected data types for this operation:
- Strings whose characters will be shuffled.
- Resulting strings after shuffling the characters.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to shuffle the characters of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Shuffle.
Set Delimiter to Nothing (separate chars).
Give your Output field a name and click Save. The characters in your input strings will be shuffled.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
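With the delimiter set to Nothing, the input is treated as individual characters and permuted at random, as in this Python sketch (an illustration, not the product's implementation):

```python
import random

text = "abcdef"

chars = list(text)      # Delimiter "Nothing" splits into single characters
random.shuffle(chars)   # In-place random permutation
shuffled = "".join(chars)
```

Note that the output varies between runs; only the multiset of characters is preserved.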
This operation makes URLs safe to share by preventing accidental clicks or access. This is especially useful in cybersecurity contexts, where you might need to share potentially malicious URLs without making them active links.
These are the input/output expected data types for this operation:
- URLs you want to defang.
- Defanged URLs.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to defang a series of events that represent URLs:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Defang URL.
Set Escape Dots to true.
Set Escape HTTP to true.
Set Escape ://* to false.
Set Process Type to Everything.
Give your Output field a name and click Save. The URLs in your input field will be defanged. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
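The parameter choices above (Escape Dots and Escape HTTP enabled, Escape ://* disabled) can be approximated with a small hypothetical helper like this (`defang_url` is an illustrative name, not the product's code):

```python
def defang_url(url: str) -> str:
    # Escape HTTP: "http" -> "hxxp" (also covers "https" -> "hxxps")
    url = url.replace("http", "hxxp")
    # Escape Dots: "." -> "[.]" so the URL is no longer clickable
    url = url.replace(".", "[.]")
    # Escape ://* is false, so "://" is left untouched
    return url

defanged = defang_url("http://example.com/path")
```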
This operation allows you to extract or manipulate parts of your input strings that match a specific regular expression pattern.
These are the input/output expected data types for this operation:
- The strings from which you want to pull out specific parts.
- The resulting strings that match your regular expression.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to extract the email addresses contained in your input strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Regex.
Set Regex to [a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}
Give your Output field a name and click Save.
With the parameters set above, the following text:
will be transformed like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
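The same pattern can be exercised with Python's `re.findall`, which returns every non-overlapping match (an illustration, not the product's implementation):

```python
import re

text = "Contact alice@example.com or bob@test.org today"

# The dot before the TLD is escaped so it matches a literal "."
pattern = r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}"
emails = re.findall(pattern, text)
```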
This operation converts all characters in a string or text to their uppercase equivalents.
These are the input/output expected data types for this operation:
- Strings to be converted.
- Strings where all lowercase letters (a-z) have been converted to their uppercase counterparts (A-Z).
Suppose you want to convert all the characters of a series of strings in your input data to uppercase:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose To upper case.
Give your Output field a name and click Save. The characters in your input strings will be converted to uppercase.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
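The transformation matches Python's built-in `str.upper` (an illustration, not the product's implementation):

```python
# Lowercase letters become uppercase; digits and symbols are unchanged
text = "warning: Disk usage at 85%"
upper_text = text.upper()
```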
This operation allows you to clean up text by eliminating unnecessary whitespace characters. This includes spaces, tabs, newlines, and other types of whitespace that might be present in your input data.
These are the input/output expected data types for this operation:
- The strings from which the whitespace characters will be removed.
- The resulting strings after removing the specified characters.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to remove spaces, line feeds and full stops from your input strings:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Remove whitespace.
Set Spaces to true.
Set Carriage returns to false.
Set Line feeds to true.
Set Tabs to false.
Set Form feeds to false.
Set Full stops to true.
Give your Output field a name and click Save.
With the parameters set above, the following text:
will be transformed like this:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
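With the parameters above, only spaces, line feeds, and full stops are stripped; carriage returns, tabs, and form feeds are left alone. In Python the equivalent logic looks like this (an illustration, not the product's implementation):

```python
# Strip spaces, line feeds, and full stops; the tab survives because
# the Tabs parameter is false in this configuration
text = "Hello world.\nSecond line.\tend"
for ch in (" ", "\n", "."):
    text = text.replace(ch, "")
```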
This operation converts all characters in a string or text to their lowercase equivalents. This transformation is often used to standardize text for tasks like case-insensitive comparisons, sorting, or data normalization.
These are the input/output expected data types for this operation:
- Strings to be converted.
- Strings where all uppercase letters (A-Z) have been converted to their lowercase counterparts (a-z). Characters that are already lowercase or are not alphabetic (e.g., numbers, symbols, whitespace) remain unchanged.
Suppose you want to convert all the characters of a series of strings in your input data to lowercase:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose To lower case.
Give your Output field a name and click Save. The characters in your input strings will be converted to lowercase.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
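The transformation matches Python's built-in `str.lower` (an illustration, not the product's implementation):

```python
# Uppercase letters become lowercase; digits and symbols are unchanged
text = "WARNING: Disk Usage At 85%"
lower_text = text.lower()
```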
This operation takes an IPv4 or IPv6 address and 'defangs' it, meaning the IP becomes invalid, removing the risk of accidentally using it as an IP address. The operation replaces certain characters with alternatives, making the IP non-functional.
These are the input/output expected data types for this operation:
- IP addresses you want to defang.
- Defanged IP addresses.
Suppose you want to defang a series of events that represent IP addresses:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Defang IP Address.
Give your Output field a name and click Save. The IP addresses in your input field will be defanged. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
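A common defanging convention replaces the separators with bracketed versions, as in this hypothetical helper (`defang_ip` is an illustrative name, not the product's code):

```python
def defang_ip(ip: str) -> str:
    # IPv4: dots -> [.]; IPv6: colons -> [:], a widely used convention
    return ip.replace(".", "[.]").replace(":", "[:]")

defanged = defang_ip("192.168.1.1")
```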
This operation extracts all the IPv4 and IPv6 addresses from a block of text or data.
These are the input/output expected data types for this operation:
- Strings with a block of IP addresses you want to extract.
- List of IP addresses.
Suppose you want to extract a list of IP addresses from your input strings. To do it:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose Extract IP Address.
Give your Output field a name and click Save.
For example, in this input text:
this will be the output list of IP addresses detected:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
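For IPv4 addresses, extraction can be sketched with a regular expression like the one below (a simplified illustration; a production extractor would also handle IPv6 and validate that each octet is 0-255):

```python
import re

text = "Requests from 10.0.0.5 and 192.168.1.20 were blocked"

# Simple IPv4 pattern: four groups of 1-3 digits separated by dots
ipv4_pattern = r"\b(?:\d{1,3}\.){3}\d{1,3}\b"
ips = re.findall(ipv4_pattern, text)
```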
This operation extracts a portion of a string based on specified start and end positions or lengths. It is a versatile tool for extracting parts of text or binary data.
These are the input/output expected data types for this operation:
- Input string or data from which you want to extract a portion.
- Specified substring.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to extract a specified substring from your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Substring.
Set Start Index to 3.
Set Length to 9.
Give your Output field a name and click Save.
For example, with the parameters specified above, you'll get the following substring from this text:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
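Assuming the Start Index is zero-based (an assumption for this sketch; check the operation's parameter description), the extraction corresponds to Python slicing:

```python
text = "abcdefghijklmno"

start_index = 3   # assumed zero-based start position
length = 9
substring = text[start_index:start_index + length]
```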
This operation takes a defanged (invalid) IPv4 or IPv6 address and 'fangs' it, restoring it to a valid, usable form.
These are the input/output expected data types for this operation:
- IP addresses you want to fang.
Input format
The input IP addresses must follow the format given in the output results of the Defang IP Address operation, that is, dots replaced by brackets (for example, 192[.]168[.]1[.]1).
- Valid IP addresses.
Suppose you want to fang a series of events that represent IP addresses:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Fang IP Address.
Give your Output field a name and click Save. The IP addresses in your input field will be made valid. For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
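Fanging simply reverses the defanging substitutions, as in this hypothetical helper (`fang_ip` is an illustrative name, not the product's code):

```python
def fang_ip(defanged: str) -> str:
    # Restore "[.]" to "." (and "[:]" to ":" for defanged IPv6 addresses)
    return defanged.replace("[.]", ".").replace("[:]", ":")

fanged = fang_ip("192[.]168[.]1[.]1")
```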
This operation arranges the elements of an input (e.g., words, numbers, or lines) in a specific order. It is useful for organizing data, analyzing patterns, or preparing datasets for further processing.
These are the input/output expected data types for this operation:
- Input strings to be sorted.
- Sorted data.
These are the parameters you need to configure to use this operation (mandatory parameters are marked with a *):
Suppose you want to sort the terms of a series of strings in your input data:
In your Pipeline, open the required Action configuration and select the input Field.
In the Operation field, choose Sort.
Set Delimiter to Comma.
Set Order to Alphabetical (case sensitive).
Set Reverse to false.
Set Reverse to false.
Give your Output field a name and click Save. The terms in your input field will be sorted following the specified conditions.
For example, this text:
will be sorted as:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
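With a comma delimiter and case-sensitive alphabetical order, the behavior matches a plain sort of the split terms. In case-sensitive (codepoint) order, uppercase letters sort before lowercase ones, as this Python sketch shows (an illustration, not the product's implementation):

```python
text = "banana,Apple,cherry"

terms = text.split(",")    # Delimiter: Comma
terms.sort()               # Alphabetical (case sensitive): "A" < "a"
sorted_text = ",".join(terms)
```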
This operation converts an IP address (either IPv4 or IPv6) to its hexadecimal representation.
These are the input/output expected data types for this operation:
- IP addresses to be converted.
- Hexadecimal representations of the input IP addresses.
Suppose you want to transform your IP addresses into their hexadecimal codes. To do it:
In your Pipeline, open the required configuration and select the input Field.
In the Operation field, choose IP to hexadecimal.
Give your Output field a name and click Save. Your IP addresses will be converted.
For example:
You can try out operations with specific values using the Input field above the operation. You can enter the value in the example above and check the result in the Output field.
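The conversion amounts to rendering the address's raw bytes as hexadecimal, which Python's standard `ipaddress` module makes straightforward (an illustration, not the product's implementation):

```python
import ipaddress

ip = ipaddress.ip_address("192.168.1.1")

# .packed gives the raw bytes of the address (4 for IPv4, 16 for IPv6);
# .hex() renders them as a hexadecimal string
hex_ip = ip.packed.hex()
```

For example, 192 is c0 in hex, 168 is a8, and each 1 is 01, giving c0a80101.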