Tenzir Node v4.25.0

Download the release on GitHub.

Add to_opensearch and to_elasticsearch sink operators

New to_opensearch and to_elasticsearch operators are now available for sending data to OpenSearch-compatible Bulk API providers, including Elasticsearch.
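
A minimal sketch of what sending events to a local OpenSearch instance could look like; the endpoint and the index option are placeholders and assumptions here, so check the operator reference for the exact parameters:

```tql
from {event: "login", user: "alice"}
// send the event to the Bulk API endpoint (the index option is assumed)
to_opensearch "localhost:9200", index="main"
```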

By @raxyte in #4871.

The new duration function parses expressions that evaluate to strings into duration values.
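
For example, a string field can be converted into a duration like this (the field name and value are illustrative):

```tql
from {raw: "1h"}
// parse the string into a duration value
timeout = duration(raw)
```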

By @raxyte in #4877.

Start your Tenzir Node with tenzir-node --tql2 or set the TENZIR_TQL2=true environment variable to enable TQL2-only mode for your node. In this mode, all pipelines run as TQL2, and old TQL1 pipelines are only available through the legacy operator. In Q1 2025, this option will be enabled by default, and later in 2025 the legacy operator and TQL1 support will be removed entirely.
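
In TQL2-only mode, an existing TQL1 pipeline can still be run by wrapping it in the legacy operator, which takes the TQL1 definition as a string. A minimal sketch, where the wrapped pipeline is just an example:

```tql
// run a TQL1 pipeline from within TQL2
legacy "export | head 10"
```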

By @dominiklohmann in #4840.

TQL2 now allows writing x not in y as an equivalent to not (x in y) for better readability.
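
For example, to drop events originating from a private network range (the field name is illustrative):

```tql
// keep only events whose source address is outside 10.0.0.0/8
where src_ip not in 10.0.0.0/8
```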

By @raxyte in #4844.

Implement ip in subnet and subnet in subnet

Whether an IP address is contained in a subnet can now be checked using expressions such as 1.2.3.4 in 1.2.0.0/16. Similarly, to check whether a subnet is included in another subnet, use 1.2.0.0/16 in 1.0.0.0/8.
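
A small sketch that evaluates both checks on a single event (field names and values are illustrative):

```tql
from {src_ip: 1.2.3.4, net: 1.2.0.0/16}
// true: the address lies within the /16
in_range = src_ip in 1.2.0.0/16
// true: the /16 is fully contained in the /8
covered = net in 1.0.0.0/8
```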

By @jachris in #4841.

We have added the from operator that allows you to easily onboard data from most sources. For example, you can now write from "https://example.com/file.json.gz" to automatically deduce the load operator, compression, and format.

We have added the to operator that allows you to easily send data to most destinations. For example, you can now write to "ftps://example.com/file.json.gz" to automatically deduce the save operator, compression, and format.

You can use the new subnet(string) function to parse strings as subnets.
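
For example, to turn a string field into a subnet value (the field name is illustrative):

```tql
from {cidr: "10.0.0.0/8"}
// parse the string into a subnet value
net = subnet(cidr)
```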

By @IyeOnline in #4805.

Several new options are now available for the load_http operator: data, json, form, skip_peer_verification, skip_hostname_verification, chunked, and multipart. The skip_peer_verification and skip_hostname_verification options are now also available for the save_http operator.
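
A sketch of fetching and parsing JSON while skipping certificate verification; the URL is a placeholder and the read_json step assumes the endpoint returns JSON:

```tql
// fetch bytes over HTTPS without verifying the peer certificate
load_http "https://example.org/data.json", skip_peer_verification=true
read_json
```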

By @mavam in #4811.

The read_csv, read_kv, read_ssv, read_tsv and read_xsv operators now support custom quote characters.

The read_csv, read_ssv, read_tsv and read_xsv operators now support escaping quotes by doubling them.

The read_csv, read_ssv, read_tsv and read_xsv operators now accept multi-character strings as separators.

The list_sep option for the read_csv, read_ssv, read_tsv and read_xsv operators can be set to an empty string, which will disable list parsing.
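
For example, to read a CSV file with list parsing disabled (the file name is a placeholder):

```tql
load_file "input.csv"
// an empty list_sep disables list parsing
read_csv list_sep=""
```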

The new string.parse_leef() function can be used to parse a string as a LEEF message.
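
A minimal sketch of parsing a LEEF message stored in a string field (the message content is illustrative):

```tql
from {msg: "LEEF:1.0|Vendor|Product|1.0|EventID|src=10.0.0.1"}
// parse the LEEF payload into a record
event = msg.parse_leef()
```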

By @IyeOnline in #4837.

We have added a new to_snowflake sink operator that writes events into a Snowflake table.

By @IyeOnline in #4589.

Numbers and string expressions containing numbers can now be converted into float values using the float function.
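
For example (the field name and value are illustrative):

```tql
from {price: "3.14"}
// convert the string to a float value
price = float(price)
```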

By @raxyte in #4882.

The deduplicate operator is now available in TQL2 to help you remove events that share a common key. The operator provides more flexibility than its TQL1 counterpart by letting the common key be any expression, not just a field name. You can also control timeouts with finer granularity.
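
A sketch of deduplicating on an expression built from several fields; the field names are illustrative, and the timeout options are described in the operator reference:

```tql
// keep one event per source/destination pair
deduplicate {src: src_ip, dst: dst_ip}
```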

By @dominiklohmann in #4850.

User-defined operators can now be written and used in TQL2. To use TQL2, start your definition with the comment // tql2, or use the --tql2 flag to opt into TQL2 as the default.
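
The body of a TQL2 user-defined operator might then look like the following sketch (the filter and fields are just examples); registering the definition happens the same way as for TQL1 user-defined operators:

```tql
// tql2
where severity == "high"
select src_ip, dst_ip, severity
```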

By @jachris in #4884.

The context::erase operator allows you to selectively remove entries from contexts.
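
A sketch of removing an entry from a context; the context name is a placeholder and the key option is an assumption, so check the operator reference for the exact signature:

```tql
from {bad_ip: 1.2.3.4}
// remove the entry keyed by bad_ip from the "threat-intel" context (key= is assumed)
context::erase "threat-intel", key=bad_ip
```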

By @dominiklohmann in #4864.

The save_email operator now accepts a tls option to specify TLS usage when establishing the SMTP connection.

By @raxyte in #4848.

Split compress/decompress into separate operators

The compress and decompress operators have been deprecated in favor of separate operators for each compression algorithm. These new operators expose additional options, such as compress_gzip level=10, format="deflate".
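
For example, compressing a file with the gzip-specific operator and the level option shown above (file names are placeholders):

```tql
load_file "eve.json"
// compress the byte stream with gzip at level 10
compress_gzip level=10
save_file "eve.json.gz"
```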

By @IyeOnline in #4876.

Make the expression evaluator support heterogeneous results

Functions can now return values of different types for the same input types. For example, x.otherwise(y) no longer requires that x has the same type as y.
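
For instance, the following now works even though the fallback has a different type than the original field (values are illustrative):

```tql
from {x: 1, y: "n/a"}, {x: null, y: "n/a"}
// yields the int 1 for the first event and the string "n/a" for the second
value = x.otherwise(y)
```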

By @jachris in #4839.

The topic argument for load_kafka and save_kafka is now a positional argument, instead of a named argument.

The array version of from that allowed you to create multiple events has been removed. Instead, you can just pass multiple records to from now.
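
For example, what previously required the array form is now written as:

```tql
// create two events directly from records
from {x: 1}, {x: 2}
```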

By @IyeOnline in #4805.

Operator invocations that directly use parentheses but continue after the closing parenthesis are no longer rejected. For example, where (x or y) and z is now parsed correctly.

By @jachris in #4885.

Fix handling of empty records in write_parquet

write_parquet now gracefully handles nested empty records by replacing them with nulls. The Apache Parquet format fundamentally does not support empty nested records.

By @dominiklohmann in #4874.

Make the expression evaluator support heterogeneous results

Metadata such as @name can now be set to a dynamically computed value that does not have to be a constant. For example, if the field event_name should be used as the event name, @name = event_name now correctly assigns each event its own name instead of using the first value for all events.

By @jachris in #4839.

The endpoint argument of the save_email operator was documented as optional but was not parsed as such. This has been fixed, and the argument is now correctly optional.

By @raxyte in #4848.

Fix pipeline manager discarding parse-time warnings

Warnings that happen very early during pipeline startup now correctly show up in the Tenzir Platform.

By @jachris in #4867.

Validate legacy expressions when splitting for predicate pushdown

Pipelines that begin with export | where followed by an expression that does not depend on the incoming events, such as export | where 1 == 1, no longer cause an internal error.

By @jachris in #4861.
