
Secret Secrets

Tenzir Node v5.7.0 introduces a new secret type that keeps its sensitive content hidden while enabling flexible secret retrieval. This release also adds support for OCSF extensions and brings several improvements to the operator.

Download the release on GitHub.

Add an option to add extra headers to the platform request


The new option tenzir.platform-extra-headers causes the Tenzir Node to add the given extra HTTP headers when establishing the connection to the Tenzir Platform, for example to pass additional authentication headers when traversing proxies.

You can set this option either via the configuration file:

tenzir:
  platform-extra-headers:
    Authentication: Bearer XXXX
    Proxy-Authentication: Bearer YYYY

or as environment variables (note the double underscore before the header name):

TENZIR_PLATFORM_EXTRA_HEADERS__AUTHENTICATION="Bearer XXXX"
TENZIR_PLATFORM_EXTRA_HEADERS__PROXY_AUTHENTICATION="Bearer YYYY"

When using the environment variable form, the Tenzir Node always lowercases the header name and converts underscores to dashes, so a header specified as TENZIR_PLATFORM_EXTRA_HEADERS__EXTRA_HEADER=extra is sent as extra-header: extra in the HTTP request.

By @lava in #5287.

The ocsf::apply operator now supports OCSF extensions. This means that metadata.extensions is now also taken into account for casting and validation. At the moment, only the extensions versioned together with OCSF are supported. This includes the win and linux extensions.

By @jachris in #5306.

Enhanced file renaming in from_file operator


The from_file operator now provides enhanced file renaming capabilities when using the rename parameter. These improvements make file operations more robust and user-friendly.

Directory creation: The operator now automatically creates intermediate directories when renaming files to paths that don’t exist yet. For example, if you rename a file to /new/deep/directory/structure/file.txt, all necessary parent directories (/new, /new/deep, /new/deep/directory, /new/deep/directory/structure) will be created automatically.

from_file "/data/*.json", rename=path => f"/processed/by-date/2024/01/{path.file_name()}"

Trailing slash handling: When the rename target ends with a trailing slash, the operator now automatically appends the original filename. This makes it easy to move files to different directories while preserving their names.

// This will rename "/input/data.json" to "/output/data.json"
from_file "/input/*.json", rename=path => "/output/"

Previously, you would have needed to manually extract and append the filename:

// Old approach - no longer necessary
from_file "/input/*.json", rename=path => f"/output/{path.file_name()}"

By @dominiklohmann in #5303.

Preserving variants when using ocsf::apply


The ocsf::apply operator now has an additional preserve_variants option, which preserves free-form objects as-is instead of JSON-encoding them. Most notably, this applies to the unmapped field. For example, if unmapped is {x: 42}, ocsf::apply would normally JSON-encode it so that it ends up with the value "{\"x\": 42}". With ocsf::apply preserve_variants=true, unmapped simply stays a record. Note that this means that the event schema changes whenever the type of unmapped changes.
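To illustrate the difference, here is a minimal sketch using the hypothetical unmapped value from the example above:

```tql
// Default behavior: unmapped is JSON-encoded into a string.
ocsf::apply
// unmapped: "{\"x\": 42}"

// With the new option: unmapped stays a record.
ocsf::apply preserve_variants=true
// unmapped: {x: 42}
```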

By @jachris in #5312.

Tenzir now features a new first class type: secret. As the name suggests, this type contains a secret value that cannot be accessed by a user:

from { s: secret("my-secret") }
{
  s: "***", // Does not render the secret value
}

A secret is created by the secret function, whose behavior changes with this release.

Operators now accept secrets where appropriate, most notably for username and password arguments, but also for URLs:

let $url = "https://" + secret("splunk-host") + ":8088"
to_splunk $url, hec_token=secret("splunk-hec-token")

However, a string is implicitly convertible to a secret in an operator argument, meaning that you do not have to configure a secret if you are fine with just a string literal:

to_splunk "https://localhost:8088", hec_token="my-plaintext-token"

Along with this feature in the Tenzir Node, we introduced secret stores to the Tenzir Platform. You can now centrally manage secrets in the platform, which will be usable by all nodes within the workspace. Read more about this in the release notes for the Tenzir Platform and our Explanations page on secrets.

By @IyeOnline in #5065, #5197.

The secret function now returns a secret, the strong type introduced in this release. Previously it returned a plaintext string. This change protects secrets from being leaked, as only operators can resolve secrets now.

If you want to retain the old behavior, you can enable the configuration option tenzir.legacy-secret-model. In this mode, the secret function can only resolve secrets from the Tenzir Node’s configuration file and cannot access any external secret store.
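Assuming the usual tenzir.* configuration layout shown earlier in these notes, enabling the legacy model would look like this in the configuration file:

```yaml
tenzir:
  # Opt back into the old behavior: secret() resolves to a plaintext
  # string from the node's configuration file.
  legacy-secret-model: true
```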

By @IyeOnline in #5065, #5197.

Kafka operators now automatically configure SASL mechanism for AWS IAM


The load_kafka and save_kafka operators now automatically set the sasl.mechanism option to the expected OAUTHBEARER when using the aws_iam option. If the mechanism has already been set to a different value, an error is emitted.

By @raxyte in #5307.

The pipelines defined as part of the compaction configuration can now use TQL2. For backwards-compatibility, TQL1 pipelines still work, but they are deprecated and emit a warning on start-up.

By @jachris in #5302.

Fixed shutdown hang during storage optimization


Nodes periodically merge and optimize their storage over time. We fixed a hang on shutdown for nodes while this process was ongoing.

By @IyeOnline in #5301.

The from_file operator no longer fails when its per-file pipeline argument is a sink. Before this fix, the following pipeline which opens a new TCP connection per file would not work:

from_file "./*.csv" {
  read_csv
  write_ndjson
  save_tcp "localhost:8080"
}

By @dominiklohmann in #5303.
