
Operators

Tenzir comes with a wide range of built-in pipeline operators.
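Operators chain into pipelines, each consuming the output of the previous one. As an illustrative sketch (the file names and field names below are assumptions, not fixed defaults):

from "data.json"
where name.starts_with("John")
summarize name, sum(amount)
to "output.json"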

Shows the least common values.
rare auth.token
Reverses the event order.
reverse
Sorts events by the given expressions.
sort name, -abs(transaction)
Groups events and applies aggregate functions to each group.
summarize name, sum(amount)
Shows the most common values.
top user
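For instance, a sketch that surfaces the most active users among stored events (the field name is an assumption):

export
top user
head 20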
Plots events on an area chart.
chart_area
Plots events on a bar chart.
chart_bar
Plots events on a line chart.
chart_line
Plots events on a pie chart.
chart_pie
Publishes events to a channel with a topic.
publish "topic"
Subscribes to events from a channel with a topic.
subscribe "topic"
Creates a Bloom filter context.
context::create_bloom_filter "ctx", capacity=1Mi, fp_probability=0.01
Creates a GeoIP context.
context::create_geoip "ctx", db_path="GeoLite2-City.mmdb"
Creates a lookup table context.
context::create_lookup_table "ctx"
Enriches events with data from a context.
context::enrich "ctx", key=x
Removes entries from a context.
context::erase "ctx", key=x
Inspects the contents of a context.
context::inspect "ctx"
Lists all contexts.
context::list
Loads context state.
context::load "ctx"
Deletes a context.
context::remove "ctx"
Resets a context.
context::reset "ctx"
Saves context state.
context::save "ctx"
Updates a context with new data.
context::update "ctx", key=x, value=y
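A typical context workflow creates a lookup table once, keeps it updated from a feed, and enriches events against it. A sketch, reusing the field names from the examples above and hypothetical topic names:

// create the context once
context::create_lookup_table "ctx"

// keep it updated from a feed
subscribe "feed"
context::update "ctx", key=x, value=y

// enrich incoming events
subscribe "events"
context::enrich "ctx", key=x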
Filters the input with Sigma rules and outputs matching events.
sigma "/tmp/rules/"
Executes YARA rules on byte streams.
yara "/path/to/rules", blockwise=true
Compresses a stream of bytes.
compress "zstd"
Compresses a stream of bytes using Brotli compression.
compress_brotli, level=10
Compresses a stream of bytes using bz2 compression.
compress_bz2, level=9
Compresses a stream of bytes using gzip compression.
compress_gzip, level=8
Compresses a stream of bytes using lz4 compression.
compress_lz4, level=7
Compresses a stream of bytes using zstd compression.
compress_zstd, level=6
Decompresses a stream of bytes.
decompress "gzip"
Decompresses a stream of bytes in the Brotli format.
decompress_brotli
Decompresses a stream of bytes in the Bzip2 format.
decompress_bz2
Decompresses a stream of bytes in the Gzip format.
decompress_gzip
Decompresses a stream of bytes in the Lz4 format.
decompress_lz4
Decompresses a stream of bytes in the Zstd format.
decompress_zstd
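The format-specific variants pair naturally with loaders and parsers. A sketch for ingesting a gzip-compressed JSON file (the file name is an assumption):

load_file "data.json.gz"
decompress_gzip
read_json
import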
Executes Python code against each event of the input.
python "self.x = self.y"
Executes a system command and hooks its stdin and stdout into the pipeline.
shell "echo hello"
Drops events and emits a warning if the invariant is violated.
assert name.starts_with("John")
Emits a warning if the pipeline does not have the expected throughput.
assert_throughput 1000, within=1s
Removes duplicate events based on a common key.
deduplicate src_ip
Limits the input to the first n events.
head 20
Dynamically samples events from an event stream.
sample 30s, max_samples=2k
Keeps a range of events within the interval [begin, end) stepping by stride.
slice begin=10, end=30
Limits the input to the last n events.
tail 20
Limits the input to n events per unique schema.
taste 1
Keeps only events for which the given predicate is true.
where name.starts_with("John")
Runs a pipeline periodically according to a cron expression.
cron "* */10 * * * MON-FRI" { from "https://example.org" }
Delays events relative to a given start time, with an optional speedup.
delay ts, speed=2.5
Discards all incoming events.
discard
Runs a pipeline periodically at a fixed interval.
every 10s { summarize sum(amount) }
Executes a subpipeline with a copy of the input.
fork { to "copy.json" }
Routes the data to one of multiple subpipelines.
load_balance $over { publish $over }
Does nothing with the input.
pass
Repeats the input a number of times.
repeat 100
Limits the bandwidth of a pipeline.
throttle 100M, within=1min
Shows file information for a given directory.
files "/var/log/", recurse=true
Shows a snapshot of available network interfaces.
nics
Shows a snapshot of running processes.
processes
Shows a snapshot of open sockets.
sockets
Uses Tenzir's REST API directly from a pipeline.
api "/pipeline/list"
Controls the batch size of events.
batch timeout=1s
An in-memory buffer to improve handling of data spikes in upstream operators.
buffer 10M, policy="drop"
An in-memory cache shared between pipelines.
cache "w01wyhTZm3", ttl=10min
Provides a compatibility fallback to TQL1 pipelines.
legacy "chart area"
Forces a pipeline to run locally.
local { sort foo }
Replaces the input with metrics describing the input.
measure
Forces a pipeline to run remotely at a node.
remote { version }
Makes events available under the /serve REST API endpoint.
serve "abcde12345"
Treats all warnings as errors.
strict { assert false }
Removes ordering assumptions from a pipeline.
unordered { read_ndjson }
Removes fields from the event.
drop name, metadata.id
Adds a field with the number of preceding events.
enumerate num
Sends HTTP/1.1 requests and forwards the response.
http "example.com"
Moves values from one field to another, removing the original field.
move id=parsed_id, ctx.message=incoming.status
Selects some values and discards the rest.
select name, id=metadata.id
Assigns a value to a field, creating it if necessary.
name = "Tenzir"
Adjusts timestamps relative to a given start time, with an optional speedup.
timeshift ts, start=2020-01-01
Returns a new event for each member of a list or a record in an event, duplicating the surrounding event.
unroll names
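These operators combine to reshape events before storing or forwarding them. A sketch that reuses field names from the examples above:

from "data.json"
name = "Tenzir"
move id=parsed_id
select name, id
to "output.json"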
Casts incoming events to their OCSF type.
ocsf::apply
Installs a package.
package::add "suricata-ocsf"
Shows installed packages.
package::list
Uninstalls a package.
package::remove "suricata-ocsf"
Parses bytes as BITZ format.
read_bitz
Parses an incoming Common Event Format (CEF) stream into events.
read_cef
Reads CSV (Comma-Separated Values) from a byte stream.
read_csv null_value="-"
Parses an incoming byte stream into events using a string as the delimiter.
read_delimited "|"
Parses an incoming byte stream into events using a regular expression as the delimiter.
read_delimited_regex r"\s+"
Parses an incoming Feather byte stream into events.
read_feather
Parses an incoming GELF stream into events.
read_gelf
Parses lines of input with a grok pattern.
read_grok "%{IP:client} %{WORD:action}"
Parses an incoming JSON stream into events.
read_json arrays_of_objects=true
Reads key-value pairs from a byte stream.
read_kv r"(\s+)[A-Z_]+:", r":\s*"
Parses an incoming LEEF stream into events.
read_leef
Parses an incoming byte stream into events, one line at a time.
read_lines
Parses an incoming NDJSON (newline-delimited JSON) stream into events.
read_ndjson
Reads events from a Parquet byte stream.
read_parquet
Reads raw network packets in PCAP file format.
read_pcap
Reads SSV (Space-Separated Values) from a byte stream.
read_ssv header="name count"
Parses an incoming Suricata EVE JSON stream into events.
read_suricata
Parses an incoming Syslog stream into events.
read_syslog
Reads TSV (Tab-Separated Values) from a byte stream.
read_tsv auto_expand=true
Reads XSV from a byte stream.
read_xsv ";", ":", "N/A"
Parses an incoming YAML stream into events.
read_yaml
Parses an incoming Zeek JSON stream into events.
read_zeek_json
Parses an incoming Zeek TSV stream into events.
read_zeek_tsv
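Parsers turn the byte streams produced by loaders into events. A sketch that accepts syslog over TCP and stores the parsed events (address and port are assumptions):

load_tcp "0.0.0.0:514" { read_syslog }
import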
Summarizes the activity of pipelines.
pipeline::activity range=1d, interval=1h
Starts a pipeline in the node.
pipeline::detach { … }
Shows managed pipelines.
pipeline::list
Starts a pipeline in the node and waits for it to complete.
pipeline::run { … }
Writes events in BITZ format.
write_bitz
Transforms event stream to CSV (Comma-Separated Values) byte stream.
write_csv
Transforms the input event stream to Feather byte stream.
write_feather
Transforms the input event stream to a JSON byte stream.
write_json
Writes events in a Key-Value format.
write_kv
Writes the values of each event as a single line.
write_lines
Transforms the input event stream to a Newline-Delimited JSON byte stream.
write_ndjson
Transforms event stream to a Parquet byte stream.
write_parquet
Transforms event stream to PCAP byte stream.
write_pcap
Transforms event stream to SSV (Space-Separated Values) byte stream.
write_ssv
Writes events as syslog.
write_syslog
Transforms the input event stream to a TQL notation byte stream.
write_tql
Transforms event stream to TSV (Tab-Separated Values) byte stream.
write_tsv
Transforms event stream to XSV byte stream.
write_xsv
Transforms the input event stream to YAML byte stream.
write_yaml
Transforms event stream into Zeek Tab-Separated Value byte stream.
write_zeek_tsv
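Writers are the inverse of parsers: they turn events back into bytes for a saver. A sketch that exports stored events as NDJSON to a file (the path is an assumption):

export
write_ndjson
save_file "/tmp/out.json"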
Loads a byte stream via AMQP messages.
load_amqp
Loads bytes from Azure Blob Storage.
load_azure_blob_storage "abfs://container/file"
Loads the contents of the file at path as a byte stream.
load_file "/tmp/data.json"
Loads a byte stream via FTP.
load_ftp "ftp.example.org"
Loads bytes from a Google Cloud Storage object.
load_gcs "gs://bucket/object.json"
Subscribes to a Google Cloud Pub/Sub subscription and obtains bytes.
load_google_cloud_pubsub project_id="my-project"
Loads a byte stream via HTTP.
load_http "example.org", params={n: 5}
Loads a byte stream from an Apache Kafka topic.
load_kafka topic="example"
Loads bytes from a network interface card (NIC).
load_nic "eth0"
Loads from an Amazon S3 object.
load_s3 "s3://my-bucket/obj.csv"
Loads bytes from Amazon SQS queues.
load_sqs "sqs://tenzir"
Accepts bytes from standard input.
load_stdin
Loads bytes from a TCP or TLS connection.
load_tcp "0.0.0.0:8090" { read_json }
Loads bytes from a UDP socket.
load_udp "0.0.0.0:8090"
Receives ZeroMQ messages.
load_zmq
Obtains events from a URI, inferring the source, compression, and format.
from "data.json"
Reads one or multiple files from a filesystem.
from_file "s3://data/**.json"
Receives events via Fluent Bit.
from_fluent_bit "opentelemetry"
Sends and receives HTTP/1.1 requests.
from_http "0.0.0.0:8080
Receives events via Opensearch Bulk API.
from_opensearch
Submits VQL to a Velociraptor server and returns the response as events.
from_velociraptor subscribe="Windows"
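As a sketch, from_http can serve as a webhook receiver that forwards incoming events to a topic (address, port, and topic name are assumptions):

from_http "0.0.0.0:8080"
publish "topic"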
Retrieves diagnostic events from a Tenzir node.
diagnostics
Retrieves metrics events from a Tenzir node.
metrics "cpu"
Shows the node's OpenAPI specification.
openapi
Shows all available plugins and built-ins.
plugins
Shows the current version.
version
Retrieves events from a Tenzir node.
export
Retrieves all fields stored at a node.
fields
Imports events into a Tenzir node.
import
Retrieves metadata about events stored at a node.
partitions src_ip == 1.2.3.4
Retrieves all schemas for events stored at a node.
schemas
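Import and export bracket a node's storage engine. A sketch that ingests a file and later queries it back out (file name and filter are assumptions):

// ingest
from "data.json"
import

// query
export
where src_ip == 1.2.3.4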
Saves a byte stream via AMQP messages.
save_amqp
Saves bytes to Azure Blob Storage.
save_azure_blob_storage "abfs://container/file"
Saves bytes through an SMTP server.
save_email "user@example.org"
Writes a byte stream to a file.
save_file "/tmp/out.json"
Saves a byte stream via FTP.
save_ftp "ftp.example.org"
Saves bytes to a Google Cloud Storage object.
save_gcs "gs://bucket/object.json"
Publishes to a Google Cloud Pub/Sub topic.
save_google_cloud_pubsub project_id="my-project"
Sends a byte stream via HTTP.
save_http "example.org/api"
Saves a byte stream to an Apache Kafka topic.
save_kafka topic="example"
Saves bytes to an Amazon S3 object.
save_s3 "s3://my-bucket/obj.csv"
Saves bytes to Amazon SQS queues.
save_sqs "sqs://tenzir"
Writes a byte stream to standard output.
save_stdout
Saves bytes to a TCP or TLS connection.
save_tcp "0.0.0.0:8090", tls=true
Saves bytes to a UDP socket.
save_udp "0.0.0.0:8090"
Sends bytes as ZeroMQ messages.
save_zmq
Saves to a URI, inferring the destination, compression, and format.
to "output.json"
Sends events to Amazon Security Lake (ASL).
to_asl "s3://…"
Sends events to the Microsoft Azure Logs Ingestion API.
to_azure_log_analytics tenant_id="...", workspace_id="..."
Sends events to a ClickHouse table.
to_clickhouse table="my_table"
Sends events via Fluent Bit.
to_fluent_bit "elasticsearch"
Sends events to Google Cloud Logging.
to_google_cloud_logging
Sends unstructured events to a Google SecOps Chronicle instance.
to_google_secops
Writes events to a URI using hive partitioning.
to_hive "s3://…", partition_by=[x]
Sends events to an OpenSearch-compatible Bulk API.
to_opensearch "localhost:9200", …
Sends events to a Snowflake database.
to_snowflake account_identifier="…"
Sends events to a Splunk HTTP Event Collector (HEC).
to_splunk "localhost:8088", …
