Version: v4.25

to

Saves to a URI, inferring the destination, compression, and format.

to uri:string, [saver_args…] [ { … } ]
Use to if you can

The to operator is designed as an easy way to get data out of Tenzir, without having to manually write the separate steps of data formatting, compression and writing.

Description

The to operator is an easy way to get data out of Tenzir into another resource. It will try to infer the connector, compression, and format based on the given URI.

uri: string

The URI to save to.

saver_args… (optional)

An optional set of arguments passed to the saver. This can be used, e.g., to pass credentials to a connector:

to "https://example.org/file.json", headers={Token: "XYZ"}

{ … } (optional)

The optional pipeline argument allows for explicitly specifying how to compress and write data. By default, the pipeline is inferred based on a set of rules.

If inference is not possible or not sufficient, this argument can be used to control compression and writing. Providing this pipeline disables the inference.
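
For example, the following sketch (the output path is hypothetical) spells out the writer and compression explicitly because the file ending gives no hint; the save_file connector is still inferred from the path:

to "path/to/output.data" {
  // explicit writer and compression; saving is still inferred from the URI
  write_json
  compress_gzip
}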

Explanation

Saving Tenzir data into some resource consists of three steps:

1. Writing the events in the desired format
2. Compressing the resulting bytes (optional)
3. Saving those bytes to the destination

The to operator tries to infer all three steps from the given URI.

Writing

The format to write is inferred from the file ending. The supported file endings are those of our write_* operators, listed under File extensions below.

If you want to provide additional arguments to the writer, you can use the pipeline argument to specify the writing manually.
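
For instance, the following sketch passes an option to the writer inside the pipeline argument; the null_value option is an assumption used purely for illustration, so check the write_csv page for its actual parameters:

to "path/to/output.csv" {
  write_csv null_value="-"  // assumed option, for illustration only
}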

Compressing

The compression, just like the format, is inferred from the file ending in the URI. Under the hood, this uses the compress_* operators. Supported compressions can be found in the list of compression extensions below.

The compression step is optional and only happens if a compression could be inferred. If you want to write with specific compression settings, you can use the pipeline argument to specify the compression manually.
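
For example, the following sketch (with a hypothetical path) sets the compression explicitly; the level option is assumed here, so consult the compress_gzip page for its actual parameters:

to "path/to/output.json.gz" {
  write_json
  compress_gzip level=9  // assumed option; see the compress_gzip docs
}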

Saving

The connector is inferred from the URI's scheme://. If no scheme is present, the to operator attempts to save to the local filesystem.
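
For example, the following sketch has no scheme in its URI and therefore saves to the local filesystem via save_file, just as if the URI were "file://output.json":

to "output.json"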

Supported Deductions

URI schemes

| Scheme | Operator | Example |
|---|---|---|
| abfs, abfss | save_azure_blob_storage | to "abfs://path/to/file.json" |
| amqp | save_amqp | to "amqp://… |
| elasticsearch | to_opensearch | to "elasticsearch://… |
| file | save_file | to "file://path/to/file.json" |
| fluent-bit | to_fluent_bit | to "fluent-bit://elasticsearch" |
| ftp, ftps | save_ftp | to "ftp://example.com/file.json" |
| gcps | save_google_cloud_pubsub | to "gcps://project_id/subscription_id" { … } |
| http, https | save_http | to "http://example.com/file.json" |
| inproc | save_zmq | to "inproc://127.0.0.1:56789" { write_json } |
| kafka | save_kafka | to "kafka://topic" { write_json } |
| opensearch | to_opensearch | to "opensearch://… |
| s3 | save_s3 | to "s3://bucket/file.json" |
| sqs | save_sqs | to "sqs://my-queue" { write_json } |
| tcp | save_tcp | to "tcp://127.0.0.1:56789" { write_json } |
| udp | save_udp | to "udp://127.0.0.1:56789" { write_json } |
| zmq | save_zmq | to "zmq://127.0.0.1:56789" { write_json } |

Please see the respective operator pages for details on the URI's locator format.

File extensions

Format

The to operator can deduce the file format based on these file endings:

| Format | File Endings | Operator |
|---|---|---|
| CSV | .csv | write_csv |
| Feather | .feather, .arrow | write_feather |
| JSON | .json | write_json |
| NDJSON | .ndjson, .jsonl | write_ndjson |
| Parquet | .parquet | write_parquet |
| Pcap | .pcap | write_pcap |
| SSV | .ssv | write_ssv |
| TSV | .tsv | write_tsv |
| YAML | .yaml | write_yaml |

Compression

The to operator can deduce the following compressions based on these file endings:

| Compression | File Endings |
|---|---|
| Brotli | .br, .brotli |
| Bzip2 | .bz2 |
| Gzip | .gz, .gzip |
| LZ4 | .lz4 |
| Zstd | .zst, .zstd |

Example transformation:

to operator:

to "myfile.json.gz"

Effective pipeline:

write_json
compress_gzip
save_file "myfile.json.gz"
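
The same inference works for remote destinations. Assuming a hypothetical S3 bucket name, a Zstd-compressed NDJSON upload would expand along these lines:

to operator:

to "s3://my-bucket/events.ndjson.zst"

Effective pipeline:

write_ndjson
compress_zstd
save_s3 "s3://my-bucket/events.ndjson.zst"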

Examples

Save to a local file

to "path/to/my/output.csv"

Save to a compressed file

to "path/to/my/output.csv.bz2"

See Also

from