This guide shows you how to create integration tests with the
tenzir-test framework. You’ll set up a standalone
repository, write test scenarios, and record reference output to verify your
pipelines work as expected. If you already have tests and want to run them, see
the run tests guide.
## Prerequisites

- Python 3.12 or newer.
- `uv` installed locally.
- A working installation of Tenzir. The harness automatically detects `tenzir` and `tenzir-node` using this precedence:
  1. `TENZIR_BINARY`/`TENZIR_NODE_BINARY` environment variables
  2. Local binary on `PATH`
  3. Fallback to `uvx tenzir`/`uvx --from tenzir tenzir-node` when `uv` is installed

Most users need no configuration because the harness uses `uvx` to fetch Tenzir on demand.
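If you need to pin specific binaries instead, point the harness at them via the environment variables above. A minimal sketch, assuming a local installation under /opt/tenzir (adjust the paths to your setup):

```sh
export TENZIR_BINARY=/opt/tenzir/bin/tenzir
export TENZIR_NODE_BINARY=/opt/tenzir/bin/tenzir-node
```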
## Step 1: Scaffold a project

Create a clean directory that holds nothing but integration tests and their shared assets. The harness treats this directory as the project root.
```sh
mkdir demo
cd demo
```
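By the end of this guide, the project will look roughly like this (a partial sketch showing only the files created in the following steps):

```
demo/
├── inputs/
│   └── alerts.ndjson
└── tests/
    ├── high-severity.tql
    ├── high-severity.txt
    ├── parsing/
    │   ├── csv.stdin
    │   └── csv.tql
    └── node/
        ├── ping.tql
        └── test.yaml
```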
## Step 2: Check the harness

Run the harness through `uvx` to make sure the tooling works without setting up a virtual environment. `uvx` downloads and caches the latest release when needed.
```sh
uvx tenzir-test --help
```

If the command succeeds, you’re ready to add tests.
## Step 3: Add shared data

Populate `inputs/` with artifacts that tests will read. The example below stores a short NDJSON dataset that models a few alerts.

```json
{"id": 1, "severity": 5, "message": "Disk usage above 90%"}
{"id": 2, "severity": 2, "message": "Routine backup completed"}
{"id": 3, "severity": 7, "message": "Authentication failure on admin"}
```

Save the snippet as `inputs/alerts.ndjson`.
## Step 4: Author a pipeline test

Create your first scenario under `tests/`. The harness discovers tests recursively, so you can organize them by feature or risk level. Here, you create `tests/high-severity.tql`. The pipeline reads the shared dataset through `TENZIR_INPUTS`, the environment variable the harness points at the project’s `inputs/` directory:
```tql
from_file f"{env("TENZIR_INPUTS")}/alerts.ndjson"
where severity >= 5
project id, message
sort id
```

The harness also injects a unique scratch directory into `TENZIR_TMP_DIR` while each test executes. Use it for transient files you do not want under version control; pass `--keep` when you run `tenzir-test` if you need to inspect the generated artifacts afterwards.
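For example, to rerun the scenario and keep its scratch artifacts around for inspection:

```sh
uvx tenzir-test --keep tests/high-severity.tql
```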
### Stream raw output while iterating

During early iterations you may want to inspect command output before you record reference artifacts. Enable passthrough mode via `--passthrough` (`-p`) to pipe the `tenzir` process output directly to your terminal while the harness still provisions fixtures and environment variables:
```sh
uvx tenzir-test --passthrough tests/high-severity.tql
```

The harness enforces the exit code but skips comparisons, letting you decide when to capture the baseline with `--update`.
## Step 5: Capture the reference output

Run the harness once in update mode to execute the pipeline and write the expected output next to the test.
```sh
uvx tenzir-test --update
```

The command produces `tests/high-severity.txt` with the captured stdout.
{"id":1,"message":"Disk usage above 90%"}{"id":3,"message":"Authentication failure on admin"}Review the reference file, adjust the pipeline if needed, and rerun --update
until you are satisfied with the results. Commit the .tql test and .txt
baseline together so future runs can compare against known-good output. At this
point you can run the test suite without
--update to verify that the actual output matches the baseline.
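For example, a plain invocation runs every discovered test and compares its output against the recorded baselines:

```sh
uvx tenzir-test
```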
## Step 6: Provide stdin input

Some tests need data piped to stdin rather than read from files. Place a `.stdin` file next to the test to provide this content automatically. This simplifies TQL tests by letting pipelines start with a parser directly.

### TQL pipelines with stdin

Create `tests/parsing/csv.stdin` with your test data:

```csv
name,count
alice,42
bob,23
```

Create `tests/parsing/csv.tql` that reads from stdin:

```tql
read_csv
sort name
```

Run the test with `--update` to capture the baseline:

```sh
uvx tenzir-test --update tests/parsing/csv.tql
```

The harness pipes the CSV data to tenzir’s stdin, so `read_csv` processes it directly. This is an alternative to using `.input` files with `from_file env("TENZIR_INPUT")`; choose whichever fits your test better.
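With the sample data above, the recorded `tests/parsing/csv.txt` baseline should look roughly like this (a sketch, assuming the default JSON rendering shown in step 5):

```json
{"name":"alice","count":42}
{"name":"bob","count":23}
```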
### Shell scripts with stdin

The same mechanism works for shell scripts. Create `tests/shell/echo.sh`:

```sh
#!/bin/sh
cat
```

Create `tests/shell/echo.stdin` with the input data:

```
Hello from stdin!
```

The harness pipes the contents of `echo.stdin` to the script’s stdin.
### Combine stdin with input files

Tests can use both `.stdin` and `.input` files together. The stdin content gets piped to the process, while the input file path is available via `TENZIR_INPUT`.

Create `tests/shell/process.sh`:

```sh
#!/bin/sh
echo "from stdin:"
cat
echo "from TENZIR_INPUT:"
cat "$TENZIR_INPUT"
```

Create the corresponding files: `tests/shell/process.stdin` with

```
stdin content
```

and `tests/shell/process.input` with

```
input file content
```

Run `--update` to capture both sources in the baseline output. The output looks like this:

```
from stdin:
stdin content
from TENZIR_INPUT:
input file content
```

## Step 7: Introduce a fixture
Section titled “Step 7: Introduce a fixture”Fixtures let you bootstrap external resources and expose their configuration
through environment variables. Add a simple node-driven test to exercise a
running Tenzir node.
Create tests/node/ping.tql with the following contents:
---fixtures: [node]timeout: 10---
// Get the version from the running node.remote { version}Because the test needs a node to run, include the built-in node fixture and
give it a reasonable timeout. The fixture starts tenzir-node, injects connection
details into the environment, and tears the process down after the run. Capture
the baseline via --update just like before.
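For example, to record the baseline for just this scenario:

```sh
uvx tenzir-test --update tests/node/ping.tql
```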
The fixture launches `tenzir-node` from the directory that owns the test file, so a `tenzir-node.yaml` placed next to the scenario can refer to files with relative paths (for example `../inputs/alerts.ndjson`).
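As an illustration, a minimal `tenzir-node.yaml` might tweak the node’s log output. This is a hypothetical sketch assuming the standard Tenzir configuration schema:

```yaml
# tests/node/tenzir-node.yaml
tenzir:
  # Keep fixture logs quiet. Any relative paths you add here resolve against
  # tests/node/, because the fixture launches tenzir-node from this directory.
  console-verbosity: warning
```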
### Reuse fixtures with suites

When several tests should share the same fixture lifecycle, promote their directory to a suite. Add `suite:` to the directory’s `test.yaml` and keep the fixture selection alongside the other defaults:

```yaml
suite: smoke-http
fixtures: [http]
timeout: 45
retry: 2
```

Key behaviour:
- Suites are directory-scoped. Once a `test.yaml` declares `suite`, every test in that directory and its subdirectories joins automatically. Move the scenarios that should remain independent into a sibling directory (see the layout sketch after this list).
- Suites run sequentially on a single worker. The harness activates the shared fixtures once, executes members in lexicographic order of their relative paths, and tears the fixtures down afterwards. Other suites (and standalone tests) still run in parallel when `--jobs` allows it.
- Per-test frontmatter cannot introduce `suite`, and suite members may not define their own `fixtures` or `retry`. Keep those policies in the directory defaults so every member agrees on the shared lifecycle. Outside a suite, frontmatter can still set `fixtures`, `retry`, or `timeout` as before.
- Tests can override other keys (for example `inputs:` or additional metadata) on a per-file basis when necessary.
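A hypothetical layout illustrating the scoping rules (the file names are made up):

```
tests/
├── http/              # test.yaml declares `suite: smoke-http`
│   ├── test.yaml
│   ├── get.tql        # suite member
│   └── post.tql       # suite member
└── standalone/        # sibling directory, not part of the suite
    └── ping.tql
```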
Run the `http` directory that defines the suite when you iterate on it:

```sh
uvx tenzir-test tests/http
```

Selecting a single file inside that suite fails fast with a descriptive error, which keeps the fixture lifecycle predictable and prevents partial runs from leaving shared state behind.
### Drive fixtures manually

When you switch to the Python runner, you can drive fixtures manually. The controller API makes it easy to start, stop, or even crash the same node fixture inside a single test:

```python
# runner: python
# fixtures: [node]

import signal  # useful for fault-injection scenarios, e.g. signalling the node

# Context-manager style: `with` automatically calls `start()` and `stop()` on
# the fixture.
with acquire_fixture("node") as node:
    tenzir = Executor.from_env(node.env)
    tenzir.run("remote { version }")  # talk to the running node

# Without the context manager, you need to call `start()` and `stop()` manually.
node.start()
Executor.from_env(node.env).run("version")
node.stop()
```

This imperative style complements the declarative `fixtures: [node]` flow and is especially useful for fault-injection scenarios. The harness preloads helpers like `acquire_fixture`, `Executor`, and `fixtures()`, so Python-mode tests can call them directly.
When you restart the same controller, the node keeps using the state and cache directories it created during the first `start()`. Those paths (exported via `TENZIR_NODE_STATE_DIRECTORY` and `TENZIR_NODE_CACHE_DIRECTORY`) live inside the test’s scratch directory by default and are cleaned up automatically when the controller goes out of scope. Acquire a fresh controller when you need a brand-new workspace.
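A minimal sketch, reusing the preloaded helpers from the example above:

```python
# A second controller starts with its own state and cache directories.
with acquire_fixture("node") as fresh_node:
    Executor.from_env(fresh_node.env).run("version")
```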
## Step 8: Organize defaults with test.yaml

As suites grow, you can extract shared configuration into directory-level defaults. Place a `tests/node/test.yaml` file with convenient settings:

```yaml
fixtures: [node]
timeout: 120
# Optional: reuse datasets that live in tests/data/ instead of the project root.
inputs: ../data
```

The harness merges this mapping into every test under `tests/node/`. Relative paths resolve against the directory that owns the YAML file, so `inputs: ../data` points at `tests/data/`. Individual files still override keys in their frontmatter when necessary.
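For instance, a single test under `tests/node/` could shorten the directory’s default timeout in its frontmatter while still inheriting the `node` fixture. A sketch with an arbitrary value:

```tql
---
timeout: 30
---

remote {
  version
}
```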
## Next steps

You now have a project that owns its inputs, tests, fixtures, and baselines. From here you can:

- Run tests to learn about executing the suite, selecting tests, and setting up CI.
- Add custom runners under `runners/` when you need specialized logic around `tenzir` invocations.
- Build Python fixtures that publish or verify data through the helper APIs in `tenzir_test.fixtures`.
- Explore coverage collection by passing `--coverage` to the harness.
Refer back to the test framework reference whenever you need deeper details about runners, fixtures, or configuration knobs.