
This guide shows you how to add tests to your package. You’ll learn how to write test files, use inline inputs, and run the test harness.

Place test files in the tests directory of your package:

  • acme/
    • tests/
      • normalize.input (sample data for the test)
      • normalize.tql (test file)
      • normalize.txt (expected output baseline)
      • context/
        • test.yaml (suite configuration)
        • 01-update.tql (first test in suite)
        • 01-update.txt
        • 02-inspect.tql (second test in suite)
        • 02-inspect.txt

Each test consists of:

  • A .tql file containing the test pipeline
  • An optional .input file with test-specific data
  • A .txt file with the expected output baseline

Inline inputs are the preferred way to provide test data. Place a .input file next to your test file with the same base name:

  • tests/
    • parse-csv.input (input data)
    • parse-csv.tql (test file)
    • parse-csv.txt (expected baseline)

The harness exposes the input file path via TENZIR_INPUT:

tests/parse-csv.tql
from_file env("TENZIR_INPUT")
read_csv
tests/parse-csv.input
name,value
Alice,42
Bob,17

Inline inputs keep test data next to the test that uses it, making tests self-contained and easy to understand.

For data shared across multiple tests in a subdirectory, create a local inputs/ directory. The harness uses the nearest inputs/ directory when resolving TENZIR_INPUTS:

  • tests/
    • network/
      • inputs/ (shared by tests in network/ and children)
        • packets.pcap
        • flows.json
      • tcp/
        • analysis.tql (TENZIR_INPUTS → ../inputs/)
      • udp/
        • stats.tql (TENZIR_INPUTS → ../inputs/)
    • inputs/ (fallback for tests without a closer inputs/)
      • common.json

Access shared inputs in TQL:

tests/network/tcp/analysis.tql
from_file f"{env("TENZIR_INPUTS")}/packets.pcap"
acme::analyze

Place inputs/ directories as close to the tests that use them as possible. This keeps related data together and makes it clear which tests depend on which files. Prefer inline .input files for single-test data and local inputs/ directories for data shared within a test group.

Test pipelines exercise your package logic with known input and produce deterministic output. The most common pattern is testing user-defined operators (UDOs), which are the primary way to build reusable building blocks. However, you can test any TQL code, including standalone pipelines or complex workflows.

tests/normalize.tql
from_file env("TENZIR_INPUT")
acme::normalize
tests/normalize.input
{"@timestamp": "2024-01-15T10:30:00Z", "msg": "test"}

Create separate test files for different argument combinations:

tests/tag-defaults.tql
from {hash: "abc123"}
acme::tag indicator
tests/tag-with-prefix.tql
from {hash: "abc123"}
acme::tag indicator, prefix="IOC: "

Use the error frontmatter to expect non-zero exit codes:

tests/invalid-input.tql
---
error: true
---
from {invalid: null}
acme::strict_parse
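The pass/fail logic for such tests inverts the usual expectation: a non-zero exit code becomes a pass. A hypothetical helper illustrating that check (not the harness's real code):

```python
import subprocess


def run_expecting(cmd: list[str], expect_error: bool) -> bool:
    """Run a test command and report whether its exit status matches
    the `error` frontmatter expectation: with expect_error=True the
    test passes only when the command exits non-zero."""
    result = subprocess.run(cmd, capture_output=True)
    return (result.returncode != 0) == expect_error
```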

Run tenzir-test from the package root (where package.yaml lives) or from the tests/ subdirectory. The harness auto-detects package mode and configures paths accordingly.

First, run tests in passthrough mode to see the actual output:

uvx tenzir-test --passthrough

This streams output directly to the terminal without comparing against baselines.

When the output looks correct, save it as the baseline:

uvx tenzir-test --update

This creates or updates .txt files next to each test. For example, tests/normalize.tql produces tests/normalize.txt.
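The naming convention and the compare-or-update choice can be sketched as follows; this is an illustrative model of the behavior, not tenzir-test's implementation:

```python
from pathlib import Path


def baseline_path(test_file: Path) -> Path:
    """A baseline is the test file with its suffix swapped to .txt."""
    return test_file.with_suffix(".txt")


def check_against_baseline(test_file: Path, output: str) -> bool:
    """Compare captured output to the saved baseline. An --update run
    would instead write `output` to the baseline path."""
    path = baseline_path(test_file)
    return path.exists() and path.read_text() == output
```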

Run all tests and compare against saved baselines:

uvx tenzir-test

The harness reports differences between actual output and baselines. Use --verbose for detailed output during debugging.

Target individual tests or directories:

uvx tenzir-test tests/normalize.tql
uvx tenzir-test tests/context/

Control test behavior with YAML frontmatter:

tests/slow-test.tql
---
timeout: 60
---
// Long-running test pipeline
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| timeout | integer | 30 | Command timeout in seconds |
| error | boolean | false | Expect non-zero exit code |
| skip | string | unset | Skip test with reason |
| fixtures | list | [] | Fixtures to request |
| runner | string | by suffix | Runner name (tenzir, python, shell) |
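Frontmatter values override the defaults in the table above. A simplified sketch of that merge, using flat key: value parsing for illustration (a real harness would use a YAML library):

```python
# Defaults from the options table; unset is modeled as None.
DEFAULTS = {"timeout": 30, "error": False, "skip": None,
            "fixtures": [], "runner": None}


def parse_frontmatter(source: str) -> dict:
    """Merge a leading ----delimited header into the defaults.
    Coerces integers and booleans; everything else stays a string."""
    options = dict(DEFAULTS)
    lines = source.splitlines()
    if not lines or lines[0].strip() != "---":
        return options  # no frontmatter: pure defaults
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, _, raw = line.partition(":")
        raw = raw.strip()
        if raw.isdigit():
            value: object = int(raw)
        elif raw in ("true", "false"):
            value = raw == "true"
        else:
            value = raw
        options[key.strip()] = value
    return options
```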

If the harness cannot find your test input, verify that the .input file exists next to the test file with the same base name, and check that you’re using env("TENZIR_INPUT") (singular) for inline inputs.

If context operations fail, ensure the test suite has fixtures: [node] in test.yaml. The node fixture automatically installs the package, creating defined contexts.

If baselines differ between runs, make sure your tests produce deterministic output. Use sort to order results, and avoid timestamps or random values in output. For time-based tests, use fixed input data rather than now().

After intentional output changes, run uvx tenzir-test --update to regenerate baselines, then review the diff to verify the changes are expected.
