
Work with time

Time is fundamental in data analysis. Whether you’re analyzing logs, tracking events, or monitoring systems, you need to parse timestamps, calculate durations, and format dates. This guide shows you how to work with time values in TQL.

TQL has two main time-related types:

  • time: A specific point in time (timestamp)
  • duration: A span of time (interval)

Add a duration to a timestamp, or subtract one from it, using ordinary arithmetic:
from {
timestamp: 2024-01-15T10:30:45.123456,
interval: 5min
}
later = timestamp + interval
earlier = timestamp - 2h
{
timestamp: 2024-01-15T10:30:45.123456Z,
interval: 5min,
later: 2024-01-15T10:35:45.123456Z,
earlier: 2024-01-15T08:30:45.123456Z
}

Use now() to get the current timestamp:

from {
current_time: now()
}
today = current_time.round(1d)
{
current_time: 2025-07-21T19:06:55.047259Z,
today: 2025-07-22T00:00:00Z,
}

Note that round() snaps to the nearest boundary, which is why a timestamp after noon rounds up to the next day.

Convert string representations to proper timestamps with parse_time():

from {
iso: "2024-01-15T10:30:45",
custom: "15/Jan/2024:10:30:45",
unix: "1705316445"
}
iso_time = iso.parse_time("%Y-%m-%dT%H:%M:%S")
custom_time = custom.parse_time("%d/%b/%Y:%H:%M:%S")
unix_time = unix.int().seconds().from_epoch()
{
iso: "2024-01-15T10:30:45",
custom: "15/Jan/2024:10:30:45",
unix: "1705316445",
iso_time: 2024-01-15T10:30:45Z,
custom_time: 2024-01-15T10:30:45Z,
unix_time: 2024-01-15T11:00:45Z
}

Common format specifiers:

  • %Y - 4-digit year
  • %m - Month (01-12)
  • %d - Day (01-31)
  • %H - Hour (00-23)
  • %M - Minute (00-59)
  • %S - Second (00-59)
  • %b - Month name (Jan, Feb, etc.)
  • %a - Weekday name (Mon, Tue, etc.)
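
These specifiers compose. As a sketch, a timestamp that includes a weekday name and a month name could be parsed like this (the input string here is hypothetical):

from {
raw: "Mon, 15 Jan 2024 10:30:45"
}
parsed = raw.parse_time("%a, %d %b %Y %H:%M:%S")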

Convert timestamps to custom string formats with format_time():

from {event_time: 2024-01-15T10:30:45.123456}
iso = event_time.format_time("%Y-%m-%dT%H:%M:%S.%f")
date_only = event_time.format_time("%Y-%m-%d")
us_format = event_time.format_time("%m/%d/%Y %I:%M %p")
log_format = event_time.format_time("%d/%b/%Y:%H:%M:%S")
{
event_time: 2024-01-15T10:30:45.123456Z,
iso: "2024-01-15T10:30:45.123456000.%f",
date_only: "2024-01-15",
us_format: "01/15/2024 10:30 AM",
log_format: "15/Jan/2024:10:30:45.123456000"
}

Get individual parts of a timestamp using year(), month(), day(), hour(), minute(), and second():

from {timestamp: 2024-01-15T10:30:45.123456}
year = timestamp.year()
month = timestamp.month()
day = timestamp.day()
hour = timestamp.hour()
minute = timestamp.minute()
second = timestamp.second()
{
timestamp: 2024-01-15T10:30:45.123456Z,
year: 2024,
month: 1,
day: 15,
hour: 10,
minute: 30,
second: 45.123456
}

Create and manipulate time intervals:

from {
start: 2024-01-15T10:00:00,
end: 2024-01-15T14:30:00
}
elapsed = end - start
hours = elapsed.count_hours()
minutes = elapsed.count_minutes()
seconds = elapsed.count_seconds()
{
start: 2024-01-15T10:00:00Z,
end: 2024-01-15T14:30:00Z,
elapsed: 4.5h,
hours: 4.5,
minutes: 270.0,
seconds: 16200.0
}

Extract different time units from durations:

from {
duration: 90d + 4h + 30min + 45s + 123ms + 456us + 789ns
}
years = duration.count_years()
months = duration.count_months()
weeks = duration.count_weeks()
days = duration.count_days()
hours = duration.count_hours()
minutes = duration.count_minutes()
seconds = duration.count_seconds()
milliseconds = duration.count_milliseconds()
microseconds = duration.count_microseconds()
nanoseconds = duration.count_nanoseconds()
{
duration: 90.18802226223136d,
years: 0.24692641809819874,
months: 2.963117017178385,
weeks: 12.884003180318764,
days: 90.18802226223136,
hours: 2164.5125342935526,
minutes: 129870.75205761315,
seconds: 7792245.123456789,
milliseconds: 7792245123.456789,
microseconds: 7792245123456.789,
nanoseconds: 7792245123456789,
}

Use months() to create month-based durations:

from {
quarterly_period: 3
}
quarter = quarterly_period.months()
days_in_quarter = quarter.count_days()
weeks_in_quarter = quarter.count_weeks()
{
quarterly_period: 3,
quarter: 91.310625d,
days_in_quarter: 91.310625,
weeks_in_quarter: 13.044375
}

Use duration literals or functions:

from {
five_minutes: 5min,
one_hour: 1h,
custom: 90.seconds(),
from_parts: 2.hours() + 30.minutes(),
}
{
five_minutes: 5min,
one_hour: 1h,
custom: 1.5min,
from_parts: 2.5h,
}

Duration units:

  • ns or nanoseconds() - Nanoseconds
  • us or microseconds() - Microseconds
  • ms or milliseconds() - Milliseconds
  • s or seconds() - Seconds
  • min or minutes() - Minutes
  • h or hours() - Hours
  • d or days() - Days (24 hours)
  • w or weeks() - Weeks (7 days)
  • y or years() - Years (365 days)
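
Units combine with + in duration expressions. As a sketch:

from {
total: 1w + 2d + 3h + 30min
}
as_days = total.count_days()
as_hours = total.count_hours()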

Find elapsed time between events:

from {
login: 2024-01-15T09:00:00,
first_action: 2024-01-15T09:05:30,
logout: 2024-01-15T17:30:00,
}
time_to_action = first_action - login
session_duration = logout - login
active_hours = session_duration.count_hours()
{
login: 2024-01-15T09:00:00Z,
first_action: 2024-01-15T09:05:30Z,
logout: 2024-01-15T17:30:00Z,
time_to_action: 5.5min,
session_duration: 8.5h,
active_hours: 8.5,
}

Perform time arithmetic:

from {
event_time: 2024-01-15T10:30:00
}
one_hour_later = event_time + 1h
yesterday = event_time - 1d
next_week = event_time + 7d
thirty_mins_ago = event_time - 30min
{
event_time: 2024-01-15T10:30:00Z,
one_hour_later: 2024-01-15T11:30:00Z,
yesterday: 2024-01-14T10:30:00Z,
next_week: 2024-01-22T10:30:00Z,
thirty_mins_ago: 2024-01-15T10:00:00Z
}

Round timestamps to specific intervals:

from {
precise_time: 2024-01-15T10:37:42.847621
}
to_minute = precise_time.round(1min)
to_hour = precise_time.round(1h)
to_day = precise_time.round(1d)
to_5min = precise_time.round(5min)
{
precise_time: 2024-01-15T10:37:42.847621Z,
to_minute: 2024-01-15T10:38:00Z,
to_hour: 2024-01-15T11:00:00Z,
to_day: 2024-01-15T00:00:00Z,
to_5min: 2024-01-15T10:40:00Z
}

Work with Unix epoch timestamps using from_epoch() and since_epoch():

from {
unix_seconds: 1705316445,
unix_millis: 1705316445123,
unix_micros: 1705316445123456
}
from_seconds = unix_seconds.seconds().from_epoch()
from_millis = unix_millis.milliseconds().from_epoch()
from_micros = unix_micros.microseconds().from_epoch()
back_to_unix = from_seconds.since_epoch().count_seconds()
{
unix_seconds: 1705316445,
unix_millis: 1705316445123,
unix_micros: 1705316445123456,
from_seconds: 2024-01-15T11:00:45Z,
from_millis: 2024-01-15T11:00:45.123Z,
from_micros: 2024-01-15T11:00:45.123456Z,
back_to_unix: 1705316445.0
}

Measure how long a request took:

from {
request_start: 2024-01-15T10:30:45.123,
request_end: 2024-01-15T10:30:47.456,
}
duration = request_end - request_start
duration_ms = duration.count_milliseconds()
{
request_start: 2024-01-15T10:30:45.123Z,
request_end: 2024-01-15T10:30:47.456Z,
duration: 2.333s,
duration_ms: 2333.0,
}

Bucket events by rounding their timestamps:

from {
event_time: 2024-01-15T10:37:42.847621,
event_type: "login",
}
hour_bucket = event_time.round(1h)
day_bucket = event_time.round(1d)
five_min_bucket = event_time.round(5min)
{
event_time: 2024-01-15T10:37:42.847621Z,
event_type: "login",
hour_bucket: 2024-01-15T11:00:00Z,
day_bucket: 2024-01-15T00:00:00Z,
five_min_bucket: 2024-01-15T10:40:00Z,
}

Calculate the age of a record relative to now:

from {
created_at: 2024-01-01T00:00:00
}
age = now() - created_at
days_old = age.count_days()
hours_old = age.count_hours()
human_readable = f"{days_old.round()} days ago"
{
created_at: 2024-01-01T00:00:00Z,
age: 567.7989434994097d,
days_old: 567.7989434994097,
hours_old: 13627.174643985833,
human_readable: "568 days ago",
}

Parse timestamps from common log formats:

from {
apache: "15/Jan/2024:10:30:45 +0000",
nginx: "2024/01/15 10:30:45",
syslog: "Jan 15 10:30:45"
}
apache_time = apache.parse_time("%d/%b/%Y:%H:%M:%S %z")
nginx_time = nginx.parse_time("%Y/%m/%d %H:%M:%S")
// Syslog timestamps omit the year, so prepend one before parsing
syslog_time = ("2024 " + syslog).parse_time("%Y %b %d %H:%M:%S")
{
apache: "15/Jan/2024:10:30:45 +0000",
nginx: "2024/01/15 10:30:45",
syslog: "Jan 15 10:30:45",
apache_time: 2024-01-15T10:30:45Z,
nginx_time: 2024-01-15T10:30:45Z,
syslog_time: 2024-01-15T10:30:45Z,
}

When working with historical data, you often need to replay events with their original timing or adjust timestamps for analysis. TQL provides two operators for this: delay and timeshift.

The timeshift operator adjusts timestamps to a new baseline while preserving relative time differences. This is essential when you need to merge datasets from different time periods into a coherent timeline for comparative analysis. For example, you might want to overlay security incidents from multiple years to identify recurring patterns, or align test data from different runs to compare performance metrics side-by-side.

from {event: "login", ts: 2020-06-15T09:00:00},
{event: "action", ts: 2020-06-15T09:05:30},
{event: "logout", ts: 2020-06-15T17:30:00}
timeshift ts, start=2024-01-01
{event: "login", ts: 2024-01-01T00:00:00Z}
{event: "action", ts: 2024-01-01T00:05:30Z}
{event: "logout", ts: 2024-01-01T08:30:00Z}

Notice how the 5.5-minute gap between login and action and the 8.5-hour session duration are preserved, while all timestamps now start from January 1, 2024.

You can also scale the time intervals with the speed parameter:

from {event: "start", ts: 2020-01-01T00:00:00},
{event: "middle", ts: 2020-01-01T00:30:00},
{event: "end", ts: 2020-01-01T01:00:00}
// Make intervals 10x longer
timeshift ts, start=2024-01-01, speed=0.1
{event: "start", ts: 2024-01-01T00:00:00Z}
{event: "middle", ts: 2024-01-01T05:00:00Z}
{event: "end", ts: 2024-01-01T10:00:00Z}

The 30-minute intervals became 5-hour intervals (10x longer with speed=0.1).

The delay operator replays events according to their timestamps by introducing sleep periods between events:

from {ts: 2024-01-01T00:00:00, msg: "first"},
{ts: 2024-01-01T00:00:02, msg: "second"},
{ts: 2024-01-01T00:00:03, msg: "third"}
delay ts, start=now(), speed=0.1

With speed=0.1, the 2-second gap between first and second events becomes 20 seconds, and the 1-second gap between second and third becomes 10 seconds. This slower replay makes it easy to observe the delay in action.

For replaying historical data with original timing:

let $zeek_logs = "https://storage.googleapis.com/tenzir-datasets/M57/zeek-all.log.zst"
from_http $zeek_logs {
decompress_zstd
read_zeek_tsv
}
delay ts

This replays the logs with real-world inter-arrival times. If an event occurred at 10:00:00 and the next at 10:00:05, the operator waits 5 seconds between emitting them.

Use every to generate events periodically, then replay them with modified timing:

every 1s {
from {
ts: now(),
message: "Periodic event"
}
}
head 5
delay ts, speed=0.5

This generates events every second but replays them at half speed (2 seconds between events).

Practical example: Simulate real-time monitoring

Combine both operators to replay historical logs as if they’re happening now:

// Load historical logs
from_file "/path/to/historical-logs.json"
// Shift timestamps to current time
timeshift timestamp, start=now()
// Replay at 5x speed to quickly review a day's worth of logs
delay timestamp, speed=5.0
// Continue with normal processing
// ...

The two operators differ in when the adjustment happens:

  • timeshift: Instantly adjusts all timestamps without delays
  • delay: Introduces real-time delays between events based on their timestamps

Use timeshift when you need to analyze historical data with updated timestamps. Use delay when you want to replay events with realistic timing, such as for testing real-time processing systems or simulating live data streams.

Follow these best practices when working with time:

  1. Use proper types: Convert strings to time values early in your pipeline
  2. Be consistent: Standardize timestamp formats across your data
  3. Consider timezones: Be aware that TQL timestamps are timezone-aware
  4. Round appropriately: Use rounding to group events into time buckets
  5. Handle null values: Check for missing timestamps before calculations
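
As a rough sketch of points 1 and 5, parse early and filter out events whose timestamp is missing before doing arithmetic (the field names here are hypothetical, and the null comparison assumes TQL's usual expression semantics):

from {raw_ts: "2024-01-15T10:30:45"}
// Convert to a proper time value as early as possible
ts = raw_ts.parse_time("%Y-%m-%dT%H:%M:%S")
// Drop events with missing timestamps before calculations
where ts != null
age = now() - ts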
