Example: new TimeSeries({ name, schema, rows }). Creates an immutable time series from a schema and row-oriented input data.
Example: series.firstColumnKind. Returns the first-column kind from the series schema.
Example: series.length. Returns the number of events in the series.
Example: series.rows. Returns the normalized row view of the series.
Example: for (const event of series) { ... }. Iterates events in order.
Example: series.after(Date.now()). Returns the events beginning strictly after the supplied temporal boundary.
Example: series.aggregate(Sequence.every("1m"), { value: "avg" }).
Aggregates events into sequence buckets using built-in reducer names or custom reducers.
Buckets use half-open membership semantics: [begin, end). Point events contribute to the
bucket containing their timestamp. Interval-like events contribute to every bucket they
overlap under half-open overlap rules.
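The half-open membership rule can be sketched standalone over plain millisecond timestamps (a sketch only: bucketsFor is an illustrative helper, not part of the API, which works with the library's temporal types):

```typescript
// Half-open bucket membership: a grid of [begin, end) buckets of width `step`,
// anchored at `anchor`. Point events land in exactly one bucket; interval
// events land in every bucket they overlap under the same half-open rule.
function bucketsFor(
  begin: number,
  end: number, // for a point event, pass end === begin
  step: number,
  anchor = 0,
): number[] {
  const first = Math.floor((begin - anchor) / step);
  // An event ending exactly on a bucket boundary does NOT reach the next bucket.
  const last = end > begin ? Math.ceil((end - anchor) / step) - 1 : first;
  const out: number[] = [];
  for (let i = first; i <= last; i++) out.push(i);
  return out;
}

// A point at t=60_000 on a 1-minute grid falls in bucket 1, not bucket 0.
bucketsFor(60_000, 60_000, 60_000); // [1]
// An interval [30s, 120s) overlaps buckets 0 and 1 only.
bucketsFor(30_000, 120_000, 60_000); // [0, 1]
```

The same arithmetic explains why overriding the anchor (shown below) shifts which bucket a boundary event falls into.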
Defaults:
range: series.timeRange()
As with align(...), Sequence defines the underlying grid and range selects which portion
of that grid is bounded. With Sequence.every(...), the default grid anchor is Unix epoch 0,
but the default aggregation range is always the source series extent. When a
BoundedSequence is supplied, its intervals are used directly.
Override range when you need multiple series aggregated over the same reporting window,
including leading or trailing empty buckets outside an individual series extent.
Custom reducer contract:
(values: ReadonlyArray<ScalarValue | undefined>) => ScalarValue | undefined
To align buckets to the beginning of the current series instead of epoch boundaries, override the sequence anchor rather than the aggregation range:
const range = series.timeRange();
if (!range) {
  throw new Error("empty series");
}
const aggregated = series.aggregate(
  Sequence.every("1m", { anchor: range.begin() }),
  { value: "avg" },
);
Multi-entity series: every entity's events go into the same
bucket and are aggregated together — the result is one number per
bucket spanning all entities, not per-entity. On a series
carrying multiple entities (host, region, device id), use
series.partitionBy(col).aggregate(seq, mapping).collect() to
aggregate per entity. See TimeSeries.partitionBy.
Optional options: { range?: TemporalLike }
Example: series.align(Sequence.every("1m")).
Aligns the series onto a Sequence grid or BoundedSequence and returns an interval-keyed series.
hold carries forward the latest known value to each sample position. linear interpolates
numeric columns between neighboring time-keyed events and falls back to hold behavior for
non-numeric columns. Aligned columns are optional because edge buckets may have no value.
Defaults:
method: "hold"
sample: "begin"
range: series.timeRange()
For Sequence inputs, the sequence anchor still comes from the grid definition itself. For
procedural sequences created with Sequence.every(...), that anchor defaults to Unix epoch
0. The range only decides which finite slice of that grid is bounded for this alignment.
When a BoundedSequence is supplied, its intervals are used directly.
Example:
Sequence.every("1m") defines an epoch-anchored minute grid.
series.align(Sequence.every("1m")) aligns onto the slice of that minute grid spanning the
current series extent.
Multi-entity series: alignment samples cross entity boundaries —
host-A's aligned bucket would interpolate or hold against
host-B's value. On a series carrying multiple entities (host,
region, device id), use
series.partitionBy(col).align(...).collect() to scope per entity.
See TimeSeries.partitionBy.
Example: series.arrayAggregate("tags", "count").
Per-event reduction of an array column. Feeds each event's array into
the reducer as if it were a bucket, reusing the built-in reducer
registry (count, sum, avg, min, max, median, stdev,
difference, pNN, first, last, keep, unique) and any custom
(values) => result function. Output kind is inferred:
count, sum, avg, min, max, median, stdev, difference, pNN → "number"
"unique" → "array" (dedupes within the event's array)
"first" / "last" / "keep" / custom → "string" by default; override with { kind: "..." }
Without as, the source column is replaced in place. With
{ as: "name" }, a new column is appended and the source array column
is preserved.
Example: series.arrayAggregate("tags", "count", { as: "tagCount" }).
Optional options: { kind?: ExplicitKind }
Example: series.arrayContains("tags", "critical").
Keeps events whose array column col contains value. Events with an
undefined value are dropped. Use on array-kind columns produced by
reducers like "unique", or on tag-style columns where each event
carries a list of scalars.
Example: series.arrayContainsAll("tags", ["web", "east"]).
Keeps events whose array column col contains every value in
values (subset / set-containment AND). values of length 0 keeps
every event with a defined array on col. Events with an undefined
array are dropped.
Example: series.arrayContainsAny("tags", ["critical", "warning"]).
Keeps events whose array column col contains at least one value in
values (set-intersection OR). values of length 0 always returns
an empty series. Events with an undefined array are dropped.
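The set-containment semantics of the three array filters above can be sketched over plain arrays (a sketch only; the real methods operate on events and schema columns):

```typescript
// containsAll mirrors arrayContainsAll: subset / set-containment AND.
// An empty `values` keeps every event with a defined array.
const containsAll = <T>(arr: T[] | undefined, values: T[]): boolean =>
  arr !== undefined && values.every((v) => arr.includes(v));

// containsAny mirrors arrayContainsAny: set-intersection OR.
// An empty `values` matches nothing.
const containsAny = <T>(arr: T[] | undefined, values: T[]): boolean =>
  arr !== undefined && values.some((v) => arr.includes(v));

containsAll(["web", "east", "prod"], ["web", "east"]); // true
containsAll(["web"], []);                              // true: empty subset
containsAny(["web"], ["critical", "warning"]);         // false
containsAny(["web"], []);                              // false: empty intersection
```

Undefined arrays fail both predicates, matching the "events with an undefined array are dropped" rule.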
Example: series.arrayExplode("tags").
Fans each event out into one event per element of the array column
col. Events with an empty or undefined array are dropped. Emitted
events share the source event's key, so the result may contain events
with duplicate timestamps.
Without as, the array column is replaced by a scalar column of the
chosen kind (default "string").
Example: series.arrayExplode("tags", { as: "tag" }).
With as, a new scalar column is appended carrying the per-element
value and the source array column is kept intact (every fanned-out
event still carries the full array on col).
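The fan-out behavior with { as } can be sketched over plain row objects (a sketch only; the real method preserves event keys and typed schemas, and the Row shape here is illustrative):

```typescript
type Row = { ts: number; tags?: string[] };

// One output row per array element; rows with an empty or undefined
// array are dropped. The source array stays intact and a per-element
// column named `as` is appended, mirroring the { as } form.
function explode(rows: Row[], as: string): Array<Record<string, unknown>> {
  const out: Array<Record<string, unknown>> = [];
  for (const row of rows) {
    for (const tag of row.tags ?? []) {
      out.push({ ...row, [as]: tag });
    }
  }
  return out;
}

explode([{ ts: 1, tags: ["a", "b"] }, { ts: 2 }], "tag");
// → [{ ts: 1, tags: ["a","b"], tag: "a" }, { ts: 1, tags: ["a","b"], tag: "b" }]
```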
Optional options: { kind?: OutputKind }
Example: series.asInterval(event => event.begin()). Converts the series key type to "interval" while preserving each event extent and supplying interval labels.
Example: series.asTime({ at: "center" }). Converts the series key type to "time" using the supplied anchor within each event extent.
Example: series.asTimeRange(). Converts the series key type to "timeRange" while preserving each event extent.
Example: series.at(0). Returns the event at the supplied zero-based position, if present.
Example: series.atOrAfter(new Time(Date.now())). Returns the event with the exact key or the nearest later event, if any.
Example: series.atOrBefore(new Time(Date.now())). Returns the event with the exact key or the nearest earlier event, if any.
Example: series.baseline('cpu', { window: '1m', sigma: 2 }).
Appends rolling-baseline statistics as four new columns on every
event: the rolling average (avg), rolling standard deviation
(sd), and the band edges (upper = avg + sigma * sd,
lower = avg - sigma * sd). The source schema is preserved
intact, so downstream code can filter, render, and compose freely.
This is the primitive behind band charts and outlier detection:
const baseline = series.baseline('cpu', { window: '1m', sigma: 2 });
// Band charts: one wide-row export covers every column at once.
const data = baseline.toPoints();
// [{ ts, cpu, ..., avg, sd, upper, lower }, ...]
// Anomaly detection: one filter, no extra rolling pass.
const anomalies = baseline.filter((e) => {
  const cpu = e.get('cpu');
  const upper = e.get('upper');
  const lower = e.get('lower');
  return cpu != null && upper != null && lower != null
    && (cpu > upper || cpu < lower);
});
The sigma option controls band width — sigma: 2 is the common
"95% envelope" for normally distributed data. Events
before the rolling window has a meaningful baseline get
undefined for all four new columns. Past the warm-up region,
events where sd === 0 (a flat window) keep the avg / sd
values but emit undefined for upper / lower — a zero-width
band would flag every non-equal point as anomalous, which is not
the primitive callers want. Filters that compare against the
band should null-check upper / lower.
Supply custom column names via the names option if the defaults
would collide with source columns.
Internally a single rolling(window, { avg, sd }) pass over the
source; band edges are derived arithmetically per event.
Multi-entity series: the baseline window aggregates across
every entity, so host-A's avg/sd reflect the cross-entity
mean/spread rather than host-A's own. Anomaly detection on a
multi-entity baseline flags events relative to the wrong
population. On a series carrying multiple entities (host, region,
device id), use
series.partitionBy(col).baseline(...).collect() to scope per
entity. See TimeSeries.partitionBy.
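The math behind the four appended columns can be sketched over plain (ts, value) points (a sketch only, assuming a trailing time window; warm-up policy and column naming are simplified relative to the method):

```typescript
type Pt = { ts: number; value: number };

// Trailing-window mean/stdev plus sigma bands: upper = avg + sigma * sd,
// lower = avg - sigma * sd, per event.
function baselineBands(points: Pt[], windowMs: number, sigma: number) {
  return points.map((p, i) => {
    const win = points
      .slice(0, i + 1)
      .filter((q) => q.ts > p.ts - windowMs)
      .map((q) => q.value);
    const avg = win.reduce((a, v) => a + v, 0) / win.length;
    const sd = Math.sqrt(
      win.reduce((a, v) => a + (v - avg) ** 2, 0) / win.length,
    );
    // A flat window (sd === 0) yields no usable band: a zero-width band
    // would flag every non-equal point, so upper/lower stay undefined.
    const band = sd === 0 ? undefined : sigma * sd;
    return { ...p, avg, sd, upper: band && avg + band, lower: band && avg - band };
  });
}
```

The flat-window skip matches the documented behavior: avg and sd are kept, upper and lower are undefined.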
Example: series.before(Date.now()). Returns the events ending strictly before the supplied temporal boundary.
Example: series.bisect(new Time(Date.now())). Returns the insertion index for the supplied key in the ordered event sequence.
Example: series.collapse(["in", "out"], "avg", fn). Collapses selected payload fields into a single derived field across each event in the series.
Example: series.containedBy(range).
Returns the portion of the series whose event extents are fully contained by the supplied range.
This is the strict containment selector:
events must start at or after the range start and end at or before the range end.
Unlike overlapping(...), partially overlapping events are excluded.
Example: series.contains(range). Returns true when the overall series extent fully contains the supplied temporal value.
Example: series.cumulative({ requests: "sum" }).
Computes running accumulations for the specified numeric columns.
Non-accumulated columns pass through unchanged.
Built-in accumulators: "sum", "max", "min", "count".
Custom accumulators: (acc: number, value: number) => number.
Multi-entity series: the running accumulation interleaves
across entities — host-A's next event sums on top of
host-B's last value rather than host-A's. On a series carrying
multiple entities (host, region, device id), use
series.partitionBy(col).cumulative(...).collect() to scope per
entity. See TimeSeries.partitionBy.
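Built-in and custom accumulators share the same fold shape, which can be sketched over a plain array (a sketch only; seed values here are illustrative):

```typescript
// Each built-in accumulator is just a (acc, value) => acc step function,
// the same contract a custom accumulator satisfies.
const accumulators = {
  sum: (acc: number, v: number) => acc + v,
  max: (acc: number, v: number) => Math.max(acc, v),
  min: (acc: number, v: number) => Math.min(acc, v),
  count: (acc: number, _v: number) => acc + 1,
};

// Running accumulation: emit the accumulator after every step.
function cumulative(
  values: number[],
  step: (acc: number, v: number) => number,
  seed: number,
): number[] {
  const out: number[] = [];
  let acc = seed;
  for (const v of values) out.push((acc = step(acc, v)));
  return out;
}

cumulative([3, 1, 4], accumulators.sum, 0);         // [3, 4, 8]
cumulative([3, 1, 4], accumulators.max, -Infinity); // [3, 3, 4]
```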
Example: series.dedupe().
Collapses events that share a key. The default key is the full
event key — begin() for time-keyed series, begin()+end() for
time-range, and begin()+end()+value for interval-keyed
series. Two events with the same full key are treated as
duplicates. The default resolution is 'last' wins.
Multi-entity series: events from different entities at the
same key collapse as if they were duplicates of each other —
host-A and host-B collide on the timestamp alone. On a
series carrying multiple entities (host, region, device id), use
series.partitionBy(col).dedupe(...).collect() so the partition
column is part of the duplicate identity. See
TimeSeries.partitionBy.
// Per-host dedupe — same time AND same host is the duplicate key.
series.partitionBy('host').dedupe({ keep: 'last' }).collect();
The keep option chooses the resolution policy:
'first' — keep the first occurrence at each key.
'last' — keep the last occurrence (default; matches WebSocket
replay semantics).
'error' — throw on the first duplicate seen. Useful for
ingestion paths that want to fail loudly on shape violations.
'drop' — discard every event at any duplicate key.
Conservative; the value of "1.5 events at this timestamp" is
rarely defensible.
{ min: col } / { max: col } — keep the event with the
smallest / largest value at the named numeric column. Ties keep
the earliest tied event. Events with undefined at that column
lose to any event with a defined value.
(events) => Event — custom resolver. Receives all duplicates
at a single key (length ≥ 2) and returns one. The cleanest
pattern is to start from one of the input events and use
event.set(field, value) so the type stays narrow:
series.dedupe({
  keep: (events) => {
    const last = events[events.length - 1];
    const avg =
      events.reduce((a, e) => a + (e.get('cpu') ?? 0), 0) /
      events.length;
    return last.set('cpu', avg);
  },
});
Real-world ingest produces duplicates: WebSocket replays, Kafka
at-least-once, retried HTTP fetches, polling overlaps. dedupe()
is the post-ingest cleanup primitive.
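The default 'last'-wins policy can be sketched over a sorted stream keyed by timestamp (a sketch only; the real method keys on the full event key and supports the other resolution policies listed above):

```typescript
// Last occurrence at each key replaces earlier ones; Map.set overwrites,
// and Map preserves first-insertion order of keys, so event order holds.
function dedupeLast<T extends { ts: number }>(events: T[]): T[] {
  const byKey = new Map<number, T>();
  for (const e of events) byKey.set(e.ts, e);
  return [...byKey.values()];
}

dedupeLast([
  { ts: 1, cpu: 10 },
  { ts: 1, cpu: 99 }, // replay of ts=1; this one survives
  { ts: 2, cpu: 20 },
]);
// → [{ ts: 1, cpu: 99 }, { ts: 2, cpu: 20 }]
```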
Example: series.diff("requests").
Computes per-event differences for the specified numeric columns.
Non-specified columns pass through unchanged. The first event gets
undefined in affected columns unless { drop: true } is passed,
which removes the first event entirely.
Example: series.diff(["requests", "cpu"]).
Multiple columns can be diffed in a single call.
Example: series.diff("requests", { drop: true }).
Drops the first event instead of keeping it with undefined values.
Multi-entity series: the "previous event" may belong to a
different entity, producing meaningless deltas across entity
boundaries. On a series carrying multiple entities (host, region,
device id), use
series.partitionBy(col).diff(...).collect() to scope per entity.
See TimeSeries.partitionBy.
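The per-event difference and the { drop } option can be sketched over a plain number array (a sketch only; the real method applies this per named column):

```typescript
// Each output is curr - prev; the first event has no predecessor, so it
// gets undefined unless drop is set, which removes it entirely.
function diff(values: number[], drop = false): Array<number | undefined> {
  const out = values.map((v, i) =>
    i === 0 ? undefined : v - values[i - 1],
  );
  return drop ? out.slice(1) : out;
}

diff([10, 13, 12]);       // [undefined, 3, -1]
diff([10, 13, 12], true); // [3, -1]
```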
Example: series.every(event => event.get("healthy")). Returns true when every event matches the predicate.
Example: series.fill("hold").
Fills undefined values using the given strategy for all payload columns.
Example: series.fill({ cpu: "linear", host: "hold" }).
Per-column fill strategies. Unmentioned columns are left as-is.
Strategy names: "hold" (forward fill), "bfill" (backward fill),
"linear" (time-interpolated), "zero" (fill with 0). A non-string
value is used as a literal fill value.
Gap semantics — all-or-nothing. A "gap" is a run of consecutive
undefined cells in one column. For each gap:
{ limit: N }: fill only if the gap length is at most N
cells. Otherwise leave the gap fully unfilled.
{ maxGap: '3m' }: fill only if the gap's temporal span
(from the prior known value to the next known value) is at most
the duration. Otherwise leave the gap fully unfilled.
The all-or-nothing semantic is the v0.9.0 default. Earlier
versions partially filled (limit: 3 on a 5-cell gap filled 3,
left 2 unfilled). The new semantic avoids fabricating data
across what's actually a long outage — partial fills propagate
stale values past their useful lifetime.
"linear" requires known values on both sides of a gap; leading
and trailing gaps are unfilled. "hold" fills any internal or
trailing gap (leading has no prior value). "bfill" fills any
internal or leading gap (trailing has no next value). "zero"
and literal fills work on any gap that fits the size caps.
Multi-entity series: fill walks one chronological event
sequence — host-A's missing cell would linear-interpolate or
hold-carry against host-B's neighboring value. On a series
carrying multiple entities (host, region, device id), use
series.partitionBy(col).fill(...).collect() to scope per entity.
See TimeSeries.partitionBy.
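The all-or-nothing gap rule can be sketched for the "hold" strategy with a { limit } cap over a plain cell array (a sketch only; the real method works per column with temporal maxGap support as well):

```typescript
// A gap is a run of consecutive undefined cells. Fill the whole gap with
// the prior known value only if its length is <= limit; otherwise leave
// the whole gap unfilled. Leading gaps have no prior value and stay empty.
function holdFill(
  cells: Array<number | undefined>,
  limit: number,
): Array<number | undefined> {
  const out = [...cells];
  let i = 0;
  while (i < out.length) {
    if (out[i] !== undefined) { i++; continue; }
    let j = i;
    while (j < out.length && out[j] === undefined) j++; // gap is [i, j)
    const prior = i > 0 ? out[i - 1] : undefined;
    if (prior !== undefined && j - i <= limit) {
      for (let k = i; k < j; k++) out[k] = prior; // fill the whole gap
    } // else: leave the gap fully unfilled (too long, or leading)
    i = j;
  }
  return out;
}

holdFill([1, undefined, undefined, 5], 3); // [1, 1, 1, 5]
holdFill([1, undefined, undefined, 5], 1); // [1, undefined, undefined, 5]
```

Note the second call leaves both cells unfilled rather than filling one, which is the all-or-nothing semantic described above.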
Optional options: { limit?: number; maxGap?: DurationInput }
Example: series.filter(event => event.get("active")). Returns a new series containing only events that match the predicate.
Example: series.find(event => event.get("value") > 0). Returns the first event that matches the predicate, if any.
Example: series.first(). Returns the first event in the series, if present.
Example: series.groupBy("host").
Partitions the series into groups keyed by the distinct values of a payload column.
Each group is a TimeSeries with the same schema, preserving event order.
Example: series.groupBy("host", group => group.rolling("5m", { cpu: "avg" })).
When a transform callback is supplied, it is applied to each group and the result
map contains the transform outputs instead of raw sub-series.
Example: series.includesKey(new Time(Date.now())). Returns true when the series contains an event with an exactly matching key.
Example: series.intersection(range). Returns the overlap between the overall series extent and the supplied temporal value, if any.
Example: left.join(right, { type: "left" }).
Performs an exact-key join of two series with the same key kind.
Join types:
"outer": keep keys from either side
"left": keep all keys from the left series
"right": keep all keys from the right series
"inner": keep only keys present on both sides
Defaults:
type: "outer"
onConflict: "error"
Value columns from both series are included in the result and are optional because joined rows
may have missing values on either side. If both series use the same payload column name,
you can either rename one side before joining or use { onConflict: "prefix", prefixes: [...] }.
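The default { type: "outer" } behavior can be sketched over two maps keyed by timestamp (a sketch only; the real method joins typed series on their full keys):

```typescript
// Keep keys from either side; a key missing on one side yields an
// undefined cell for that side, which is why joined columns are optional.
function outerJoin<L, R>(
  left: Map<number, L>,
  right: Map<number, R>,
): Map<number, { left?: L; right?: R }> {
  const keys = [...new Set([...left.keys(), ...right.keys()])]
    .sort((a, b) => a - b);
  return new Map(
    keys.map((k) => [k, { left: left.get(k), right: right.get(k) }]),
  );
}

const joined = outerJoin(
  new Map([[1, 10], [2, 20]]),
  new Map([[2, 200], [3, 300]]),
);
// joined.get(1) → { left: 10, right: undefined }
// joined.get(2) → { left: 20, right: 200 }
```

Restricting `keys` to one side or to the intersection gives the "left", "right", and "inner" variants.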
Optional options: ErrorJoinOptions
Example: series.last(). Returns the last event in the series, if present.
Example: series.map(nextSchema, event => event). Maps each event into a new typed schema and returns a new series.
Example: series.materialize(Sequence.every("1m")).
Materializes the series onto a sequence grid, emitting one
time-keyed row per bucket. For each bucket, populate value
columns from a chosen source event whose begin() falls in
[bucket.begin, bucket.end); for empty buckets, emit a row with
all value columns undefined.
The natural pre-step to gap-capped fill — materialize only
regularizes the grid, leaving fill policy as a separate decision:
series
  .partitionBy('host')
  .dedupe({ keep: 'last' })
  .materialize(Sequence.every('1m')) // regularize, undefined for empty
  .fill({ cpu: 'linear' }, { maxGap: '3m' }) // explicit fill policy
  .collect();
Distinct from align() (which mandates a 'hold' or 'linear'
fill method and returns interval-keyed) and aggregate() (which
applies a per-column reducer). materialize does only the grid
step; fill is a separate composition.
Options:
sample ('begin' | 'center' | 'end', default 'begin') — bucket anchor for the output time. Matches align's convention.
select ('first' | 'last' | 'nearest', default 'last') — which source event in each bucket wins. 'first' / 'last' pick the boundary event by begin() order. 'nearest' picks the source event whose begin() is closest to the bucket's sample time among events in the bucket. All three use half-open [bucket.begin, bucket.end) membership; an empty bucket emits undefined regardless of select.
range (TemporalLike, default series.timeRange()) — bounded slice for procedural sequences (Sequence.every(...)). When a BoundedSequence is supplied directly, its intervals are used as-is.
Multi-entity series: every cell of an empty-bucket row is
undefined — including string/categorical columns like host.
On a series carrying multiple entities, use
series.partitionBy(col).materialize(seq).collect() so the
partition column auto-populates on every output row (including
empty buckets) — host's value is known per partition.
See TimeSeries.partitionBy.
Example: series.outliers('cpu', { window: '1m', sigma: 2 }).
Rolling-baseline outlier detection: returns the subset of events
whose value on col deviates from the trailing rolling average
by more than sigma * rolling_stdev. Same schema as the input,
so the result composes with every other TimeSeries method —
.aggregate(seq, { col: 'count' }) for bucketed anomaly counts,
.groupBy('host') for per-host outlier lists, etc.
Events before the rolling window has a meaningful baseline (stdev is zero or undefined) are not flagged — can't detect deviation against a flat or empty reference.
Conceptually equivalent to baseline(col, { window, sigma })
followed by a |value - avg| > sigma * sd filter — both share
the same flat-window skip behavior. Implemented independently
(one rolling pass, no intermediate schema), so reach for
baseline(...) directly when you also want to render the
avg / upper / lower columns.
Internally: computes rolling(window, { avg, sd }) using the
output-map form, zips with the source events by index, and keeps
events where |value - avg| > sigma * sd.
Multi-entity series: the rolling baseline aggregates across
every entity, so the deviation threshold reflects the wrong
population — host-A's "outlier" status is decided against the
cross-entity mean rather than host-A's own. On a series carrying
multiple entities (host, region, device id), use
series.partitionBy(col).outliers(...).collect() to scope per
entity. See TimeSeries.partitionBy.
Example: series.overlapping(range).
Returns the portion of the series whose event extents overlap the supplied range.
Unlike within(...), this keeps partially overlapping events without modifying their keys.
Use trim(...) when you want those overlapping keys clipped to the supplied range.
Example: series.overlaps(range). Returns true when the overall series extent overlaps the supplied temporal value.
Example: series.partitionBy('host').fill({ cpu: 'linear' }).
Returns a PartitionedTimeSeries view that scopes stateful
transforms to within each partition. Most stateful operators
(fill, align, rolling, smooth, baseline, outliers,
diff, rate, pctChange, cumulative, shift, aggregate,
dedupe, materialize) read neighboring events when computing
each output and silently cross entity boundaries on multi-entity
series — partitionBy fixes that by running the op independently
per partition and reassembling.
Composite partitioning by multiple columns is supported by passing
an array: series.partitionBy(['host', 'region']).
Typed groups (single-column only). Passing
{ groups: HOSTS as const } declares the expected partition values
up front. The returned view's K type narrows from string to
the literal union of declared values, propagating through
toMap() so its return type becomes
Map<typeof HOSTS[number], TimeSeries<S>>. Behavior changes:
toMap iterates in declared order (not insertion order), empty
declared groups still produce empty TimeSeries entries, and
partition values not in the declared set throw at construction
time. Mirrors TimeSeries.pivotByGroup's typed-groups
pattern. Composite partitions, empty groups, and duplicate
values throw upfront. Numeric and boolean partition columns are
stringified by the encoder, so declared groups must be the
stringified form (groups: ['1', '2'] as const for a numeric
column with values 1 and 2).
// Per-host fill — no cross-host interpolation
series.partitionBy('host').fill({ cpu: 'linear' });
// Composite partitioning
series.partitionBy(['host', 'region']).rolling('5m', { cpu: 'avg' });
// Typed groups — narrows toMap key type
const HOSTS = ['api-1', 'api-2', 'api-3'] as const;
const byHost = series
  .partitionBy('host', { groups: HOSTS })
  .fill({ cpu: 'linear' })
  .toMap();
// byHost: Map<'api-1' | 'api-2' | 'api-3', TimeSeries<S>>
// Arbitrary composition via .apply()
series.partitionBy('host').apply(g =>
  g.fill({ cpu: 'linear' }).rolling('5m', { cpu: 'avg' }),
);
Example: series.pctChange("requests").
Computes the percentage change (curr - prev) / prev for the specified
numeric columns. Non-specified columns pass through unchanged. The first
event gets undefined in affected columns unless { drop: true } is
passed.
Multi-entity series: the "previous event" may belong to a
different entity, producing meaningless percentages across entity
boundaries. On a series carrying multiple entities (host, region,
device id), use
series.partitionBy(col).pctChange(...).collect() to scope per
entity. See TimeSeries.partitionBy.
Example: series.pivotByGroup("host", "cpu").
Reshapes long-form data into wide rows. Each distinct value of
groupCol becomes its own column in the output schema named
${group}_${valueCol}, holding the value from valueCol at that
timestamp.
Rows sharing a timestamp collapse into one output row. Cells where
a group has no event at a given timestamp are undefined. The
wide-row counterpart of groupBy for the case where you want one
wide TimeSeries instead of N separate ones — typically because
the downstream chart expects wide rows.
If two events share both a timestamp AND a group value, the call
throws by default. Pass { aggregate: "avg" } (or any reducer name
that aggregate() accepts: "sum", "first", "last", "min",
"max", "median", percentiles like "p95", etc.) to combine
duplicates instead. The aggregator's output kind must match the
value column's kind — e.g. count, unique, topN produce
non-source kinds and are rejected upfront. Use aggregate() first
if you need a kind-changing reduction.
Output schema is dynamic — column names depend on runtime data —
so the return type is TimeSeries<SeriesSchema> (loosely typed).
Group values are sorted alphabetically for stable column order.
Requires a time-keyed input series.
Known limitation: a group column containing both literal
"undefined" strings and actually-undefined values collapses both
into a single "undefined" output column. Edge case — open an
issue if you hit it.
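The long-to-wide reshape and the default duplicate-throwing behavior can be sketched over plain rows (a sketch only; the Long row shape and hard-coded "cpu" value column are illustrative):

```typescript
type Long = { ts: number; host: string; cpu: number };

// One output row per timestamp; one `${group}_${valueCol}` column per
// distinct group value. A duplicate (timestamp, group) pair throws,
// matching the method's default.
function pivot(rows: Long[]): Array<Record<string, number>> {
  const byTs = new Map<number, Record<string, number>>();
  for (const { ts, host, cpu } of rows) {
    const row = byTs.get(ts) ?? { ts };
    const col = `${host}_cpu`;
    if (col in row) throw new Error(`duplicate (${ts}, ${host})`);
    row[col] = cpu;
    byTs.set(ts, row);
  }
  return [...byTs.values()].sort((a, b) => a.ts - b.ts);
}

pivot([
  { ts: 1, host: "api-1", cpu: 40 },
  { ts: 1, host: "api-2", cpu: 55 },
  { ts: 2, host: "api-1", cpu: 42 },
]);
// → [{ ts: 1, "api-1_cpu": 40, "api-2_cpu": 55 }, { ts: 2, "api-1_cpu": 42 }]
```

Cells where a group has no event at a timestamp are simply absent here, which plays the role of the method's undefined cells.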
Example: series.pivotByGroup("host", "cpu", { aggregate: "avg" }).
Averages values when multiple rows share (timestamp, host).
Example (typed output via declared groups):
const HOSTS = ['api-1', 'api-2'] as const;
const wide = series.pivotByGroup('host', 'cpu', { groups: HOSTS });
// wide.schema is now literal-typed:
// [time, { name: 'api-1_cpu', kind: 'number', required: false },
// { name: 'api-2_cpu', kind: 'number', required: false }]
wide.baseline('api-1_cpu', { window: '1m', sigma: 2 }); // no cast
When groups is supplied: group values not in the declared set,
empty groups, and duplicate groups throw upfront. Use the untyped form
(no groups option) when the group set is open or unknown.Example: series.pivotByGroup("host", "cpu").
Reshapes long-form data into wide rows. Each distinct value of
groupCol becomes its own column in the output schema named
${group}_${valueCol}, holding the value from valueCol at that
timestamp.
Rows sharing a timestamp collapse into one output row. Cells where
a group has no event at a given timestamp are undefined. The
wide-row counterpart of groupBy for the case where you want one
wide TimeSeries instead of N separate ones — typically because
the downstream chart expects wide rows.
If two events share both a timestamp AND a group value the call
throws by default. Pass { aggregate: "avg" } (or any reducer name
that aggregate() accepts: "sum", "first", "last", "min",
"max", "median", percentiles like "p95", etc.) to combine
duplicates instead. The aggregator's output kind must match the
value column's kind — e.g. count, unique, topN produce
non-source kinds and are rejected upfront. Use aggregate() first
if you need a kind-changing reduction.
Output schema is dynamic — column names depend on runtime data —
so the return type is TimeSeries<SeriesSchema> (loosely typed).
Group values are sorted alphabetically for stable column order.
Requires a time-keyed input series.
Known limitation: a group column containing both literal
"undefined" strings and actually-undefined values collapses both
into a single "undefined" output column. Edge case — open an
issue if you hit it.
Example: series.pivotByGroup("host", "cpu", { aggregate: "avg" }).
Averages values when multiple rows share (timestamp, host).
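As a standalone illustration (not the library's implementation), the long-to-wide reshape described above — one output row per timestamp, one column per group named `${group}_${valueCol}` — can be sketched as:

```typescript
// Hypothetical standalone sketch of the pivot semantics.
// Assumes long-form rows { ts, group, value }; cells a group never
// fills at a timestamp simply stay absent (read back as undefined).
type LongRow = { ts: number; group: string; value: number };

function pivotByGroup(rows: LongRow[], valueCol: string): Record<string, number>[] {
  const byTs = new Map<number, Record<string, number>>();
  for (const r of rows) {
    const row = byTs.get(r.ts) ?? { ts: r.ts };
    row[`${r.group}_${valueCol}`] = r.value; // ${group}_${valueCol} naming
    byTs.set(r.ts, row);
  }
  return Array.from(byTs.values());
}
```

The real method additionally sorts group columns alphabetically and enforces the duplicate-(timestamp, group) rules described above.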
Example (typed output via declared groups):
const HOSTS = ['api-1', 'api-2'] as const;
const wide = series.pivotByGroup('host', 'cpu', { groups: HOSTS });
// wide.schema is now literal-typed:
// [time, { name: 'api-1_cpu', kind: 'number', required: false },
// { name: 'api-2_cpu', kind: 'number', required: false }]
wide.baseline('api-1_cpu', { window: '1m', sigma: 2 }); // no cast
When groups is supplied, group values outside the declared list
throw upfront. Use the untyped form (no groups option) when the
group set is open or unknown.
Example: series.rate("requests").
Computes the per-second rate of change for the specified numeric columns.
Non-specified columns pass through unchanged. The first event gets
undefined in affected columns unless { drop: true } is passed,
which removes the first event entirely.
Example: series.rate(["requests", "cpu"]).
Multiple columns can be rated in a single call.
Example: series.rate("requests", { drop: true }).
Drops the first event instead of keeping it with undefined values.
Multi-entity series: the "previous event" may belong to a
different entity, producing meaningless rates across entity
boundaries. On a series carrying multiple entities (host, region,
device id), use
series.partitionBy(col).rate(...).collect() to scope per entity.
See TimeSeries.partitionBy.
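To make the per-second semantics concrete, here is a standalone sketch (not the library's implementation) of the rate computation described above, assuming flat `{ ts, value }` points with millisecond timestamps:

```typescript
// rate = (v[i] - v[i-1]) / dtSeconds; the first event has no
// predecessor, so it yields undefined unless drop is requested.
type Point = { ts: number; value: number }; // ts in milliseconds

function rate(
  points: Point[],
  drop = false,
): { ts: number; value: number | undefined }[] {
  const out = points.map((p, i) => {
    if (i === 0) return { ts: p.ts, value: undefined };
    const prev = points[i - 1];
    const dtSeconds = (p.ts - prev.ts) / 1000; // elapsed time in seconds
    return { ts: p.ts, value: (p.value - prev.value) / dtSeconds };
  });
  return drop ? out.slice(1) : out;
}
```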
Example: series.reduce("value", "avg").
Collapses the entire series to a single scalar value using the specified reducer.
Example: series.reduce({ cpu: "avg", requests: "sum" }).
Collapses the entire series to a record with one entry per mapped column.
Uses the same reducer specs as aggregate(...) — built-in names like "avg", "sum",
"count", or custom functions (values) => result. Where aggregate buckets by time and
produces a new TimeSeries, reduce treats the whole series as one bucket and produces
a plain value or record.
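The whole-series-as-one-bucket behavior can be sketched standalone (a simplified illustration, not the library code), with a small built-in reducer table and undefined values filtered out before reduction:

```typescript
// Treat all rows as a single bucket; map each column to one reducer.
type Row = Record<string, number | undefined>;
type Reducer = (values: number[]) => number;

const reducers: Record<string, Reducer> = {
  avg: (v) => v.reduce((a, b) => a + b, 0) / v.length,
  sum: (v) => v.reduce((a, b) => a + b, 0),
  count: (v) => v.length,
};

function reduceSeries(
  rows: Row[],
  spec: Record<string, string>,
): Record<string, number> {
  const out: Record<string, number> = {};
  for (const col of Object.keys(spec)) {
    // drop missing cells before reducing, mirroring reducer input rules
    const values: number[] = [];
    for (const r of rows) {
      if (r[col] !== undefined) values.push(r[col] as number);
    }
    out[col] = reducers[spec[col]](values);
  }
  return out;
}
```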
Example: series.rename({ cpu: "usage" }). Returns a new series with payload field names renamed according to the supplied mapping.
Example: series.rolling("1h", { value: "avg" }).
Computes event-driven rolling aggregations over the ordered series.
Example: series.rolling(Sequence.every("1m"), "5m", { value: "avg" }).
Computes sequence-driven rolling aggregations and returns an interval-keyed series on the
supplied grid.
Rolling windows are anchored either at each event's begin() time or at the sample point of
each sequence bucket. Membership is determined from source event begin() times.
Supported alignments:
"trailing": (t - window, t]
"leading": [t, t + window)
"centered": [t - window/2, t + window/2)
Defaults:
alignment: "trailing"
sample: "begin"
range: series.timeRange()
Multi-entity series: the rolling window includes events from
every entity within the window — host-A's rolling average mixes
host-B's and host-C's values into the same number. On a
series carrying multiple entities (host, region, device id), use
series.partitionBy(col).rolling(...).collect() to scope per
entity. See TimeSeries.partitionBy.
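As a standalone sketch of the default trailing alignment (an illustration under stated assumptions, not the library code): for each event at time t, average the values whose timestamps fall in the half-open window (t - window, t]:

```typescript
// Event-driven trailing rolling average over flat { ts, value } points.
type Pt = { ts: number; value: number };

function rollingAvgTrailing(points: Pt[], windowMs: number): Pt[] {
  return points.map((p) => {
    // membership test mirrors the (t - window, t] interval above
    const inWindow = points.filter((q) => q.ts > p.ts - windowMs && q.ts <= p.ts);
    const sum = inWindow.reduce((a, q) => a + q.value, 0);
    return { ts: p.ts, value: sum / inWindow.length };
  });
}
```

The O(n²) scan keeps the sketch short; a real implementation would slide two indices over the sorted events.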
Optional options: { alignment?: RollingAlignment; range?: TemporalLike; sample?: AlignSample }
Example: series.select("cpu", "healthy"). Returns a new series containing only the selected payload fields.
Example: series.shift("value", 1).
Lags column values by N events (positive N) or leads them (negative N).
Vacated positions get undefined.
Multi-entity series: the value pulled in from N positions away
may belong to a different entity, producing meaningless lagged
values across entity boundaries. On a series carrying multiple
entities (host, region, device id), use
series.partitionBy(col).shift(...).collect() to scope per entity.
See TimeSeries.partitionBy.
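The lag/lead semantics reduce to a simple index shift, sketched here standalone (not the library implementation):

```typescript
// Positive n lags (each slot takes the value n positions earlier);
// negative n leads; out-of-range slots become undefined.
function shift<T>(values: T[], n: number): (T | undefined)[] {
  return values.map((_, i) => {
    const j = i - n; // source index for this output slot
    return j >= 0 && j < values.length ? values[j] : undefined;
  });
}
```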
Example: series.slice(0, 10). Returns a positional half-open slice of the series.
Optional beginIndex: number
Optional endIndex: number
Example: series.smooth("value", "ema", { alpha: 0.2 }).
Applies a smoothing transform to one numeric payload column while preserving the original key
type, key values, and all non-target payload fields.
Example: series.smooth("value", "movingAverage", { window: "5m", alignment: "centered", output: "valueAvg" }).
Computes a moving average over the selected numeric column using anchor points derived from
event keys. Time keys use their timestamp. TimeRange and Interval keys use the midpoint
of their extent.
Example: series.smooth("value", "loess", { span: 0.75 }).
Computes a LOESS-smoothed value for the selected numeric column using local weighted linear
regression over those same anchor points.
When output is omitted, the smoothed values replace the target column. When output is
supplied, the smoothed values are appended as a new optional numeric column.
Multi-entity series: the smoothing window pulls values from
every entity into each smoothed point — host-A's smoothed value
is blended with host-B's and host-C's. On a series carrying
multiple entities (host, region, device id), use
series.partitionBy(col).smooth(...).collect() to scope per
entity. See TimeSeries.partitionBy.
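For intuition, the "ema" transform follows the standard exponential recurrence. This standalone sketch assumes the series is seeded with the first raw value (s[0] = v[0]); the library's seeding may differ:

```typescript
// s[0] = v[0]; s[i] = alpha * v[i] + (1 - alpha) * s[i - 1].
// Larger alpha weights recent values more heavily.
function ema(values: number[], alpha: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < values.length; i++) {
    out.push(i === 0 ? values[i] : alpha * values[i] + (1 - alpha) * out[i - 1]);
  }
  return out;
}
```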
Example: series.some(event => event.get("healthy")). Returns true when at least one event matches the predicate.
Example: series.tail('30s').
Returns the trailing portion of the series covering the supplied
duration, measured backward from the last event's begin(). Events
whose begin() is strictly greater than lastBegin - duration are
kept. If the series is empty, or the argument is omitted, the series
is returned unchanged — tail() with no argument is the identity.
This is the temporal counterpart to Array.slice(-n), and composes
naturally with reduce to express "current" state:
series.tail('30s').reduce({ cpu: 'avg', host: 'unique' });
// => { cpu: number | undefined, host: ArrayValue | undefined }
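The selection rule above — keep events whose begin() is strictly greater than lastBegin - duration — can be sketched standalone (an illustration, not the library code):

```typescript
// tail: measure backward from the last event's timestamp and keep
// events strictly after the cutoff; an empty input passes through.
function tail<T extends { ts: number }>(events: T[], durationMs: number): T[] {
  if (events.length === 0) return events;
  const cutoff = events[events.length - 1].ts - durationMs;
  return events.filter((e) => e.ts > cutoff);
}
```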
Optional duration: DurationInput
Example: series.timeRange(). Returns the overall temporal extent of the series, if the series is not empty.
Example: series.toArray(). Returns a shallow copy of the event array.
Example: series.toJSON({ rowFormat: "object" }).
Serializes the series into the JSON-friendly shape accepted by TimeSeries.fromJSON(...).
Timestamps are emitted as numbers to avoid time zone ambiguity. Missing payload values are
emitted as null. By default rows are emitted as arrays; use rowFormat: "object" for rows
keyed by schema column names.
Example: series.toObjects(). Returns normalized schema-keyed object rows using temporal key objects and undefined for missing payload values.
Example: series.toPoints().
Wide-row export: { ts, ...valueColumns }[]. Every event in the
series produces one row; every value column from the schema
appears as a top-level key. Missing values stay undefined
(chart libraries render those as gaps under connectNulls={false}
or equivalent).
ts is event.begin() — for Time keys this is the timestamp,
for TimeRange / Interval keys this is the interval start.
Need a single column? Compose with select:
const data = series.select('cpu').toPoints();
// [{ ts: number, cpu: number | undefined }, ...]
The shape — flat { ts, ... }[] — is what every mainstream chart
library accepts directly (Recharts, Observable Plot, visx, raw d3).
Example: series.toRows(). Returns normalized row arrays using Time/TimeRange/Interval keys and undefined for missing payload values.
Example: series.trim(range).
Returns the series trimmed to the supplied range by clipping overlapping event keys.
Non-overlapping events are dropped. Overlapping TimeRange and Interval keys are clipped
to the supplied range. Overlapping Time keys are preserved unchanged.
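For interval-keyed events, the drop-then-clip behavior can be sketched standalone (not the library implementation), using half-open overlap on `{ begin, end }` intervals:

```typescript
// Drop events that don't overlap the range; clip the rest to it.
type Iv = { begin: number; end: number };

function trim(events: Iv[], range: Iv): Iv[] {
  return events
    .filter((e) => e.end > range.begin && e.begin < range.end) // overlap test
    .map((e) => ({
      begin: Math.max(e.begin, range.begin), // clip leading edge
      end: Math.min(e.end, range.end), // clip trailing edge
    }));
}
```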
Example: series.within(start, end).
Returns the portion of the series fully contained by the supplied inclusive temporal range.
This is equivalent in behavior to containedBy(...), but accepts either explicit begin/end
boundaries or a single range-like value.
Example: series.within(range).
Returns the portion of the series fully contained by the supplied inclusive temporal range.
Use overlapping(...) for intersection-based selection or trim(...) for clipped output.
Static concat
Example: TimeSeries.concat([s1, s2, s3]).
Concatenates the events of N same-schema TimeSeries instances and
returns one wider series with all events sorted by key. This is the
"row-append" / vertical-stack counterpart to joinMany (column-merge
by key) and the inverse of the per-group fan-out pattern from
groupBy(col, fn). Matches Array.prototype.concat /
pandas.concat(axis=0) / SQL UNION ALL semantics.
Schemas must match column-by-column on name and kind only —
the required flag is intentionally not part of the structural
check, since required: false only widens cell types and doesn't
affect the concat contract. Other mismatches throw upfront. The
concatenated series's name is taken from the first input.
Event references survive the concat unchanged (no clones), so
concat.at(0) is the same Event instance as the corresponding
source-series event. Tied keys preserve input order via stable
sort — concat([a, b]) puts a's events before b's at any shared
key.
Coming from pondjs: TimeSeries.timeSeriesListMerge(...)'s
concatenation case maps to TimeSeries.concat([...]). Its
column-union case maps to TimeSeries.joinMany([...]).
const groups = series.groupBy('host', (g) =>
g.fill({ cpu: 'linear' }, { limit: 2 }),
);
const concat = TimeSeries.concat([...groups.values()]);
// same schema as the source; events from all hosts re-sorted by time.
For combining series with different schemas (e.g. CPU and memory
sources) by joining on the time key, use TimeSeries.joinMany([...])
instead.
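The tie-breaking rule — events sharing a key keep input order — falls out of a stable sort over the appended lists, sketched here standalone (ECMAScript guarantees Array.prototype.sort stability since ES2019):

```typescript
// Row-append then stable sort by key; ties preserve input order,
// so list a's events precede list b's at any shared timestamp.
type E = { ts: number; src: string };

function concatSeries(lists: E[][]): E[] {
  const all = ([] as E[]).concat(...lists);
  return all.sort((a, b) => a.ts - b.ts);
}
```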
Static fromEvents
Example: TimeSeries.fromEvents(events, { schema, name }).
Builds a typed series from an array of Event instances. The events
are sorted by key before construction, so callers don't need to
pre-sort. The schema is taken on trust — callers should pass the
same schema the events were originally produced under.
Trust contract: no validation against the declared schema. If
the caller passes events from a different schema, the series
builds successfully and downstream event.get('col') calls will
return undefined / produce confusing errors at access time. Most
callers come from groupBy(...).values() or other pond-ts
transforms and can't hit this; if you're constructing events by
hand, prefer new TimeSeries({ schema, rows }) or
TimeSeries.fromJSON(...), both of which validate.
Closes the round-trip after groupBy(col, fn) + per-group transforms:
const groups = series.groupBy('host', (g) =>
g.fill({ cpu: 'linear' }, { limit: 2 }),
);
const allEvents = [...groups.values()].flatMap((g) => [...g.events]);
const merged = TimeSeries.fromEvents(allEvents, {
name: series.name,
schema: series.schema,
});
For combining multiple same-schema series in one call, prefer
TimeSeries.concat([...]) — it does the events-spread for you.
Static fromJSON
Example: TimeSeries.fromJSON({ name, schema, rows, parse: { timeZone: "Europe/Madrid" } }).
Creates a typed series from JSON-style row arrays or object rows keyed by schema column names.
null values are treated as missing values. Ambiguous local timestamp strings are parsed using
the supplied parse.timeZone, which defaults to UTC.
Static fromPoints
Example: TimeSeries.fromPoints(pts, { schema: [...] }).
Construct a TimeSeries from a flat array of wide-row points —
the inverse of toPoints(). Each point carries ts plus one key
per value column from the schema; missing keys become undefined.
The schema's first column must be kind: 'time' — ts is a
single timestamp and can't reconstruct a TimeRange or
Interval extent. Schemas may have any number of value columns.
Useful for round-tripping chart data back into pond-native
operations — e.g. bucketing a flat list of anomaly points via
aggregate(Sequence.every('15s'), { cpu: 'count' }).
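The inverse mapping — wide points back to schema-ordered rows — is mechanically simple; this standalone sketch (not the library code) shows missing point keys surfacing as undefined cells:

```typescript
// One row per point: [ts, ...value columns in schema order];
// a key absent from the point becomes an undefined cell.
type WidePoint = { ts: number } & Record<string, number | undefined>;

function pointsToRows(
  points: WidePoint[],
  valueCols: string[],
): (number | undefined)[][] {
  return points.map((p) => [p.ts, ...valueCols.map((c) => p[c])]);
}
```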
Static joinMany
Example: TimeSeries.joinMany([cpu.align(seq), memory.align(seq), errors.align(seq)]).
Performs an exact-key n-ary join across many series.
Use join(...) for the binary case and joinMany(...) when you want to build one wide series
from several aligned or aggregated inputs. This avoids repeated manual pairwise joins in
feature-building, reporting, and dashboard pipelines.
Defaults:
type: "outer"
onConflict: "error"
Optional options: ErrorJoinOptions
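The exact-key outer join can be sketched standalone over keyed column maps. This is an illustration only: conflict handling is simplified to last-input-wins, whereas joinMany defaults to onConflict: "error":

```typescript
// Outer join: every key from every input appears once; each input
// contributes its columns for that key.
type Cols = Record<string, number>;

function joinMany(inputs: Map<number, Cols>[]): Map<number, Cols> {
  const out = new Map<number, Cols>();
  for (const m of inputs) {
    m.forEach((cols, ts) => {
      // simplified: a shared column name is overwritten by later inputs
      out.set(ts, { ...(out.get(ts) ?? {}), ...cols });
    });
  }
  // sort keys ascending to mimic an ordered time-keyed result
  const sorted = Array.from(out.entries()).sort((a, b) => a[0] - b[0]);
  return new Map(sorted);
}
```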
An immutable ordered collection of typed events sharing a common schema.
Example