Optional options: { groups?: readonly K[] }

Readonly by

Optional Readonly groups
Declared partition values when partitionBy(col, { groups }) was
used. When set, toMap iterates in declared order (not insertion
order), empty declared groups still appear as empty TimeSeries
entries, and unknown partition values throw at construction time.
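The declared-groups contract above (declared iteration order, empty groups preserved, unknown values rejected) can be sketched with a small self-contained helper; `groupEvents` and its shapes are hypothetical names for illustration, not part of the library:

```typescript
// Hypothetical helper mirroring the { groups } contract: declared order,
// empty declared groups kept as entries, unknown partition values rejected.
function groupEvents<T>(
  events: ReadonlyArray<{ key: string; event: T }>,
  groups: ReadonlyArray<string>,
): Map<string, T[]> {
  // seed in declared order so empty groups still appear as entries
  const out = new Map<string, T[]>(groups.map((g): [string, T[]] => [g, []]));
  for (const { key, event } of events) {
    const bucket = out.get(key);
    if (bucket === undefined) throw new Error(`unknown partition value: ${key}`);
    bucket.push(event);
  }
  return out;
}
```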
Readonly source

Per-partition aggregate. See TimeSeries.aggregate.
Optional options: { range?: TemporalLike }

Per-partition aggregate. See TimeSeries.aggregate.
Optional options: { range?: TemporalLike }

Per-partition align. See TimeSeries.align.
Optional options: { method?: AlignMethod; range?: TemporalLike; sample?: AlignSample }

Run a transform fn independently on each partition and return a
TimeSeries<R> directly (terminal — does not stay in the
partitioned view). The escape hatch for compositions or operators
not exposed as sugar.
To keep the partition after a custom transform, use the sugar
methods (which preserve partition state) or call .partitionBy(...)
again on the result.
Per-partition baseline. See TimeSeries.baseline.
Materialize the partitioned view back into a regular TimeSeries.
Terminal operation — call this at the end of a chain to "collect"
the per-partition results. Equivalent to .apply(g => g) but
cheaper (no fn dispatch, just returns the source as-is).
Per-partition cumulative. See TimeSeries.cumulative.
Per-partition dedupe. The duplicate key becomes "same partition
columns AND same timestamp" — partitionBy provides the partition
segregation, dedupe handles the within-partition timestamp
collapse. The most common dedupe shape for multi-entity ingest.
See TimeSeries.dedupe.
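The composite duplicate key described above can be shown standalone; the event shape and `dedupeLastWins` helper are illustrative assumptions (one possible keep policy), not the library's API:

```typescript
// Illustrative sketch: "same partition column AND same timestamp" defines
// a duplicate; later events win here. Hosts never collapse each other.
type Ev = { host: string; ts: number; cpu: number };
function dedupeLastWins(events: ReadonlyArray<Ev>): Ev[] {
  const seen = new Map<string, Ev>();
  // key = partition value + timestamp; \u0000 separator avoids accidental joins
  for (const e of events) seen.set(`${e.host}\u0000${e.ts}`, e);
  return [...seen.values()];
}
```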
Optional options: { keep?: DedupeKeep<S> }

Per-partition diff. See TimeSeries.diff.
Per-partition fill. See TimeSeries.fill.
Optional options: { limit?: number; maxGap?: DurationInput }

Per-partition materialize. See TimeSeries.materialize.
Bonus over the bare TimeSeries.materialize call: every
output row, including empty-bucket rows, gets the partition
columns auto-populated from the partition's known key values.
Without this, empty buckets would emit rows with undefined
partition columns — forcing a follow-up
.fill({ host: 'hold' }) step that fails for partitions where
every event sits in a long-outage gap.
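A minimal self-contained sketch of that empty-bucket behavior, with hypothetical row shapes and a naive bucket loop standing in for the real materialize:

```typescript
// Sketch: every bucket row carries the partition's key column, even when
// no event fell inside the bucket (the value column is left undefined).
function bucketize(
  host: string,
  bucketStarts: ReadonlyArray<number>,
  events: ReadonlyArray<{ ts: number; cpu: number }>,
): Array<{ host: string; ts: number; cpu?: number }> {
  return bucketStarts.map((start, i) => {
    const end = bucketStarts[i + 1] ?? Infinity;
    const hit = events.find((e) => e.ts >= start && e.ts < end);
    // empty bucket: cpu stays undefined, but host is auto-populated
    return hit ? { host, ts: start, cpu: hit.cpu } : { host, ts: start };
  });
}
```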
Optional options: { … }

Per-partition outliers. See TimeSeries.outliers.
Per-partition pctChange. See TimeSeries.pctChange.
Per-partition rate. See TimeSeries.rate.
Per-partition rolling. See TimeSeries.rolling.
Optional options: { alignment?: RollingAlignment }

Per-partition rolling. See TimeSeries.rolling.
Optional options: { alignment?: RollingAlignment }

Per-partition rolling. See TimeSeries.rolling.
Optional options: { alignment?: RollingAlignment; range?: TemporalLike; sample?: AlignSample }

Per-partition rolling. See TimeSeries.rolling.
Optional options: { alignment?: RollingAlignment; range?: TemporalLike; sample?: AlignSample }

Per-partition shift. See TimeSeries.shift.
Per-partition smooth. See TimeSeries.smooth.
Materialize the partitioned view as a Map<key, TimeSeries<S>>,
one entry per partition. Terminal — exits the partition view.
Use this when downstream code needs to iterate or look up per
partition (typical in dashboards: one chart line per host, one
tooltip per region). Without this, the equivalent dance was
.collect().groupBy(col, fn) — two operators where one would do.
The map key is the stringified partition value for single-column
partitions, or a JSON.stringify'd array of values for composite
partitions. The single-column form preserves the value's natural
string representation (a host column with values 'api-1'
yields keys 'api-1'); composite keys produce JSON like
'["api-1","eu"]'. Map iteration order matches the order each
partition was first encountered in the source events.
undefined partition values become the literal ' undefined'
with a leading space — this avoids colliding with a string
column whose value happens to be the literal text 'undefined'.
The two are distinct buckets:
series // events with host=undefined and host='undefined'
.partitionBy('host')
.toMap();
// → 2 entries: ' undefined' (missing) vs 'undefined' (string literal)
Divergence from series.groupBy(col): groupBy uses bare
'undefined' (no leading space) for missing values, so it
collapses these two cases. toMap's leading-space sentinel is
an intentional improvement — the older groupBy shape silently
loses the distinction between "missing" and "the string
'undefined'". Migrating from groupBy to toMap will produce
different keys for partitions with undefined values; lookup
code that previously did .get('undefined') should change to
.get(' undefined') (note the leading space) to find the
missing-value bucket.
Composite encoder. For composite partitions, JSON.stringify
with a ?? null fallback emits both null and undefined as
JSON null. In practice this only matters if event data
contains explicit null values, which the standard
validation/ingest paths convert to undefined upfront — so the
single-column-vs-composite asymmetry is unreachable through the
normal API.
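The key rules above (single-column sentinel vs composite JSON encoding) condense into one standalone sketch; `partitionKey` is a hypothetical name for what toMap does internally, not an exported function:

```typescript
// Sketch of toMap's key encoding as documented: single-column keys use the
// leading-space sentinel for missing values; composite keys are JSON, with
// null and undefined both collapsing to JSON null.
function partitionKey(values: ReadonlyArray<unknown>): string {
  if (values.length === 1) {
    const v = values[0];
    // ' undefined' (leading space) cannot collide with the string 'undefined'
    return v === undefined ? ' undefined' : String(v);
  }
  return JSON.stringify(values.map((v) => v ?? null));
}
```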
// Per-host event lookup
const byHost = events.partitionBy('host').toMap();
const apiEvents = byHost.get('api-1');
// With a transform — one-shot per-partition shape change
const points = events.partitionBy('host').toMap((g) => g.toPoints());
for (const [host, rows] of points) {
chart.addSeries(host, rows);
}
// Composite partition
const byHostRegion = events
.partitionBy(['host', 'region'])
.toMap();
const apiEu = byHostRegion.get('["api-1","eu"]');
View over a TimeSeries that scopes stateful transforms to within
each partition. Created by TimeSeries.partitionBy(by).

Most pond-ts stateful operators read from neighboring events when
computing each output. On a multi-entity series (events for many
hosts interleaved by time), those neighbors silently cross entity
boundaries: a fill('linear') for host-A would interpolate using
host-B's value as a "neighbor"; a rolling('5m', { cpu: 'avg' })
would average across all hosts in the window. partitionBy runs the
transform independently on each partition's events.

The view is persistent across chains — each sugar method returns
another PartitionedTimeSeries carrying the same partition columns,
so multi-step per-partition workflows compose cleanly.

Call .collect() (or .apply(fn) for arbitrary transforms) to
materialize back to a regular TimeSeries. Without .collect(), the
chain stays in partition view.

Example
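A minimal self-contained demonstration of the cross-entity hazard, with plain arrays standing in for TimeSeries and made-up numbers:

```typescript
// Two hosts interleaved by time. A naive 2-point rolling average treats
// b@2 as a neighbor of a@3; restricting neighbors to host 'a' does not.
type Ev = { host: string; ts: number; cpu: number };
const series: Ev[] = [
  { host: 'a', ts: 1, cpu: 10 },
  { host: 'b', ts: 2, cpu: 90 },
  { host: 'a', ts: 3, cpu: 20 },
];
// naive: last two events regardless of host -> (90 + 20) / 2 = 55
const naive = (series[1].cpu + series[2].cpu) / 2;
// partitioned: host 'a' events only -> (10 + 20) / 2 = 15
const a = series.filter((e) => e.host === 'a');
const partitioned = (a[0].cpu + a[1].cpu) / 2;
```

This is the gap partitionBy closes: per-partition operators only ever see neighbors from the same partition.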