pond-ts API Reference (core)

    Class PartitionedTimeSeries<S, K>

    View over a TimeSeries that scopes stateful transforms to within each partition. Created by TimeSeries.partitionBy(by).

    Most pond-ts stateful operators read from neighboring events when computing each output. On a multi-entity series (events for many hosts interleaved by time), those neighbors silently cross entity boundaries: a fill('linear') for host-A would interpolate using host-B's value as a "neighbor"; a rolling('5m', { cpu: 'avg' }) would average across all hosts in the window.
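The pitfall can be sketched with a toy linear fill over a flat array. `linearFill` here is a hypothetical stand-in for `fill('linear')`, not pond-ts code; it just shows how a gap in host A's readings picks up host B's values when events are interleaved:

```typescript
// Toy stand-in for fill('linear') over a flat array (not pond-ts code), just
// an illustration of why fill neighbors must not cross partitions.
function linearFill(values: (number | undefined)[]): number[] {
  const out = values.slice();
  for (let i = 0; i < out.length; i++) {
    if (out[i] !== undefined) continue;
    // Find the nearest defined neighbor on each side and interpolate.
    let lo = i - 1;
    while (lo >= 0 && out[lo] === undefined) lo--;
    let hi = i + 1;
    while (hi < out.length && out[hi] === undefined) hi++;
    const a = out[lo] as number;
    const b = out[hi] as number;
    out[i] = a + ((b - a) * (i - lo)) / (hi - lo);
  }
  return out as number[];
}

// Interleaved by time: A reads 10, B reads 90, A has a gap, B 90, A 10.
const events = [
  { host: 'A', cpu: 10 as number | undefined },
  { host: 'B', cpu: 90 },
  { host: 'A', cpu: undefined },
  { host: 'B', cpu: 90 },
  { host: 'A', cpu: 10 },
];

// Filling the whole series: A's gap interpolates between two B readings. → 90
const naive = linearFill(events.map(e => e.cpu));

// Filling only A's events (what partitionBy('host') scopes to). → 10
const perHost = linearFill(events.filter(e => e.host === 'A').map(e => e.cpu));
```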

    partitionBy runs the transform independently on each partition's events. The view is persistent across chains — each sugar method returns another PartitionedTimeSeries carrying the same partition columns, so multi-step per-partition workflows compose cleanly:

    const cleaned = ts
      .partitionBy('host')
      .dedupe({ keep: 'last' })      // per-host
      .fill({ cpu: 'linear' })       // per-host
      .rolling('5m', { cpu: 'avg' }) // per-host
      .collect();                    // back to TimeSeries<S>

    Call .collect() (or .apply(fn) for arbitrary transforms) to materialize back to a regular TimeSeries. Without .collect(), the chain stays in partition view.

    // Per-host fill
    const filled = series.partitionBy('host').fill({ cpu: 'linear' }).collect();

    // Composite partitioning by host + region
    const filledByHostRegion = series
      .partitionBy(['host', 'region'])
      .fill({ cpu: 'linear' })
      .collect();

    // Arbitrary transform via apply (terminal — returns TimeSeries directly)
    const custom = series.partitionBy('host').apply(g =>
      g.fill({ cpu: 'linear' }).rolling('5m', { cpu: 'avg' }),
    );

    Properties

    by: readonly (keyof EventDataForSchema<S> & string)[]
    groups?: readonly K[]

    Declared partition values when partitionBy(col, { groups }) was used. When set, toMap iterates in declared order (not insertion order), empty declared groups still appear as empty TimeSeries entries, and unknown partition values throw at construction time.
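The declared-groups semantics can be modeled in plain TypeScript. This is a toy sketch of the three documented behaviors (declared order wins, empty groups still appear, unknown values throw), not pond-ts internals; the real implementation validates at construction time:

```typescript
// Toy model of the documented { groups } behavior (not pond-ts internals).
// Declared order wins, empty declared groups still appear, unknowns throw.
function partitionWithGroups<T>(
  events: readonly { key: string; value: T }[],
  groups: readonly string[],
): Map<string, T[]> {
  // Seed the map in declared order so empty groups still get entries.
  const out = new Map<string, T[]>(groups.map(g => [g, [] as T[]] as [string, T[]]));
  for (const e of events) {
    const bucket = out.get(e.key);
    if (!bucket) throw new Error(`unknown partition value: ${e.key}`);
    bucket.push(e.value);
  }
  return out;
}

const byHost = partitionWithGroups(
  [{ key: 'api-2', value: 1 }, { key: 'api-1', value: 2 }],
  ['api-1', 'api-2', 'api-3'], // declared order; api-3 has no events
);
// [...byHost.keys()] → ['api-1', 'api-2', 'api-3']; byHost.get('api-3') → []
```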

    source: TimeSeries<S>

    Methods

    • Per-partition aggregate. See TimeSeries.aggregate.

      Type Parameters

      • const Mapping extends Readonly<AggregateMapEntries<S>>

      Parameters

      Returns PartitionedTimeSeries<
          readonly [
              ColumnDef<"interval", "interval">,
              AggregateColumns<ValueColumnsForSchema<S>, Mapping>,
          ],
          K,
      >

    • Per-partition aggregate. See TimeSeries.aggregate.

      Type Parameters

      • const Mapping extends Readonly<
            Record<
                string,
                Readonly<
                    {
                        from: ValueColumnsForSchema<S>[number]["name"];
                        kind?: ScalarKind;
                        using: AggregateFunctionsForKind<
                            Extract<
                                ValueColumnsForSchema<S>[number],
                                ColumnDef<ValueColumnsForSchema<(...)>[number]["name"], ScalarKind>,
                            >["kind"],
                        >;
                    },
                >,
            >,
        >

      Parameters

      Returns PartitionedTimeSeries<
          readonly [
              ColumnDef<"interval", "interval">,
              AggregateOutputMapColumns<S, Mapping>,
          ],
          K,
      >
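The mapping shape in the signature above ({ from, using } per output column) can be illustrated with a toy reducer. The actual aggregate parameters are elided in this reference, so this is a sketch of the mapping concept, not the real call:

```typescript
// Toy model of the documented mapping shape (not the real aggregate signature).
type Row = Record<string, number>;
type ToyMapping = Record<string, { from: string; using: 'avg' | 'sum' }>;

function aggregateRows(rows: readonly Row[], mapping: ToyMapping): Row {
  const out: Row = {};
  for (const [name, { from, using }] of Object.entries(mapping)) {
    // Each output column pulls from one source column via one function.
    const vals = rows.map(r => r[from]);
    const sum = vals.reduce((a, b) => a + b, 0);
    out[name] = using === 'sum' ? sum : sum / vals.length;
  }
  return out;
}

aggregateRows(
  [{ cpu: 10 }, { cpu: 30 }],
  { cpuAvg: { from: 'cpu', using: 'avg' } },
); // → { cpuAvg: 20 }
```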

    • Run a transform fn independently on each partition and return a TimeSeries<R> directly (terminal — does not stay in the partitioned view). The escape hatch for compositions or operators not exposed as sugar.

      To keep the partition after a custom transform, use the sugar methods (which preserve partition state) or call .partitionBy(...) again on the result.

      Type Parameters

      Parameters

      Returns TimeSeries<R>

      // chain two stateful ops within each partition (one shot)
      const out = series.partitionBy('host').apply(g =>
        g.fill({ cpu: 'linear' }).rolling('5m', { cpu: 'avg' }),
      );
    • Materialize the partitioned view back into a regular TimeSeries. Terminal operation — call this at the end of a chain to "collect" the per-partition results. Equivalent to .apply(g => g) but cheaper (no fn dispatch, just returns the source as-is).

      Returns TimeSeries<S>

      const cleaned = ts
        .partitionBy('host')
        .fill({ cpu: 'linear' })
        .rolling('5m', { cpu: 'avg' })
        .collect(); // <- TimeSeries<S>
    • Per-partition cumulative. See TimeSeries.cumulative.

      Type Parameters

      • const Targets extends string

      Parameters

      • spec: {
            [K in string]:
                | "sum"
                | "count"
                | "min"
                | "max"
                | ((acc: number, value: number) => number)
        }

      Returns PartitionedTimeSeries<
          readonly [
              S[0],
              ReplaceSmoothedColumn<ValueColumnsForSchema<S>, Targets>,
          ],
          K,
      >
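The spec shape above (named accumulator or custom (acc, value) reducer per column) can be modeled in miniature. Note the seeding behavior here is an assumption for illustration, not pond-ts's documented choice:

```typescript
// Toy single-column model of the cumulative spec. Seeding is an assumption.
type Reducer = (acc: number, value: number) => number;
type How = 'sum' | 'count' | 'min' | 'max' | Reducer;

const named: Record<string, Reducer> = {
  sum: (acc, v) => acc + v,
  count: acc => acc + 1,
  min: (acc, v) => Math.min(acc, v),
  max: (acc, v) => Math.max(acc, v),
};

function cumulative(values: readonly number[], how: How): number[] {
  const fn = typeof how === 'function' ? how : named[how];
  const out: number[] = [];
  let acc: number | null = null;
  for (const v of values) {
    // Assumed seeding: 'count' starts at 1, everything else at the first value.
    acc = acc === null ? (how === 'count' ? 1 : v) : fn(acc, v);
    out.push(acc);
  }
  return out;
}

cumulative([3, 1, 2], 'sum'); // → [3, 4, 6]
cumulative([3, 1, 2], 'min'); // → [3, 1, 1]
```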

    • Per-partition materialize. See TimeSeries.materialize.

      Bonus over the bare TimeSeries.materialize call: every output row, including empty-bucket rows, gets the partition columns auto-populated from the partition's known key values. Without this, empty buckets would emit rows with undefined partition columns — forcing a follow-up .fill({ host: 'hold' }) step that fails for partitions where every event sits in a long-outage gap.

      Parameters

      • sequence: SequenceLike
      • Optional options: {
            range?: TemporalLike;
            sample?: AlignSample;
            select?: "first" | "last" | "nearest";
        }

      Returns PartitionedTimeSeries<
          readonly [
              ColumnDef<"time", "time">,
              OptionalizeColumns<ValueColumnsForSchema<S>>,
          ],
          K,
      >

    • Materialize the partitioned view as a Map<key, TimeSeries<S>>, one entry per partition. Terminal — exits the partition view.

      Use this when downstream code needs to iterate or look up per partition (typical in dashboards: one chart line per host, one tooltip per region). Without this, the equivalent dance was .collect().groupBy(col, fn) — two operators where one would do.

      The map key is the stringified partition value for single-column partitions, or a JSON.stringify'd array of values for composite partitions. The single-column form preserves the value's natural string representation (a host column with values 'api-1' yields keys 'api-1'); composite keys produce JSON like '["api-1","eu"]'. Map iteration order matches the order each partition was first encountered in the source events.

      undefined partition values become the literal ' undefined' with a leading space — this avoids colliding with a string column whose value happens to be the literal text 'undefined'. The two are distinct buckets:

      series // events with host=undefined and host='undefined'
        .partitionBy('host')
        .toMap();
      // → 2 entries: ' undefined' (missing) vs 'undefined' (string literal)

      Divergence from series.groupBy(col): groupBy uses bare 'undefined' (no leading space) for missing values, so it collapses these two cases. toMap's leading-space sentinel is an intentional improvement — the older groupBy shape silently loses the distinction between "missing" and "the string 'undefined'". Migrating from groupBy to toMap will produce different keys for partitions with undefined values; lookup code that previously did .get('undefined') should change to .get(' undefined') (note the leading space) to find the missing-value bucket.

      Composite encoder. For composite partitions, JSON.stringify with a ?? null fallback emits both null and undefined as JSON null. In practice this only matters if event data contains explicit null values, which the standard validation/ingest paths convert to undefined upfront — so the single-column-vs-composite asymmetry is unreachable through the normal API.
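The two encodings described above can be sketched as plain functions. This is a toy model of the keys toMap produces, not the library's own code:

```typescript
// Toy model of the documented key encodings (not pond-ts internals).
function singleKey(v: string | undefined): string {
  // Leading-space sentinel keeps the missing-value bucket distinct from
  // a real string value 'undefined'.
  return v === undefined ? ' undefined' : String(v);
}

function compositeKey(values: readonly (string | undefined)[]): string {
  // JSON-encode the value array; undefined folds to null (the ?? null fallback).
  return JSON.stringify(values.map(v => v ?? null));
}

singleKey(undefined);               // → ' undefined' (missing-value bucket)
singleKey('undefined');             // → 'undefined' (real string, distinct bucket)
compositeKey(['api-1', 'eu']);      // → '["api-1","eu"]'
compositeKey(['api-1', undefined]); // → '["api-1",null]'
```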

      Returns Map<K, TimeSeries<S>>

      // Per-host event lookup
      const byHost = events.partitionBy('host').toMap();
      const apiEvents = byHost.get('api-1');

      // With a transform — one-shot per-partition shape change
      const points = events.partitionBy('host').toMap((g) => g.toPoints());
      for (const [host, rows] of points) {
        chart.addSeries(host, rows);
      }

      // Composite partition
      const byHostRegion = events
        .partitionBy(['host', 'region'])
        .toMap();
      const apiEu = byHostRegion.get('["api-1","eu"]');
    • Overload of toMap with a per-partition transform: fn runs on each partition's TimeSeries before it is inserted into the map. Key encoding, the ' undefined' sentinel for missing values, and iteration order are identical to the no-argument overload above.

      Type Parameters

      Parameters

      Returns Map<K, TimeSeries<R>>
    • Overload of toMap with a transform that returns an arbitrary per-partition value (rows, points, a summary object). Key encoding, the ' undefined' sentinel for missing values, and iteration order are identical to the no-argument overload above.

      Type Parameters

      • R

      Parameters

      Returns Map<K, R>