pond-ts

Typed time-series primitives for TypeScript. Schema-driven events, composable transforms, a live-ingest mode, and an optional React integration — all strict TS end to end, all immutable.

npm install pond-ts # core
npm install @pond-ts/react # React hooks (optional)

First example: batch

Your data is rows with timestamps. Tell pond-ts the schema once; every transform downstream narrows off of it.

import { Sequence, TimeSeries } from 'pond-ts';

// 1. Define the schema. Every downstream transform narrows off of it —
// .get('cpu') returns number, .get('host') returns string, and
// transforms like aggregate preserve that narrowing.
const schema = [
  { name: 'time', kind: 'time' },
  { name: 'cpu', kind: 'number' },
  { name: 'requests', kind: 'number' },
  { name: 'host', kind: 'string' },
] as const;

// 2. Construct a TimeSeries. Rows are positional by schema column;
// time values are ms-since-epoch, ISO strings with an offset, or
// Dates.
const cpu = TimeSeries.fromJSON({
  name: 'cpu',
  schema,
  rows: [
    ['2025-01-01T00:00:00Z', 0.31, 120, 'host1'],
    ['2025-01-01T00:01:00Z', 0.44, 135, 'host2'],
    ['2025-01-01T00:02:00Z', 0.52, 141, 'host1'],
    ['2025-01-01T00:03:00Z', 0.48, 128, 'host1'],
    ['2025-01-01T00:04:00Z', 0.63, 166, 'host3'],
  ],
});

// 3. Compose transforms. Each one returns a new TimeSeries — the
// source is immutable, types narrow per step.
const byMinute = cpu.aggregate(Sequence.every('1m'), {
  cpu: 'avg',
  requests: 'sum',
  host: 'last',
});

const bands = cpu.baseline('cpu', { window: '2m', sigma: 2 });
// ^ appends avg / sd / upper / lower columns in one rolling pass,
// with the flat-window case (`sd === 0`) handled correctly.

const anomalies = cpu.outliers('cpu', { window: '2m', sigma: 2 });
// ^ schema-preserving filter — same columns, just the spikes.

// 4. Chart-ready output. `toPoints()` returns wide rows keyed by
// schema column name — drop straight into Recharts / Observable
// Plot / visx / d3.
byMinute.toPoints(); // [{ ts, cpu, requests, host }, ...]
bands.toPoints(); // [{ ts, cpu, ..., avg, sd, upper, lower }, ...]
anomalies.length; // how many spikes tripped the 2σ band

The whole batch API is composable like this — filter, map, rolling, smooth, groupBy, join, reduce, diff, rate, fill all fit the same "TimeSeries in, TimeSeries out, typed schema preserved" shape.
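For intuition, the 2σ logic that `baseline` and `outliers` describe can be sketched in plain TypeScript. This is a hedged sketch under a simple trailing-window assumption; `meanSd` and `flagOutliers` are illustrative helpers, not pond-ts APIs.

```typescript
// Mean and (population) standard deviation of a window of values.
function meanSd(xs: number[]): { avg: number; sd: number } {
  const avg = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, b) => a + (b - avg) ** 2, 0) / xs.length;
  return { avg, sd: Math.sqrt(variance) };
}

// For each point, look back over the trailing `window` values and flag
// the point when it falls outside avg ± sigma * sd. A flat window
// (sd === 0) flags nothing, matching the flat-window note above.
function flagOutliers(values: number[], window: number, sigma: number): number[] {
  const flagged: number[] = [];
  for (let i = window; i < values.length; i++) {
    const { avg, sd } = meanSd(values.slice(i - window, i));
    if (sd > 0 && Math.abs(values[i] - avg) > sigma * sd) flagged.push(i);
  }
  return flagged;
}
```

The real `outliers` returns a schema-preserving TimeSeries rather than indices, but the band test per point is the same shape.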

Second example: live data

Drop a LiveSeries wherever events arrive incrementally — a WebSocket, a polling loop, a message queue. Push in as they come; snapshot to a regular TimeSeries at any moment for batch analytics.

import { LiveSeries, Sequence } from 'pond-ts';

// 1. Same schema; this is a live append buffer with retention.
const live = new LiveSeries({
  name: 'cpu',
  schema,
  retention: { maxAge: '10m' }, // keep only the last 10 minutes
});

// 2. Push as events arrive. Each push is validated against the schema.
live.push([Date.now(), 0.45, 128, 'api-1']);
// ...driven by whatever your data source is.

// 3. React to events inline. `e.get('cpu')` narrows to number |
// undefined straight from the schema — no `as number` casts.
live.on('event', (e) => {
  const cpu = e.get('cpu');
  if (cpu !== undefined && cpu > 0.9) {
    console.warn(`High CPU on ${e.get('host')}`);
  }
});

// 4. Snapshot to a TimeSeries for batch analytics at any time.
const snap = live.toTimeSeries();
snap.aggregate(Sequence.every('1m'), { cpu: 'avg' });

The live side also has its own aggregate / rolling / filter views that stay incremental — good for continuously-updated dashboard numbers without snapshot overhead. See Live.
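As a mental model (not LiveSeries's actual implementation), `maxAge` retention amounts to an append buffer that evicts rows older than the trailing window on each push. `RetentionBuffer` below is an illustrative name, not a pond-ts export.

```typescript
// Illustrative retention buffer: append rows, evict those whose
// timestamp falls outside the trailing maxAge window.
type Row = [ts: number, ...values: unknown[]];

class RetentionBuffer {
  private rows: Row[] = [];
  constructor(private maxAgeMs: number) {}

  push(row: Row): void {
    this.rows.push(row);
    const cutoff = row[0] - this.maxAgeMs;
    // Rows arrive roughly in time order, so evicting from the front
    // is enough for this sketch.
    while (this.rows.length && this.rows[0][0] < cutoff) this.rows.shift();
  }

  get length(): number {
    return this.rows.length;
  }
}
```

The point of the shape: pushes are O(1) amortized, and a snapshot only ever sees the bounded window, which is what keeps `toTimeSeries()` cheap.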

Third example: React (optional)

You do not need React to use pond-ts. Everything above is the core package. @pond-ts/react is a separate install that wraps the live primitives in hooks — it's the bridge, not the library.

import { useEffect } from 'react';
import { useLiveSeries, useCurrent } from '@pond-ts/react';

function Dashboard() {
  // 1. Create and subscribe to a LiveSeries; snap refreshes on push.
  const [live, snap] = useLiveSeries({
    name: 'cpu',
    schema,
    retention: { maxAge: '10m' },
  });

  // 2. Pipe events from wherever they come (WebSocket here).
  useEffect(() => {
    const ws = new WebSocket('/api/metrics');
    ws.onmessage = (m) => {
      const { ts, cpu, requests, host } = JSON.parse(m.data);
      live.push([ts, cpu, requests, host]);
    };
    return () => ws.close();
  }, [live]);

  // 3. Reduce the source to chart-ready scalars — reference-stable
  // per-field, so downstream useMemo doesn't invalidate on every push.
  const { cpu } = useCurrent(live, { cpu: 'avg' });

  return (
    <>
      <Stat label="Avg CPU" value={cpu} />
      {snap && <Chart points={snap.select('cpu').toPoints()} />}
    </>
  );
}

See @pond-ts/react for the hook reference and end-to-end dashboard patterns.

What pond-ts is (and isn't)

  • Typed — the schema is a readonly TypeScript tuple; every transform output narrows off it. No as casts at the call site.
  • Immutable — Event and TimeSeries are frozen; transforms return new instances.
  • Batch-first — the core library is in-memory transforms over a complete TimeSeries. The live side is a bounded buffer with moderate reordering tolerance at ingest, not a full streaming engine (see Live Transforms → Late-event scope).
  • Framework-agnostic — the core has no React dependency; @pond-ts/react is strictly optional.
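To illustrate the kind of narrowing a readonly schema tuple enables, here is a type-level sketch in plain TypeScript. `PointOf` and `KindToTs` are illustrative names under assumed kinds, not pond-ts exports.

```typescript
// Illustrative schema typing: map each column's kind to a TS type,
// then build a wide point type keyed by column name.
type Kind = 'time' | 'number' | 'string';
type Col = { readonly name: string; readonly kind: Kind };

type KindToTs<K extends Kind> =
  K extends 'number' ? number : K extends 'string' ? string : number | string | Date;

type PointOf<S extends readonly Col[]> = {
  [C in S[number] as C['name']]: KindToTs<C['kind']>;
};

const schema = [
  { name: 'time', kind: 'time' },
  { name: 'cpu', kind: 'number' },
  { name: 'host', kind: 'string' },
] as const;

// Point is { time: number | string | Date; cpu: number; host: string },
// so point.cpu is a number with no `as` cast at the call site.
type Point = PointOf<typeof schema>;
const point: Point = { time: '2025-01-01T00:00:00Z', cpu: 0.31, host: 'host1' };
```

This is the mechanism behind "every transform output narrows off it": the `as const` tuple keeps each column's name and kind as literal types, so a mapped type can recover per-column value types.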

Not a database. Not a query engine. Not a chart library — toPoints() is the one-line bridge to every charting option.

In this docs site

  • Start here — install, first workflow, the mental model.
  • pond-ts (core) — TimeSeries, LiveSeries, advanced (charting, array columns).
  • @pond-ts/react — React hooks for subscribing to live sources.
  • Recipes — end-to-end worked dashboards.
  • Reference — benchmarks, migration, bibliography.

Full generated API references for both packages live at /api.