Frontend telemetry reporting

This recipe collects high-frequency timing or performance events in a frontend app and reports summarised values to a backend service, without drowning the server in raw event traffic. The same rolling state drives an in-app live display.

The pattern:

  1. Collect timing events into a LiveSeries as they happen.
  2. Call rolling() with Trigger.every(duration) to maintain a trailing rolling window AND emit one snapshot per reporting interval in a single primitive.
  3. Subscribe to the rolling and fetch each emitted snapshot to the backend.
  4. Read rolling.value() directly for the in-app live display — same rolling state, no duplicated deque.
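Stripped of the library, the mechanics behind steps 1–4 can be sketched in plain TypeScript. This is an illustration of the pattern, not pond-ts API or internals: a trailing buffer pruned on every push, plus an epoch-aligned boundary check that fires at most one snapshot per crossing, while `value()` stays readable at any time.

```typescript
// Self-contained sketch of the pattern (NOT pond-ts internals):
// trailing window + data-driven boundary trigger + on-demand read.
type Sample = { ts: number; latency: number };

class RollingP95 {
  private buf: Sample[] = [];
  private lastBucket = -1;

  constructor(
    private windowMs: number,
    private intervalMs: number,
    private onSnapshot: (boundaryTs: number, p95: number) => void,
  ) {}

  push(ts: number, latency: number): void {
    this.buf.push({ ts, latency });
    // Evict samples that fell out of the trailing window.
    const cutoff = ts - this.windowMs;
    while (this.buf.length && this.buf[0].ts <= cutoff) this.buf.shift();
    // Data-driven trigger: emit once when this event crosses an
    // epoch-aligned interval boundary — no wall-clock timer involved.
    const bucket = Math.floor(ts / this.intervalMs);
    if (this.lastBucket !== -1 && bucket > this.lastBucket) {
      this.onSnapshot(bucket * this.intervalMs, this.value());
    }
    this.lastBucket = bucket;
  }

  // Readable at any time, independent of the trigger — this is the
  // in-app display path.
  value(): number {
    const sorted = this.buf.map((s) => s.latency).sort((a, b) => a - b);
    if (!sorted.length) return NaN;
    const idx = Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1);
    return sorted[Math.max(0, idx)];
  }
}
```

Both consumers read the same buffer: the snapshot callback at boundary crossings, `value()` whenever the UI asks.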

Setup

```typescript
import { LiveSeries, Trigger } from 'pond-ts';

// ── Schema ──────────────────────────────────────────────────────
const schema = [
  { name: 'time', kind: 'time' },
  { name: 'latency', kind: 'number' }, // milliseconds
] as const;

// ── Live source ─────────────────────────────────────────────────
export const timings = new LiveSeries({ name: 'timings', schema });

// ── Rolling 1-minute window with a 30 s clock trigger ───────────
// Single primitive: maintains the rolling window AND emits one
// snapshot per 30 s boundary crossing of source events.
export const rolling = timings.rolling(
  '1m',
  { latency: 'p95' },
  {
    trigger: Trigger.every('30s'),
    minSamples: 10, // suppress output until the window is warm
  },
);

// ── Backend report on each boundary crossing ────────────────────
rolling.on('event', (event) => {
  fetch('/api/telemetry', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      ts: event.begin(), // epoch-aligned 30 s boundary
      p95: event.get('latency'),
    }),
  });
});
```

The same rolling powers both the backend report (via 'event' listener at the trigger cadence) and the in-app display (via rolling.value() at any time). No second rolling, no duplicated deque.

Want multiple percentiles?

Use the AggregateOutputMap form to compute multiple stats from one source column in a single rolling pass — same deque, same trigger, all percentiles populated together:

```typescript
const stats = timings.rolling(
  '1m',
  {
    p50: { from: 'latency', using: 'p50' },
    p95: { from: 'latency', using: 'p95' },
    p99: { from: 'latency', using: 'p99' },
  },
  { trigger: Trigger.every('30s'), minSamples: 10 },
);

stats.on('event', (event) => {
  fetch('/api/telemetry', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      ts: event.begin(),
      p50: event.get('p50'),
      p95: event.get('p95'),
      p99: event.get('p99'),
    }),
  });
});
```

The runtime uses a shared O(N) deque — emitting more percentiles adds reducer state per column but doesn't multiply the per-event work the way running multiple rolling() calls would.
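To see why extra percentiles are cheap, note that all of them can be read from one ordering of the same buffer. A self-contained sketch of that idea (one common nearest-rank quantile convention — pond-ts may use a different one):

```typescript
// One sort, many reads: each extra percentile is an index lookup,
// not another pass over the events.
function percentiles(values: number[], qs: number[]): number[] {
  const sorted = [...values].sort((a, b) => a - b);
  return qs.map((q) => {
    // Nearest-rank quantile (an assumption; conventions vary).
    const idx = Math.min(sorted.length - 1, Math.ceil(q * sorted.length) - 1);
    return sorted[Math.max(0, idx)];
  });
}

const latencies = [12, 48, 7, 95, 60, 33, 21, 88, 54, 70];
const [p50, p95, p99] = percentiles(latencies, [0.5, 0.95, 0.99]);
```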

Collecting events

Call timings.push() wherever you measure latency — in a fetch wrapper, an interceptor, or a performance observer:

```typescript
import { Temporal } from '@js-temporal/polyfill';

function measureFetch(url: string): Promise<Response> {
  const start = performance.now();
  return fetch(url).finally(() => {
    timings.push([Temporal.Now.instant(), performance.now() - start]);
  });
}
```

timings.push() accepts a Temporal.Instant as the time key; the library converts it to a millisecond timestamp internally.
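If you ever need that conversion yourself, `Temporal.Instant` exposes `epochMilliseconds`. A minimal sketch using a structural type so it runs without the polyfill — the exact rule pond-ts applies internally is an assumption here:

```typescript
// Structural stand-in for Temporal.Instant: anything exposing
// epochMilliseconds. Presumably what the library reads internally
// (an assumption — check the pond-ts docs for the exact rule).
interface InstantLike {
  epochMilliseconds: number;
}

function toTimeKey(t: InstantLike | number): number {
  return typeof t === 'number' ? t : t.epochMilliseconds;
}
```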

Displaying in React

useLiveQuery re-evaluates on every source event (throttled). Read rolling.value() to surface the latest aggregate snapshot — note that rolling.value() reads the current rolling state regardless of trigger; it's not gated on the next 30 s boundary:

```tsx
import { useLiveQuery } from '@pond-ts/react';
import { timings, rolling } from './telemetry';

function PerformancePanel() {
  // Throttled to once per second; re-reads rolling.value() on each event
  const stats = useLiveQuery(timings, () => rolling.value(), {
    throttleMs: 1_000,
  });

  if (stats?.latency === undefined) return <span>warming up…</span>;

  return (
    <dl>
      <dt>p95 latency</dt>
      <dd>{stats.latency.toFixed(0)} ms</dd>
    </dl>
  );
}
```

The display updates on every incoming timing event; the backend POST fires only at 30 s boundaries. Both read from the same rolling window — single source of truth, no duplicated state.
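The throttling behaviour is worth making concrete. A minimal leading-edge throttle with an injectable clock illustrates what `throttleMs` does to a burst of events — this is a sketch of the idea, not `@pond-ts/react` internals (which may also deliver a trailing-edge update):

```typescript
// Leading-edge throttle: the first call in a quiet period fires
// immediately; calls inside the window are dropped. The clock is
// injected so the behaviour is deterministic and testable.
function throttle<T>(fn: () => T, ms: number, now: () => number) {
  let last = -Infinity;
  return (): T | undefined => {
    const t = now();
    if (t - last >= ms) {
      last = t;
      return fn();
    }
    return undefined; // dropped — a later event will re-read anyway
  };
}
```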

Adding throughput

Include events-per-second alongside the percentile so the backend can contextualise the reading (a p95 of 200 ms at 3 req/s is very different from 300 req/s):

```typescript
const rate = timings.window('1m').eventRate(); // events/sec over last minute

rolling.on('event', (event) => {
  fetch('/api/telemetry', {
    method: 'POST',
    body: JSON.stringify({
      ts: event.begin(),
      p95: event.get('latency'),
      rps: rate.value(), // snapshot at the same 30 s boundary
    }),
  });
});
```
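A trailing event rate is simple to approximate if you want to check your intuition about what `eventRate()` reports: count timestamps still inside the window and divide by the window length. A self-contained sketch, not the library's implementation:

```typescript
// Trailing events-per-second: keep raw timestamps, prune on read,
// divide the surviving count by the window length in seconds.
class EventRate {
  private stamps: number[] = [];

  constructor(private windowMs: number) {}

  record(ts: number): void {
    this.stamps.push(ts);
  }

  value(nowTs: number): number {
    const cutoff = nowTs - this.windowMs;
    while (this.stamps.length && this.stamps[0] <= cutoff) this.stamps.shift();
    // Note: averaged over the FULL window, so a burst shorter than
    // the window reads lower than its instantaneous rate.
    return this.stamps.length / (this.windowMs / 1_000);
  }
}
```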

Per-endpoint breakdown

If you want separate latency stats per endpoint, partition first:

```typescript
const schema = [
  { name: 'time', kind: 'time' },
  { name: 'latency', kind: 'number' },
  { name: 'endpoint', kind: 'string' },
] as const;

const timings = new LiveSeries({ name: 'timings', schema });

// Synchronised partitioned rolling: every endpoint's snapshot
// emits at the same 30 s boundary, all sharing the same ts.
const ticks = timings
  .partitionBy('endpoint')
  .rolling(
    '1m',
    { latency: 'p95' },
    { trigger: Trigger.every('30s') },
  );

// Each tick: one event per known endpoint, all sharing the same ts.
// Schema: [time, endpoint, latency]
ticks.on('event', (event) => {
  // event.begin() === <boundary timestamp>
  // event.get('endpoint') === '/api/users' | '/api/orders' | …
  // event.get('latency') === <rolling p95 for that endpoint>
});
```
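The synchronised shape amounts to one boundary check driving every partition at once. A self-contained sketch of that idea (not the pond-ts implementation), with per-endpoint latency buffers snapshotted together under a single shared timestamp:

```typescript
// One boundary, one snapshot per known endpoint, all stamped with
// the same ts — the "synchronised partitioned" shape.
type Tick = { ts: number; endpoint: string; p95: number };

function snapshotAll(
  buffers: Map<string, number[]>,
  boundaryTs: number,
): Tick[] {
  const ticks: Tick[] = [];
  for (const [endpoint, latencies] of buffers) {
    const sorted = [...latencies].sort((a, b) => a - b);
    // Nearest-rank p95 (an assumption; conventions vary).
    const idx = Math.max(0, Math.ceil(0.95 * sorted.length) - 1);
    ticks.push({ ts: boundaryTs, endpoint, p95: sorted[idx] });
  }
  return ticks;
}
```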

Emission semantics

Trigger.every(duration) (and the explicit Trigger.clock(sequence) form it desugars to) is data-driven, not timer-driven — it emits when incoming events cross an epoch-aligned interval boundary, not on a wall clock. If the app goes idle (no fetch activity), no report is sent. If a burst of events arrives after a quiet period, a single report is emitted for that crossing, not one catch-up report per skipped interval.
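That semantics reduces to a pure function of consecutive event timestamps. A sketch of the check (an illustration of the behaviour described above, not the library's code):

```typescript
// Data-driven boundary check: returns the boundary timestamp to
// report for, or null. A burst after a long idle gap yields ONE
// report, at the most recent boundary — not one per skipped interval.
function crossedBoundary(
  prevTs: number | null,
  ts: number,
  intervalMs: number,
): number | null {
  if (prevTs === null) return null; // first event ever: nothing to compare
  const prev = Math.floor(prevTs / intervalMs);
  const cur = Math.floor(ts / intervalMs);
  return cur > prev ? cur * intervalMs : null;
}
```

With no events arriving, the function is never called, so no report is sent — the data-driven half of the contract.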

The snapshot is taken after the boundary-crossing event is ingested, so the emitted aggregate reflects the full trailing window including that event. Late arrivals (under ordering: 'reorder') do not trigger emission — they're absorbed silently by the rolling window's recompute.

See Triggering in the live transforms reference for the full semantics and the synchronised-partitioned shape.