Data Products and Public Signals

Macro Pulse

A macro-signal system that turns raw public series into historical comparisons, regime-aware dashboards, and source-visible narrative summaries.

Historical context · Public data made readable

Context

Macro Pulse started from a product gap I keep seeing in public-data tools: strong charts, weak interpretation. Most macro dashboards expose the series but not the comparison frame a serious user needs. People want to know what changed, which indicators are stale, how current conditions compare with prior windows, and whether different signals are moving together or diverging.

Problem

Macro dashboards often expose raw charts without enough structure to answer the next set of questions: which series refreshed recently, which indicators lag, how different stress signals line up, and which prior periods looked structurally similar, without collapsing into prediction theater.

What I Built

  • A scheduled ingestion layer for FRED and related public series with source metadata, refresh tracking, and standardized time alignment
  • A derived-metrics layer for yield-curve spreads, rate differentials, rolling changes, and stress composites built on top of normalized base series
  • A comparison engine that matches current signal configurations to historical windows and exposes why two periods are considered similar
  • User-facing dashboards and chart surfaces for curve shape, credit stress, inflation context, and regime comparison with plain-English descriptive summaries
  • A freshness-aware narrative layer that keeps explanations source-visible and bounded by what the underlying data can actually support

Notes

System overview

Macro Pulse is designed as a small data platform wrapped in a macro product surface. The dashboard itself is only the last layer. Underneath it sits a pipeline for ingesting public series, normalizing releases with different cadences, computing derived indicators, scoring historical similarity, and then exposing those results through charts and short summaries.

The product shape looks more like this:

Public sources -> ingestion jobs -> normalized series store -> derived metrics ->
comparison engine -> chart panels + summaries -> watch surfaces and weekly digests
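The stages in that diagram can be sketched as small composable functions. This is a minimal illustration, not the real codebase; the names and the dict shape of the raw rows are assumptions.

```python
# Minimal sketch of the ingestion -> normalization -> derived-metrics stages.
# Names and row shapes are illustrative assumptions, not the actual implementation.
from dataclasses import dataclass

@dataclass
class SeriesPoint:
    series_id: str
    obs_date: str   # the period the value describes
    value: float

def ingest(raw_rows: list) -> list:
    """Ingestion: turn raw source rows into typed observations."""
    return [SeriesPoint(r["series_id"], r["date"], float(r["value"])) for r in raw_rows]

def normalize(points: list) -> list:
    """Normalization: order by observation date so downstream math is stable."""
    return sorted(points, key=lambda p: p.obs_date)

def derive_spread(a: list, b: list) -> list:
    """Derived metric: pointwise spread on dates where both series have values."""
    b_by_date = {p.obs_date: p.value for p in b}
    return [SeriesPoint("spread", p.obs_date, p.value - b_by_date[p.obs_date])
            for p in a if p.obs_date in b_by_date]
```

The payoff of this shape is that a new derived view is just another pure function over normalized points, rather than another hand-built chart.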

That distinction matters. If the product is only a set of hand-built charts, it stops being extensible quickly. If the underlying model captures base series, transformations, freshness, and comparison features, the product can support many more views without rewriting the logic from scratch.

Product principles

Reliable data first

Every chart and summary should trace back to a source the user can inspect.

No investment advice

The job of the product is to describe and contextualize signals, not to tell users what trade to make.

Comparative thinking

The most useful question is often not "what is the number today?" but "when have we seen a configuration like this before?"

Data model and pipeline shape

The core system has a few first-class objects:

  • series_catalog: metadata for each base series, source, units, release cadence, and expected lag
  • series_points: normalized observations with observation date, value date, release timestamp, and ingest timestamp
  • derived_metrics: spreads, deltas, rolling windows, and composite stress indicators
  • comparison_snapshots: feature vectors for historical windows used in regime matching
  • summary_runs: generated descriptions tied to specific data snapshots so outputs remain reproducible

That schema makes a few useful things possible:

  • backfilling without losing source timing
  • showing users when data is stale versus simply unchanged
  • recomputing derived metrics when upstream series are revised
  • explaining why one historical period ranked near another
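The stale-versus-unchanged distinction falls out of keeping both cadence metadata and release timestamps. A hedged sketch, assuming the field names below mirror series_catalog and series_points (they are illustrative, not the actual schema):

```python
# Hypothetical sketch of the freshness check; field names are assumptions
# loosely modeled on the series_catalog / series_points objects above.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SeriesCatalogEntry:
    series_id: str
    release_cadence_days: int  # expected days between releases
    expected_lag_days: int     # typical gap between value date and release

@dataclass
class SeriesPointMeta:
    obs_date: date       # the period the value describes
    release_date: date   # when the source published it
    ingest_date: date    # when the pipeline picked it up

def is_stale(entry: SeriesCatalogEntry, latest: SeriesPointMeta, today: date) -> bool:
    """Stale means the source is overdue, not merely that the value is flat."""
    due = latest.release_date + timedelta(
        days=entry.release_cadence_days + entry.expected_lag_days)
    return today > due
```

A flat series whose latest release arrived on schedule passes this check; a series with no release past its expected window does not, and the UI can say so explicitly.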

Chart surfaces

The user-facing layer is not one page. It is a set of reusable panel types:

  • curve-shape panels for 10Y-2Y, 10Y-3M, and related spread views
  • credit-stress panels for high-yield spreads and widening/tightening deltas
  • inflation and rate context panels for CPI, Fed funds, and policy-sensitive overlays
  • historical match panels that compare the current signal vector against prior windows

The point is not maximum chart density. It is giving each panel a clear job and then letting the user move between them without losing context.
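One way to keep each panel's job explicit is a small registry that also lets the UI link from any series to every panel that reads it. The FRED series ids below are real; the structure and wording are assumptions for illustration.

```python
# Illustrative panel registry; the Panel structure is an assumption,
# the FRED series ids (DGS10, DGS2, etc.) are real identifiers.
from dataclasses import dataclass

@dataclass(frozen=True)
class Panel:
    panel_id: str
    series_ids: tuple   # base series the panel reads
    job: str            # the one question this panel answers

PANELS = {
    "curve_shape": Panel("curve_shape", ("DGS10", "DGS2", "DGS3MO"),
                         "what shape is the curve in, via 10Y-2Y and 10Y-3M spreads"),
    "credit_stress": Panel("credit_stress", ("BAMLH0A0HYM2",),
                           "are high-yield spreads widening or tightening"),
    "inflation_context": Panel("inflation_context", ("CPIAUCSL", "FEDFUNDS"),
                               "where do inflation and the policy rate sit"),
}

def panels_using(series_id: str) -> list:
    """Let the user move between panels that share a series without losing context."""
    return [p.panel_id for p in PANELS.values() if series_id in p.series_ids]
```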

Historical comparison engine

This is the most product-defining layer in the system.

Instead of asking only whether 10Y-2Y looks inverted, the engine compares a broader configuration:

  • current curve shape
  • recent change in spreads
  • credit-stress level and direction
  • inflation trend state
  • policy-rate context

That produces a more useful question:

What prior windows looked similar across rates, credit, and inflation at the same time?

That is far more informative than a single-series lookup, but it also requires discipline. Similarity has to stay inspectable. If a historical period is marked as comparable, the product should expose which features drove the match rather than hiding behind a generic score.
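An inspectable score can be as simple as a distance over the feature configuration, returned together with each feature's contribution. This is a sketch under the assumption that features are already standardized (e.g. z-scored); the feature names follow the list above but everything else is illustrative.

```python
# Sketch of an inspectable similarity score over z-scored features.
# Feature names follow the configuration above; the math is an illustrative choice.
import math

FEATURES = ["curve_shape", "spread_change", "credit_stress",
            "inflation_trend", "policy_rate"]

def similarity(current: dict, window: dict) -> tuple:
    """Return a score plus per-feature contributions, so the product can show
    *why* a historical window ranked near the current configuration."""
    contributions = {f: (current[f] - window[f]) ** 2 for f in FEATURES}
    distance = math.sqrt(sum(contributions.values()))
    score = 1.0 / (1.0 + distance)   # 1.0 means an identical configuration
    return score, contributions
```

Surfacing the contributions dict directly is what keeps the match from hiding behind a generic score: the UI can say "this window matched mostly on curve shape and credit stress."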

Narrative generation without prediction theater

The narrative layer exists to make the charts easier to interpret, not to act like an economist.

A useful summary for this product sounds like:

  • 10Y-2Y widened 18 bps over the last month while high-yield spreads remained relatively stable
  • credit conditions are calmer than rate volatility alone would suggest
  • the current configuration most closely resembles prior windows where curve stress arrived ahead of credit repricing

It should not sound like:

  • recession is imminent
  • the market is about to pivot
  • investors should position for X

That boundary is a product decision as much as a writing one.
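That boundary can even be enforced in code: summaries come from templates that only state what the data shows. A minimal sketch, with thresholds and wording as illustrative assumptions:

```python
# Sketch of the descriptive-summary boundary: the template states the move
# and nothing more. The 5 bps flatness threshold is an illustrative assumption.
def describe_spread_move(name: str, delta_bps: float, window: str) -> str:
    """Render a bounded, source-derived sentence about a spread change."""
    if abs(delta_bps) < 5:
        return f"{name} was roughly flat over the last {window}"
    direction = "widened" if delta_bps > 0 else "narrowed"
    return f"{name} {direction} {abs(delta_bps):.0f} bps over the last {window}"
```

Because the function can only emit magnitude and direction, there is no path from the data layer to "recession is imminent."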

Dashboard flow

One way to think about the user path is:

Inspect current state -> check freshness -> compare historical windows ->
read concise explanation -> open underlying series -> save or share the view

That sequence keeps the product explanatory and inspectable.

Research anchors