The Quiet Revolution in Time Series and Where It's Going Next

Oleg Zarakhani
LEAD DATA SCIENTIST, FUNDAMENTAL

4 MIN READ

4 Key Takeaways

The era of hand-rolled forecasting pipelines is ending

Model selection shouldn't be a human decision

Feature engineering is table stakes. The system handles it now

When setup takes hours not weeks, teams ask better questions

For most of the last decade, time series forecasting has been the unglamorous cousin of machine learning. While generative models captured the headlines, the people forecasting tomorrow's demand, next quarter's capacity, and the cost of being wrong on either kept doing what they have always done: hand-rolling lag features, tuning seasonality terms, and rebuilding pipelines every time a new business unit asked the same question with slightly different data.

That era is ending. Forecasting is the load-bearing wall of every operational decision an organization makes: planning, pricing, staffing, inventory, risk, capital allocation. The next decade of competitive advantage will not be defined by who predicts better in some abstract sense. It will be defined by who forecasts faster, more often, and with less human babysitting.

What has been holding the field back

Three things, mostly.

Feature engineering is still artisanal. Every serious time series project begins the same way: someone writes lag features, someone else writes rolling means, a third person writes calendar encodings, and a fourth person quietly introduces a leakage bug that nobody catches until production. The work is repetitive, error-prone, and almost identical across domains, and yet teams keep doing it from scratch.
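
The pattern is familiar enough to sketch from memory. A minimal, leakage-safe version of that boilerplate in pandas might look like this (column names are illustrative); the shift-before-roll detail is exactly the one that quietly goes missing:

```python
import pandas as pd

def add_basic_features(df: pd.DataFrame) -> pd.DataFrame:
    """Assumes a long/panel frame with columns: series_id, date (datetime), y."""
    df = df.sort_values(["series_id", "date"]).copy()
    by_series = df.groupby("series_id")["y"]

    # Lag features: shift within each series, never across series
    for lag in (1, 7, 28):
        df[f"lag_{lag}"] = by_series.shift(lag)

    # Rolling means: shift(1) first so the window never sees the
    # current row's target -- omitting it is the classic leakage bug
    for window in (7, 28):
        df[f"roll_mean_{window}"] = by_series.transform(
            lambda s, w=window: s.shift(1).rolling(w).mean()
        )

    # Calendar encodings
    df["day_of_week"] = df["date"].dt.dayofweek
    df["month"] = df["date"].dt.month
    return df
```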

One model rarely fits all series. A panel of thousands of series almost never behaves uniformly. Some are smooth, some are bursty, some are zero-inflated, some sit inside a hierarchy where totals matter more than individual rows. A single global model is convenient but blunt; a model per series is accurate but unmanageable. Most teams pick one extreme and live with the consequences.

The gap between a notebook and production is enormous. A working model in a Jupyter cell is maybe ten percent of the way to a forecast a business will actually trust. Schema drift, retraining cadence, feature pipelines, serving latency, and explainability all sit between the prototype and anything operational, and historically, every team has rebuilt that bridge themselves.

What changes next

Three shifts are already underway, and they are going to accelerate.

Forecasting becomes declarative. You will describe your data (the columns, the grain, the horizon) and the system will choose the architecture. The era of picking between ARIMA, Prophet, gradient boosting, and a deep model based on a practitioner's hunch is ending. The choice of model becomes part of what the system optimizes, not an upstream human decision.
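
What might that look like? A hypothetical sketch, purely illustrative (none of these keys are taken from any real system), is a spec that names the data and the question and says nothing about models:

```python
# Hypothetical declarative spec -- every key here is invented for
# illustration, not an actual API.
forecast_spec = {
    "data": {
        "id_column": "store_id",       # one series per store
        "time_column": "date",
        "target": "units_sold",
        "grain": "daily",
    },
    "task": {
        "horizon": 28,                 # predict 28 days ahead
        "known_in_advance": ["promo_flag", "is_holiday"],
    },
    # Note what is absent: no "model" key. Choosing the architecture
    # is the system's job, not the user's.
}
```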

The right answer is a portfolio, not a model. No single architecture wins across every shape of series. The future is systems that backtest a set of frameworks (a standard global model, a clustered model for heterogeneous behavior, a hierarchical decomposition for series that move with their parents, a residual stack that corrects a base predictor, a frequency-severity split for zero-inflated targets) and pick the right one per problem. Selection itself becomes the model.
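
Mechanically, selection-as-the-model is just a disciplined backtest loop. A minimal sketch, assuming each candidate exposes a fit/predict interface (a placeholder convention for this sketch, not any particular library's API):

```python
import numpy as np
import pandas as pd

def select_framework(candidates: dict, train: pd.DataFrame, holdout: pd.DataFrame):
    """Backtest every candidate on a holdout window; return the winner.

    candidates: {name: model}, where each model exposes fit(df) and
    predict(horizon) -- a placeholder interface for this sketch.
    """
    scores = {}
    for name, model in candidates.items():
        model.fit(train)
        preds = model.predict(horizon=len(holdout))
        scores[name] = float(np.mean(np.abs(preds - holdout["y"].to_numpy())))  # MAE
    winner = min(scores, key=scores.get)
    return winner, candidates[winner], scores
```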

The boundary between feature engineering and modeling dissolves. Lag features, rolling statistics, calendar effects, cross-sectional aggregates, known-in-advance variables: these are no longer differentiators. They are table stakes, generated automatically and safely (no leakage, no cross-series contamination, no retraining drift). The interesting work moves up a level: which features actually carry signal, and how do we know?
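
"How do we know" has a concrete answer: score each generated feature against the target and keep what carries signal. A minimal sketch using scikit-learn's mutual information estimator (the frame layout is an assumption; the estimator is real):

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def rank_features(df: pd.DataFrame, target: str = "y") -> pd.Series:
    """Rank numeric features by mutual information with the target."""
    X = df.drop(columns=[target]).select_dtypes("number").dropna()
    y = df.loc[X.index, target]
    mi = mutual_info_regression(X, y, random_state=0)
    return pd.Series(mi, index=X.columns).sort_values(ascending=False)
```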

Where NEXUS fits

NEXUS is one of the first concrete instances of this shift. It is not the final word on automated forecasting (nobody has written that yet), but it is a useful glimpse of the shape the future takes.

A few things make it worth pointing at:

  • Framework selection is automatic. NEXUS backtests a portfolio of architectures (standard, performance-clustered, hierarchical-temporal, residual, and frequency-severity) and picks the best per dataset. The user does not have to know which one their problem needs.

  • Feature engineering is built in, and leakage-safe by construction. Lags, rolling statistics, calendar features, cross-sectional ratios, and known-in-advance variables are generated per series, automatically, with the right separation between train and predict.

  • It is frequency-agnostic and panel-native. Hourly, daily, monthly; one series or thousands. The system adapts. This is what "industry-agnostic" actually means in practice: not marketing copy, but the property of working across grains and panel shapes without reconfiguration.

  • It speaks the language of production. A scikit-learn-style fit/predict API, mutual-information-based feature selection for scalable interpretability, and stateless serialization for serverless deployment. The bridge from notebook to production is shorter because the bridge is part of the product. A toy sketch of that interface shape follows this list.
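
To make "scikit-learn-style" concrete, here is a toy version of the interface shape. The class uses a seasonal-naive forecast so the example actually runs; it is a stand-in for illustration, not NEXUS's actual implementation or API:

```python
import pickle
import pandas as pd

class ToyForecaster:
    """Toy seasonal-naive model illustrating the fit/predict shape."""
    def __init__(self, horizon: int = 28, season: int = 7):
        self.horizon, self.season = horizon, season

    def fit(self, df: pd.DataFrame) -> "ToyForecaster":
        # Keep the last `season` observations of each series
        self.tails_ = {sid: g["y"].tail(self.season).to_numpy()
                       for sid, g in df.groupby("series_id")}
        return self

    def predict(self) -> pd.DataFrame:
        # Repeat each series' last season across the horizon
        rows = [(sid, step, tail[step % len(tail)])
                for sid, tail in self.tails_.items()
                for step in range(self.horizon)]
        return pd.DataFrame(rows, columns=["series_id", "step", "forecast"])

panel = pd.DataFrame({"series_id": ["a"] * 14, "y": range(14)})
model = ToyForecaster(horizon=7).fit(panel)
print(model.predict().head())
blob = pickle.dumps(model)  # stateless: everything the model needs lives on the object
```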

What this actually unlocks

The real prize is not better numbers on a backtest. It is a change in how often forecasting happens at all.

When the cost of producing a credible forecast drops by an order of magnitude, forecasting stops being a quarterly project and becomes a continuous capability. Teams without dedicated forecasters get production-grade results. Experiments that used to take weeks compress into hours. New questions get asked because the cost of asking them is no longer prohibitive.

That is the real shift. Not "AI for time series." Not yet another model on the leaderboard. A change in the economics of asking the future a question.

The next cycle

The organizations that win the next cycle will not be the ones with the most data. They will be the ones whose forecasts are the fastest to update and the cheapest to trust. Everything else (the architectures, the features, the pipelines) is plumbing. The question worth asking is whether your team is still building that plumbing by hand.

If the answer is yes, the next few years are going to be uncomfortable. If the answer is no, they are going to be very interesting.

Explore NEXUS

Fundamental Technologies Inc.

Copyright © 2026

All rights reserved
