• by wenc on 12/27/2024, 2:42:40 PM

    9 months ago (things may have changed) someone showed simple time series models outperforming Chronos.

    https://github.com/Nixtla/nixtla/tree/main/experiments/amazo...
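
    For reference, the linked repo is from Nixtla, who maintain the statsforecast library of classical models. A minimal sketch of the kind of simple baselines involved (toy data, recent statsforecast API assumed; this is not the actual benchmark code):

        import numpy as np
        import pandas as pd
        from statsforecast import StatsForecast
        from statsforecast.models import AutoARIMA, AutoETS, SeasonalNaive

        # Toy monthly series: trend + seasonality + noise (illustrative only).
        rng = np.random.default_rng(0)
        t = np.arange(120)
        df = pd.DataFrame({
            "unique_id": "series_1",
            "ds": pd.date_range("2015-01-01", periods=120, freq="MS"),
            "y": 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120),
        })

        # Classical statistical baselines of the kind compared against Chronos.
        sf = StatsForecast(
            models=[SeasonalNaive(season_length=12), AutoETS(season_length=12), AutoARIMA(season_length=12)],
            freq="MS",
            n_jobs=-1,
        )
        print(sf.forecast(df=df, h=12).head())  # one 12-month-ahead forecast per model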

    In domains where there is inherent randomness in the process, simple (ensemble) models tend to outperform complex ones. Nonlinear models can capture nonlinear patterns but they also tend to fit noise.

    Neural network models have been shown to work really well on text, images, sound, etc., but these types of data have no inherent randomness in them. A piece of text is a piece of text.

    Most time series forecasting, by contrast, targets quantities with complex unmeasured causality, like natural gas prices. Past behavior is no guarantee of future behavior, so capturing the nonlinear behavior in the past more faithfully can actually degrade future performance. Simple models tend to be more robust because they don't bias too heavily towards any one trend.
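
    To make that robustness point concrete, here's a hypothetical sketch of an equal-weight ensemble of naive forecasters: each component is biased in its own way (flat, seasonal, linear), and averaging them hedges those biases instead of chasing noise in any single fit.

        import numpy as np

        def naive(y, h):
            # Repeat the last observed value.
            return np.full(h, y[-1])

        def seasonal_naive(y, h, season_length=12):
            # Repeat the last full seasonal cycle.
            cycle = y[-season_length:]
            return np.array([cycle[i % season_length] for i in range(h)])

        def drift(y, h):
            # Extrapolate the average historical slope.
            slope = (y[-1] - y[0]) / (len(y) - 1)
            return y[-1] + slope * np.arange(1, h + 1)

        def ensemble(y, h, season_length=12):
            # Equal-weight average: no single forecaster's quirks dominate the forecast.
            return np.mean(
                [naive(y, h), seasonal_naive(y, h, season_length), drift(y, h)], axis=0
            )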

  • by cl42 on 12/27/2024, 9:25:52 AM

    I've been skeptical about the earlier "time series LLM" papers, so this is interesting to see. Curious whether others disagree with this paper!