TinyTimeMixers

TinyTimeMixers (TTMs) are compact pre-trained models for multivariate time-series forecasting, open-sourced by IBM Research. With fewer than 1 million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for time-series forecasting.

https://huggingface.co/ibm/TTM

TTM outperforms several popular benchmark models that demand billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can be fine-tuned for competitive multivariate forecasts with as little as 5% of the training data.
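
A minimal sketch of zero-shot forecasting with a pre-trained TTM checkpoint is shown below. It assumes the `tsfm_public` package from IBM's granite-tsfm repository exposes `TinyTimeMixerForPrediction` and that the checkpoint uses a 512-step context window; the exact import path, checkpoint id, and tensor shapes should be taken from the model card linked above.

```python
# Hedged sketch: zero-shot multivariate forecasting with a pre-trained TTM.
# Assumes the `tsfm_public` package (IBM granite-tsfm repo) and a checkpoint
# with a 512-step context window; adjust names/shapes to the model card.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

CONTEXT_LEN, NUM_CHANNELS = 512, 3   # assumed context length, example channel count

model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")

# past_values: (batch, context_length, num_channels) multivariate history
past_values = torch.randn(1, CONTEXT_LEN, NUM_CHANNELS)

model.eval()
with torch.no_grad():
    out = model(past_values=past_values)

# prediction_outputs: (batch, forecast_length, num_channels)
print(out.prediction_outputs.shape)
```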

Chronos: Learning the Language of Time Series

  • Chronos is a framework designed for pretrained probabilistic time series models.
  • It utilizes scaling and quantization to tokenize time series values into a fixed vocabulary (a minimal sketch of this step follows the list).
  • Chronos trains transformer-based language model architectures (specifically, models from the T5 family with parameters ranging from 20M to 710M) using cross-entropy loss.
  • The models are pretrained on a mix of publicly available datasets and a synthetic dataset generated via Gaussian processes, enhancing generalization.
  • In a comprehensive benchmark of 42 datasets, comparing against both classical local models and deep learning approaches, Chronos models:
  • (a) significantly outperform other methods on datasets included in the training corpus;
  • (b) show comparable or occasionally superior zero-shot performance on new datasets compared to methods trained specifically on those datasets.
  • These results demonstrate the potential of pretrained models to leverage time series data across various domains for improving zero-shot accuracy on unseen forecasting tasks, suggesting a simplified approach to forecasting pipelines.
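
The scaling-and-quantization step referenced in the list can be sketched as follows. This is not the reference implementation: the bin range and vocabulary size below are illustrative placeholders, and the special tokens Chronos reserves (e.g., padding and end-of-sequence) are omitted.

```python
# Hedged sketch of Chronos-style tokenization: mean scaling followed by
# uniform quantization into a fixed vocabulary of integer token ids.
import numpy as np

def tokenize(series, n_bins=4096, low=-15.0, high=15.0):
    """Map a real-valued series to integer token ids plus its scale."""
    # 1) Mean scaling: normalize by the mean absolute value of the context.
    scale = np.abs(series).mean()
    scale = scale if scale > 0 else 1.0
    scaled = series / scale
    # 2) Quantization: clip to a fixed range and assign each value to a bin.
    edges = np.linspace(low, high, n_bins - 1)            # bin boundaries
    tokens = np.digitize(np.clip(scaled, low, high), edges)
    return tokens, scale

def detokenize(tokens, scale, n_bins=4096, low=-15.0, high=15.0):
    """Map token ids back to approximate real values (rescaled bin centers)."""
    centers = np.linspace(low, high, n_bins)
    return centers[tokens] * scale

series = np.sin(np.linspace(0, 8 * np.pi, 200)) * 10 + 50
tokens, scale = tokenize(series)
reconstructed = detokenize(tokens, scale)   # close to `series`, up to quantization error
```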

https://arxiv.org/pdf/2403.07815.pdf

https://github.com/amazon-science/chronos-forecasting/
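
For producing forecasts, the repository above exposes a pipeline-style interface. The sketch below follows the `ChronosPipeline` API from that repository (installed via the `chronos-forecasting` package); model ids, argument names, and output shapes should be verified against the current README.

```python
# Hedged usage sketch of the ChronosPipeline interface from the linked repo.
import numpy as np
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",   # one of the T5-family checkpoints
    device_map="cpu",
    torch_dtype=torch.float32,
)

# 1D context tensor; the pipeline autoregressively samples forecast tokens.
context = torch.tensor(np.sin(np.linspace(0, 6 * np.pi, 256)), dtype=torch.float32)
forecast = pipeline.predict(context, prediction_length=24, num_samples=20)

# forecast: (num_series, num_samples, prediction_length) -> summarize samples
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
```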

Unified Time Series Model

UniTS is a unified time series model that handles a variety of tasks across multiple domains with shared parameters and no task-specific modules.

Foundation models, especially LLMs, are profoundly transforming deep learning. Instead of training many task-specific models, we can adapt a single pretrained model to many tasks via few-shot prompting or fine-tuning. However, current foundation models apply to sequence data but not to time series, which present unique challenges due to the inherently diverse, multi-domain nature of time series datasets, diverging task specifications across forecasting, classification, and other types of tasks, and the apparent need for task-specialized models.

We developed UniTS, a unified time series model that supports a universal task specification, accommodating classification, forecasting, imputation, and anomaly detection tasks. This is achieved through a novel unified network backbone, which incorporates sequence and variable attention along with a dynamic linear operator and is trained as a unified model. 
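
To make the sequence and variable attention idea concrete, here is a minimal PyTorch sketch (not the authors' implementation): one attention pass mixes information across time tokens within each variable, and a second pass mixes information across variables at each time position. The dynamic linear operator and prompt/task tokens of the actual backbone are omitted.

```python
# Hedged sketch of alternating sequence/variable attention (not UniTS's code).
import torch
import torch.nn as nn

class SeqVarBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.seq_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.var_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_vars, seq_len, d_model) token embeddings
        b, v, t, d = x.shape

        # Sequence attention: attend over time, each variable handled independently.
        xs = x.reshape(b * v, t, d)
        xs = xs + self.seq_attn(xs, xs, xs, need_weights=False)[0]
        x = self.norm1(xs).reshape(b, v, t, d)

        # Variable attention: attend across variables at each time position.
        xv = x.permute(0, 2, 1, 3).reshape(b * t, v, d)
        xv = xv + self.var_attn(xv, xv, xv, need_weights=False)[0]
        return self.norm2(xv).reshape(b, t, v, d).permute(0, 2, 1, 3)

block = SeqVarBlock()
tokens = torch.randn(2, 5, 32, 64)   # (batch, variables, time tokens, embedding)
out = block(tokens)                  # same shape, mixed across time and variables
```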

Across 38 multi-domain datasets, UniTS demonstrates superior performance compared to task-specific models and repurposed natural language-based LLMs. UniTS exhibits remarkable zero-shot, few-shot, and prompt learning capabilities when evaluated on new data domains and tasks. Source code and datasets are available at the links below.

https://arxiv.org/pdf/2403.00131v1.pdf

https://zitniklab.hms.harvard.edu/projects/UniTS/

https://github.com/mims-harvard/UniTS

Unified Training of Universal Time Series Forecasting Transformers

  • Deep learning for time series forecasting traditionally uses a one-model-per-dataset approach, limiting potential advancements.
  • Universal forecasting introduces the idea of pre-training a single Large Time Series Model on a vast collection of datasets for diverse tasks.
  • Challenges in creating such a model include cross-frequency learning, handling multivariate series with an arbitrary number of variates, and the varying distributional properties of large-scale data (a data-layout sketch for the arbitrary-variate case follows this list).
  • To overcome these challenges, novel enhancements to the time series Transformer architecture are introduced, creating the Masked EncOder-based UnIveRsAl TIme Series Forecasting Transformer (MOIRAI).
  • MOIRAI is trained on the Large-scale Open Time Series Archive (LOTSA), which contains over 27 billion observations across nine domains.
  • MOIRAI demonstrates competitive or superior performance as a zero-shot forecaster compared to full-shot models.
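
As a rough illustration of how a masked-encoder forecaster can accept an arbitrary number of variates (the data-layout sketch referenced in the list), the snippet below flattens per-variate patches into a single token sequence, tags each token with a variate and time index, and marks the forecast horizon as masked targets. This is a simplified sketch, not MOIRAI's actual preprocessing; patch-size selection, embeddings, and cross-frequency handling are omitted.

```python
# Hedged data-layout sketch for an any-variate masked-encoder forecaster.
import torch

def build_inputs(series: torch.Tensor, patch_len: int, horizon_patches: int):
    """series: (num_vars, seq_len) with seq_len divisible by patch_len."""
    num_vars, seq_len = series.shape
    patches = series.reshape(num_vars, seq_len // patch_len, patch_len)   # (V, P, patch_len)

    # Flatten variates into one sequence so any number of them can be handled.
    flat = patches.reshape(-1, patch_len)                                 # (V * P, patch_len)
    variate_id = torch.arange(num_vars).repeat_interleave(seq_len // patch_len)
    time_id = torch.arange(seq_len // patch_len).repeat(num_vars)

    # Mark the last `horizon_patches` patches of each variate as forecast targets;
    # a masked encoder would replace these with learnable mask embeddings.
    is_target = time_id >= (seq_len // patch_len - horizon_patches)
    return flat, variate_id, time_id, is_target

x = torch.randn(3, 128)                   # 3 variates, 128 time steps
flat, variate_id, time_id, is_target = build_inputs(x, patch_len=16, horizon_patches=2)
print(flat.shape, int(is_target.sum()))   # torch.Size([24, 16]) and 6 masked target patches
```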

https://arxiv.org/pdf/2402.02592.pdf