Challenges in Time Series Forecasting Models
The biggest challenge in building a pre-trained model for time series is sourcing high-quality, diverse data — a difficulty that sits at the core of developing effective forecasting models.
Main Approaches
Two primary approaches are used to build a foundation model for forecasting:
- Adapting an LLM: This method repurposes a pre-trained large language model, such as GPT-4 or Llama, by adapting it to time series tasks.
- Building from Scratch: This approach assembles a vast time series corpus and pre-trains a model on it, with the aim of generalizing to unseen data.
Results and Limitations
The second approach has proven more effective, as evidenced by models such as MOIRAI, TimesFM, and TTM. However, these models follow scaling laws, and their performance heavily depends on the availability of extensive time series data, which brings us back to the initial challenge.
Innovation: Using Images
Faced with these limitations, an innovative approach was explored: using a different modality, namely images. Although counterintuitive, this method has produced groundbreaking results, opening new perspectives in the field of time series forecasting.
VisionTS: A New Paradigm
VisionTS takes a different route: it leverages pre-trained image models for time series forecasting. The method renders time series data as images, so that a vision model pre-trained on image reconstruction can be applied to predict future values.
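The core transformation can be illustrated simply: segment the series by its dominant period, stack the segments as rows of a 2D grid, and normalize the values to pixel intensities. This is a minimal sketch of that idea — the function name, normalization scheme, and period handling are illustrative assumptions, not the exact VisionTS implementation.

```python
import numpy as np

def series_to_image(series: np.ndarray, period: int) -> np.ndarray:
    """Render a univariate series as a 2D grayscale image by stacking
    period-length segments as rows (hypothetical sketch, not the
    official VisionTS code)."""
    n = len(series) // period * period      # drop any incomplete tail segment
    grid = series[:n].reshape(-1, period)   # one row per period
    # Min-max normalize to [0, 255] so the grid reads as pixel intensities
    lo, hi = grid.min(), grid.max()
    img = (grid - lo) / (hi - lo + 1e-8) * 255.0
    return img.astype(np.uint8)

# Example: two weeks of an hourly signal with daily seasonality (period 24)
t = np.arange(24 * 14, dtype=float)
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)
img = series_to_image(series, period=24)
print(img.shape)  # (14, 24): 14 days as rows, 24 hours as columns
```

Aligning one period per row means seasonal structure shows up as vertical stripes in the image, exactly the kind of local 2D pattern vision models are pre-trained to capture.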
Advantages of Image-Based Forecasting
Using images for time series forecasting offers several advantages:
- Access to a vast pool of pre-trained image models
- Ability to capture complex patterns and relationships in data
- Potential for transfer learning from diverse image datasets
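Once a series is laid out as an image, forecasting reduces to image completion: the rows corresponding to the future are masked, and a pre-trained masked autoencoder reconstructs them. The sketch below illustrates only the framing, with a seasonal-naive fill standing in for the vision model; the function name and fill strategy are assumptions for illustration.

```python
import numpy as np

def forecast_as_inpainting(series: np.ndarray, period: int, horizon: int) -> np.ndarray:
    """Frame forecasting as completing a 2D image (illustrative sketch).
    Context periods become visible rows; the horizon becomes masked rows."""
    n = len(series) // period * period
    grid = series[:n].reshape(-1, period)   # visible rows (the context)
    rows_needed = -(-horizon // period)     # ceil(horizon / period)
    # A real system would ask a pre-trained masked autoencoder to
    # reconstruct the masked rows; here we repeat the last context row
    # (a seasonal-naive fill) as a stand-in for that reconstruction.
    filled = np.tile(grid[-1], (rows_needed, 1))
    return filled.ravel()[:horizon]

# Example: one week of an hourly seasonal signal, forecast 36 steps ahead
t = np.arange(24 * 7, dtype=float)
series = np.sin(2 * np.pi * t / 24)
pred = forecast_as_inpainting(series, period=24, horizon=36)
print(pred.shape)  # (36,)
```

The key point is that no time-series-specific pre-training is required: the heavy lifting is delegated to a model pre-trained purely on images.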
Future Implications
The success of VisionTS suggests a promising direction for future research in time series forecasting. It demonstrates the potential of cross-modal learning and opens up new possibilities for improving prediction accuracy and generalization in various domains.
Paper:
https://arxiv.org/pdf/2408.17253
Code: