Spatio-Temporal Graphs (STGs):
Temporal Knowledge Graphs (TKGs): A temporal knowledge graph extends a static knowledge graph by attaching timestamps to facts, so that each fact is a quadruple (subject, relation, object, time).
Videos: Let a video be a temporally ordered sequence of frames.
Point Cloud Streaming (PCS)
Trajectories and others
Representative tasks for each data type:
Time Series Tasks
forecasting: predict future values of a time series; commonly divided into short-term and long-term forecasting
classification: assign an input time series to one of several classes
anomaly detection: a special classification task whose goal is to distinguish abnormal time series from normal ones
imputation: fill in missing values in a time series while preserving its general characteristics
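As a toy illustration of the imputation task above (not a method from the survey; the function name and approach are my own), a minimal baseline fills NaN gaps by linear interpolation between observed points:

```python
import numpy as np

def impute_linear(x):
    # Fill NaN gaps by linearly interpolating between the nearest
    # observed values on either side of each gap.
    x = x.copy()
    nans = np.isnan(x)
    idx = np.arange(len(x))
    x[nans] = np.interp(idx[nans], idx[~nans], x[~nans])
    return x

impute_linear(np.array([1.0, np.nan, 3.0, np.nan, 5.0]))
# → array([1., 2., 3., 4., 5.])
```

Real imputation models (and the large-model approaches surveyed here) replace the interpolation with a learned conditional estimate, but the input/output contract is the same.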
Spatio-Temporal Graph Tasks
forecasting (the primary downstream task): predict node features by referring to historical attribute and structural information
link prediction: predict the existence of edges from historical information
node/graph classification: assign nodes or whole graphs to different classes
Temporal Knowledge Graph Tasks
completion: infer and fill in missing relations in the graph
forecasting: predict future relations between entities in the graph
Video Tasks
detection: identify specific objects or actions in a video
captioning: generate natural-language descriptions of video content
anticipation: predict upcoming frames in a video sequence
querying: retrieve video segments relevant to a given query
Four dimensions: data categories, model architectures, model scopes, and application domains or tasks
LM4TS
LLM4TS
general-purpose
domain-specific
PFM4TS
general-purpose
domain-specific
LM4STD
LLM4STD
Spatio-Temporal Graphs (STGs)
Temporal knowledge graphs (TKGs)
Video Data
PFM4STD
Spatio-Temporal Graphs (STGs)
Video Data
4.1 Large Language Models in Time Series
4.1.1 General Models
PromptCast: both inputs and outputs are natural-language sentences, making it a "code-free" time series forecasting solution; it also introduces a new instruction dataset (PISA) for the PromptCast task.
LLMTime: shows that LLMs are effective zero-shot time series learners when the tokenization of the time series data is set up properly.
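A minimal sketch of the careful tokenization LLMTime relies on (the exact scaling, precision, and separators vary by backbone; `encode_series` and its defaults are illustrative assumptions): each value is rendered as space-separated digits so the LLM sees roughly one digit per token, with values separated by commas.

```python
def encode_series(values, precision=2):
    # Render each value to fixed precision, drop the decimal point,
    # and space-separate the digits; values are joined with " , ".
    # Negative values and rescaling are omitted for brevity.
    def enc(v):
        s = f"{v:.{precision}f}".replace(".", "")
        return " ".join(s)
    return " , ".join(enc(v) for v in values)

encode_series([0.64, 0.31])  # → "0 6 4 , 0 3 1"
```

Decoding reverses the mapping on the sampled continuation, which is why the tokenization must be deterministic and invertible.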
OFA[48]: to address the lack of large-scale training data, proposes a unified fine-tuning framework that partially freezes the LLM: the self-attention and feed-forward layers stay frozen while only the embedding and normalization layers are fine-tuned. This achieves superior performance across all major tasks.
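The partial-freezing recipe can be sketched as follows (parameter names are modeled on GPT-2's naming; the substring rule is an illustrative assumption, not OFA's actual code): only embedding ("wte", "wpe") and layer-norm ("ln") parameters stay trainable.

```python
def ofa_freeze_plan(param_names):
    # Map each parameter name to a trainable flag: embeddings and
    # layer norms are fine-tuned; attention and MLP weights are frozen.
    trainable = ("wte", "wpe", "ln")
    return {name: any(t in name for t in trainable) for name in param_names}

params = ["wte.weight", "wpe.weight", "h.0.attn.c_attn.weight",
          "h.0.mlp.c_fc.weight", "h.0.ln_1.weight", "ln_f.weight"]
plan = ofa_freeze_plan(params)
# e.g. in a framework like PyTorch one would then set
# p.requires_grad = plan[name] for each named parameter.
```

The point of the design is that the frozen attention/FFN stack preserves the pretrained sequence-modeling prior while the small trainable subset adapts the model to the time series domain.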
TEMPO[104]: focuses on time series forecasting but incorporates additional fine-grained designs such as time series decomposition and soft prompts.
LLM4TS[106]: two-stage fine-tuning: first, supervised fine-tuning orients the LLM toward time series data; then the model moves to downstream, forecasting-oriented fine-tuning.
TEST[105]: uses a new embedding method that tokenizes and encodes the data via instance-wise, feature-wise, and text-prototype-aligned contrast, and then creates prompts to pass to LLMs to perform the tasks.
Time-LLM: reprograms time series with the source data modality together with natural-language prompts, achieving state-of-the-art performance across various forecasting scenarios and excelling in few-shot and zero-shot settings. Because it neither edits the input time series directly nor fine-tunes the LLM backbone, it is highly efficient.
4.1.2 Domain-Specific Models
Transportation
Finance
Event Prediction
Healthcare
4.2 Pre-Trained Foundation Models in Time Series
4.2.1 General Models
Voice2Series: leverages the representation-learning power of pre-trained speech models by treating time series as univariate temporal signals for time series classification; it is the first work to apply reprogramming to time series tasks.
TF-C (based on contrastive learning): contains a time-based component and a frequency-based component, each trained individually via contrastive estimation, with the self-supervised signal provided by the distance between the time and frequency components, i.e., time-frequency consistency.
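A rough sketch of the two views TF-C contrasts (the learned encoders and projection heads are omitted; `tfc_views` is my own illustrative helper, not the paper's code): the time view is the raw series and the frequency view is its FFT magnitude spectrum, and the consistency signal comes from the distance between their embeddings.

```python
import numpy as np

def tfc_views(x):
    # Two views per sample: the raw series (time view) and the magnitude
    # spectrum of a real FFT (frequency view). In TF-C each view passes
    # through its own encoder, and the distance between the resulting
    # embeddings supplies the time-frequency consistency signal.
    return x, np.abs(np.fft.rfft(x))

# A pure 4-cycle sine over 64 samples peaks at frequency bin 4.
x = np.sin(2 * np.pi * 4 * np.arange(64) / 64)
_, freq_view = tfc_views(x)
```

Using the spectrum as a second view is what lets the same pre-trained model transfer across datasets whose time-domain shapes differ but whose frequency structure is comparable.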
TS2Vec: proposes a universal contrastive learning framework that learns contextual representations of arbitrary subseries at various semantic levels in the time series domain, in a hierarchical way over augmented context views. The framework supports multivariate input.
CLUDA: a contrastive-learning-based unsupervised domain adaptation model with two novel components: custom contrastive learning and nearest-neighbor contrastive learning. Adversarial learning is used to align these two components across the source and target domains. The contrastive component aims to learn a representation space in which semantically similar samples are close and dissimilar samples are far apart, so the model can learn domain-invariant contextual representations in multivariate time series for domain adaptation.
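The contrastive objective underlying these methods is typically an InfoNCE-style loss; a minimal numpy version (a generic stand-in, not CLUDA's exact formulation) treats each anchor's matching row as the positive and every other row in the batch as a negative:

```python
import numpy as np

def info_nce(anchors, positives, tau=0.1):
    # InfoNCE: cosine-similarity logits between L2-normalized anchor
    # and positive embeddings, softmax over the batch, negative
    # log-likelihood of the matching (diagonal) pair.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls each anchor toward its positive and pushes it away from the rest of the batch, which is exactly the "similar close, dissimilar far" geometry described above.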
STEP
MTSMAE
SimMTM
PatchTST
TSMixer
4.2.2 Domain-Specific Models
5.1 Spatio-Temporal Graphs
5.1.1 Large Language Models in Spatio-Temporal Graphs
5.1.2 Pre-Trained Foundation Models in Spatio-Temporal Graphs
General Purposes
Domain-specific
Climate
Transportation
5.2 Temporal Knowledge Graphs
5.3 Videos
5.3.1 Large Language Models for Video Data
5.3.2 Pre-Trained Foundation Models for Video Data
6.1 Traffic Application
Datasets
Tools
6.2 Healthcare Application
Datasets
Model Checkpoints and Toolkits
6.3 Weather Application
Datasets
Models and Tools
6.4 Finance Application
Datasets
Models and Tools
6.5 Video Application
Datasets
Models and Tools
6.6 Event Prediction Application
Datasets
Models and Tools
6.7 Other Applications
Datasets
General Tools and Libraries
Theoretical Analysis of Large Models
Development of Multimodal Models
Continuous Learning and Adaptation
Interpretability and Explainability
Privacy and Adversarial Attacks on Large Models
Model Generalization and Vulnerabilities