AllSpark: A Multimodal Spatio-Temporal General Intelligence Model with Ten Modalities via Language as a Reference Framework
2025; Institute of Electrical and Electronics Engineers; Language: English
DOI: 10.1109/TGRS.2025.3526725
ISSN: 1558-0644
Authors: Run Shao, Cheng Yang, Qiujun Li, Lei Xu, Xiang Yang, Xian Li, M. H. Li, Qing Zhu, Yongjun Zhang, Yansheng Li, Yu Liu, Yong Tang, Dapeng Liu, Shizhong Yang, Haifeng Li
Topic(s): Geographic Information Systems Studies
Abstract: RGB, multispectral, point cloud, and other spatio-temporal modalities are fundamentally different observational approaches to the same geographic objects; leveraging multimodal data is therefore an inherent requirement for comprehending them. However, because of the high structural and semantic heterogeneity across spatio-temporal modalities, the joint interpretation of multimodal spatio-temporal data has long been an extremely challenging problem. The primary challenge lies in striking a trade-off between the cohesion and the autonomy of the diverse modalities, and this trade-off becomes increasingly nonlinear as the number of modalities expands. Inspired by the human cognitive system and linguistic philosophy, in which perceptual signals from the five senses converge into language, we introduce the Language as Reference Framework (LaRF), a fundamental principle for constructing a unified multimodal model. Building on this principle, we propose AllSpark, a multimodal spatio-temporal general artificial intelligence model. AllSpark integrates ten modalities into a unified framework: one-dimensional (language, code, table), two-dimensional (RGB, SAR, multispectral, hyperspectral, graph, trajectory), and three-dimensional (point cloud). To achieve modal cohesion, AllSpark introduces a modal bridge and a multimodal large language model (LLM) that map the features of each modality into the language feature space. To maintain modality autonomy, AllSpark uses modality-specific encoders to extract tokens from the various spatio-temporal modalities. Finally, to bridge the gap between the model's interpretation capability and downstream tasks, we design modality-specific prompts and task heads, enhancing the model's generalization across specific tasks. Experiments show that incorporating language enables AllSpark to excel at few-shot classification on the RGB and point cloud modalities without additional training, surpassing baseline performance by up to 41.82%. Moreover, despite lacking expert knowledge for most spatio-temporal modalities and using a single unified structure, AllSpark demonstrates strong adaptability across all ten modalities. LaRF and AllSpark contribute to shifting the research paradigm in spatio-temporal intelligence from modality-specific, task-specific models toward a general paradigm. The source code is available at https://github.com/GeoX-Lab/AllSpark.
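The abstract describes the LaRF design at an architectural level: modality-specific encoders preserve modality autonomy, a modal bridge projects each modality's tokens into the language feature space, a language backbone fuses them together with modality-specific prompts, and task heads adapt the fused representation to downstream tasks. The following PyTorch-style sketch illustrates that flow only; it is not the authors' implementation, and all module names, dimensions, and the placeholder encoders (ModalBridge, AllSparkSketch, the toy linear encoders) are illustrative assumptions.

```python
# Minimal sketch of the LaRF-style architecture described in the abstract.
# NOT the authors' code: every name, dimension, and toy encoder is an assumption.
import torch
import torch.nn as nn

class ModalBridge(nn.Module):
    """Projects modality-specific tokens into the shared language feature space."""
    def __init__(self, modality_dim: int, language_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(modality_dim, language_dim),
            nn.GELU(),
            nn.Linear(language_dim, language_dim),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(tokens)  # (batch, num_tokens, language_dim)

class AllSparkSketch(nn.Module):
    def __init__(self, encoders: dict, modality_dims: dict,
                 language_dim: int, num_classes: int):
        super().__init__()
        # Modality autonomy: one encoder per modality (RGB, SAR, point cloud, ...).
        self.encoders = nn.ModuleDict(encoders)
        # Modal cohesion: one bridge per modality into the language space.
        self.bridges = nn.ModuleDict(
            {name: ModalBridge(dim, language_dim) for name, dim in modality_dims.items()}
        )
        # Stand-in for the multimodal LLM backbone (a real system would load a pretrained LM).
        self.language_backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=language_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Task-specific head: here, classification over pooled language-space tokens.
        self.task_head = nn.Linear(language_dim, num_classes)

    def forward(self, inputs: dict, prompt_tokens: torch.Tensor) -> torch.Tensor:
        # Encode each available modality and project its tokens into the language space.
        projected = [
            self.bridges[name](self.encoders[name](x)) for name, x in inputs.items()
        ]
        # Prepend modality-specific prompt tokens, then fuse in the language backbone.
        sequence = torch.cat([prompt_tokens] + projected, dim=1)
        fused = self.language_backbone(sequence)
        return self.task_head(fused.mean(dim=1))

# Toy usage with two modalities (token counts and dims chosen arbitrarily for the sketch).
rgb_encoder = nn.Linear(768, 768)    # placeholder for a ViT-style RGB encoder
point_encoder = nn.Linear(256, 256)  # placeholder for a point-cloud encoder
model = AllSparkSketch(
    encoders={"rgb": rgb_encoder, "point": point_encoder},
    modality_dims={"rgb": 768, "point": 256},
    language_dim=512,
    num_classes=10,
)
inputs = {"rgb": torch.randn(2, 196, 768), "point": torch.randn(2, 128, 256)}
prompts = torch.randn(2, 8, 512)     # learnable modality-specific prompts in practice
logits = model(inputs, prompts)      # shape (2, 10)
```

In the actual model the language backbone would be a pretrained LLM and the prompts would be learned per modality; both are random stand-ins here so that the sketch stays self-contained and runnable.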