(NeurIPS 2024) Supra-Laplacian Encoding for Transformer on Dynamic Graphs

Published in the 38th Conference on Neural Information Processing Systems (NeurIPS 2024), 2024

Recommended citation: Karmim, Y., Lafon, M., Fournier-S’niehotta, R., & Thome, N. (2024). Supra-Laplacian Encoding for Transformer on Dynamic Graphs. In Advances in Neural Information Processing Systems (NeurIPS 2024). https://arxiv.org/abs/2409.17986

Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, over-squashing, and under-reaching. However, in a dynamic context, interconnecting all nodes at multiple snapshots with self-attention makes a GT lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding that leverages the GT architecture while preserving structural and temporal information. Specifically, we transform Discrete-Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes’ pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), as well as Dynamic Graph Transformers, on 9 datasets. Code and instructions to reproduce our results will be open-sourced.
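To make the encoding idea concrete, here is a minimal sketch (not the released SLATE code) of how a supra-Laplacian positional encoding can be computed: the snapshot adjacency matrices are stacked into a block-diagonal supra-adjacency, each node is coupled to its own copy in the next snapshot, and the low-frequency eigenvectors of the normalized supra-Laplacian serve as one spatio-temporal encoding per (node, snapshot) pair. The function name, the uniform coupling scheme, and the choice of `k` are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def supra_laplacian_encoding(snapshots, k=8, coupling=1.0):
    """Hypothetical sketch: spectral spatio-temporal encoding from a list of
    T snapshot adjacency matrices (each N x N, same node set across time)."""
    T, N = len(snapshots), snapshots[0].shape[0]
    # Intra-layer structure: block-diagonal stack of the T snapshots.
    intra = sp.block_diag([sp.csr_matrix(A, dtype=float) for A in snapshots])
    # Inter-layer structure: link each node to its copy in the next snapshot.
    shift = sp.diags([np.full(T - 1, coupling)], [1])  # T x T super-diagonal
    inter = sp.kron(shift, sp.eye(N))
    supra = (intra + inter + inter.T).tocsr()          # supra-adjacency, TN x TN
    # Symmetric normalized supra-Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    deg = np.asarray(supra.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = sp.eye(T * N) - d_inv_sqrt @ supra @ d_inv_sqrt
    # Eigenvectors of the k smallest eigenvalues; row t*N + i encodes
    # node i at snapshot t.
    _, vecs = eigsh(lap, k=k, which="SM")
    return vecs                                        # shape (T * N, k)
```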
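The cross-attention idea for edge representation can be sketched in the same hedged spirit. The PyTorch module below is an assumed illustration, not the paper’s API: the temporal token sequences of the two endpoint nodes (one token per snapshot, already carrying the supra-Laplacian encodings) cross-attend to each other, and the pooled result is scored as a link logit. Sharing one attention module for both directions and mean-pooling over time are design choices made here for brevity.

```python
import torch
import torch.nn as nn

class EdgeCrossAttention(nn.Module):
    """Hypothetical sketch: pairwise edge representation via cross-attention
    between the temporal tokens of two candidate endpoints."""
    def __init__(self, dim, heads=4):
        super().__init__()
        # One shared attention module used in both directions (a simplification).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))

    def forward(self, tok_u, tok_v):
        # tok_u, tok_v: (batch, T, dim) temporal tokens of the two endpoints.
        u2v, _ = self.attn(tok_u, tok_v, tok_v)  # u's tokens query v's history
        v2u, _ = self.attn(tok_v, tok_u, tok_u)  # v's tokens query u's history
        edge = torch.cat([u2v.mean(dim=1), v2u.mean(dim=1)], dim=-1)
        return self.score(edge).squeeze(-1)      # link logit per pair
```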

Download paper here