GSR-GNN: Training Acceleration and Memory-Saving Framework of Deep GNNs on Circuit Graph

Published in ACM/IEEE Design Automation Conference (DAC), 2026
GSR-GNN enables training GNNs with hundreds of layers on circuit graphs while reducing both compute and memory overhead, achieving up to 87.2% peak-memory reduction and over a 30x training speedup.
Y. Luo, S. Li, Y. Feng, V. Kancharla, S. Huang, C. Ding. "GSR-GNN: Training Acceleration and Memory-Saving Framework of Deep GNNs on Circuit Graph." In Proceedings of the 63rd ACM/IEEE Design Automation Conference (DAC '26), 2026.
Download Paper
