TopoImb: Toward Topology-level Imbalance in Learning from Graphs

Tianxiang Zhao, Dongsheng Luo, Xiang Zhang, Suhang Wang

Research output: Contribution to journal › Conference article › peer-review


Abstract

Graphs serve as a powerful tool for modeling data with an underlying structure in non-Euclidean space, encoding entities as nodes and relations as edges. Despite years of progress in learning from graph-structured data, one obstacle persists: graph imbalance. Although several attempts have been made to address this problem, they consider only class-level imbalance. We argue that for graphs, imbalance is also likely to exist at the sub-class level in the form of infrequent topological motifs. Because topology structures are highly flexible, graphs can be very diverse, and learning a generalizable classification boundary is difficult. As a result, a few majority topology groups may dominate the learning process, leaving others under-represented. To address this problem, we propose a new framework, TopoImb, which consists of (1) a topology extractor that automatically identifies the topology group of each instance using explicit memory cells, and (2) a training modulator that modulates the learning process of the target GNN model to prevent topology-group-wise under-representation. TopoImb can be used as a key component in GNN models to improve their performance under data-imbalance settings. We provide theoretical analyses of both topology-level imbalance and the proposed TopoImb, and we empirically verify its effectiveness on both node-level and graph-level classification tasks.
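The abstract describes two components: a topology extractor backed by explicit memory cells and a training modulator that reweights learning toward under-represented topology groups. The sketch below is only an illustrative reading of that description, not the authors' implementation; the class names (TopologyExtractor, TrainingModulator), the attention-over-memory-cells assignment, and the inverse-frequency reweighting heuristic are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the two TopoImb components described in the abstract.
# Not the authors' code; names, shapes, and the reweighting rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopologyExtractor(nn.Module):
    """Softly assigns each instance embedding to a topology group
    via attention over a bank of learnable memory cells."""

    def __init__(self, embed_dim: int, num_memory_cells: int):
        super().__init__()
        # Each memory cell acts as a prototype for one topology group.
        self.memory = nn.Parameter(torch.randn(num_memory_cells, embed_dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, embed_dim) embeddings from an upstream GNN encoder.
        scores = h @ self.memory.t() / h.size(-1) ** 0.5
        return F.softmax(scores, dim=-1)  # (batch, num_memory_cells)


class TrainingModulator(nn.Module):
    """Up-weights instances from infrequent topology groups so that
    majority groups do not dominate the loss (illustrative heuristic)."""

    def forward(self, per_sample_loss: torch.Tensor,
                group_assign: torch.Tensor) -> torch.Tensor:
        # Estimated frequency of each topology group within the batch.
        group_freq = group_assign.mean(dim=0)                  # (num_cells,)
        # Inverse-frequency weight per instance, normalized to mean 1.
        weights = (group_assign / (group_freq + 1e-8)).sum(dim=-1)
        weights = weights / weights.mean()
        return (weights.detach() * per_sample_loss).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    h = torch.randn(32, 64)              # placeholder GNN embeddings
    logits = torch.randn(32, 3)          # placeholder class logits
    labels = torch.randint(0, 3, (32,))

    extractor = TopologyExtractor(embed_dim=64, num_memory_cells=8)
    modulator = TrainingModulator()

    assign = extractor(h)
    loss = F.cross_entropy(logits, labels, reduction="none")
    print(modulator(loss, assign))       # modulated scalar training loss
```

In this reading, the group assignment is detached before reweighting so the modulator only rebalances the loss rather than back-propagating through the frequency estimate; the actual mechanism in the paper may differ.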

Original language: English (US)
Journal: Proceedings of Machine Learning Research
Volume: 198
State: Published - 2022
Event: 1st Learning on Graphs Conference, LOG 2022 - Virtual, Online
Duration: Dec 9, 2022 - Dec 12, 2022

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
