Two-level Graph Neural Network

Research output: Working paper › Preprint

Publication details

Date published: 3 Jan 2022
Original language: English

Abstract

Graph Neural Networks (GNNs) are recently proposed neural network structures for processing graph-structured data. Because of their neighbor-aggregation strategy, existing GNNs focus on capturing node-level information and neglect higher-level information; they therefore suffer from representational limitations caused by the Local Permutation Invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the Two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem, which demonstrates that subgraph-level information helps overcome the problems associated with LPI. A subgraph counting method based on dynamic programming is also proposed, with time complexity O(n^3), where n is the number of nodes in the graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.
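The abstract states an O(n^3) bound for subgraph counting but does not spell out the algorithm. As a minimal illustration of what an O(n^3) subgraph count looks like (not the paper's actual dynamic-programming method), the sketch below counts triangles from an adjacency matrix by first tallying 2-step walks and then closing them with an edge; the function name and the example graph are my own.

```python
# Illustrative only: counting triangles (3-node subgraphs) in O(n^3) time
# via adjacency-matrix products. This is NOT the TL-GNN paper's counter;
# it only demonstrates the stated complexity class on a simple subgraph type.

def count_triangles(adj):
    """Count triangles in an undirected graph given as a 0/1 adjacency matrix."""
    n = len(adj)
    # paths2[i][j] = number of 2-step walks i -> k -> j (O(n^3) overall)
    paths2 = [[sum(adj[i][k] * adj[k][j] for k in range(n))
               for j in range(n)] for i in range(n)]
    # A closing edge j -> i turns each 2-walk into a triangle; every triangle
    # is counted 6 times (3 starting vertices x 2 traversal directions).
    total = sum(paths2[i][j] * adj[j][i] for i in range(n) for j in range(n))
    return total // 6

# Hypothetical example: a 4-cycle {0-1, 1-2, 2-3, 3-0} with chord 0-2
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
]
print(count_triangles(adj))  # -> 2 (triangles {0,1,2} and {0,2,3})
```

In practice a dense matrix library would replace the nested comprehensions, but the cubic cost in the number of nodes is the same.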

Bibliographical note

14 pages, 10 figures

Research areas

• cs.LG, cs.AI
