In this work, the authors present a simpler alternative to the graph convolutional encoders used in graph autoencoders: "a simple linear model w.r.t. the adjacency matrix of the graph".

Embedding vectors are obtained by multiplying the n × n normalized adjacency matrix A by a single n × d weight matrix W, which is tuned by gradient descent just as in a standard autoencoder. The encoder is therefore a single linear mapping. Unlike standard GCN encoders, which stack L ≥ 2 layers, each node aggregates information only from its one-hop neighbors.
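The linear encoder above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the function names, the toy graph, and the random weight matrix are assumptions, and the gradient-descent training of W on a reconstruction loss is omitted. The symmetric normalization with self-loops is the standard GCN-style choice.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2},
    the usual GCN-style normalization (assumed here)."""
    adj_tilde = adj + np.eye(adj.shape[0])   # add self-loops
    deg = adj_tilde.sum(axis=1)              # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ adj_tilde @ d_inv_sqrt

def linear_encode(adj_norm, weights):
    """One-hop linear encoder Z = A_norm @ W:
    no hidden layers, no nonlinearity."""
    return adj_norm @ weights

# Toy graph: a 4-node path 0-1-2-3 (illustrative only)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2))  # n x d weight matrix, d = 2, untrained
Z = linear_encode(normalize_adjacency(adj), W)
print(Z.shape)  # one d-dimensional embedding per node
```

Because the encoder is linear in A, each row of Z depends only on a node's immediate neighborhood, which is exactly the one-hop aggregation described above.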

Experiments on link prediction and node clustering show that this linear encoder reaches results comparable to those of graph convolutional methods.

Below is the abstract of Keep It Simple: Graph Autoencoders Without Graph Convolutional Networks.

Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged as powerful node embedding methods, with promising performances on challenging tasks such as link prediction and node clustering. Graph AE, VAE and most of their extensions rely on graph convolutional networks (GCN) to learn vector space representations of nodes. In this paper, we propose to replace the GCN encoder by a simple linear model w.r.t. the adjacency matrix of the graph. For the two aforementioned tasks, we empirically show that this approach consistently reaches competitive performances w.r.t. GCN-based models for numerous real-world graphs, including the widely used Cora, Citeseer and Pubmed citation networks that became the de facto benchmark datasets for evaluating graph AE and VAE. This result questions the relevance of repeatedly using these three datasets to compare complex graph AE and VAE models. It also emphasizes the effectiveness of simple node encoding schemes for many real-world applications.
