Graph Convolutional Neural Network (Part II)

In the previous post, the convolution was defined in graph Fourier space, via the eigendecomposition of the graph Laplacian, as outlined in the paper by Bruna et al. (arXiv:1312.6203). However, the eigenmodes of the graph Laplacian are not ideal as a representation, because they make the basis graph-dependent. Much subsequent work has addressed this problem by expressing the filter functions in terms of various special functions, such as Chebyshev polynomials and the Cayley transform.

ChebNet

Kipf and Welling (arXiv:1609.02907) approximated the filter using Chebyshev polynomials, building on a result proved by Hammond et al. (arXiv:0912.3848). For a convolutional layer g_{\theta} \circ x, instead of computing it directly as \Phi g_{\theta}(\Lambda) \Phi^T x, the filter is approximated by a truncated Chebyshev expansion:

g_{\theta} (\Lambda) \approx \sum_{k=0}^K \theta_k T_k(\tilde{\Lambda}),

where the T_k(x) are the Chebyshev polynomials of the first kind, and \tilde{\Lambda} = 2 \Lambda / \lambda_{max} - I rescales the eigenvalues into [-1, 1], the interval on which the Chebyshev polynomials are defined. Hammond et al. proved that a suitably truncated expansion of this form approximates the filter well. The convolution then becomes:

g_{\theta} \circ x \approx \sum_{k=0}^K \theta_k T_k (\tilde{L}) x,

where \tilde{L} = 2 L / \lambda_{max} - I is the correspondingly rescaled Laplacian; since \Phi T_k(\tilde{\Lambda}) \Phi^T = T_k(\tilde{L}), no eigendecomposition is needed. Here the \theta_k are the parameters to be trained. This fixes the basis of the representation and speeds up computation. The disadvantage is that the eigenvalues of real-world graphs tend to cluster around a few values with large gaps in between, which a low-order polynomial filter resolves poorly.
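To make this concrete, here is a minimal NumPy sketch of the filtering step; the function name and the dense-matrix setup are illustrative assumptions of mine, not code from the cited papers. It applies the filter using only repeated matrix-vector products with \tilde{L}, via the Chebyshev recurrence T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x).

```python
import numpy as np

def chebyshev_filter(L, x, theta, lam_max):
    """Apply sum_k theta_k T_k(L_tilde) x without an eigendecomposition.

    L       : (n, n) graph Laplacian (dense here, for simplicity)
    x       : (n,) graph signal
    theta   : (K+1,) Chebyshev coefficients (learned in practice)
    lam_max : largest eigenvalue of L, used to rescale the spectrum
    """
    n = L.shape[0]
    L_tilde = 2.0 * L / lam_max - np.eye(n)   # spectrum mapped into [-1, 1]
    t_prev, t_curr = x, L_tilde @ x           # T_0(L~) x and T_1(L~) x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)
        t_next = 2.0 * (L_tilde @ t_curr) - t_prev
        out = out + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return out
```

Each term costs one matrix-vector product, so on a sparse graph the whole layer runs in O(K |E|), with no eigendecomposition in sight.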

CayleyNet

This shortcoming of ChebNet led to the work of Levie et al. (arXiv:1705.07664), who proposed another approximation based on the Cayley transform. They made use of the Cayley function:

C(\lambda) = \frac{\lambda - i}{\lambda + i},

which maps \mathbb{R}^+ bijectively onto half of the complex unit circle. Instead of Chebyshev polynomials, the filter is approximated as:

g(\lambda) = c_0 + \sum_{j=1}^r \left[ c_j C^j (h \lambda) + c_j^{\ast} \overline{C^j (h \lambda)} \right] ,

where c_0 is real, the other c_j are in general complex, h is a zoom parameter, and the \lambda are the eigenvalues of the graph Laplacian. Tuning h finds the zoom that best spreads out the informative part of the spectrum, and the c's are learned during training. This resolves the eigenvalue-clustering problem of ChebNet.
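As a sketch of what such a filter looks like on the spectrum, the following NumPy snippet evaluates g(\lambda) for given coefficients; the names and the example values are hypothetical, not taken from the CayleyNet paper.

```python
import numpy as np

def cayley_filter(lam, c, h):
    """Evaluate g(lambda) = c_0 + sum_j [c_j C^j(h lam) + conj(c_j C^j(h lam))].

    lam : (n,) real, nonnegative Laplacian eigenvalues
    c   : (r+1,) coefficients; c[0] real, the rest generally complex (learned)
    h   : positive zoom parameter that spreads the spectrum along the circle
    """
    z = (h * lam - 1j) / (h * lam + 1j)       # Cayley transform C(h * lambda)
    g = np.full_like(lam, np.real(c[0]), dtype=float)
    for j in range(1, len(c)):
        # c_j C^j plus its conjugate collapses to twice the real part
        g = g + 2.0 * np.real(c[j] * z**j)
    return g

# Hypothetical usage: a low-order filter over a spectrum in [0, 2]
lam = np.linspace(0.0, 2.0, 6)
print(cayley_filter(lam, c=np.array([1.0, 0.5 - 0.2j]), h=1.0))
```

Although the intermediate terms are complex, each conjugate pair sums to a real number, so the filter response is real.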

MotifNet

All of the works above deal with undirected graphs. How do we handle a directed graph, whose graph Laplacian is asymmetric? Benson, Gleich, and Leskovec published an important paper in Science in 2016 (arXiv:1612.08447) that addresses this problem. Their approach reduces a directed graph to higher-order structures called network motifs; there are 13 connected three-node motifs. For each motif one can define an adjacency matrix \mathcal{M}_k, with element \mathcal{M}_{k, ij} being the number of occurrences of motif k in the graph that contain the edge (i, j).

(Figure: the thirteen three-node network motifs, from Benson et al.)
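As a small illustration of this construction, the sketch below builds the motif adjacency matrix for one motif, the directed three-cycle i \to j \to k \to i; it is my own minimal example, and for brevity it ignores the distinction Benson et al. draw between unidirectional and bidirectional edges.

```python
import numpy as np

def motif_adjacency_3cycle(A):
    """Motif adjacency matrix for the directed 3-cycle i -> j -> k -> i.

    A : (n, n) 0/1 adjacency matrix of a directed graph (no self-loops)

    Returns a symmetric matrix M with M[i, j] equal to the number of
    directed 3-cycles in the graph that contain both nodes i and j.
    """
    # (A @ A)[j, i] counts directed paths j -> k -> i; multiplying
    # elementwise by A keeps only the paths closed into a cycle by
    # the edge (i, j).
    W = A * (A @ A).T
    return W + W.T   # symmetrize, so the motif Laplacian is symmetric
```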

One then computes 13 graph Laplacians \Delta_1, \ldots, \Delta_{13} from these 13 adjacency matrices. These graph Laplacians are symmetric, like those of undirected graphs. Any filter can then be approximated by the following multivariate matrix polynomial, as suggested by Monti, Otness, and Bronstein in their MotifNet paper (arXiv:1802.01572):

f_{\Theta} (\Delta_1, \dots, \Delta_K) = \sum_{j=0}^P \sum_{k_1, \dots, k_j \in \{1, \ldots, K\}} \theta_{k_1, \ldots, k_j} \Delta_{k_1} \cdots \Delta_{k_j} .
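A naive sketch of applying such a polynomial to a graph signal is given below, with the trainable coefficients \theta_{k_1, \ldots, k_j} kept in a dict keyed by index tuples; that bookkeeping is my own illustrative choice, and the brute-force enumeration grows as K^P, so the snippet is only meant to make the indexing concrete.

```python
import itertools
import numpy as np

def motif_poly_filter(deltas, x, theta, P):
    """Apply f(Delta_1, ..., Delta_K) x, summing over ordered index tuples
    (k_1, ..., k_j) for degrees j = 0, ..., P.

    deltas : list of K (n, n) motif Laplacians
    x      : (n,) graph signal
    theta  : dict mapping index tuples to scalar coefficients (learned);
             the empty tuple () keys the constant j = 0 term
    P      : maximum polynomial degree
    """
    K = len(deltas)
    out = theta.get((), 0.0) * x
    for j in range(1, P + 1):
        for ks in itertools.product(range(K), repeat=j):
            y = x
            for k in reversed(ks):   # apply Delta_{k_j} first, Delta_{k_1} last
                y = deltas[k] @ y
            out = out + theta.get(ks, 0.0) * y
    return out
```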

These methods find applications in image processing, citation networks, and other domains.

  • “Graph Convolutional Neural Network (Part I),” Everything About Data Analytics, WordPress (2018). [WordPress]
  • Joan Bruna, Wojciech Zaremba, Arthur Szlam, Yann LeCun, “Spectral Networks and Locally Connected Networks on Graphs,” arXiv:1312.6203 (2013). [arXiv]
  • Eric Weisstein, “Chebyshev Polynomial of the First Kind,” Wolfram MathWorld. [Wolfram]
  • “Cayley transform.” [Wikipedia]
  • Thomas N. Kipf, Max Welling, “Semi-Supervised Classification with Graph Convolutional Networks,” arXiv:1609.02907 (2016). [arXiv]
  • David K Hammond, Pierre Vandergheynst, Rémi Gribonval, “Wavelets on Graphs via Spectral Graph Theory,” arXiv:0912.3848 (2009). [arXiv]
  • Ron Levie, Federico Monti, Xavier Bresson, Michael M. Bronstein, “CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters,” arXiv:1705.07664 (2017). [arXiv]
  • Austin R. Benson, David F. Gleich, Jure Leskovec, “Higher-order organization of complex networks,” Science 353 (6295): 163-166 (2016). (arXiv:1612.08447) [arXiv] [Science]
  • Federico Monti, Karl Otness, Michael M. Bronstein, “MotifNet: a motif-based Graph Convolutional Network for directed graphs,” arXiv:1802.01572 (2018). [arXiv]
  • Michael Bronstein, “Geometric Deep Learning on Graphs and Manifolds,” Graph Signal Processing Workshop (June 7, 2018, Lausanne). [PDF]
