In the previous post, the graph convolution was defined in the graph Fourier space, i.e., the eigenbasis of the graph Laplacian, as outlined in the paper by Bruna *et al.* (arXiv:1312.6203). However, the eigenmodes of the graph Laplacian are not ideal because they make the bases graph-dependent. Much work has been done to solve this problem by expressing the filter functions in terms of various special functions, such as Chebyshev polynomials and the Cayley transform.

## ChebNet

ChebNet, introduced by Defferrard *et al.* (arXiv:1606.09375) and later simplified by Kipf and Welling (arXiv:1609.02907), approximates the filter using Chebyshev polynomials, making use of a result proved by Hammond *et al.* (arXiv:0912.3848). With a convolutional layer $y = g_{\theta}(L) x$, instead of computing it directly by the eigendecomposition $g_{\theta}(L) = U g_{\theta}(\Lambda) U^T$, it approximates the filter in terms of Chebyshev polynomials:

$g_{\theta}(\Lambda) \approx \sum_{k=0}^{K} \theta_k T_k(\tilde{\Lambda})$, with $\tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I$,

where $T_k$ are the Chebyshev polynomials, and $\tilde{\Lambda}$ rescales the eigenvalues of the Laplacian into $[-1, 1]$. It has been proved by Hammond *et al.* that a suitably truncated Chebyshev expansion approximates the filter function well. Then the convolution is:

$y \approx \sum_{k=0}^{K} \theta_k T_k(\tilde{L}) x$, with $\tilde{L} = \frac{2}{\lambda_{\max}} L - I$.

Here, the $\theta_k$'s are the parameters to be trained. This fixes the basis of the representation, and it speeds up computation because no eigendecomposition is needed. The disadvantage is that the eigenvalues tend to cluster around a few values with large gaps between them.
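
To make this concrete, here is a minimal NumPy sketch of the truncated Chebyshev filter, using the three-term recurrence $T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$ (the function and variable names are my own, not from the papers):

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply a Chebyshev-approximated spectral filter to a graph signal.

    L     : (n, n) symmetric graph Laplacian
    x     : (n,) graph signal
    theta : (K+1,) filter coefficients theta_0 ... theta_K
    """
    n = L.shape[0]
    # Rescale the spectrum from [0, lambda_max] into [-1, 1].
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = (2.0 / lam_max) * L - np.eye(n)

    # Chebyshev recurrence on vectors: T_0 x = x, T_1 x = L_tilde x,
    # T_k x = 2 L_tilde (T_{k-1} x) - T_{k-2} x.
    T_prev, T_curr = x, L_tilde @ x
    y = theta[0] * T_prev
    if len(theta) > 1:
        y = y + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_prev, T_curr = T_curr, 2.0 * (L_tilde @ T_curr) - T_prev
        y = y + theta[k] * T_curr
    return y
```

Note that only matrix-vector products with $\tilde{L}$ are needed, which is what makes this cheap for sparse graphs.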

## CayleyNet

This shortcoming of ChebNet led to the work of Levie *et al.* (arXiv:1705.07664), who proposed another approximation using the Cayley transform. They made use of the Cayley function:

$C(\lambda) = \frac{\lambda - i}{\lambda + i}$,

which is a bijective function from $[0, \infty)$ to half of the complex unit circle. Instead of Chebyshev polynomials, it approximates the filter as:

$g_{c,h}(\lambda) = c_0 + 2 \mathrm{Re} \left\{ \sum_{j=1}^{r} c_j C(h \lambda)^j \right\}$,

where $c_0$ is real and the other $c_j$'s are generally complex, $h$ is a zoom parameter, and the $\lambda$'s are the eigenvalues of the graph Laplacian. Tuning $h$ lets one find the zoom that best spreads the top eigenvalues. The $c_j$'s are computed by training. This solves the problem of the unfavorable eigenvalue clustering in ChebNet.
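
A pointwise sketch of evaluating such a Cayley filter at given eigenvalues (names and the dense evaluation are illustrative; the paper works with the matrix version and Jacobi iterations):

```python
import numpy as np

def cayley_filter(lam, c, h):
    """Evaluate the Cayley filter g_{c,h}(lambda) at given eigenvalues.

    lam : (n,) real, nonnegative eigenvalues of the graph Laplacian
    c   : coefficients c_0 ... c_r; c[0] real, the rest generally complex
    h   : zoom parameter (> 0) that spreads the spectrum
    """
    # Cayley transform of the zoomed eigenvalues; |C| = 1 on the real line.
    C = (h * lam - 1j) / (h * lam + 1j)
    g = np.full_like(lam, np.real(c[0]), dtype=float)
    for j in range(1, len(c)):
        g = g + 2.0 * np.real(c[j] * C**j)
    return g
```

The output is real even though the $c_j$'s are complex, because only the real part of the sum is kept.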

## MotifNet

All of the previous works deal with undirected graphs. How do we deal with a directed graph, whose graph Laplacian is asymmetric? Benson, Gleich, and Leskovec published an important work in *Science* in 2016 (arXiv:1612.08447) that addresses this problem. Their approach reduces a directed graph to higher-order structures called network motifs, of which there are 13. For each network motif $M$, one can define a motif adjacency matrix $\tilde{A}_M$, whose element $(\tilde{A}_M)_{ij}$ is the number of instances of motif $M$ in the graph that contain the edge between nodes $i$ and $j$.
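
As a toy illustration, here is a brute-force sketch that builds the motif adjacency matrix for one of the 13 motifs, the directed 3-cycle (the names and the $O(n^3)$ enumeration are mine; Benson *et al.* use far more efficient counting):

```python
import numpy as np
from itertools import permutations

def motif_cycle_adjacency(A):
    """Motif adjacency matrix for the directed 3-cycle motif.

    A : (n, n) 0/1 adjacency matrix of a directed graph, no self-loops.
    Returns a symmetric W with W[i, j] = number of directed 3-cycles
    containing the pair {i, j}.
    """
    n = A.shape[0]
    W = np.zeros((n, n))
    for i, j, k in permutations(range(n), 3):
        if A[i, j] and A[j, k] and A[k, i]:  # cycle i -> j -> k -> i
            for u, v in ((i, j), (j, k), (k, i)):
                W[u, v] += 1
                W[v, u] += 1
    # Each cycle is enumerated 3 times (once per rotation of i, j, k).
    return W / 3.0
```

The key point is that $W$ is symmetric even though $A$ is not, which is what makes the spectral machinery applicable again.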

Then one computes 13 graph Laplacians $\tilde{\Delta}_1, \ldots, \tilde{\Delta}_{13}$ from these 13 motif adjacency matrices. These graph Laplacians are symmetric, like those of undirected graphs. Any filter can then be approximated by the following multivariate matrix polynomial, as suggested by Monti, Otness, and Bronstein in their MotifNet paper (arXiv:1802.01572):

$P_{\Theta}(\tilde{\Delta}_1, \ldots, \tilde{\Delta}_{13}) = \sum_{j=0}^{p} \sum_{k_1, \ldots, k_j = 1}^{13} \theta_{k_1 \cdots k_j} \tilde{\Delta}_{k_1} \cdots \tilde{\Delta}_{k_j}$.
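
A naive sketch of such a multivariate polynomial filter, truncated at degree 2 (all names are illustrative, and a real implementation would share and structure the coefficient tensor rather than keep it dense):

```python
import numpy as np

def motif_poly_filter(laplacians, x, theta0, theta1, theta2):
    """Degree-2 multivariate matrix polynomial filter over motif Laplacians.

    laplacians : list of K motif Laplacians, each (n, n)
    x          : (n,) graph signal
    theta0     : scalar constant term
    theta1     : (K,) first-order coefficients
    theta2     : (K, K) second-order coefficients
    """
    y = theta0 * x
    # First-order terms: theta_k * Delta_k x.
    for k, Dk in enumerate(laplacians):
        y = y + theta1[k] * (Dk @ x)
    # Second-order terms: theta_{k1,k2} * Delta_{k1} Delta_{k2} x.
    for k1, D1 in enumerate(laplacians):
        for k2, D2 in enumerate(laplacians):
            y = y + theta2[k1, k2] * (D1 @ (D2 @ x))
    return y
```

Because the motif Laplacians generally do not commute, the ordered products $\tilde{\Delta}_{k_1} \tilde{\Delta}_{k_2}$ and $\tilde{\Delta}_{k_2} \tilde{\Delta}_{k_1}$ get separate coefficients.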

Applications include image processing, citation networks, etc.
