In the previous post, the convolution on a graph was defined in the graph Fourier space spanned by the eigenmodes of the graph Laplacian, as outlined in the paper by Bruna et al. (arXiv:1312.6203). However, the eigenmodes of the graph Laplacian are not ideal, because they make the bases graph-dependent. Much work has been done to solve this problem by expressing the filter functions with the help of various special functions, such as Chebyshev polynomials and the Cayley transform.

## ChebNet

ChebNet, proposed by Defferrard et al. (arXiv:1606.09375) and later adopted by Kipf and Welling (arXiv:1609.02907), approximates the filter using Chebyshev polynomials, based on the result proved by Hammond et al. (arXiv:0912.3848). For a convolutional layer $g_{\theta} \circ x$, instead of computing it directly as $\Phi g_{\theta}(\Lambda) \Phi^T x$, it approximates the filter by a Chebyshev expansion: $g_{\theta} (\Lambda) \approx \sum_{k=0}^K \theta_k T_k(\Lambda)$,

where $T_k(x)$ are the Chebyshev polynomials. It has been proved by Hammond et al. that a suitably truncated Chebyshev expansion approximates the filter function very well. Since $\Phi T_k(\Lambda) \Phi^T = T_k(L)$, the convolution becomes: $g_{\theta} \circ x \approx \sum_{k=0}^K \theta_k T_k (L) x$, with no explicit eigendecomposition required.

Here, the $\theta$’s are the parameters to be trained. This fixes the basis of the representation and speeds up the computation. The disadvantage is that the eigenvalues tend to cluster around a few values, with large gaps between them.
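As a minimal sketch (my own, not the authors' implementation), the Chebyshev filtering above can be computed with the standard three-term recurrence. It assumes at least two coefficients and that the Laplacian's largest eigenvalue is bounded by `lmax`, so the rescaled spectrum lies in $[-1, 1]$, where the Chebyshev polynomials are defined:

```python
import numpy as np

def chebyshev_filter(L, x, theta, lmax=2.0):
    """Approximate g_theta ∘ x by sum_k theta_k T_k(L_tilde) x.

    L is rescaled to L_tilde = 2 L / lmax - I so that its spectrum
    lies in [-1, 1]; assumes len(theta) >= 2.
    """
    n = L.shape[0]
    L_tilde = 2.0 * L / lmax - np.eye(n)
    # Three-term recurrence: T_0 x = x, T_1 x = L_tilde x,
    # T_k x = 2 L_tilde (T_{k-1} x) - T_{k-2} x
    T_prev, T_curr = x, L_tilde @ x
    out = theta[0] * T_prev + theta[1] * T_curr
    for theta_k in theta[2:]:
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        out = out + theta_k * T_curr
    return out
```

The cost is $K$ (sparse) matrix-vector products, which is what makes this approximation fast in practice.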

## CayleyNet

This problem of ChebNet led to the work of Levie et al. (arXiv:1705.07664), who proposed another approximation based on the Cayley transform. They made use of the Cayley function: $C(\lambda) = \frac{\lambda - i}{\lambda + i}$,

which maps $\mathbb{R}^+$ bijectively onto half of the complex unit circle. Instead of Chebyshev polynomials, the filter is approximated as: $g(\lambda) = c_0 + \sum_{j=1}^r \left[ c_j C^j (h \lambda) + c_j^{\ast} \overline{C^j (h \lambda)} \right]$,

where $c_0$ is real, the other $c_j$’s are in general complex, $h$ is a zoom parameter, and the $\lambda$’s are the eigenvalues of the graph Laplacian. Tuning $h$ finds the zoom that best spreads out the top eigenvalues; the $c$’s are learned by training. This solves the problem of unfavorable eigenvalue clustering in ChebNet.
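A small numerical sketch of the Cayley filter response (the function and parameter names here are my own, not from the paper):

```python
import numpy as np

def cayley(x):
    # Cayley transform C(λ) = (λ - i) / (λ + i); maps the real line
    # onto the complex unit circle
    return (x - 1j) / (x + 1j)

def cayley_filter(lam, c, h):
    # g(λ) = c_0 + Σ_j [ c_j C^j(hλ) + c_j* conj(C^j(hλ)) ]
    #      = c_0 + 2 Re Σ_j c_j C(hλ)^j, which is real-valued
    g = np.full_like(lam, float(np.real(c[0])))
    for j in range(1, len(c)):
        g = g + 2.0 * np.real(c[j] * cayley(h * lam) ** j)
    return g
```

Because each term is summed with its complex conjugate, the filter response is real even though the coefficients $c_j$ are complex.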

## MotifNet

All the previous works deal with undirected graphs. How do we deal with a directed graph, whose graph Laplacian is asymmetric? Benson, Gleich, and Leskovec published an important work in Science in 2016 (arXiv:1612.08447) addressing this problem. Their approach reduces a directed graph to higher-order structures called network motifs, of which there are 13. For each network motif $k$, one can define a motif adjacency matrix $\mathcal{M}_k$, with elements $\mathcal{M}_{k, ij}$ being the number of instances of that motif in the graph that contain the edge $(i, j)$. One then computes 13 graph Laplacians from these 13 adjacency matrices. These graph Laplacians are symmetric, like those of undirected graphs. Any filter can then be approximated by the following multivariate matrix polynomial, as suggested by Monti, Otness, and Bronstein in their MotifNet paper (arXiv:1802.01572): $f_{\Theta} (\Delta_1, \dots, \Delta_K) = \sum_{j=0}^P \sum_{k_1, \dots, k_j \in \{1, \ldots, K\}} \theta_{k_1, \ldots, k_j} \Delta_{k_1} \cdots \Delta_{k_j}$.
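To illustrate the idea (this is a simplification for a single motif, not the full 13-motif machinery of Benson et al.), here is a sketch of a motif adjacency matrix for the directed 3-cycle $i \to j \to k \to i$:

```python
import numpy as np

def three_cycle_motif_adjacency(A):
    """Motif adjacency for the directed 3-cycle i -> j -> k -> i.

    M[i, j] counts the 3-cycles the directed edge (i, j) takes part in:
    there must be an edge i -> j, together with a length-2 path
    j -> k -> i, which is counted by (A @ A)[j, i].
    """
    M = A * (A @ A).T
    return M + M.T   # symmetrize, so the motif Laplacian is symmetric
```

For example, on the 3-node cycle $0 \to 1 \to 2 \to 0$, each edge belongs to exactly one such motif, and the resulting matrix is symmetric by construction.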

Applications include image processing, citation networks, etc.

Suggested by some friends, I have been reading about graph convolutional neural networks. This is not a recent interest, as I have long been interested in graph-related problems (I developed the graphflow package) and topological data analysis (TDA; I developed the mogutda package). Graph theory is not a new field either, but it is interesting to see how it is being applied in deep learning, as a lot of real-world relational data can be expressed in terms of graphs. (Examples: word relations, organizational structures, genes, the internet, etc.) With the increasing attention to neo4j, we can see that graphs are becoming hot.

It would be helpful to review some basic graph theory:

• A graph is represented by $\mathcal{G} = (V, E)$, where $V$ and $E$ are the nodes (vertices) and edges respectively.
• A graph can be directed or undirected. In graph convolutional neural networks, the graphs are usually undirected.
• The adjacency matrix $A$ describes how nodes are connected: $A_{ij} = 1$ if there is an edge connecting from node $i$ to node $j$, and $0$ otherwise. $A$ is a symmetric matrix for an undirected graph.
• The incidence matrix $B$ is another way to describe how nodes are connected: $B_{ij} = 1$ if node $i$ is incident to edge $j$. This is useful for undirected graphs.
• The degree matrix $D$ is a diagonal matrix, with element $D_{ii}$ denoting the number of neighbors of node $i$ in an undirected graph.
• The function acting on the nodes is called the filter.
• The graph Laplacian, or Kirchhoff matrix, is defined by $L = D - A$, and the normalized graph Laplacian is $\tilde{L} = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$.
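These definitions are easy to check numerically; here is a quick sketch with numpy on a 4-node path graph:

```python
import numpy as np

# Undirected path graph on 4 nodes: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))                 # degree matrix
L = D - A                                  # graph Laplacian (Kirchhoff matrix)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized graph Laplacian
```

Both Laplacians come out symmetric, and the rows of $L$ sum to zero.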

The graph Laplacian is the most important matrix in graph convolutional neural networks. It is analogous to the Laplacian operator in Euclidean space, $\nabla^2$. The reader can easily verify this by constructing a graph of a 2D lattice, computing its graph Laplacian matrix, and finding that it is the same as the discretized Laplacian operator (up to a sign: $L$ corresponds to $-\nabla^2$).
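For instance (using a 1D lattice for brevity), the interior rows of the path-graph Laplacian reproduce the familiar finite-difference stencil of $-\mathrm{d}^2/\mathrm{d}x^2$:

```python
import numpy as np

# Path graph on 6 nodes, i.e. a 1D lattice
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
print(L[2])   # interior row: [ 0. -1.  2. -1.  0.  0.], the stencil of -d²/dx²
```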

We can also get some insight from the Euclidean analogue. In physics, the eigenfunctions of the Laplacian are harmonic: the basis of solutions is described in the spectral/Fourier space as: $\nabla^2 \psi = - \omega^2 \psi$,

and $\psi \propto e^{\pm i \omega x}$. In graph convolutional neural networks, as Bruna et al. suggested in 2013, the convolution is computed in the graph Fourier space, instead of directly dealing with the Laplacian matrix in all the layers of the network.

On the other hand, we know that for the convolution $(f \circ g) (x) = \int dy f(x-y) g(y)$,

its Fourier transform is given by $\tilde{( f \circ g)} (\omega) = \tilde{f} (\omega) \tilde{g} (\omega)$.
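This convolution theorem is easy to check numerically in the discrete (circular) setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
f, g = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution computed directly from the definition...
direct = np.array([sum(f[(x - y) % n] * g[y] for y in range(n))
                   for x in range(n)])
# ...and via the Fourier transform: the product of the two spectra
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
assert np.allclose(direct, via_fft)
```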

In Fourier space, the convolution of two functions is just their product. Similarly, in graph convolutional neural networks, convolutions can be computed in the graph Fourier space as the mere product of the transformed filters. More specifically, for the convolution of the filters $f$ and $g$, with $\Phi$ being the unitary eigenvector matrix of the graph Laplacian, $f \circ g = \Phi \left( (\Phi^T g) \odot (\Phi^T f) \right) = \Phi \, \textbf{diag}(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_n) \, \Phi^T f$, where $\hat{g} = \Phi^T g$ and $\odot$ denotes the element-wise product.
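A minimal sketch of this spectral graph convolution (dense, and for illustration only; practical implementations avoid the explicit eigendecomposition):

```python
import numpy as np

def graph_convolve(L, f, g):
    # Eigendecomposition of the symmetric graph Laplacian: L = Phi Lambda Phi^T
    lam, Phi = np.linalg.eigh(L)
    f_hat = Phi.T @ f              # graph Fourier transform of f
    g_hat = Phi.T @ g              # graph Fourier transform of g
    return Phi @ (g_hat * f_hat)   # element-wise product, then inverse transform
```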

However, such a general description is basis-dependent, and it is still computationally expensive. More work has been proposed to smooth the representation, which will be covered in upcoming blog posts.

As a side note, readers can verify for themselves that $\frac{1}{2} \sum_{ij} A_{ij} (f_i - f_j)(g_i - g_j) = f^T L g$.
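A quick numerical check of the identity $\frac{1}{2} \sum_{ij} A_{ij} (f_i - f_j)(g_i - g_j) = f^T L g$ for the unnormalized Laplacian $L = D - A$, on a random undirected graph:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
# Random undirected graph without self-loops
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
f, g = rng.standard_normal(n), rng.standard_normal(n)

lhs = 0.5 * sum(A[i, j] * (f[i] - f[j]) * (g[i] - g[j])
                for i in range(n) for j in range(n))
rhs = f @ L @ g
assert np.isclose(lhs, rhs)
```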