Discriminative Adversarial Networks

Generative adversarial networks (GANs) have made a big impact on the world of machine learning. They are particularly useful for generating sample data when there is insufficient data for certain purposes. They are also useful for training with both labeled and unlabeled data, i.e., semi-supervised learning (SSL).

The rise of GANs has also led to the re-emergence of adversarial learning for handling unbalanced or sensitive data. (For example, see arXiv:1707.00075.)

GANs are particularly useful for computer vision problems. However, they are not very good for natural language problems, because text is discrete and cannot be generated by continuous perturbation. In this context, a modification of the GAN was developed, called the discriminative adversarial network (DAN, see arXiv:1707.02198). Unlike a GAN, which has a discriminator that trains a generator to produce good data, a DAN has two discriminators: one, usually denoted the predictor P, that predicts labels for the unlabeled data, and another, usually denoted the judge J, that classifies whether a label is a human label or a machine-predicted one.


The loss function of a DAN is very similar to that of a GAN: the judge J is trained to distinguish human labels from machine-predicted labels on the labeled data, while the predictor P is trained so that its predictions on the unlabeled data become indistinguishable from human labels.
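Concretely, the two objectives can be written as a pair of cross-entropy losses played against each other. Below is a minimal PyTorch-style sketch of one way to set this up; it is my own illustration, not the authors' reference implementation, and it assumes that J scores an (input, label) pair with a single logit and that the helper names are hypothetical.

```python
# A minimal sketch of the two adversarial objectives in a DAN (assumptions:
# P maps inputs to label distributions, J maps (input, label) pairs to a
# single logit scoring "human-annotated"). Illustration only.
import torch
import torch.nn.functional as F

def judge_loss(J, P, x_labeled, y_human, x_unlabeled):
    """Judge: push human labels toward 1 and machine predictions toward 0."""
    human_logit = J(x_labeled, y_human)
    machine_logit = J(x_unlabeled, P(x_unlabeled).detach())  # do not update P here
    return (F.binary_cross_entropy_with_logits(human_logit, torch.ones_like(human_logit))
            + F.binary_cross_entropy_with_logits(machine_logit, torch.zeros_like(machine_logit)))

def predictor_loss(J, P, x_unlabeled):
    """Predictor: make its predicted labels indistinguishable from human ones."""
    machine_logit = J(x_unlabeled, P(x_unlabeled))
    return F.binary_cross_entropy_with_logits(machine_logit, torch.ones_like(machine_logit))
```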


However, GANs and DANs are not generative-discriminative pairs.


Strategies of Recommendation Systems

Enterprises such as Netflix build systems that produce recommendations. Good recommendations lead to a better user experience and higher return rates. Humans give recommendations based on experience, knowledge, worldviews, wisdom, etc.; automatic recommendation systems do it based on big data and machine learning.

Recommendation Strategies

Recommendation systems employ one or more of the following strategies:

  1. Collaborative Filtering (CF);
  2. Content-based Filtering (CBF);
  3. Demographic Filtering (DF); and
  4. Knowledge-Based Filtering (KBF).

1. Collaborative Filtering (CF)

CF recommends similar items to users of similar tastes. Whether it is user-based or item-based filtering, the same assumption holds. Similarities between users or items are calculated with Pearson correlations or cosine similarities.
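As a concrete illustration, here is a minimal sketch of item-based CF with cosine similarity on a toy dense rating matrix; the matrix R and the prediction rule are illustrative assumptions, and production systems work with large sparse matrices and more careful normalization.

```python
# Item-based collaborative filtering with cosine similarity (toy sketch).
import numpy as np

# rows = users, columns = items; 0 means "not rated"
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

# cosine similarity between item columns
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-9
item_sim = (R / norms).T @ (R / norms)

def predict(user, item):
    """Similarity-weighted average of the user's existing ratings."""
    weights = item_sim[item]
    return weights @ R[user] / (np.abs(weights).sum() + 1e-9)

print(predict(0, 2))  # predicted score of user 0 for the unrated item 2
```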

Matrix factorization (MF), or the similar latent semantic indexing (LSI), is actually a kind of collaborative filtering, although users and items are converted to encoded vectors, and the recommendation scores are given by the cosine similarity between the encoded vectors of the users and the items.
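A minimal sketch of this idea, using the same toy rating matrix and a plain truncated SVD as the factorization (real systems typically learn the factors with regularized alternating least squares or gradient descent):

```python
# Matrix-factorization scoring via truncated SVD (toy sketch).
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
user_vecs = U[:, :k] * s[:k]      # k-dimensional user embeddings
item_vecs = Vt[:k, :].T           # k-dimensional item embeddings

scores = user_vecs @ item_vecs.T  # low-rank reconstruction = preference scores
print(scores[0, 2])               # predicted preference of user 0 for item 2
```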

Such recommendation systems suffer from the cold-start problem: new users or new items cannot be accounted for when giving recommendations.

2. Content-Based Filtering (CBF)

CBF employs common machine learning algorithms to learn a user's preferences based on their consumption/purchase history and their profiles. Embedded vectors may be used too. However, it also suffers from the cold-start problem.

3. Demographic Filtering (DF)

The DF strategy makes use of users' profiles, such as age, sex, and other information, to make recommendations. The algorithms might be rule-based or use machine learning. However, nowadays this might give rise to issues regarding fairness, equal opportunity, privacy, or ethics, in the era of GDPR and CCPA.

4. Knowledge-Based Filtering (KBF)

KBF makes recommendations based on expert knowledge of the subject matter, known reasoning, or statistics. Recommendations may be made using a rule-based approach or a predefined probabilistic model (built, for example, from census data). Some systems might even employ a knowledge database. Big data may not be necessary for this kind of system, as the reasoning has been manually built in.

Hybrid Recommendation Systems

Hybrid recommendation systems employ more than one of the above strategies. To combine them, one might apply a voting system to the results to give an aggregated result, use a weighting scheme, or apply stacked generalization to combine the methods.
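A weighted combination is the simplest of these; the sketch below assumes each component recommender exposes a scorer(user, item) callable (a stacked-generalization variant would instead learn the combiner from held-out interactions).

```python
# Weighted hybrid of several recommenders (illustrative sketch).
def hybrid_score(user, item, scorers, weights):
    """Combine component scores with fixed weights."""
    return sum(w * scorer(user, item) for scorer, w in zip(scorers, weights))
```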


Mathematical Models of Fake News

Fake news is not something new, but it has caught our attention since the 2016 presidential election campaign. Some people label a piece of news as fake simply because its analysis does not align with their own ideology. Fake news is necessarily biased, although the truth is inevitably not neutral either. While a lot of technology tycoons want to handle fake news appropriately on their platforms, the lack of a formal definition makes it difficult.

As introduced in a previous entry, in order to detect fake news using machine learning, we have to provide a corpus. This paves the way for training supervised learning models.

Recently, some people have looked at this problem differently, from a sociological point of view. In order to evaluate the impact of fake news, instead of a machine learning model, we need a dynamical model of how fake news spreads. In their preprint on arXiv, Brody and Meier studied this using tools from communication theory. They proposed that the flow of information \eta_t is:

\eta_t = \sigma X t + B_t + F_t,

where t is time, X is either 0 or 1, indicating that the news is false or true respectively, B_t is noise (a Brownian motion), and F_t is the fake-news term. The first term indicates that we gain more information over time, so the flow of true information increases with time. This model assumes a linear evolution. The authors define F_t to be fake news if its expectation \mathbb{E}(F_t) \neq 0. Depending on the situation, X and F_t are either independent or correlated.
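To get a feel for the model, here is a toy simulation of one path of \eta_t; this is my own illustration, and the drift of the fake-news term, the parameter values, and the time grid are arbitrary assumptions, not the values used in the preprint.

```python
# Toy simulation of the information flow eta_t = sigma * X * t + B_t + F_t.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

sigma, X = 1.0, 1                 # the news item is true
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
F = -0.5 * t                      # fake news: a term with E[F_t] != 0

eta = sigma * X * t + B + F       # compare against sigma * X * t + B (no fake news)
```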

To study the impact, the authors categorized voters into three categories:

  • Category I: unaware of the existence of fake news, but acting rationally;
  • Category II: aware of the existence of fake news, but unaware of the time point at which the fake news emerges;
  • Category III: fully aware of the existence of fake news, and able to eliminate it completely when making a judgment.

There are further mathematical models for the election dynamics, but readers can refer to the preprint for the details. With a piece of fake news in favor of candidate B, this model quantifies the influence of fake news on one's judgment, as plotted in the preprint.

This dynamical model confirms that by eliminating the fake news, voters make better judgments. However, even for voters who are aware of the fake news but do not know when it emerged, the impact is still disastrous.

To me, this study actually confirms that fact-checking is useless in terms of curbing the turmoil introduced by fake news. The flow of information nowadays has so little viscosity that ways to eliminate fake news have to be devised. However, we know censorship is not the way to go, as it is a highway to a totalitarian government. The future of democracy is dim.


Use of Graph Networks in Machine Learning

Deep learning has achieved big successes in the past few years, but its interpretive power is limited. Deep models work largely because of the abundance of data. On the other hand, traditional machine learning algorithms have much better interpretive power, but manual feature engineering costs a lot, a legacy of the earlier era when data were scarce. In light of this, a group of scientists initiated the work on graph networks, aiming at devising new artificial intelligence algorithms that exploit the advantages of both worlds, while holding to the principle of combinatorial generalization: building new methods from known building blocks. Graphs are good for interpretation because they are good at relational representation.

The use of graph networks goes beyond the graph convolutional neural networks (GCNs) covered in the previous two blog entries (part I and part II). To achieve relational inductive biases, one needs an entity (an element with attributes), a relation (a property between entities), and a rule (a function that maps entities and relations to other entities and relations). This can be realized using a graph, a mathematical structure that contains nodes and edges (which connect nodes). To generalize the use of graph networks across various machine learning and deep learning methods, the authors reviewed the graph network (GN) block, which is basically a function, or a mapping, from one graph to another graph (presented as an algorithm in the paper).
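The following is a stripped-down sketch of such a block, in the spirit of that algorithm: an edge update, an aggregation, a node update, and a global update, with the three update functions supplied by the caller. The function signature and the use of plain sums as aggregators are my own simplifying assumptions.

```python
# Simplified graph-network (GN) block: maps (V, E, u) to updated (V, E, u).
# V: (n_nodes, d_v) node features; E: (n_edges, d_e) edge features;
# senders/receivers: integer arrays of node indices, one entry per edge;
# u: global attribute vector; phi_e, phi_v, phi_u: update functions.
import numpy as np

def gn_block(V, E, senders, receivers, u, phi_e, phi_v, phi_u):
    senders, receivers = np.asarray(senders), np.asarray(receivers)
    # 1. update every edge from its feature, its endpoint features, and u
    E_new = np.stack([phi_e(E[k], V[senders[k]], V[receivers[k]], u)
                      for k in range(len(E))])
    # 2. aggregate incoming edges per node (sum), then update each node
    V_new = np.stack([phi_v(E_new[receivers == i].sum(axis=0), V[i], u)
                      for i in range(len(V))])
    # 3. update the global attribute from aggregated edges and nodes
    u_new = phi_u(E_new.sum(axis=0), V_new.sum(axis=0), u)
    return V_new, E_new, u_new
```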


Graph networks are not without precedent; the authors listed previous works that can be seen as graph networks, for example:

  • Message-passing neural network (MPNN) (2017);
  • Non-local neural networks (NLNN) (2018).

The use of graph networks, I believe, is the next trend. There have been works on graph-powered machine learning (see the Google AI blog and the GraphAware Slideshare). I recently started an open-source project, a Python package called graphflow, to explore various algorithms using graphs, including PageRank, HITS, resistance, and non-linear resistance.


Graph Convolutional Neural Network (Part II)

In the previous post, convolution was defined in the graph Fourier space of the graph Laplacian, as outlined in the paper of Bruna et al. (arXiv:1312.6203). However, the eigenmodes of the graph Laplacian are not ideal, because they make the bases graph-dependent. A lot of work has been done to solve this problem, with the help of various special functions used to express the filter functions. Examples include Chebyshev polynomials and the Cayley transform.

ChebNet

ChebNet approximates the filter using Chebyshev polynomials (see Defferrard et al., and Kipf and Welling, arXiv:1609.02907), based on a result proved by Hammond et al. (arXiv:0912.3848). For a convolutional layer g_{\theta} \circ x, instead of computing it directly as \Phi \, g_{\theta}(\Lambda) \, \Phi^T x, the filter is approximated in terms of Chebyshev polynomials:

g_{\theta} (\Lambda) \approx \sum_{k=0}^K \theta_k T_k(\Lambda),

where the T_k(\cdot) are the Chebyshev polynomials (evaluated, in practice, on the Laplacian with its spectrum rescaled to [-1, 1]). Hammond et al. proved that a suitably truncated Chebyshev expansion approximates the filter function very well. The convolution is then:

g_{\theta} \circ x \approx \sum_{k=0}^K \theta_k T_k (L) x.

Here, the \theta_k are the parameters to be trained. Since the polynomial acts directly on the graph Laplacian, no eigendecomposition is needed; this fixes the basis of the representation and speeds up computation. The disadvantage is that the eigenvalues tend to cluster around a few values with large gaps between them.
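A short sketch of how such a filter can be applied with the standard Chebyshev recurrence T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x), assuming the Laplacian has already been rescaled so that its spectrum lies in [-1, 1] (variable names are illustrative):

```python
# Apply a Chebyshev-polynomial graph filter: sum_k theta_k T_k(L_scaled) x.
import numpy as np

def chebyshev_filter(L_scaled, x, theta):
    Tx_prev, Tx = x, L_scaled @ x                   # T_0(L) x and T_1(L) x
    out = theta[0] * Tx_prev
    if len(theta) > 1:
        out = out + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_next = 2.0 * (L_scaled @ Tx) - Tx_prev   # Chebyshev recurrence
        out = out + theta[k] * Tx_next
        Tx_prev, Tx = Tx, Tx_next
    return out
```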

CayleyNet

The problem with ChebNet led to the work of Levie et al. (arXiv:1705.07664), who proposed another approximation using the Cayley transform. They made use of the Cayley function:

C(\lambda) = \frac{\lambda - i}{\lambda + i},

which maps \mathbb{R}^+ bijectively onto half of the complex unit circle. Instead of Chebyshev polynomials, the filter is approximated as:

g(\lambda) = c_0 + \sum_{j=1}^r \left[ c_j C^j (h \lambda) + c_j^{\ast} C^{j \ast} (h \lambda) \right] ,

where c_0 is real, the other c_j's are generally complex, h is a zoom parameter, and the \lambda's are the eigenvalues of the graph Laplacian. Tuning h lets one find the best zoom, the one that spreads out the top eigenvalues. The c's are learned by training. This solves the problem of the unfavorable eigenvalue clustering in ChebNet.
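For illustration only, the sketch below evaluates this filter directly on a vector of eigenvalues; the actual CayleyNet avoids an explicit eigendecomposition, and the coefficients here would normally be learned rather than supplied by hand.

```python
# Evaluate the Cayley filter g(lambda) = c0 + sum_j [c_j C^j(h*lambda) + conjugate].
import numpy as np

def cayley_filter(lams, c0, c, h):
    Cl = (h * lams - 1j) / (h * lams + 1j)       # Cayley transform of h*lambda
    g = c0 * np.ones_like(lams, dtype=complex)
    for j, cj in enumerate(c, start=1):
        g += cj * Cl**j + np.conj(cj) * np.conj(Cl)**j
    return g.real                                # each bracket is real by construction

lams = np.linspace(0.0, 2.0, 5)                  # e.g. eigenvalues of a normalized Laplacian
print(cayley_filter(lams, c0=0.3, c=[0.2 + 0.1j, -0.05j], h=1.5))
```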

MotifNet

All the previous works deal with undirected graphs. How do we deal with a directed graph, whose graph Laplacian is asymmetric? Benson, Gleich, and Leskovec published an important work in Science in 2016 (arXiv:1612.08447) that addresses this problem. Their approach reduces a directed graph to higher-order structures called network motifs; there are 13 such motifs. For each network motif, one can define a motif adjacency matrix \mathcal{M}_k, with element \mathcal{M}_{k, ij} being the number of motifs of that type in the graph that contain the edge (i, j).


One then computes 13 graph Laplacians from these 13 motif adjacency matrices. These graph Laplacians are symmetric, like those of undirected graphs. Any filter can then be approximated by the following multivariate matrix polynomial, as suggested by Monti, Otness, and Bronstein in their MotifNet paper (arXiv:1802.01572):

f_{\Theta} (\Delta_1, \dots, \Delta_K) = \sum_{j=0}^P \sum_{k_1, \dots, k_j \in \{1, \ldots, K\}} \theta_{k_1, \ldots, k_j} \Delta_{k_1} \cdots \Delta_{k_j} .

Applications include image processing, citation networks, etc.


Graph Convolutional Neural Network (Part I)

At the suggestion of some friends, I have been reading about graph convolutional neural networks. This is not a recent interest, as I have long been interested in graph-related problems (for which I developed the graphflow package) and topological data analysis (TDA, for which I developed the mogutda package). Graph theory is not a new field either, but it is interesting to see how it is being applied in deep learning, as a lot of real-world relational data can be expressed in terms of graphs (examples: word relations, organizational structures, genes, the internet, etc.). With the increasing attention paid to neo4j, we know graphs are becoming hot.

It would be helpful to review some basic graph theory:

  • A graph is represented by \mathcal{G} = (V, E), where V and E are the nodes (vertices) and edges respectively.
  • A graph can be directed or undirected. In graph convolutional neural networks, they are usually undirected.
  • The adjacency matrix A describes how nodes are connected: A_{ij} = 1 if there is an edge from node i to node j, and 0 otherwise. A is a symmetric matrix for an undirected graph.
  • The incidence matrix B is another way to describe how nodes are connected: B_{ij} = 1 if node i is incident to edge j. This is useful for undirected graphs.
  • The degree matrix D is a diagonal matrix, with element D_{ii} being the number of neighbors (the degree) of node i in an undirected graph.
  • A function acting on the nodes is called a filter.
  • The graph Laplacian, or Kirchhoff matrix, is defined by L = D - A, and the normalized graph Laplacian is \tilde{L} = I - D^{-\frac{1}{2}} A D^{-\frac{1}{2}}.

The graph Laplacian is the most important matrix in graph convolutional neural networks. It is analogous to the Laplacian operator \nabla^2 in Euclidean space. The reader can easily verify this by constructing the graph of a 2D lattice, computing its graph Laplacian matrix, and finding that it matches (up to a sign) the discretized Laplacian operator.
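As a quick numerical check of the definitions above and of the lattice analogy, the snippet below builds the Laplacians of a small path graph (the 1D analogue; a 2D lattice works the same way but is messier to print):

```python
# Graph Laplacian and normalized Laplacian of the path graph 0-1-2-3-4-5.
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0          # edges of the path graph

D = np.diag(A.sum(axis=1))
L = D - A                                    # graph Laplacian (Kirchhoff matrix)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized graph Laplacian

print(L)   # interior rows read (..., -1, 2, -1, ...): the negative of the
           # usual second-difference stencil for the 1D Laplacian operator
```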

We can also get some insight from the Euclidean analogue. In physics, the eigenfunctions of the Laplacian are harmonic: the basis of solutions can be described in the spectral/Fourier space as

\nabla^2 \psi = - \omega^2 \psi,

so that \psi \propto e^{\pm i \omega x}. In graph convolutional neural networks, as Bruna et al. suggested in 2013, the convolution is computed in the graph Fourier space, instead of dealing directly with the Laplacian matrix in every layer of the network.

On the other hand, we know that for the convolution

(f \circ g) (x) = \int dy f(x-y) g(y),

its Fourier transform is given by

\tilde{( f \circ g)} (\omega) = \tilde{f} (\omega) \tilde{g} (\omega).

In Fourier space, the convolution of two functions is just their product. Similarly, in graph convolutional neural networks, convolutions can be computed in the graph Fourier space as the mere pointwise product of the transformed signals. More specifically, the convolution of a signal f with a filter g, with \Phi being the unitary matrix of eigenvectors of the graph Laplacian, is

f \circ g = \Phi \left( (\Phi^T g) \circ (\Phi^T f) \right) = \Phi \, \textbf{diag}(\hat{g}_1, \hat{g}_2, \ldots, \hat{g}_n) \, \Phi^T f, \quad \text{where } \hat{g} = \Phi^T g.
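A small self-contained numerical illustration of this formula on a random undirected graph (the graph, signal, and filter here are arbitrary; in a GCN the spectral filter coefficients would be learned):

```python
# Spectral graph convolution: transform, multiply entrywise, transform back.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = np.triu((rng.random((n, n)) < 0.4).astype(float), k=1)
A = A + A.T                                  # random symmetric adjacency matrix
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

eigvals, Phi = np.linalg.eigh(L)             # Phi: orthogonal eigenvector matrix

f = rng.normal(size=n)                       # a signal on the nodes
g = rng.normal(size=n)                       # a filter, also defined on the nodes
f_hat, g_hat = Phi.T @ f, Phi.T @ g          # graph Fourier transforms
conv = Phi @ (g_hat * f_hat)                 # equals Phi @ np.diag(g_hat) @ Phi.T @ f
```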

However, such a general description is basis-dependent, and it is still computationally expensive. More work has been done to smooth the representation, which will be covered in upcoming blog entries.

As a side note, readers can verify for themselves that

\frac{1}{2} \sum_{ij} A_{ij} (f_i - f_j)(g_i - g_j) = f^T L g .
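This identity can be checked numerically by reusing the A, L, f, and g defined in the snippet above:

```python
# Numerical check of the quadratic-form identity (reuses A, L, f, g, n above).
lhs = 0.5 * sum(A[i, j] * (f[i] - f[j]) * (g[i] - g[j])
                for i in range(n) for j in range(n))
print(np.isclose(lhs, f @ L @ g))   # True
```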


Embedded Language Models

Sebastian Ruder recently wrote an article on The Gradient asserting that the oracle of natural language processing is emerging. While I am not sure whether such a confident statement is overstated, I do look forward to the moment when we can download pre-trained embedded language models and transfer them to our own use cases, just as we use pre-trained word-embedding models such as Word2Vec and FastText.

I do not think one can really draw a parallel between computer vision and natural language processing. Computer vision is challenging, but natural language processing is even more difficult, because linguistic tasks are not limited to object or meaning recognition; they also involve human psychology, culture, and linguistic diversity. The objectives are far from identical.

However, the transferable use of embedded language models is definitely a big step forward. Ruder quoted three articles, which I summarize below in a few words.

  • Embeddings from Language Models (ELMo, arXiv:1802.05365): based on successful bidirectional LSTM language models, the authors developed deep contextualized embeddings by collapsing all layers of the neural network architecture.
  • Universal Language Model Fine-Tuning for Text Classification (ULMFiT, arXiv:1801.06146): the authors proposed an architecture that learns representations for specific tasks, with three steps in training: a) LM pre-training: learning a language model on a large unlabeled corpus; b) LM fine-tuning: fine-tuning the language model on the corpus of the target task; and c) classifier fine-tuning: transfer training for the specific classification task.
  • OpenAI Transformer (article still in progress): the authors proposed a generative language model with three steps similar to ULMFiT: a) unsupervised pre-training: training a language model that maximizes the likelihood of a sequence of tokens within a context window; b) supervised fine-tuning: supervised classification training that maximizes the likelihood of the labels; and c) task-specific input transformations: adapting structured inputs so the classifier can be trained on a specific task.

These three articles are intricately related to each other. Without abundant data and good hardware, it is almost impossible to produce such language models. As Ruder suggested, we will probably use models pre-trained up to the second step of the ULMFiT and OpenAI Transformer pipelines, and then train our own task-specific models on top. We have been doing this with word-embedding models, and this approach has been common in computer vision too.


moguTDA: Python package for Simplicial Complex

It has been a while since I wrote about topological data analysis (TDA). For pedagogical reasons, a lot of the code was demonstrated in the GitHub repository PyTDA. However, it is not modularized as a package, and the code runs in Python 2.7 only.

After a few inquiries, I decided to release the code as a PyPI package named mogutda, under the MIT license. It is open-source, and the code can be found in the GitHub repository MoguTDA. It runs in Python 2.7, 3.5, and 3.6.

For more information and a simple tutorial, please refer to the documentation or the GitHub page.


Election Forecasting using Quantum Computers

The 2016 US presidential election ended with the surprise that Mr. Donald Trump won, despite overwhelming predictions of a Clinton victory. There have since been many studies challenging the theories of traditional political forecasting.

Some took a statistical approach. Many studies concluded that election forecasting models failed to take into account the correlations between the predictions for individual states. However, classical computation limits this type of model, which connects individual states (fully-connected models). Hence, a group from QxBranch and Standard Cognition resorted to adiabatic quantum computation. (See arXiv:1802.00069.)

D-Wave computers are adiabatic quantum computers that perform quantum annealing. A D-Wave 2X has 1152 qubits and can naturally describe a Boltzmann machine (BM) model, which is equivalent to the Ising model in statistical physics. The energy function is:

E[\mathbf{s}] = -\sum_{\mathbf{s}_i \in \mathbf{S}} b_i s_i - \sum_{\mathbf{s}_i, \mathbf{s}_j \in \mathbf{S}} W_{ij} s_i s_j ,

where \mathbf{s} denotes the values of all qubits (0, 1, or their superpositions). The field strengths b_i and coupling constants W_{ij} can be tuned. Classical models can handle the first term, which is linear, but the correlations described by the second term can be computationally costly for classical computers. Hence, the authors used a D-Wave quantum computer to train election models every two weeks from June 30, 2016 to November 11, 2016, retrieving the correlations between individual states. They correctly simulated that Mr. Trump would win the election.
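For reference, evaluating this energy classically for a given binary configuration is straightforward; what the annealer does is search over (and sample) such configurations. The sketch below is purely illustrative, with an arbitrary unit count and random parameters rather than anything from the paper.

```python
# Boltzmann-machine / Ising energy E[s] = -sum_i b_i s_i - sum_{ij} W_ij s_i s_j.
import numpy as np

def ising_energy(s, b, W):
    return -(b @ s) - s @ W @ s

rng = np.random.default_rng(0)
n_units = 51                                            # illustrative unit count
b = rng.normal(size=n_units)                            # field strengths
W = np.triu(rng.normal(size=(n_units, n_units)), k=1)   # upper-triangular couplings
s = rng.integers(0, 2, size=n_units)                    # a candidate 0/1 configuration
print(ising_energy(s, b, W))
```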

This Ising model of the election was devised after the election, so it is open to the suspicion of having been fitted to the known results. However, this work demonstrated the power of a quantum computer: it can solve political modeling problems that are too complicated for classical computers.

