These researchers published an open-source software package, a Python library called OpenFermion. It facilitates the simulation of quantum algorithms on fermionic systems.

For completeness: a few years ago, another group of scientists published a Python package, QuTiP, which helps simulate open quantum systems.

- Ryan Babbush, “Reformulating Chemistry for More Efficient Quantum Computation,” *Google AI Blog* (March 22, 2018). [Google]
- P. J. J. O’Malley *et al.*, “Scalable Quantum Simulation of Molecular Energies,” *Phys. Rev. X* **6**, 031007 (2016). [PRX]
- Ryan Babbush, Nathan Wiebe, Jarrod McClean, James McClain, Hartmut Neven, and Garnet Kin-Lic Chan, “Low-Depth Quantum Simulation of Materials,” *Phys. Rev. X* **8**, 011044 (2018). [PRX]
- Ian D. Kivlichan, Jarrod McClean, Nathan Wiebe, Craig Gidney, Alán Aspuru-Guzik, Garnet Kin-Lic Chan, and Ryan Babbush, “Quantum Simulation of Electronic Structure with Linear Depth and Connectivity,” *Phys. Rev. Lett.* **120**, 110501 (2018). [PRL]
- Github: quantumlib/OpenFermion. [Github]
- Jarrod R. McClean, Ian D. Kivlichan, Kevin J. Sung, Damian S. Steiger, Yudong Cao, Chengyu Dai, E. Schuyler Fried, Craig Gidney, Brendan Gimby, Thomas Häner, Tarini Hardikar, Vojtěch Havlíček, Cupjin Huang, Zhang Jiang, Matthew Neeley, Thomas O’Brien, Isil Ozfidan, Maxwell D. Radin, Jhonathan Romero, Nicholas Rubin, Nicolas P. D. Sawaya, Kanav Setia, Sukin Sim, Mark Steudtner, Wei Sun, Fang Zhang, Ryan Babbush, “OpenFermion: The Electronic Structure Package for Quantum Computers,” arXiv:1710.07629 (2017). [arXiv]
- QuTiP: Quantum Toolbox in Python. [QuTiP]
- Github: qutip/qutip. [Github]
- J. R. Johansson, P. D. Nation, Franco Nori, “QuTiP: An open-source Python framework for the dynamics of open quantum systems,” *Computer Physics Communications* **183**, 1760-1772 (2012). [Elsevier] [arXiv]

Apparently, with state-of-the-art hardware, Google has the advantage of being able to run such an experiment on the CIFAR-10 dataset using 450 GPUs for 3-4 days. But this makes the work inaccessible to small companies or to anyone with only a personal computer.

Then came an improvement to NAS: Efficient Neural Architecture Search via Parameter Sharing (ENAS), a much more efficient method that searches for a neural network within a subgraph of a larger computational graph. It greatly reduces the GPU requirements.

While I do not think it is a threat to machine learning engineers, it is a great algorithm to note. It looks to me like a brute-force search, and it still needs scientists and engineers to draw the insights. I believe further development of the theory behind neural networks is much needed.

- “Using Machine Learning to Explore Neural Network Architecture,” *Google Research Blog*, 2017. [Google]
- Barret Zoph, Quoc V. Le, “Neural Architecture Search with Reinforcement Learning,” arXiv:1611.01578 (2016). [arXiv]
- “AutoML for large scale image classification and object detection,” *Google Research Blog*, 2017. [Google]
- Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin, “Large-Scale Evolution of Image Classifiers,” arXiv:1703.01041 (2017). [arXiv]
- Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean, “Efficient Neural Architecture Search via Parameter Sharing,” arXiv:1802.03268 (2018). [arXiv]
- tobe, “ENAS: Designing Neural Network Models More Efficiently (AutoML),” TensorFlow column, Zhihu (2018). [Zhihu] (in Chinese)
- “Neural Architecture Search.” [Wikipedia]
- “How realistic is AutoML (Google’s attempts to build neural networks without human intervention)? Is this a real threat for machine learning engineers?” [Quora]

Automatic text summarization is the task of producing a concise and fluent summary while preserving key information content and overall meaning.

There are basically two approaches to this task:

- *extractive summarization*: identifying important sections of the text and extracting them; and
- *abstractive summarization*: producing summary text in a new way.

Most algorithmic methods developed are of the extractive type, while most human writers summarize using the abstractive approach. There are many methods in the extractive approach, such as identifying given keywords, identifying sentences similar to the title, or taking the text at the beginning of the document.

How do we instruct machines to perform extractive summarization? The authors mentioned two representations: topic and indicator. In topic representations, word frequencies, tf-idf, latent semantic indexing (LSI), or topic models (such as latent Dirichlet allocation, LDA) are used. However, simply extracting sentences with these algorithms may not generate a readable summary. Employing knowledge bases or considering context (from web searches, e-mail conversation threads, scientific articles, author styles, etc.) can help.
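
As a toy illustration of topic-representation scoring (my own sketch, not code from the survey), we can score each sentence by the average tf-idf weight of its words, treating each sentence as its own “document”, and keep the top-scoring ones:

```python
import math
from collections import Counter

def extractive_summary(sentences, k=1):
    """Rank sentences by the average tf-idf weight of their words.

    Each sentence is treated as its own 'document' for the idf statistics;
    a real system would add stopword removal, stemming, and better tokens.
    """
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency: in how many sentences each word appears.
    df = Counter(w for doc in docs for w in set(doc))
    idf = {w: math.log(n / df[w]) for w in df}

    def score(doc):
        tf = Counter(doc)
        return sum(tf[w] / len(doc) * idf[w] for w in tf) / len(set(doc))

    return sorted(sentences, key=lambda s: score(s.lower().split()),
                  reverse=True)[:k]
```

Words shared by every sentence get zero idf and contribute nothing, so sentences with distinctive vocabulary rise to the top.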

In indicator representation, the authors mentioned graph methods, inspired by PageRank. (See this.) “Sentences form vertices of the graph and edges between the sentences indicate how similar the two sentences are.” The key sentences are then identified with ranking algorithms. Of course, machine learning methods can be used too.
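
A minimal sketch of the graph idea (again my own illustration): sentences are vertices, raw word overlap supplies the edge weights, and a PageRank-style power iteration ranks them. The real TextRank normalizes the overlap by sentence lengths.

```python
def textrank(sentences, n_iter=50, d=0.85):
    """Rank sentences by a PageRank-style walk on a similarity graph.

    Vertices are sentences; edge weights are raw word overlaps (the real
    TextRank normalizes the overlap by the sentence lengths).
    """
    words = [set(s.lower().split()) for s in sentences]
    n = len(words)
    # Similarity matrix: number of shared words between sentence pairs.
    sim = [[len(words[i] & words[j]) if i != j else 0 for j in range(n)]
           for i in range(n)]
    out_weight = [sum(row) for row in sim]
    rank = [1.0 / n] * n
    for _ in range(n_iter):
        rank = [(1 - d) / n + d * sum(sim[j][i] / out_weight[j] * rank[j]
                                      for j in range(n)
                                      if sim[j][i] and out_weight[j])
                for i in range(n)]
    return sorted(zip(sentences, rank), key=lambda p: -p[1])
```

Sentences that share vocabulary with many others accumulate rank; isolated sentences keep only the damping term (1 − d)/n.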

Evaluating the performance of text summarization is difficult. Human evaluation is unavoidable, but with manually written reference summaries, some statistics can be calculated, such as ROUGE.
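
For instance, ROUGE-1 recall is simply the fraction of the reference summary's unigrams (with multiplicity) that also appear in the candidate summary. A minimal implementation (my own sketch):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of the reference's unigrams (with
    multiplicity) that also appear in the candidate summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())
```

For example, `rouge1_recall("the cat sat", "the cat sat on the mat")` gives 3/6 = 0.5.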

- Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, Krys Kochut, “Text Summarization Techniques: A Brief Survey,” arXiv:1707.02268 (2017). [arXiv]

First of all, three years ago most people were still writing Python 2.7, but now there is a trend of switching to Python 3. I admit that I still have not started the switch, but before long I will have no choice but to make it.

What are some of the essential packages?

Numerical Packages

- numpy: numerical Python, containing the most basic numerical routines, such as matrix manipulation, linear algebra, random sampling, numerical integration, etc. There is a built-in wrapper for Fortran as well. In fact, numpy is so important that some Linux distributions include it with Python.
- scipy: scientific Python, containing some functions useful for scientific computing, such as sparse matrices, numerical differential equations, advanced linear algebra, special functions etc.
- networkx: package that handles various types of networks
- PuLP: linear programming
- cvxopt: convex optimization
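
To give a flavor (a toy example of my own), here is numpy solving a small linear system and drawing reproducible random samples:

```python
import numpy as np

# Solve the 2x2 system: 3x + y = 9, x + 2y = 8.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)        # -> array([2., 3.])

# Reproducible random sampling from the standard normal distribution.
rng = np.random.default_rng(42)
samples = rng.normal(size=1000)
```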

Data Visualization

- matplotlib: basic plotting.
- ggplot: the counterpart of R’s ggplot2 in Python, for producing publication-quality plots.

Data Manipulation

- pandas: data manipulation, working with data frames in Python, and save/load of various formats such as CSV and Excel
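
A tiny illustration of the typical data-frame workflow (toy data of my own):

```python
from io import StringIO

import pandas as pd

# A small data frame: word counts of some made-up speeches.
df = pd.DataFrame({"year": [2005, 2009, 2009],
                   "words": [2071, 2395, 110]})

# Group-by aggregation, one of pandas' bread-and-butter operations.
by_year = df.groupby("year")["words"].sum()

# Round-trip through CSV, one of the supported save/load formats.
buf = StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
df2 = pd.read_csv(buf)
```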

Machine Learning

- scikit-learn: machine-learning library in Python, containing classes and functions for supervised and unsupervised learning
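
A minimal supervised-learning example with toy data of my own; the uniform fit/predict interface is the library's main convenience:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Four 1-D points with binary labels: the class switches between 1 and 2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# The fit/predict interface is uniform across scikit-learn estimators.
clf = LogisticRegression().fit(X, y)
pred = clf.predict(np.array([[2.5]]))
```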

Probabilistic Programming

Deep Learning Frameworks

- TensorFlow: thanks partly to Google’s marketing effort, TensorFlow is now the industry standard for building deep learning networks, with a rich collection of mathematical functions, especially for neural network cells, and with GPU capability
- Keras: high-level layers and routines for deep learning neural networks, with TensorFlow, Theano, or CNTK as the backend
- PyTorch: a rival to TensorFlow

Natural Language Processing

- nltk: natural language processing toolkit for Python, containing bag-of-words model, tokenizer, stemmers, chunker, lemmatizers, part-of-speech taggers etc.
- gensim: a useful natural language processing package useful for topic modeling, word-embedding, latent semantic indexing etc., running in a fast fashion
- shorttext: a text mining package good for handling short sentences, providing high-level routines for training neural network classifiers, or generating features represented by topic models or autoencoders.
- spacy: an industry-standard library of common natural language processing tools

GUI

I can probably list more, but I think I covered most of them. If you do not find something useful, it is probably time for you to write a brand new package.

Exploring a DTM therefore becomes an important task with a good text-mining tool. How do we perform exploratory data analysis on a DTM using R and Python? We will demonstrate it using the dataset of the U.S. Presidents’ inaugural addresses, preprocessed, which can be downloaded here.

In R, we can use the package textmineR, which was introduced in a previous post, together with other packages such as dplyr (for tidy data analysis) and SnowballC (for stemming). Load all of them at the beginning:

library(dplyr)
library(textmineR)
library(SnowballC)

Load the datasets:

usprez.df <- read.csv('inaugural.csv', stringsAsFactors = FALSE)

Then we create the DTM, removing all digits and punctuation, converting all letters to lowercase, and stemming all words with the Porter stemmer:

dtm <- CreateDtm(usprez.df$speech,
                 doc_names = usprez.df$yrprez,
                 ngram_window = c(1, 1),
                 lower = TRUE,
                 remove_punctuation = TRUE,
                 remove_numbers = TRUE,
                 stem_lemma_function = wordStem)

Then we define a set of helper functions:

get.doc.tokens <- function(dtm, docid)
  dtm[docid, ] %>% as.data.frame() %>% rename(count=".") %>%
  mutate(token=row.names(.)) %>% arrange(-count)

get.token.occurrences <- function(dtm, token)
  dtm[, token] %>% as.data.frame() %>% rename(count=".") %>%
  mutate(token=row.names(.)) %>% arrange(-count)

get.total.freq <- function(dtm, token) dtm[, token] %>% sum

get.doc.freq <- function(dtm, token)
  dtm[, token] %>% as.data.frame() %>% rename(count=".") %>%
  filter(count>0) %>% pull(count) %>% length

Then we can happily extract information. For example, to get the most common words in Obama’s 2009 speech, enter:

dtm %>% get.doc.tokens('2009-Obama') %>% head(10)

Or to find which speeches contain the word “change” (the word needs to be stemmed before the lookup):

dtm %>% get.token.occurrences(wordStem('change')) %>% head(10)

You can also get the document frequency of a word, i.e., the number of speeches that contain it:

dtm %>% get.doc.freq(wordStem('change')) # gives 28

In Python, similar things can be done using the package shorttext, described in a previous post. It builds on other packages such as pandas and stemming. Load all the packages first:

import shorttext
import numpy as np
import pandas as pd
from stemming.porter import stem
import re

And define the preprocessing pipelines:

pipeline = [lambda s: re.sub(r'[^\w\s]', '', s),
            lambda s: re.sub(r'[\d]', '', s),
            lambda s: s.lower(),
            lambda s: ' '.join(map(stem, shorttext.utils.tokenize(s)))]
txtpreprocessor = shorttext.utils.text_preprocessor(pipeline)

The function <code>txtpreprocessor</code> above performs the same preprocessing steps we carried out in R.

Load the dataset:

usprezdf = pd.read_csv('inaugural.csv')

The corpus needs to be preprocessed before putting into the DTM:

docids = list(usprezdf['yrprez'])   # defining document IDs
corpus = [txtpreprocessor(speech).split(' ') for speech in usprezdf['speech']]

Then create the DTM:

dtm = shorttext.utils.DocumentTermMatrix(corpus, docids=docids, tfidf=False)

Then we do the same things as above. To get the most common words in Obama’s 2009 speech, enter:

dtm.get_doc_tokens('2009-Obama')

Or we look up which speeches have the word “change”:

dtm.get_token_occurences(stem('change'))

Or to get the document frequency of the word:

dtm.get_doc_frequency(stem('change'))

The Python and R codes give different document frequencies, probably because the two stemmers work slightly differently.

- CRAN: textmineR [CRAN]; Github: TommyJones/textmineR. [Github]
- “textmineR: a new text mining package for R,” *Everything in Data Analytics*, WordPress (2016). [WordPress]
- “A Grammar for Data Manipulation: dplyr.” [Tidyverse]
- PyPI: shorttext. [PyPI]; Github: stephenhky/shorttext. [Github]; ReadTheDocs: shorttext. [RTFD]
- “Python Package for Short Text Mining,” *Everything in Data Analytics*, WordPress (2016). [WordPress]

GANs can be used for the word translation problem too. In a recent arXiv preprint (refer to arXiv:1710.04087), a Wasserstein GAN has been used to train a translation model between the word embeddings of two languages without any parallel data. The translation mapping plays the role of the generator, and the discrepancy between the mapped source embeddings and the target embeddings is measured with the Wasserstein distance; word retrieval then uses cross-domain similarity local scaling (CSLS). Their experiments were performed on English-Russian and English-Chinese mappings.
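
Methods of this kind typically include a refinement step solving the orthogonal Procrustes problem: given matched source embeddings X and target embeddings Y, the orthogonal map W minimizing ||XW − Y|| comes from an SVD of XᵀY. A toy numpy sketch with synthetic embeddings (my own illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                      # "source language" embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # hidden orthogonal rotation
Y = X @ R_true                                     # "target language" embeddings

# Orthogonal Procrustes: with X^T Y = U S V^T, the best map is W = U V^T.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
```

Here X @ W recovers Y exactly because the toy data are noise-free; with real embeddings the fit is only approximate.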

It seems to work. Given that GANs sometimes fail for unknown reasons, it is exciting that this one works.

- “Generative Adversarial Networks,” *Everything About Data Analytics*, WordPress (2017). [WordPress]
- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, “Generative Adversarial Networks,” arXiv:1406.2661 (2014). [arXiv]
- Ian Goodfellow, “NIPS 2016 Tutorial: Generative Adversarial Networks,” arXiv:1701.00160 (2017). [arXiv]
- Na Lei, Kehua Su, Li Cui, Shing-Tung Yau, David Xianfeng Gu, “A Geometric View of Optimal Transportation and Generative Model,” arXiv:1710.05488 (2017). [arXiv]
- “On Wasserstein GAN,” *Everything About Data Analytics*, WordPress (2017). [WordPress]
- “Interpretability of Neural Networks,” *Everything About Data Analytics*, WordPress (2017). [WordPress]
- Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou, “Word Translation Without Parallel Data,” arXiv:1710.04087 (2017). [arXiv]
- “Word Mover’s Distance as a Linear Programming Problem,” *Everything About Data Analytics*, WordPress (2017). [WordPress]
- 罗若天 (Ruotian Luo), “Paper Notes: Word Translation Without Parallel Data (unsupervised word translation),” RT’s paper notes and other miscellany, Zhihu (2017). [Zhihu] (in Chinese)
- “Word Embedding Algorithms,” *Everything About Data Analytics*, WordPress (2016). [WordPress]

“A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part.” The inputs and outputs of a capsule are vectors, instead of scalars as in traditional neural networks. A cheat sheet comparing traditional neurons and capsules is as follows:

Based on the capsule, the authors suggested a new network architecture called CapsNet.
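
For concreteness, the paper’s squashing nonlinearity, which replaces the usual scalar activation, keeps a capsule vector’s direction while scaling its length into [0, 1). A quick numpy sketch:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing: v = (|s|^2 / (1 + |s|^2)) * s / |s|.

    Short vectors shrink toward zero; long vectors approach (but never
    reach) unit length; direction is preserved.
    """
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)
```

The length of the output vector can then be interpreted as the probability that the entity the capsule represents is present.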

Huadong Liao implemented CapsNet with TensorFlow according to the paper. (Refer to his repository.)

- Sara Sabour, Nicholas Frosst, Geoffrey E Hinton, “Dynamic Routing Between Capsules,” arXiv:1710.09829 (2017). [arXiv]
- “A Brief Analysis of Hinton’s Recently Proposed Capsule Plan” (2017). [Zhihu] (in Chinese)
- “What do you think of Hinton’s paper ‘Dynamic Routing Between Capsules’?” (2017). [Zhihu] (in Chinese)
- Github: naturomics/CapsNet-Tensorflow [Github]
- Nick Bourdakos, “Capsule Networks Are Shaking up AI — Here’s How to Use Them,” Medium (2017). [Medium]

Mehta and Schwab analytically connected the renormalization group (RG) with one particular type of deep learning network, the restricted Boltzmann machine (RBM). (See their paper and a previous post.) The RBM is similar to the Heisenberg model in statistical physics. The weakness of this work is that it can explain only one type of deep learning algorithm.

However, this insight gave rise to subsequent work: using the density matrix renormalization group (DMRG), entanglement renormalization (from quantum information), and tensor networks, a new supervised learning algorithm was devised. (See their paper and a previous post.)

Lin and Tegmark were not satisfied with the RG intuition, and pointed out a special case that RG does not explain. Instead, they argue that neural networks are good approximations of the polynomial and asymptotic behaviors that pervade the physical universe, which is what makes neural networks work so well in predictive analytics. (See their paper, Lin’s reply on Quora, and a previous post.)

Tishby and his colleagues have been promoting the information bottleneck as a backing theory of deep learning. (See a previous post.) In recent papers such as arXiv:1612.00410, on top of the information bottleneck, an algorithm using variational inference was devised.

Recently, Kawaguchi, Kaelbling, and Bengio suggested that “deep model classes have an exponential advantage to represent certain natural target functions when compared to shallow model classes.” (See their paper and a previous post.) They provided their proof using generalization theory. With this, they introduced a new family of regularization methods.

Recently, Lei, Su, Cui, Yau, and Gu offered a geometric view of generative adversarial networks (GAN), and provided a simpler method of training the discriminator and generator through a large class of transportation problems. Their work is very mathematical, and I have yet to fully understand it; their experimental results were also limited to low-dimensional feature spaces. (See their paper.)

- Pankaj Mehta, David J. Schwab, “An exact mapping between the Variational Renormalization Group and Deep Learning,” arXiv:1410.3831. (2014) [arXiv]
- E. Miles Stoudenmire, David J. Schwab, “Supervised Learning With Quantum-Inspired Tensor Networks,” arXiv:1605.05775 (2016). [arXiv]
- Cédric Bény, “Deep learning and the renormalization group,” arXiv:1301.3124 (2013). [arXiv]
- Charles H. Martin, “on Cheap Learning: Partition Functions and RBMs,” *Machine Learning*, WordPress (2016). [WordPress]
- Henry W. Lin, Max Tegmark, “Why does deep and cheap learning work so well?” arXiv:1608.08225 (2016). [arXiv]
- Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy, “Deep Variational Information Bottleneck,” arXiv:1612.00410 (2016). [arXiv]
- Kenji Kawaguchi, Leslie Pack Kaelbling, Yoshua Bengio, “Generalization in Deep Learning,” arXiv:1710.05468 (2017). [arXiv]
- Na Lei, Kehua Su, Li Cui, Shing-Tung Yau, David Xianfeng Gu, “A Geometric View of Optimal Transportation and Generative Model,” arXiv:1710.05488 (2017). [arXiv]

This paper explains why deep learning can generalize well, despite large capacity and possible algorithmic instability, nonrobustness, and sharp minima, effectively addressing an open problem in the literature. Based on our theoretical insight, this paper also proposes a family of new regularization methods. Its simplest member was empirically shown to improve base models and achieve state-of-the-art performance on MNIST and CIFAR-10 benchmarks. Moreover, this paper presents both data-dependent and data-independent generalization guarantees with improved convergence rates. Our results suggest several new open areas of research.

- Kenji Kawaguchi, Leslie Pack Kaelbling, Yoshua Bengio, “Generalization in Deep Learning,” arXiv:1710.05468 (2017). [arXiv]

Google published a paper about the big picture of computational model in TensorFlow:

TensorFlow is a powerful, programmable system for machine learning. This paper aims to provide the basics of a conceptual framework for understanding the behavior of TensorFlow models during training and inference: it describes an operational semantics, of the kind common in the literature on programming languages. More broadly, the paper suggests that a programming-language perspective is fruitful in designing and in explaining systems such as TensorFlow.

Beware that this model is not limited to deep learning.

- Coursera: Deep Learning Specialization. [Coursera]
- TensorFlow. [TensorFlow]
- Martin Abadi, Michael Isard, Derek G. Murray, “A Computational Model in TensorFlow,” *Google Research Blog* (MAPL 2017). [GoogleResearch]