Sammon Embedding

Word embedding has been a frequent theme of this blog. But the original meaning of embedding refers to algorithms that perform a non-linear mapping of higher-dimensional data to a lower-dimensional space. In this entry I will talk about one of the oldest and most widely used of these: Sammon Embedding, published in 1969. This is an embedding algorithm … More Sammon Embedding
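
The post has the details; as a rough illustration only, here is a minimal NumPy sketch of the idea: place the points in a low-dimensional space so as to minimise Sammon's stress, the normalised sum of squared discrepancies between the original and the embedded pairwise distances. The function name, learning rate, and iteration count below are illustrative choices, not the post's code.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon(X, n_components=2, n_iter=500, lr=0.3, eps=1e-9, seed=0):
    """Sammon mapping by plain gradient descent on Sammon's stress (sketch)."""
    rng = np.random.default_rng(seed)
    D = squareform(pdist(X)) + eps            # pairwise distances in the input space
    c = pdist(X).sum()                        # normalising constant of the stress
    Y = rng.normal(scale=1e-2, size=(X.shape[0], n_components))
    for _ in range(n_iter):
        d = squareform(pdist(Y)) + eps        # distances in the low-dimensional space
        ratio = (D - d) / (D * d)             # (D_ij - d_ij) / (D_ij * d_ij)
        np.fill_diagonal(ratio, 0.0)
        diff = Y[:, None, :] - Y[None, :, :]  # Y_i - Y_j for all pairs
        grad = (-2.0 / c) * (ratio[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad                        # move each point downhill in the stress
    return Y
```

For example, `sammon(X)` projects an (n, p) array into two dimensions. Plain gradient descent is a simplification; Sammon's original paper used a pseudo-Newton update with a damping ("magic") factor, which converges faster in practice.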

Short Text Categorization using Deep Neural Networks and Word-Embedding Models

There are situations where we deal with short text, possibly messy, without a lot of training data. In that case, we need external semantic information. Instead of using the conventional bag-of-words (BOW) model, we should employ word-embedding models such as Word2Vec, GloVe, etc. Suppose we want to perform supervised learning with three subjects, described by … More Short Text Categorization using Deep Neural Networks and Word-Embedding Models
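
As a rough sketch of the embedding-instead-of-BOW idea (not the post's actual models), one can average pre-trained word vectors over each short text and feed the result to a small feed-forward network. The `embeddings` dictionary, vector dimension, and layer sizes below are assumptions for illustration; the pre-trained vectors would come from a Word2Vec or GloVe file.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def text_to_vec(text, embeddings, dim=100):
    """Average the word vectors of a short text; zero vector if nothing is in vocabulary."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_short_text_classifier(texts, labels, embeddings, dim=100):
    """Train a small feed-forward classifier on averaged word embeddings (sketch)."""
    X = np.vstack([text_to_vec(t, embeddings, dim) for t in texts])
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X, labels)     # labels can be the subject names directly
    return clf
```

Averaging vectors throws away word order, but for short, messy texts with little training data it often works surprisingly well, since the semantic information is carried by the pre-trained embeddings rather than learned from scratch.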

Topological Phases

Michael Kosterlitz, Duncan Haldane, and David J. Thouless are the laureates of the Nobel Prize in Physics 2016, “for theoretical discoveries of topological phase transitions and topological phases of matter.” Before Thouless, topology was largely unknown to the physics community; nowadays it is basic knowledge. I am particularly familiar with the Berezinskii-Kosterlitz-Thouless phase transition. What … More Topological Phases