Deep learning, a family of related neural network algorithms, has proved successful in certain types of machine learning tasks in computer vision, speech recognition, data cleaning, and natural language processing (NLP). [Mikolov *et al.* 2013] However, it has been unclear why deep learning is so successful. It looks like a black box with messy inputs and excellent outputs. So why does it work so well?

A friend of mine showed me this preprint (arXiv:1410.3831) [Mehta & Schwab 2014] last year, which mathematically demonstrates an equivalence between deep learning and the renormalization group (RG). RG is a concept in theoretical physics that has been widely applied to many problems, including critical phenomena, self-organized criticality, particle physics, polymer physics, and strongly correlated electronic systems. Now, Mehta and Schwab have shown that RG offers an explanation of the performance of deep learning.

[Fig. 1. Taken from http://www.inspiredeconomies.com/intelligibleecosystems/images/fractals/GasketMag.gif]

So what is RG? The story starts before RG itself: in 1966, Leo Kadanoff, a physics professor at the University of Chicago, proposed the idea of coarse-graining for studying many-body problems. [Kadanoff 1966] In 1972, Kenneth Wilson and Michael Fisher succeeded in applying the *ɛ*-expansion in perturbative RG to explain the critical exponents of the Landau-Ginzburg-Wilson (LGW) Hamiltonian. [Wilson & Fisher 1972] This work has become standard material in graduate physics courses. In 1974, Wilson applied RG to solve the Kondo problem, and his RG work earned him the Nobel Prize in Physics in 1982. [Wilson 1983]

RG assumes scale invariance, which means the system looks similar at whatever scale you view it. One example is the fractal in Fig. 1: the system looks the same when you zoom in. We call such a scale-invariant system *self-similar*. Physical systems close to a phase transition are self-similar. And if a system is self-similar, Kadanoff’s idea of coarse-graining applies, as in Fig. 2: four spins can be viewed as one spin that “summarizes” the four spins in that block, without changing the description of the physical system. This is somewhat like “zooming out” of a picture in Photoshop or a web browser.

[Fig. 2. Taken from [Singh 2014]]
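To make block-spin coarse-graining concrete, here is a minimal sketch of my own (not taken from [Mehta & Schwab 2014]): each 2×2 block of Ising spins is replaced by a single spin via a majority rule. The function name and the tie-breaking convention are illustrative choices.

```python
import numpy as np

def block_spin(lattice, b=2):
    """Coarse-grain an Ising lattice of +1/-1 spins by majority rule:
    each b-by-b block is replaced by one spin carrying the sign of the
    block's sum (ties broken toward +1)."""
    n = lattice.shape[0]
    assert n % b == 0, "lattice size must be divisible by block size"
    # Reshape into b-by-b blocks and sum the spins inside each block
    blocks = lattice.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    # Majority rule: the block spin is the sign of the block sum
    return np.where(blocks >= 0, 1, -1)

# A 4x4 lattice coarse-grains to a 2x2 lattice of block spins
spins = np.array([[ 1,  1, -1, -1],
                  [ 1, -1, -1, -1],
                  [ 1,  1,  1, -1],
                  [ 1,  1, -1, -1]])
print(block_spin(spins))  # each 2x2 block collapses to its majority sign
```

Repeating this step on the output is one full Kadanoff coarse-graining iteration after another, shrinking the description by a factor of four each time.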

So what’s the point of zooming out? Physicists care about the Helmholtz free energies of physical systems, which play a role similar to the cost functions of computer scientists and machine learning specialists: both are to be minimized. But whatever scale we view the system at, its free energy should be scale-invariant. Therefore, as we zoom out, the system “changes” yet “looks the same” thanks to self-similarity, and the free energy stays the same. The form of the model is unchanged, but its parameters change as the scale changes.
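In symbols, this is the standard textbook statement that the partition function (and hence the free energy) is preserved under a coarse-graining step; only the couplings are renormalized. The notation here is mine, following standard treatments such as [Kardar 2007]:

```latex
Z \;=\; \sum_{\{s\}} e^{-\beta H[s;\,K]}
  \;=\; \sum_{\{s'\}} \Big( \sum_{\{s\}\to\{s'\}} e^{-\beta H[s;\,K]} \Big)
  \;\equiv\; \sum_{\{s'\}} e^{-\beta H'[s';\,K']}
```

The inner sum runs over all fine-spin configurations {*s*} consistent with a given block-spin configuration {*s*′}. The coarse-grained Hamiltonian *H*′ has the same form as *H* but with renormalized couplings *K*′, and the free energy *F* = −*k*_B *T* ln *Z* is unchanged.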

This is important, because this process tells us which parameters are *relevant* and which are *irrelevant*. Why? Think of it this way: suppose we have an awesome computer to simulate a glass of water containing 10^{23} water molecules. To describe the system, you have all the parameters: the positions of the molecules, the strength of the Van der Waals force, the orbital angular momentum of each atom, the strengths of the covalent bonds, the velocities of the molecules… You might have 10^{25} parameters. However, even this awesome computer cannot handle a system with so many parameters. So you coarse-grain the system, discarding some parameters in each step of coarse-graining. After numerous steps, it turns out that the temperature and the pressure are the only relevant parameters.

RG helps you identify the relevant parameters.

And this is exactly what happens in deep learning. In each convolutional cycle, features that are not important are gradually discarded, while those that are important are kept and enhanced. Indeed, in computer vision and NLP, the data are so noisy that they carry a lot of unnecessary information, and deep learning gradually discards it. As Mehta and Schwab stated, [Mehta & Schwab 2014]

> Our results suggests that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
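Mehta and Schwab’s construction works with stacked restricted Boltzmann machines (RBMs). The following toy sketch, with random untrained weights of my own choosing, only illustrates the shape of the idea: each hidden layer produces a shorter, coarser description of the layer below it, much as each RG step does. A trained RBM would instead learn its weights, e.g. by contrastive divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbm_hidden(v, W, b):
    """One layer of a restricted Boltzmann machine: given visible units v,
    return the probability that each binary hidden unit is on."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))  # logistic sigmoid

# Toy stack: 16 visible -> 8 hidden -> 4 hidden. The weights below are
# random placeholders, not learned parameters.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

v = rng.integers(0, 2, size=16)    # a binary "data" vector
h1 = rbm_hidden(v, W1, b1)         # first coarse-graining step
h2 = rbm_hidden(h1, W2, b2)        # second, even coarser description
print(v.size, h1.size, h2.size)    # 16 -> 8 -> 4
```

The shrinking layer widths are the analogue of discarding irrelevant parameters at each coarse-graining step; in the paper’s mapping the marginalization over hidden units plays the role of the trace over fine-grained spins.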

So what is the point of understanding this? Unlike many other machine learning algorithms, we did not know how deep learning works, which sometimes makes model building very difficult because we have no idea how to adjust the parameters. I believe understanding its equivalence to RG can guide us in building models that work.

Charles Martin also wrote a blog entry with a further demonstration of the equivalence of deep learning and RG. [Martin 2015]

- P. Mehta, D. J. Schwab, “An exact mapping between the Variational Renormalization Group and Deep Learning”, arXiv:1410.3831 (2014).
- T. Mikolov, I. Sutskever, K. Chen, G. Corrado, J. Dean, “Distributed Representations of Words and Phrases and their Compositionality”, In Proceedings of NIPS, 2013. [arXiv:1310.4546]
- L. Kadanoff, “Scaling laws for Ising models near *T*_{c}”, *Physics* **2**, 263 (1966). [See: http://jfi.uchicago.edu/~leop/SciencePapers/Old%20Science%20Papers/Scaling%20Laws%20for%20Ising%20Models%20Near%20Tc.pdf]
- K. G. Wilson, M. E. Fisher, “Critical Exponents in 3.99 Dimensions”, *Phys. Rev. Lett.* **28**, 240 (1972).
- K. G. Wilson, “The renormalization group and critical phenomena”, *Rev. Mod. Phys.* **55**, 583 (1983). [Nobel Prize in Physics 1982]
- N. Singh, “Thermodynamical Phase transitions, the mean-field theories, and the renormalization (semi)group: A pedagogical introduction”, arXiv:1402.6837 (2014). [See this: http://inspirehep.net/record/1283384/plots]
- C. H. Martin, “Why Deep Learning Works II: the Renormalization Group“, WordPress (2015).
- S.-K. Ma, “Modern Theory of Critical Phenomena”, Advanced Book Program (1976).
- M. Kardar, “Statistical Physics of Fields”, Cambridge (2007).
- A. Altland, B. Simons, “Condensed Matter Field Theory”, 2nd. ed., Cambridge (2009).
- Free deep learning book – MIT Press (2015).
- Kwan-yuet Ho, “Talking Not So Deep About Deep Learning“, WordPress (2015).