To state the difference between generative models and discriminative models briefly: a generative model specifies the joint probability $P(x, y)$, while a discriminative model specifies the conditional probability $P(y \mid x)$. Andrew Ng and Michael Jordan wrote a paper on this, with logistic regression and naive Bayes as the examples. [Ng, Jordan, 2001]

Assume that we have a set of *k* features $x_1, x_2, \ldots, x_k$. Then a naive Bayes model, as a generative model, is given by

$$P(y, x_1, \ldots, x_k) = P(y) \prod_{i=1}^{k} P(x_i \mid y).$$

To train a naive Bayes model, the objective is maximum likelihood; equivalently, the loss function is the negative log-likelihood, which here has a closed-form solution in terms of counts.
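As a minimal sketch of this, the maximum-likelihood estimates for a naive Bayes model over binary features are just relative frequencies. The toy data below is hypothetical, and Laplace smoothing is added as a standard practical choice, not something the text above prescribes.

```python
import numpy as np

# Toy binary data: rows are samples, columns are the k features; y is the class.
X = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 0]])
y = np.array([1, 1, 0, 0])

classes = np.unique(y)
# Maximum-likelihood estimates are relative frequencies (with Laplace smoothing).
prior = {c: np.mean(y == c) for c in classes}                   # P(y)
cond = {c: (X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)   # P(x_i = 1 | y)
        for c in classes}

def joint(x, c):
    """P(y = c, x_1, ..., x_k) under the naive Bayes factorization."""
    p = prior[c]
    for i, xi in enumerate(x):
        p *= cond[c][i] if xi == 1 else 1 - cond[c][i]
    return p

print({c: joint([1, 0, 1], c) for c in classes})
```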

A logistic regression, and its discrete counterpart the maximum entropy classifier, is a discriminative model. But Ng and Jordan argued that the two models are theoretically related, as the probability of *y* given $x_1, \ldots, x_k$ can be derived from the naive Bayes model above, i.e.,

$$P(y \mid x_1, \ldots, x_k) = \frac{P(y) \prod_{i=1}^{k} P(x_i \mid y)}{\sum_{y'} P(y') \prod_{i=1}^{k} P(x_i \mid y')}.$$
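Numerically, this derivation is just Bayes' rule: the conditional is the joint normalized over classes. A tiny sketch with made-up joint probabilities for a fixed observed $x$:

```python
# Hypothetical joint probabilities P(y, x) for one fixed observed x, per class.
joint = {0: 0.03, 1: 0.12}

# Bayes' rule: P(y | x) = P(y, x) / sum over y' of P(y', x).
Z = sum(joint.values())
posterior = {c: p / Z for c, p in joint.items()}
print(posterior)  # the two values sum to 1
```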

To train a logistic regression, the loss function is the cross-entropy, i.e., the negative conditional log-likelihood (not the mean-squared error); and for maximum entropy classifiers, the objective is, of course, to maximize the entropy subject to feature-expectation constraints.
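A minimal sketch of this training procedure, using plain gradient descent on the cross-entropy; the data, learning rate, and iteration count are all illustrative assumptions:

```python
import numpy as np

# Toy data: two features, binary labels; a bias column is prepended.
X = np.array([[0.5, 1.0], [1.5, 2.0], [3.0, 0.5], [2.5, 1.5]])
y = np.array([0, 0, 1, 1])
Xb = np.hstack([np.ones((len(X), 1)), X])
w = np.zeros(Xb.shape[1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the cross-entropy (negative conditional log-likelihood);
# its gradient with respect to w is X^T (p - y) / n.
for _ in range(2000):
    p = sigmoid(Xb @ w)
    grad = Xb.T @ (p - y) / len(y)
    w -= 0.5 * grad

print(sigmoid(Xb @ w))  # fitted P(y = 1 | x) at the training points
```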

In application, there is one more difference: the class prior $P(y)$ plays a direct role in classification under the generative model, but is absent in the discriminative model.
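To see the prior's role concretely: with the likelihoods $P(x \mid y)$ held fixed, shifting $P(y)$ can flip a generative classifier's decision. The numbers below are purely illustrative:

```python
# Fixed likelihoods P(x | y) for one observed x; toy numbers.
lik = {0: 0.2, 1: 0.1}

def predict(prior):
    """Generative decision rule: argmax over y of P(y) * P(x | y)."""
    post = {c: prior[c] * lik[c] for c in lik}
    return max(post, key=post.get)

print(predict({0: 0.5, 1: 0.5}))  # balanced prior -> class 0
print(predict({0: 0.2, 1: 0.8}))  # skewed prior   -> class 1
```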

Another example of a generative-discriminative pair is the hidden Markov model (HMM) and the conditional random field (CRF). [Sutton, McCallum, 2010]
