Generative adversarial networks (GANs) have made a big impact on the world of machine learning. They are particularly useful for generating sample data when the data available for a task are insufficient. They are also useful for training on data that are only partly labeled, i.e., semi-supervised learning (SSL).
The rise of GANs has also led to the re-emergence of adversarial learning for handling imbalanced or sensitive data. (For example, see arXiv:1707.00075.)
GANs are particularly useful for computer vision problems. However, they are not very good for natural language problems, because text is discrete and cannot be generated by the continuous, differentiable process a GAN generator relies on. In this context, a modification of the GAN was developed, called the discriminative adversarial network (DAN, see arXiv:1707.02198). Unlike a GAN, which has a discriminator train a generator to produce good data, a DAN has two discriminators: one, usually denoted the predictor P, predicts labels for the unlabeled data, and the other, usually denoted the judge J, classifies whether a given label is a human label or a machine-predicted one.
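The two roles can be sketched in a toy numpy example. This is only an illustration of the interface, not the paper's implementation: the layer shapes, the single linear layers, and the names `predictor` and `judge` are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical toy predictor P: one linear layer mapping
# 4-dimensional inputs to a distribution over 3 classes.
W_p = rng.normal(size=(4, 3))

def predictor(x):
    """P: predicts a label distribution for (possibly unlabeled) input x."""
    return softmax(x @ W_p)

# Hypothetical toy judge J: scores an (input, label) pair with the
# probability that the label came from a human annotator.
W_j = rng.normal(size=(4 + 3, 1))

def judge(x, y):
    """J: binary classifier -- human label (~1) vs machine prediction (~0)."""
    z = np.concatenate([x, y], axis=-1) @ W_j
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(5, 4))   # a small batch of inputs
y_pred = predictor(x)         # machine-predicted label distributions
p_human = judge(x, y_pred)    # J's belief that these labels are human
```

Note that J sees the input together with the label, so it can learn whether a label is plausible for that particular example, not just whether it looks like a label in general.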
The loss function of a DAN is very similar to that of a GAN: the judge J is trained to minimize its classification loss in telling human labels (on the labeled data) apart from P's predictions, while the predictor P is trained to minimize the loss that makes its predictions on the unlabeled data indistinguishable from human labels.
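The objectives above can be written down with plain binary cross-entropy, in the same shape as the standard GAN losses. This is a sketch under that assumption; the judge scores below are made-up numbers standing in for J's outputs on real batches.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of probabilities p against a constant 0/1 target."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

# Hypothetical judge outputs: probability that a (data, label) pair
# carries a human label.
j_on_human = np.array([0.9, 0.8, 0.7])    # J's scores on human-labeled pairs
j_on_machine = np.array([0.2, 0.4, 0.1])  # J's scores on P's predictions

# Judge J is trained like a GAN discriminator:
# push human pairs toward 1 and machine-predicted pairs toward 0.
loss_judge = bce(j_on_human, 1.0) + bce(j_on_machine, 0.0)

# Predictor P plays the generator's role: it is trained to fool J,
# i.e. push J's score on its own predictions toward 1.
loss_predictor = bce(j_on_machine, 1.0)
```

In training these two losses are minimized alternately, just as the discriminator and generator alternate in a GAN.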
However, the GAN and the DAN are not a generative-discriminative pair.
- “Generative Adversarial Networks,” Everything About Data Analytics, WordPress (2017). [WordPress]
- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, “Generative Adversarial Networks,” arXiv:1406.2661 (2014). [arXiv]
- Ian Goodfellow, “NIPS 2016 Tutorial: Generative Adversarial Networks,” arXiv:1701.00160 (2017). [arXiv]
- “Application of Wasserstein GAN,” Everything About Data Analytics, WordPress (2017). [WordPress]
- Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi, “Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations,” 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017) / arXiv:1707.00075 (2017). [arXiv]
- Cicero Nogueira dos Santos, Kahini Wadhawan, Bowen Zhou, “Learning Loss Functions for Semi-supervised Learning via Discriminative Adversarial Networks,” arXiv:1707.02198 (2017). [arXiv]