In this paper, the authors propose a method that combines an autoencoder with a generative adversarial network (GAN) to create synthetic data, mitigating the problems that arise when machine learning is applied to imbalanced datasets.
The proposed BAGAN methodology aims to generate realistic minority-class images for an imbalanced dataset. It exploits all available information about the specific classification problem by training BAGAN on the majority and minority classes jointly. GAN and autoencoding techniques are coupled to leverage the strengths of both approaches: GANs generate high-quality images, whereas autoencoders converge towards good solutions easily.
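One way to picture the class conditioning in the latent space: after the autoencoder is trained, each class can be summarized by a multivariate Gaussian fitted to its encoded latent vectors, and latents for a target (e.g. minority) class can then be sampled from that class's distribution before being decoded by the generator. The following is a minimal sketch under those assumptions, with random arrays standing in for real encoder outputs; the function names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_class_latents(latents, labels):
    """Fit a per-class mean and covariance over encoder latent vectors,
    giving one Gaussian latent distribution per class."""
    dists = {}
    for c in np.unique(labels):
        z = latents[labels == c]
        dists[c] = (z.mean(axis=0), np.cov(z, rowvar=False))
    return dists

def sample_class_latent(dists, target_class, n, seed=None):
    """Draw n latent vectors from the target class's fitted Gaussian;
    these would be decoded by the generator into target-class images."""
    rng = np.random.default_rng(seed)
    mean, cov = dists[target_class]
    return rng.multivariate_normal(mean, cov, size=n)

# Toy demo: random "latents" stand in for a trained encoder's outputs.
rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 8))   # 200 samples, 8-dim latent space
labels = rng.integers(0, 3, size=200) # 3 classes
dists = fit_class_latents(latents, labels)
z = sample_class_latent(dists, target_class=1, n=16, seed=1)
print(z.shape)  # (16, 8)
```

In this scheme, class identity is carried entirely by where in the latent space the sample is drawn, rather than by a separate conditioning label fed to the network.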
Results are compared with a plain GAN and an auxiliary classifier GAN (ACGAN) on several image-classification datasets, and BAGAN usually outperforms both.
While the publicity surrounding GANs focuses on their ability to generate art, music, and text, we are interested in using GANs for augmenting imbalanced datasets.
The abstract of "BAGAN: Data Augmentation with Balancing GAN" is presented below.
Image classification datasets are often imbalanced, characteristic that negatively affects the accuracy of deep-learning classifiers. In this work we propose balancing GAN (BAGAN) as an augmentation tool to restore balance in imbalanced datasets. This is challenging because the few minority-class images may not be enough to train a GAN. We overcome this issue by including during the adversarial training all available images of majority and minority classes. The generative model learns useful features from majority classes and uses these to generate images for minority classes. We apply class conditioning in the latent space to drive the generation process towards a target class. The generator in the GAN is initialized with the encoder module of an autoencoder that enables us to learn an accurate class-conditioning in the latent space. We compare the proposed methodology with state-of-the-art GANs and demonstrate that BAGAN generates images of superior quality when trained with an imbalanced dataset.
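Restoring balance, as the abstract describes, amounts to generating enough synthetic images per minority class to bring each class up to the majority-class count. A small sketch of that bookkeeping step (my illustration of the general rebalancing idea, not code from the paper):

```python
from collections import Counter

def augmentation_plan(labels):
    """For each class, compute how many synthetic images to generate so
    every class reaches the majority-class count."""
    counts = Counter(labels)
    target = max(counts.values())
    return {c: target - n for c, n in counts.items()}

# Toy imbalanced label list: 90 cats, 10 dogs, 25 foxes.
plan = augmentation_plan(["cat"] * 90 + ["dog"] * 10 + ["fox"] * 25)
print(plan)  # {'cat': 0, 'dog': 80, 'fox': 65}
```

The generator would then be asked for `plan[c]` images of each class `c`, after which a classifier can be trained on the balanced, augmented dataset.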