This is an interesting preliminary exploration of using Generative Adversarial Networks (GANs) for non-image data to:

… generate synthetic data for a numeric dataset, with the objective of training a classifier without using the original data set, and of oversampling the minority class in an imbalanced classification scenario. In both scenarios the data sets generated by the GAN were suitable for the tasks proposed.

This is quite useful and shows a continuing trend of work in GANs moving beyond analyzing images. The authors conclude:

Training the classifier using only GAN synthetic data in the balanced scenario showed better accuracy and precision than training on the original data set. In the imbalanced scenario, the GAN synthetic data performed better than the original data, but did not perform better than SMOTE or ADASYN.

SMOTE (Synthetic Minority Oversampling Technique) and ADASYN (Adaptive Synthetic) are methods for dealing with imbalanced datasets by oversampling the minority class. Both are implemented in the imbalanced-learn Python package.
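To make the comparison concrete, here is a minimal sketch of SMOTE's core idea, interpolating between a minority sample and one of its nearest minority neighbors. This is written in plain NumPy for illustration; it is not the imbalanced-learn implementation, and the toy data is made up.

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between a random minority sample and one of its k nearest
    minority neighbors (the basic SMOTE recipe)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances among minority samples only
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest-neighbor indices per sample
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)              # pick a base minority sample
        nb = X_min[rng.choice(nn[j])]    # pick one of its near neighbors
        gap = rng.random()               # interpolation factor in [0, 1)
        out[i] = X_min[j] + gap * (nb - X_min[j])
    return out

# toy minority class: three points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = smote_sketch(X_min, n_new=5, k=2, rng=0)
```

Every synthetic point lies on a segment between two real minority points, which is why SMOTE stays inside the minority class's convex hull; ADASYN uses the same interpolation but biases sampling toward harder-to-learn regions.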

Below is the abstract of the paper, "Data Augmentation Using GANs":

In this paper we propose the use of Generative Adversarial Networks (GAN) to generate artificial training data for machine learning tasks. The generation of artificial training data can be extremely useful in situations such as imbalanced data sets, performing a role similar to SMOTE or ADASYN. It is also useful when the data contains sensitive information, and it is desirable to avoid using the original data set as much as possible (for example, medical data). We test our proposal on benchmark data sets using different network architectures, and show that a Decision Tree (DT) classifier trained using the training data generated by the GAN reached the same (and surprisingly sometimes better) accuracy and recall as a DT trained on the original data set.
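The evaluation protocol the abstract describes, training a Decision Tree on (real or synthetic) data and scoring accuracy and recall on held-out real data, can be sketched with scikit-learn. This is a simplified stand-in: the dataset below is an arbitrary benchmark choice, and the GAN generation step is not reproduced; the paper would substitute GAN-generated samples for the real training split.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

# real data split; in the paper's setup the training portion would be
# replaced (or augmented) by GAN-generated synthetic samples
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)

# train a Decision Tree and evaluate on the held-out real test set
dt = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pred = dt.predict(X_te)
acc = accuracy_score(y_te, pred)
rec = recall_score(y_te, pred)
```

The key point of the comparison is that the test split always comes from the original data, so the metrics measure whether the synthetic training set preserves the decision-relevant structure of the real one.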
