This survey article highlights current methods in neural architecture search. Neural architecture search (NAS) is a subdiscipline of automated machine learning (AutoML) in which the types of layers (such as convolution and pooling), the number of layers, the types of connections (such as skip connections), and so on are determined automatically from a list of allowable parameters. The field is relatively new as applied to CNNs and RNNs (methods for MLPs have been around for many years), and no single approach has yet become dominant. Additionally, computational costs are very high: it is not uncommon for a search to run on dozens of GPUs or more for days. However, the field is evolving rapidly and will be worth keeping an eye on.
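
To make the idea of a "list of allowable parameters" concrete, here is a minimal illustrative sketch (not taken from the survey) of a toy search space and one randomly sampled architecture; the names SEARCH_SPACE and sample_architecture, and the specific choices listed, are hypothetical:

    import random

    # A toy search space: the allowable choices the search is permitted to make.
    SEARCH_SPACE = {
        "num_layers": range(2, 9),                        # allowable depths
        "layer_type": ["conv3x3", "conv5x5", "maxpool"],  # allowable operations
        "use_skip_connection": [True, False],             # optional skip connections
    }

    def sample_architecture():
        """Draw one architecture description from the allowable parameters."""
        depth = random.choice(list(SEARCH_SPACE["num_layers"]))
        return {
            "layers": [random.choice(SEARCH_SPACE["layer_type"]) for _ in range(depth)],
            "skip": random.choice(SEARCH_SPACE["use_skip_connection"]),
        }

    print(sample_architecture())

A real NAS method would replace the random sampling with a smarter search strategy and train each candidate to judge its quality, which is where the large GPU costs mentioned above come from.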

Neural Architecture Search: A Survey

Abstract:

Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
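
As a rough illustration of how those three dimensions fit together, the following hypothetical sketch uses plain random search as the search strategy and a placeholder score in place of real performance estimation (which in practice would mean training and validating each candidate); none of the names below come from the paper:

    import random

    # Search space: the set of candidate architectures the method may consider.
    search_space = [
        {"num_layers": n, "skip": s} for n in range(2, 9) for s in (True, False)
    ]

    def propose(history):
        """Search strategy: here, plain random search over the space."""
        return random.choice(search_space)

    def estimate_performance(arch):
        """Performance estimation strategy: stand-in for training/validating a model."""
        return random.random()  # placeholder score instead of real validation accuracy

    history = []
    for _ in range(10):  # small search budget
        arch = propose(history)
        score = estimate_performance(arch)
        history.append((arch, score))

    best_arch, best_score = max(history, key=lambda t: t[1])
    print(best_arch, best_score)

The methods surveyed differ mainly in how they instantiate these three pieces, for example evolutionary or reinforcement-learning search strategies and cheaper performance estimators such as weight sharing or early stopping.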