Evolutionary Neural Architecture Search for Deep Learning (2018)
Deep neural networks (DNNs) have produced state-of-the-art results in many benchmarks and problem domains. However, the success of DNNs depends on the proper configuration of their architecture and hyperparameters, and DNNs are often not used to their full potential because determining which architecture and hyperparameters to use is difficult. While several approaches have been proposed, the computational complexity of searching large design spaces makes them impractical for large modern DNNs. This dissertation introduces an efficient evolutionary algorithm (EA) for simultaneous optimization of DNN architecture and hyperparameters, building upon extensive past research on evolutionary optimization of neural network structure. Several improvements to the core algorithm are introduced: (1) discovering DNN architectures of arbitrary complexity; (2) generating the modular, repetitive structures commonly seen in state-of-the-art DNNs; (3) extending the search to multitask learning and multiobjective optimization; and (4) maximizing performance and reducing wasted computation through asynchronous evaluations. Experimental results in image classification, image captioning, and multialphabet character recognition show that the approach evolves networks that are competitive with, and in some cases exceed, hand-designed networks. The method thus provides an automated, streamlined process for optimizing DNN architectures for a given problem and can be widely applied to harder tasks.
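To make the idea concrete, below is a minimal Python sketch of the kind of asynchronous, steady-state evolutionary loop the abstract describes. Everything here (the genome encoding, the mutation operators, and the stubbed fitness function) is illustrative rather than the dissertation's actual algorithm; in a real run, `evaluate` would train the candidate network and return its validation accuracy.

```python
import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

LAYER_CHOICES = [16, 32, 64, 128]   # hypothetical per-layer widths
MAX_LAYERS = 8

def random_genome():
    """A genome is a variable-length list of layer widths plus a learning rate."""
    depth = random.randint(1, MAX_LAYERS)
    return {"layers": [random.choice(LAYER_CHOICES) for _ in range(depth)],
            "lr": 10 ** random.uniform(-4, -1)}

def mutate(genome):
    """Structural and hyperparameter mutations: grow, shrink, or perturb."""
    child = {"layers": list(genome["layers"]), "lr": genome["lr"]}
    op = random.choice(["add", "remove", "perturb"])
    if op == "add" and len(child["layers"]) < MAX_LAYERS:
        child["layers"].insert(random.randrange(len(child["layers"]) + 1),
                               random.choice(LAYER_CHOICES))
    elif op == "remove" and len(child["layers"]) > 1:
        child["layers"].pop(random.randrange(len(child["layers"])))
    else:
        child["lr"] *= 10 ** random.uniform(-0.5, 0.5)
    return child

def evaluate(genome):
    """Stub fitness: stands in for training the candidate network and
    measuring validation accuracy, which is what a real run would do."""
    capacity = sum(genome["layers"]) / (MAX_LAYERS * max(LAYER_CHOICES))
    return capacity - abs(genome["lr"] - 0.01)

def evolve(pop_size=10, evaluations=50):
    # Steady-state, asynchronous loop: as soon as any evaluation finishes,
    # breed and submit a new candidate, so workers never sit idle waiting
    # for the slowest network in a generation to finish training.
    pool = ThreadPoolExecutor(max_workers=4)
    population = []                      # list of (fitness, genome)
    pending = {pool.submit(evaluate, g): g
               for g in (random_genome() for _ in range(pop_size))}
    done_count = 0
    while done_count < evaluations:
        done, _ = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            genome = pending.pop(fut)
            population.append((fut.result(), genome))
            done_count += 1
            population.sort(key=lambda t: t[0], reverse=True)
            population = population[:pop_size]       # drop the worst
            parent = random.choice(population[:max(2, pop_size // 2)])[1]
            child = mutate(parent)
            pending[pool.submit(evaluate, child)] = child
    pool.shutdown(cancel_futures=True)
    return population[0]

if __name__ == "__main__":
    best_fitness, best_genome = evolve()
    print(best_fitness, best_genome)
```

Because expensive evaluations (full network trainings) finish at very different times, the asynchronous replacement scheme above avoids the synchronization barrier of generational EAs, which is one way to realize the reduced wasted computation mentioned in point (4).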
View:
PDF
Citation:
PhD thesis, The University of Texas at Austin, Austin, TX, December 2018.
Bibtex:
@phdthesis{liang:phd2018,
  title   = {Evolutionary Neural Architecture Search for Deep Learning},
  author  = {Jason Zhi Liang},
  school  = {The University of Texas at Austin},
  address = {Austin, TX},
  month   = {December},
  year    = {2018},
}
Jason Zhi Liang, Ph.D. Alumni, jasonzliang [at] utexas.edu