Asynchronous Evolution of Deep Neural Network Architectures (2023)
Many evolutionary algorithms (EAs) take advantage of parallel evaluation of candidates. However, if evaluation times vary significantly, many worker nodes (i.e., compute clients) sit idle much of the time, waiting for the next generation to be created. Evolutionary neural architecture search (ENAS), a class of EAs that optimizes the architecture and hyperparameters of deep neural networks, is particularly vulnerable to this issue. This paper proposes a generic asynchronous evaluation strategy (AES) that is then adapted to work with ENAS. AES increases throughput by maintaining a queue of up to K individuals ready to be sent to the workers for evaluation, and proceeding to the next generation as soon as M << K individuals have been evaluated by the workers. A suitable value for M is determined experimentally, balancing diversity and efficiency. To showcase the generality and power of AES, it was first evaluated in 11-bit multiplexer design (a single-population verifiable discovery task) and then scaled up to ENAS for image captioning (a multi-population open-ended optimization task). In both problems, a multifold performance improvement was observed, suggesting that AES is a promising method for parallelizing the evolution of complex systems with long and variable evaluation times, such as those in ENAS.
arXiv:2308.04102, 2023.
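
The core loop of AES described in the abstract can be illustrated with a short sketch. The following Python code is a minimal, single-machine approximation only, assuming thread workers in place of distributed compute clients; the values of K and M and the functions random_genome, evaluate, and breed are illustrative placeholders, not the paper's implementation.

    import random
    from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

    K = 32  # size of the evaluation queue: individuals in flight at the workers
    M = 4   # evaluated individuals needed before proceeding (M << K)

    def random_genome():
        # Hypothetical genome: a fixed-length bit string.
        return [random.randint(0, 1) for _ in range(11)]

    def evaluate(genome):
        # Hypothetical fitness with variable evaluation time; in ENAS this
        # step would train a candidate network, taking minutes to hours.
        return sum(genome)

    def breed(parents):
        # Hypothetical variation operator: uniform crossover of two parents.
        a, b = random.sample(parents, 2)
        return [random.choice(pair) for pair in zip(a, b)]

    def aes(steps=100):
        evaluated = []  # (genome, fitness) pairs returned by the workers
        with ThreadPoolExecutor(max_workers=8) as pool:
            # Fill the queue with K individuals awaiting evaluation.
            pending = {pool.submit(evaluate, g): g
                       for g in (random_genome() for _ in range(K))}
            for _ in range(steps):
                # Proceed as soon as M individuals finish, instead of
                # blocking until the entire batch is done.
                batch = []
                while len(batch) < M:
                    done, _ = wait(pending, return_when=FIRST_COMPLETED)
                    for fut in done:
                        batch.append((pending.pop(fut), fut.result()))
                evaluated.extend(batch)
                # Breed replacements from the best individuals seen so far
                # and top the queue back up to K, keeping workers busy.
                elite = [g for g, _ in
                         sorted(evaluated, key=lambda p: -p[1])[:10]]
                while len(pending) < K:
                    child = breed(elite) if len(elite) > 1 else random_genome()
                    pending[pool.submit(evaluate, child)] = child
        return max(evaluated, key=lambda p: p[1])

    if __name__ == "__main__":
        best_genome, best_fitness = aes()
        print(best_genome, best_fitness)

The key point of the sketch is that the queue of K pending evaluations is refilled immediately after every batch of M completions, so no worker ever waits for a full generation to finish, which is what yields the throughput gain when evaluation times vary widely.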

Jason Zhi Liang Ph.D. Alumni jasonzliang [at] utexas edu
Risto Miikkulainen Faculty risto [at] cs utexas edu
Hormoz Shahrzad Masters Alumni hormoz [at] cognizant com