Evolutionary Optimization of Deep Learning Activation Functions (2020)
The choice of activation function can have a large effect on the performance of a neural network. While there have been some attempts to hand-engineer novel activation functions, the Rectified Linear Unit (ReLU) remains the most commonly used activation function in practice. This paper shows that evolutionary algorithms can discover novel activation functions that outperform ReLU. A tree-based search space of candidate activation functions is defined and explored with mutation, crossover, and exhaustive search. Experiments on training wide residual networks on the CIFAR-10 and CIFAR-100 image datasets show that this approach is effective. Replacing ReLU with evolved activation functions results in statistically significant increases in network accuracy. Optimal performance is achieved when evolution is allowed to customize activation functions to a particular task; however, these novel activation functions are shown to generalize, achieving high performance across tasks. Evolutionary optimization of activation functions is therefore a promising new dimension of metalearning in neural networks.
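For illustration, here is a minimal sketch of what a tree-based search space of activation functions with random mutation could look like. The tuple-based node encoding, the particular unary and binary operator sets, the tree depth, and the mutation probability are all illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a tree-encoded activation function search space with mutation.
# Operator sets, encoding, and probabilities are illustrative assumptions.
import math
import random

# Assumed small operator vocabulary; the paper defines its own set.
UNARY_OPS = {
    "tanh": math.tanh,
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
}
BINARY_OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "max": max,
}


def random_tree(depth=2):
    # Leaves are the raw pre-activation input x.
    if depth == 0:
        return ("x",)
    if random.random() < 0.5:
        return ("unary", random.choice(list(UNARY_OPS)), random_tree(depth - 1))
    return ("binary", random.choice(list(BINARY_OPS)),
            random_tree(depth - 1), random_tree(depth - 1))


def evaluate(tree, x):
    # Apply the activation function encoded by the tree to a scalar input.
    if tree[0] == "x":
        return x
    if tree[0] == "unary":
        return UNARY_OPS[tree[1]](evaluate(tree[2], x))
    _, op, left, right = tree
    return BINARY_OPS[op](evaluate(left, x), evaluate(right, x))


def mutate(tree):
    # Walk the tree; with some probability, replace a subtree with a
    # freshly sampled random subtree (one possible mutation operator).
    if tree[0] == "x" or random.random() < 0.3:
        return random_tree(depth=1)
    if tree[0] == "unary":
        return ("unary", tree[1], mutate(tree[2]))
    return ("binary", tree[1], mutate(tree[2]), mutate(tree[3]))


if __name__ == "__main__":
    random.seed(0)
    candidate = random_tree()
    print("candidate tree:", candidate)
    print("f(1.0) =", evaluate(candidate, 1.0))
    print("mutant tree: ", mutate(candidate))
```

In the paper, candidate functions are scored by training wide residual networks on CIFAR-10/100 and measuring accuracy, and the search also uses crossover and exhaustive enumeration; the sketch above only shows how a tree encoding and a mutation operator might fit together.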
Citation:
In Genetic and Evolutionary Computation Conference (GECCO '20), 289-296, Cancun, Mexico, 2020.
Garrett Bingham, Ph.D. Alumni, bingham [at] cs utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu