Efficient Activation Function Optimization through Surrogate Modeling (2023)
Carefully designed activation functions can improve the performance of neural networks in many machine learning tasks. However, it is difficult for humans to construct optimal activation functions, and current activation function search algorithms are prohibitively expensive. This paper aims to improve the state of the art through three steps: First, the benchmark datasets Act-Bench-CNN, Act-Bench-ResNet, and Act-Bench-ViT were created by training convolutional, residual, and vision transformer architectures from scratch with 2,913 systematically generated activation functions. Second, a characterization of the benchmark space was developed, leading to a new surrogate-based method for optimization. More specifically, the spectrum of the Fisher information matrix associated with the model's predictive distribution at initialization, together with the activation function's output distribution, was found to be highly predictive of performance. Third, the surrogate was used to discover improved activation functions in several real-world tasks, with a surprising finding: a sigmoidal design was discovered that outperformed all other activation functions, challenging the status quo of always using rectifier nonlinearities in deep learning. Each of these steps is a contribution in its own right; together they serve as a practical and theoretical foundation for further research on activation function optimization.
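The abstract names two surrogate features computed at initialization: the spectrum of the Fisher information matrix of the model's predictive distribution, and the activation function's output distribution. The sketch below illustrates how such features could be estimated; it is a minimal illustration, not the paper's exact procedure. TinyNet, candidate_act, and the use of an empirical Fisher approximation (per-sample gradients of the log-likelihood, with labels sampled from the model's own predictive distribution) are all assumptions made for this example.

```python
import torch
import torch.nn as nn

# Hypothetical sigmoidal candidate (illustrative only; not the paper's discovered function).
def candidate_act(x):
    return x * torch.sigmoid(x)

class TinyNet(nn.Module):
    """Small MLP stand-in for the benchmark architectures (assumption for this sketch)."""
    def __init__(self, act, d_in=16, d_hidden=32, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, n_classes)
        self.act = act

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

def fim_spectrum(model, inputs):
    """Eigenvalues of an empirical Fisher approximation at initialization.

    Fisher ~= E[ grad log p(y|x) grad log p(y|x)^T ], with y drawn from the
    model's own predictive distribution. Works in the N x N Gram space, whose
    nonzero eigenvalues match those of the P x P Fisher estimate.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for x in inputs:
        logits = model(x.unsqueeze(0))
        probs = torch.softmax(logits.detach(), dim=-1).squeeze(0)
        y = torch.multinomial(probs, 1).item()          # sample a label
        logp = torch.log_softmax(logits, dim=-1)[0, y]  # scalar log-likelihood
        g = torch.autograd.grad(logp, params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)              # (N, P) per-sample gradients
    gram = G @ G.t() / G.shape[0]       # (N, N) Gram matrix
    return torch.linalg.eigvalsh(gram)  # eigenvalues, ascending

def activation_output_stats(act, n=100_000):
    """Moments of the activation's output distribution under N(0, 1) inputs."""
    out = act(torch.randn(n))
    return out.mean().item(), out.std().item()

torch.manual_seed(0)
model = TinyNet(candidate_act)
eigs = fim_spectrum(model, torch.randn(64, 16))
mu, sigma = activation_output_stats(candidate_act)
print("top FIM eigenvalues:", eigs.flip(0)[:5].tolist())
print("activation output mean/std under N(0,1):", mu, sigma)
```

Under this setup, features like these would be computed once per candidate activation at initialization, so ranking candidates avoids the cost of training each one from scratch.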
View:
PDF
Citation:
In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), 2023.
Presentation:
Poster, Video
Garrett Bingham, Ph.D. Alumni, bingham [at] cs utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu