Optimizing Loss Functions Through Multivariate Taylor Polynomial Parameterization (2020)
Metalearning of deep neural network (DNN) architectures and hyperparameters has become an increasingly important area of research. Loss functions are a type of metaknowledge that is crucial to effective training of DNNs; however, their potential role in metalearning has not yet been fully explored. Whereas early work focused on genetic programming (GP) over tree representations, this paper proposes continuous CMA-ES optimization of multivariate Taylor polynomial parameterizations. This approach, TaylorGLO, makes it possible to represent and search for useful loss functions more effectively. On the MNIST and CIFAR-10 benchmarks, TaylorGLO finds new loss functions that outperform both functions previously discovered through GP and the standard cross-entropy loss, in fewer generations. These functions regularize the learning task by discouraging overfitting to the labels, which is particularly useful when training data is limited. The results thus demonstrate that loss function optimization is a productive new avenue for metalearning.
arXiv:2002.00059, 2020.
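
To make the parameterization concrete, below is a minimal Swift sketch of a loss function represented as a third-order bivariate Taylor polynomial in a per-class prediction and target, as described in the abstract. The 12-entry parameter layout (an expansion center plus one coefficient per monomial), the `TaylorLoss` type, and its member names are illustrative assumptions, not the paper's exact formulation.

```swift
import Foundation

// Sketch: a candidate loss function encoded as a third-order Taylor polynomial
// in the per-class prediction p and target t, expanded around a learned center
// (theta[0], theta[1]) with learned coefficients theta[2...11]. The flat
// parameter vector theta is what a black-box optimizer such as CMA-ES searches.
struct TaylorLoss {
    var theta: [Double]  // [center p, center t, 10 monomial coefficients]

    // Evaluates the polynomial at one (prediction, target) pair.
    func term(_ p: Double, _ t: Double) -> Double {
        let a = p - theta[0]  // offset from the expansion center in p
        let b = t - theta[1]  // offset from the expansion center in t
        return theta[2]
             + theta[3] * a + theta[4] * b
             + theta[5] * a * a + theta[6] * a * b + theta[7] * b * b
             + theta[8] * a * a * a + theta[9] * a * a * b
             + theta[10] * a * b * b + theta[11] * b * b * b
    }

    // Mean loss over one sample's class-probability vector and one-hot target.
    func loss(predictions: [Double], targets: [Double]) -> Double {
        precondition(predictions.count == targets.count)
        let total = zip(predictions, targets).reduce(0.0) { $0 + term($1.0, $1.1) }
        return total / Double(predictions.count)
    }
}

// Example: evaluate one randomly initialized candidate on a single sample.
let candidate = TaylorLoss(theta: (0..<12).map { _ in Double.random(in: -1...1) })
let value = candidate.loss(predictions: [0.7, 0.2, 0.1], targets: [1, 0, 0])
print("Candidate loss value:", value)
```

Because every coefficient is learned, the overall sign and scale of the polynomial need not be fixed in advance; the optimizer discovers them along with the rest of the parameter vector.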

Santiago Gonzalez (Ph.D. Student), slgonzalez [at] utexas.edu
Risto Miikkulainen (Faculty), risto [at] cs.utexas.edu
SwiftCMA (download on GitHub)

SwiftCMA is a pure-Swift implementation of CMA-ES (Covariance Matrix Adaptation Evolution Strategy), the black-box optimizer that TaylorGLO uses to search loss function parameterizations.
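
For a sense of the ask-evaluate-update loop that a CMA-ES library drives, here is a deliberately simplified sketch in Swift: a (mu/mu, lambda) evolution strategy with a fixed isotropic Gaussian search distribution. Full CMA-ES additionally adapts the covariance matrix and step size. All names here are illustrative and do not reflect SwiftCMA's actual API; see the repository for that.

```swift
import Foundation

// Standard normal sample via the Box-Muller transform.
func gaussianSample() -> Double {
    let u1 = Double.random(in: Double.ulpOfOne..<1.0)
    let u2 = Double.random(in: 0.0..<1.0)
    return sqrt(-2.0 * log(u1)) * cos(2.0 * Double.pi * u2)
}

/// Minimizes `objective` starting from `start` with a fixed-step,
/// isotropic (mu/mu, lambda) evolution strategy.
func simpleES(start: [Double], sigma: Double, lambda: Int, generations: Int,
              objective: ([Double]) -> Double) -> [Double] {
    var mean = start
    let mu = max(1, lambda / 2)  // number of parents kept each generation
    for _ in 0..<generations {
        // Ask: sample lambda candidates around the current mean.
        let candidates = (0..<lambda).map { _ in
            mean.map { $0 + sigma * gaussianSample() }
        }
        // Evaluate once per candidate and rank (lower objective is better).
        let ranked = candidates
            .map { (fitness: objective($0), x: $0) }
            .sorted { $0.fitness < $1.fitness }
        // Update: move the mean to the average of the best mu candidates.
        mean = (0..<mean.count).map { d in
            ranked.prefix(mu).reduce(0.0) { $0 + $1.x[d] } / Double(mu)
        }
    }
    return mean
}

// Example: minimize the 4-dimensional sphere function (minimum at the origin).
let best = simpleES(start: [2.0, -1.5, 0.5, 3.0], sigma: 0.3,
                    lambda: 16, generations: 200) { x in
    x.reduce(0.0) { $0 + $1 * $1 }
}
print("Approximate minimum:", best)
```

In a TaylorGLO-style setup, the objective closure would instead train a network with the candidate Taylor-polynomial loss and return its validation error, making each evaluation far more expensive than this toy example.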