Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization (2020)
As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and that doing so results in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance matrix adaptation evolution strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss-function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML.
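To make the representation concrete, below is a minimal Swift sketch (Swift matching the group's libraries listed below) of the kind of operator/leaf tree the abstract describes. The names `LossExpr`, `safeLog`, and `loss` are illustrative assumptions, not the paper's actual implementation.

```swift
import Foundation

// A minimal sketch of the kind of expression tree GLO evolves (hypothetical
// names, not from the paper's code). Leaves are the true label y, the
// prediction yHat, or a constant coefficient; internal nodes are primitive
// operators. Genetic programming recombines and mutates such trees, and the
// constant leaves are the coefficients CMA-ES would tune in the second phase.
indirect enum LossExpr {
    case y                        // true label (one component of a one-hot vector)
    case yHat                     // predicted probability for that component
    case constant(Double)        // coefficient, tuned in the second phase
    case add(LossExpr, LossExpr)
    case mul(LossExpr, LossExpr)
    case log(LossExpr)

    /// Evaluate the tree for a single (label, prediction) pair.
    func eval(y: Double, yHat: Double) -> Double {
        switch self {
        case .y:                 return y
        case .yHat:              return yHat
        case .constant(let c):   return c
        case .add(let a, let b): return a.eval(y: y, yHat: yHat) + b.eval(y: y, yHat: yHat)
        case .mul(let a, let b): return a.eval(y: y, yHat: yHat) * b.eval(y: y, yHat: yHat)
        case .log(let a):        return safeLog(a.eval(y: y, yHat: yHat))
        }
    }
}

/// Natural log clamped away from zero so arbitrary evolved trees stay finite.
func safeLog(_ x: Double) -> Double { log(max(x, 1e-12)) }

/// Mean of the per-class terms for one example.
func loss(_ expr: LossExpr, labels: [Double], predictions: [Double]) -> Double {
    zip(labels, predictions)
        .map { (label, p) in expr.eval(y: label, yHat: p) }
        .reduce(0, +) / Double(labels.count)
}

// Cross-entropy's per-class term, -1 * (y * log(yHat)), written as such a tree:
let crossEntropyTerm: LossExpr = .mul(.constant(-1), .mul(.y, .log(.yHat)))
```

With this encoding, the standard cross-entropy loss is just one point in the search space (`crossEntropyTerm` above); genetic operators such as subtree crossover and node mutation can then explore alternative structures, after which the `constant` leaves serve as the coefficient vector for CMA-ES.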
View: PDF
Citation: In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), 1-8, July 2020.
Presentation: Video
Santiago Gonzalez, Ph.D. Alumni, slgonzalez [at] utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu
SwiftCMA (2019): Download on GitHub
SwiftCMA is a pure-Swift implementation of Covariance Matrix Adaptation Evolution Strategy (CMA-ES).

SwiftGenetics (2019): Download on GitHub
SwiftGenetics is a genetic algorithm library written in Swift.