Improving Deep Learning Through Loss-Function Evolution (2020)
As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This dissertation tackles a new type of metalearning: loss-function optimization. Loss functions define a model's core training objective and thus present a clear opportunity for metalearning. Two techniques, GLO and TaylorGLO, were developed to tackle this metalearning problem using genetic programming and evolution strategies. Experiments show that neural networks trained with metalearned loss functions are more accurate, make better use of limited data, train faster, and are more robust to adversarial attacks. A theoretical framework was developed to analyze how and why different loss functions bias training towards different regions of the parameter space. Using this framework, the metalearned loss functions' performance gains are found to result from a regularizing effect that is tailored to each domain. Overall, this dissertation demonstrates that new, metalearned loss functions can result in better-trained models, and provides the next stepping stone towards fully automated machine learning.
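To make the second approach concrete, the sketch below shows one way a loss function can be expressed as a small set of tunable coefficients over polynomial terms in the target label and the predicted probability, in the spirit of TaylorGLO's parameterized losses. The function name, the particular terms, the coefficient vector, and the toy comparison at the end are illustrative assumptions, not the dissertation's actual parameterization; in the real method the coefficients would be evolved (e.g., with an evolution strategy) and each candidate scored by training a network.

```python
# Hypothetical sketch of a parameterized loss function for loss-function metalearning.
# The coefficient vector `theta` plays the role of the evolved parameters; scoring a
# candidate would normally involve (partially) training a network with this loss.
import numpy as np

def parameterized_loss(theta, y_true, y_pred, eps=1e-7):
    """Loss expressed as a low-order polynomial in (y_true, y_pred).

    theta  : array of 5 polynomial coefficients (the evolved parameters).
    y_true : one-hot targets, shape (n, k).
    y_pred : predicted probabilities, shape (n, k).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    terms = np.stack([
        np.ones_like(y_pred),      # constant term
        y_pred,                    # linear in the prediction
        y_pred ** 2,               # quadratic in the prediction
        y_true * y_pred,           # target/prediction cross term
        y_true * y_pred ** 2,      # higher-order cross term
    ], axis=-1)
    # Mean over samples and classes of the coefficient-weighted polynomial.
    return float(np.mean(terms @ theta))

# Toy check with hand-picked (hypothetical) coefficients: the polynomial acts as a
# penalty that is lower when predictions are closer to the one-hot targets.
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
good = np.array([[0.9, 0.1], [0.2, 0.8]])
bad  = np.array([[0.4, 0.6], [0.7, 0.3]])
theta = np.array([0.0, 1.0, 0.0, -2.0, 0.0])
print(parameterized_loss(theta, y_true, good) < parameterized_loss(theta, y_true, bad))  # True
```

In this setup, changing `theta` changes the shape of the training objective itself, which is what a search method such as genetic programming (over loss expressions) or an evolution strategy (over coefficients) would optimize.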
Citation:
PhD Thesis, Department of Computer Science, The University of Texas at Austin, 2020.
Santiago Gonzalez, Ph.D. Alumni, slgonzalez [at] utexas.edu