Effective Regularization Through Loss-Function Metalearning (2021)
Loss-function metalearning can be used to discover novel, customized loss functions for deep neural networks, resulting in improved performance, faster training, and better data utilization. A likely explanation is that such functions discourage overfitting, leading to effective regularization. This paper demonstrates theoretically that this is indeed the case for the TaylorGLO method: decomposition of the learning rules makes it possible to characterize the training dynamics and to show that the loss functions evolved by TaylorGLO balance a pull toward zero error against a push away from it to avoid overfitting. This observation leads to an invariant that can be utilized to make the metalearning process more efficient in practice and to produce networks that are robust against adversarial attacks. Loss-function optimization can thus be seen as a well-founded new aspect of metalearning in neural networks.
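For concreteness, TaylorGLO represents candidate loss functions as multivariate Taylor polynomials in the target and the network's prediction, with the polynomial parameters tuned by the metalearner. The sketch below is a minimal, hypothetical illustration of such a parameterized loss; the function name, parameter layout, and third-order form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def taylorglo_style_loss(y_true, y_pred, theta):
    """Illustrative third-order Taylor-polynomial loss, parameterized by theta.

    y_true, y_pred: arrays of shape (n_samples, n_classes)
    theta: 8 values -- two expansion centers plus six polynomial
           coefficients -- which the metalearner would evolve.
    """
    c_y, c_h, a1, a2, a3, a4, a5, a6 = theta
    dy = y_true - c_y          # deviation of the label from its center
    dh = y_pred - c_h          # deviation of the prediction from its center
    # Per-class polynomial terms up to third order in (dy, dh).
    terms = (a1 * dh + a2 * dh**2 + a3 * dh**3
             + a4 * dy * dh + a5 * dy * dh**2 + a6 * dy**2 * dh)
    # Average over classes and samples, as in a standard per-sample loss.
    return -np.mean(np.sum(terms, axis=-1))

# Example: evaluate one candidate parameter vector on a small batch.
rng = np.random.default_rng(0)
y_true = np.eye(3)[rng.integers(0, 3, size=4)]   # one-hot labels
y_pred = rng.dirichlet(np.ones(3), size=4)       # softmax-like outputs
theta = rng.normal(size=8)                       # candidate loss parameters
print(taylorglo_style_loss(y_true, y_pred, theta))
```

With a parameterization like this, different settings of the coefficients can either pull predictions toward the labels or push them away, which is the balance the paper analyzes.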
View:
PDF
Citation:
In arXiv:2010.00788, 2021.
Santiago Gonzalez Ph.D. Alumni slgonzalez [at] utexas edu
Risto Miikkulainen Faculty risto [at] cs utexas edu