Neural Networks Research Group
Data Augmentation for Deep Transfer Learning (2019)
Cameron R. Wolfe and Keld T. Lundgaard
Current approaches to deep learning are beginning to rely heavily on transfer learning as an effective method for reducing overfitting, improving model performance, and quickly learning new tasks. Similarly, such pre-trained models are often used to create embedding representations for various types of data, such as text and images, which can then be fed as input into separate, downstream models. However, in cases where such transfer learning models perform poorly (e.g., on data outside the training distribution), one must resort to fine-tuning such models, or even retraining them completely. Currently, no form of data augmentation has been proposed that can be applied directly to embedding inputs to improve downstream model performance. In this work, we introduce four new types of data augmentation that are generally applicable to embedding inputs, thus making them useful in both Natural Language Processing (NLP) and Computer Vision (CV) applications. For models trained on downstream tasks with such embedding inputs, these augmentation methods are shown to improve the models' AUC score from 0.9582 to 0.9812 and to significantly increase the models' ability to identify classes of data that are not seen during training.
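The four augmentation methods themselves are described in the thesis rather than on this page. As a rough illustrative sketch only, the snippet below shows the general idea of augmenting fixed embedding inputs before training a downstream model; the two transformations shown (additive Gaussian noise and within-class interpolation) and all names and parameters are assumptions for illustration, not the thesis's specific methods.

import numpy as np

def augment_embeddings(embeddings, labels, noise_std=0.01, mix_alpha=0.2, rng=None):
    """Illustrative embedding-space augmentation (not the thesis's exact methods).

    embeddings: (N, D) array of pre-trained embedding vectors.
    labels:     (N,) array of class labels.
    Returns augmented embeddings with matching labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = embeddings.shape

    # 1) Additive Gaussian noise: small perturbations around each embedding.
    noisy = embeddings + rng.normal(scale=noise_std, size=(n, d))

    # 2) Within-class interpolation: mix each embedding with another example
    #    of the same class, so the class label is preserved.
    mixed = embeddings.copy()
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        partners = rng.permutation(idx)
        lam = rng.beta(mix_alpha, mix_alpha, size=(len(idx), 1))
        mixed[idx] = lam * embeddings[idx] + (1.0 - lam) * embeddings[partners]

    aug_embeddings = np.concatenate([noisy, mixed], axis=0)
    aug_labels = np.concatenate([labels, labels], axis=0)
    return aug_embeddings, aug_labels

The augmented embeddings and labels would then be concatenated with the originals and fed to the downstream classifier in place of the raw embedding inputs.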
View: PDF
Citation:
Cameron R. Wolfe and Keld T. Lundgaard. Data Augmentation for Deep Transfer Learning, December 2019.
Bibtex:
@misc{wolfe:thesis,
  title={Data Augmentation for Deep Transfer Learning},
  author={Cameron R. Wolfe and Keld T. Lundgaard},
  month={December},
  year={2019},
  url={http://nn.cs.utexas.edu/?wolfethesis}
}
People
Cameron R. Wolfe
Undergraduate Alumni
wolfe cameron [at] utexas edu
Areas of Interest
Natural Language Processing (Cognitive)