Transfer Learning
Traditional machine learning algorithms assume that learning for each new task starts from scratch, disregarding any knowledge gained in previous domains. If the domains encountered during learning are related, this tabula rasa approach wastes both data and computation rediscovering hypotheses that could have been recovered by examining, and perhaps slightly modifying, previously acquired knowledge. Moreover, knowledge learned in earlier domains can capture generally valid rules that are not easily recoverable from small amounts of data, allowing the algorithm to reach higher accuracy than it could starting from scratch.

The field of transfer learning, which has grown greatly in popularity in recent years, addresses how to leverage previously acquired knowledge to improve the efficiency and accuracy of learning in a new domain that is related to the original one. In particular, our current research focuses on developing transfer learning techniques for Markov Logic Networks (MLNs), a recently developed approach to statistical relational learning.
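To make the general idea concrete, the sketch below shows one common form of transfer, parameter transfer, in plain Python: a simple logistic-regression model is trained on a data-rich source task, and its learned weights are then used to initialize training on a small, related target task instead of starting from zero. This is a generic, hypothetical illustration only; it is not the MLN transfer methods studied in this project.

```python
# Minimal, hypothetical sketch of parameter-based transfer learning with NumPy.
# Not the MLN techniques described above; just a generic illustration of
# reusing source-task weights to initialize learning on a related target task.
import numpy as np

def train_logreg(X, y, w_init, lr=0.1, epochs=200):
    """Batch gradient descent for logistic regression from a given start point."""
    w = w_init.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on log-loss
    return w

rng = np.random.default_rng(0)

# Source task: plenty of data generated from some underlying weight vector.
w_true = np.array([2.0, -1.0, 0.5])
X_src = rng.normal(size=(1000, 3))
y_src = (X_src @ w_true + 0.1 * rng.normal(size=1000) > 0).astype(float)
w_src = train_logreg(X_src, y_src, w_init=np.zeros(3))

# Target task: a related (slightly shifted) concept, but only a few examples.
X_tgt = rng.normal(size=(20, 3))
y_tgt = (X_tgt @ (w_true + 0.3) > 0).astype(float)

# Transfer: start target-task training from the source weights
# rather than from scratch, given the same small training budget.
w_scratch = train_logreg(X_tgt, y_tgt, w_init=np.zeros(3), epochs=20)
w_transfer = train_logreg(X_tgt, y_tgt, w_init=w_src, epochs=20)
print("from scratch :", w_scratch)
print("with transfer:", w_transfer)
```

With only a handful of target examples and a few training steps, the transferred initialization typically starts much closer to a good solution than the zero initialization, which is the intuition behind reusing knowledge from related domains.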

Our research in the area is currently sponsored by the Defense Advanced Research Projects Agency (DARPA) and managed by the Air Force Research Laboratory (AFRL) under contract FA8750-05-2-0283.

Object-Model Transfer in the General Video Game Domain. Alexander Braylan and Risto Miikkulainen. To appear in Proceedings of the Twelfth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE 2016), 2016.

Transfer of Evolved Pattern-Based Heuristics in Games. Erkin Bahceci and Risto Miikkulainen. In IEEE Symposium on Computational Intelligence and Games (CIG 2008), 220-227, Perth, Australia, 2008.

Erkin Bahceci, Ph.D. Alumni, erkin [at] cs utexas edu
Alexander Braylan, braylan [at] cs utexas edu