On the Cross-Domain Reusability of Neural Modules for General Video Game Playing (2015)
We consider a general approach to knowledge transfer in which an agent learning with a neural network adapts how it reuses existing networks as it learns in a new domain. Networks trained in a new domain improve performance by selectively routing activation through previously learned neural structure, regardless of how or for what that structure was originally learned. We present a neuroevolution implementation of this approach and apply it to reinforcement learning domains. The approach is more general than previous transfer methods for reinforcement learning: it is domain-agnostic and requires no prior assumptions about task relatedness or inter-task mappings. We analyze the method's performance and applicability in high-dimensional Atari 2600 general video game playing.
In IJCAI'15 Workshop on General Intelligence in Game-Playing Agents, pp. 7--14, 2015.

Alexander Braylan braylan [at] cs utexas edu
Elliot Meyerson Ph.D. Alumnus ekm [at] cs utexas edu
Risto Miikkulainen Faculty risto [at] cs utexas edu