Evolutionary Feature Evaluation for Online Reinforcement Learning (2013)
Most successful examples of Reinforcement Learning (RL) report the use of carefully designed features, that is, a representation of the problem state that facilitates effective learning. The best features cannot always be known in advance, creating the need to evaluate more features than will ultimately be chosen. This paper presents Temporal Difference Feature Evaluation (TDFE), a novel approach to the problem of feature evaluation in an online RL agent. TDFE combines value function learning by temporal difference methods with an evolutionary algorithm that searches the space of feature subsets, and outputs a ranking over all individual features. TDFE dynamically adjusts its ranking, avoids the sample complexity multiplier of many population-based approaches, and works with arbitrary feature representations. Online learning experiments are performed in the game of Connect Four, establishing (i) that the choice of features is critical, (ii) that TDFE can evaluate and rank all the available features online, and (iii) that the ranking can be used effectively as the basis of dynamic online feature selection.
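The abstract describes TDFE only at a high level: an evolutionary search over feature subsets whose results are aggregated into a ranking of individual features. As a rough illustration of that general idea (not the paper's actual algorithm), the toy sketch below evolves feature-subset bitmasks and ranks each feature by the mean fitness of the subsets that contained it. The `rank_features` helper, the fitness stand-in, and all parameters are hypothetical; in TDFE the fitness of a subset would come from temporal-difference learning performance, not a fixed formula.

```python
import random

def mutate(rng, subset, rate=0.1):
    """Flip each bit of a feature-subset bitmask with probability `rate`."""
    return [(not b) if rng.random() < rate else b for b in subset]

def rank_features(num_features, fitness, generations=30, pop_size=20, seed=0):
    """Toy sketch (not the paper's TDFE): evolve feature-subset bitmasks
    and rank individual features by the mean fitness of subsets that
    included them over the whole run."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(num_features)]
           for _ in range(pop_size)]
    credit = [0.0] * num_features  # total fitness credited to each feature
    count = [0] * num_features     # how often each feature appeared
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        for subset in scored:
            f = fitness(subset)
            for i, on in enumerate(subset):
                if on:
                    credit[i] += f
                    count[i] += 1
        # Truncation selection: keep the better half, refill with mutants.
        parents = scored[: pop_size // 2]
        pop = parents + [mutate(rng, p) for p in parents]
    score = [credit[i] / count[i] if count[i] else 0.0
             for i in range(num_features)]
    return sorted(range(num_features), key=lambda i: score[i], reverse=True)

# Hypothetical fitness: features 0 and 1 help, the rest only add cost.
fitness = lambda s: 10 * s[0] + 5 * s[1] - sum(s[2:])
ranking = rank_features(6, fitness)
```

In this sketch every subset evaluated during the search contributes evidence about its member features, so the per-feature ranking comes for free from the evolutionary run rather than requiring a separate evaluation pass per feature.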
Citation:
In Proceedings of the 2013 IEEE Conference on Computational Intelligence and Games (CIG 2013), pp. 267-275, 2013.
Julian Bishop, Ph.D. Student, julian [at] cs utexas edu
Risto Miikkulainen, Faculty, risto [at] cs utexas edu