The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings (2021)
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions.
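A minimal sketch of the core idea described in the abstract, not the authors' implementation: every variable, whether input or output, gets a learnable embedding in a shared space, and a single network predicts the value at a query variable's embedded location from the (embedding, value) pairs observed at other locations. All names here (ObserverSketch, n_variables, embed_dim, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ObserverSketch(nn.Module):
    """Illustrative variable-embedding predictor (assumed architecture, not the paper's exact one)."""

    def __init__(self, n_variables: int, embed_dim: int = 8, hidden: int = 64):
        super().__init__()
        # One learnable location per variable, trained jointly with the network weights.
        self.var_embed = nn.Embedding(n_variables, embed_dim)
        # Encode each (location, measured value) pair, then pool into a single context.
        self.encoder = nn.Sequential(
            nn.Linear(embed_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Decode: pooled context + query location -> predicted value at that location.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, obs_ids, obs_values, query_ids):
        # obs_ids: (batch, n_obs) indices of measured variables
        # obs_values: (batch, n_obs) their measured values
        # query_ids: (batch, n_query) indices of variables to predict
        obs_loc = self.var_embed(obs_ids)                          # (B, n_obs, D)
        pairs = torch.cat([obs_loc, obs_values.unsqueeze(-1)], dim=-1)
        context = self.encoder(pairs).mean(dim=1)                  # (B, H)
        query_loc = self.var_embed(query_ids)                      # (B, n_query, D)
        ctx = context.unsqueeze(1).expand(-1, query_loc.size(1), -1)
        return self.decoder(torch.cat([ctx, query_loc], dim=-1)).squeeze(-1)


# Usage: seemingly unrelated tasks can share this one model by assigning their
# variables disjoint indices; only the learned embeddings distinguish them.
model = ObserverSketch(n_variables=10)
obs_ids = torch.tensor([[0, 1, 2]])
obs_values = torch.randn(1, 3)
pred = model(obs_ids, obs_values, query_ids=torch.tensor([[3, 4]]))
print(pred.shape)  # torch.Size([1, 2])
```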
View:
PDF
Citation:
To appear in the International Conference on Learning Representations (ICLR), 2021.
Presentation:
Video
Elliot Meyerson (Ph.D. Alumni), ekm [at] cs utexas edu
Risto Miikkulainen (Faculty), risto [at] cs utexas edu