Transfer of learning by composing solutions of elemental sequential tasks


Machine Learning, Volume 8 (4) – 1992


References (30)

Publisher
Springer Journals
Copyright
Subject
Computer Science; Artificial Intelligence; Control, Robotics, Mechatronics; Simulation and Modeling; Natural Language Processing (NLP)
ISSN
0885-6125
eISSN
1573-0565
DOI
10.1007/BF00992700

Abstract

Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focused on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm.
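To make the reuse idea concrete, here is a minimal sketch, not the paper's actual modular architecture: elemental tasks are solved once with tabular Q-learning, and a composite task is then handled by running the elemental greedy policies in sequence. The 1-D corridor environment, the reward values, and the names GridTask, q_learn, and run_composite are illustrative assumptions, and the decomposition is handed to the composite controller here, whereas the paper's agent must learn it.

import random
from collections import defaultdict

class GridTask:
    """Elemental SDT: reach `goal` on a 1-D corridor of `n` cells."""
    def __init__(self, n, goal):
        self.n, self.goal = n, goal
    def step(self, s, a):                 # a in {-1, +1}
        s2 = min(max(s + a, 0), self.n - 1)
        done = (s2 == self.goal)
        return s2, (1.0 if done else -0.01), done

def q_learn(task, episodes=2000, alpha=0.5, gamma=0.95, eps=0.1):
    """Learn a tabular Q-function for one elemental task."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.randrange(task.n)
        for _ in range(4 * task.n):
            a = random.choice((-1, 1)) if random.random() < eps else \
                max((-1, 1), key=lambda a_: Q[(s, a_)])
            s2, r, done = task.step(s, a)
            target = r if done else r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

def run_composite(tasks, Qs, s, max_steps=200):
    """Act on a composite SDT by running each elemental greedy policy in turn,
    switching modules when the current elemental subgoal is reached.
    (The decomposition is assumed known here; the paper learns it.)"""
    trajectory = [s]
    for task, Q in zip(tasks, Qs):
        while s != task.goal and len(trajectory) < max_steps:
            a = max((-1, 1), key=lambda a_: Q[(s, a_)])
            s, _, _ = task.step(s, a)
            trajectory.append(s)
    return trajectory

elemental = [GridTask(10, goal=7), GridTask(10, goal=2)]   # two elemental SDTs
Qs = [q_learn(t) for t in elemental]                       # learned once...
print(run_composite(elemental, Qs, s=0))                   # ...reused for a composite task

The point of the sketch is the cost profile the abstract describes: the composite task requires no new Q-learning at all, only the cheap step of sequencing (or, in the paper, slightly modifying) the already-learned elemental solutions.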

Journal

Machine Learning, Springer Journals

Published: 1992
