Deep Reinforcement Learning with Double Q-Learning

Proceedings of the AAAI Conference on Artificial Intelligence, Volume 30 (1) – Mar 2, 2016

Abstract

The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
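
As a brief illustration of the idea summarized above (a sketch of the decoupling the abstract describes, not text quoted from it; the notation \theta_t for the online network and \theta_t^- for the lagged target network follows common DQN convention and is assumed here), the standard DQN target evaluates the maximizing next action with the same parameters that selected it, whereas the Double variant selects the action with the online network and evaluates it with the target network:

Y_t^{DQN} = R_{t+1} + \gamma \, \max_a Q(S_{t+1}, a; \theta_t^{-})

Y_t^{Double} = R_{t+1} + \gamma \, Q\big(S_{t+1}, \arg\max_a Q(S_{t+1}, a; \theta_t); \theta_t^{-}\big)

Because the action chosen by the online network is scored by an independently lagged set of parameters, a single network's upward estimation errors no longer compound directly into the target, which is how the adaptation curbs the overestimations discussed in the abstract.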


Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
ISSN
2374-3468
DOI
10.1609/aaai.v30i1.10295

Published: Mar 2, 2016