Volume 14, Issue 4 (Journal of Control, V.14, N.4 Winter 2021) | JoC 2021, 14(4): 13-23



Khoshroo S A, Khasteh S H. Increase the speed of the DQN learning process with the Eligibility Traces. JoC. 2021; 14(4): 13-23
URL: http://joc.kntu.ac.ir/article-1-668-en.html
1- K. N. Toosi University of Technology
Abstract:
To accelerate learning in high-dimensional problems, temporal-difference (TD) methods such as Q-learning or SARSA are usually combined with the mechanism of eligibility traces. The recently introduced DQN algorithm uses deep neural networks within Q-learning, enabling reinforcement learning algorithms to reach a richer understanding of the visual world and to address problems that were previously considered intractable. However, DQN, a deep reinforcement learning algorithm, has a low learning speed. In this paper, we combine the mechanism of eligibility traces, one of the basic methods in reinforcement learning, with deep neural networks to speed up the learning process. To compare efficiency with the DQN algorithm, we tested the method on a number of Atari 2600 games; the experimental results show that the proposed method significantly reduces learning time compared to DQN and converges faster to the optimal model.
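To illustrate the eligibility-trace mechanism the abstract builds on, the sketch below shows the classic tabular update (accumulating traces with decay factor gamma*lambda, as in standard TD(lambda)). This is a minimal illustrative example, not the paper's deep-network method; the toy trajectory and all parameter values are assumptions chosen for clarity. A single TD error at the rewarded transition updates every recently visited state-action pair at once, weighted by recency, which is the source of the speed-up the paper seeks to transfer to DQN.

```python
import numpy as np

# Tabular eligibility-trace sketch (accumulating traces).
# Hypothetical toy problem: 4 states, 2 actions, one rewarded transition.
n_states, n_actions = 4, 2
alpha, gamma, lam = 0.5, 0.9, 0.8

Q = np.zeros((n_states, n_actions))
E = np.zeros_like(Q)  # eligibility traces, one per state-action pair

# Visit three state-action pairs in sequence (reward 0 along the way).
trajectory = [(0, 1), (1, 1), (2, 1)]
for s, a in trajectory:
    E *= gamma * lam   # decay all existing traces
    E[s, a] += 1.0     # bump the trace of the pair just visited

# Terminal transition yields reward 1.0; since Q is all zeros here,
# the TD error is simply delta = 1.0. One error updates all visited
# pairs in proportion to their trace, instead of only the last pair.
delta = 1.0
Q += alpha * delta * E

print(np.round(Q[:, 1], 4))  # earlier pairs get geometrically smaller credit
```

Without traces, only `Q[2, 1]` would change on this step; with traces, credit propagates back through the whole trajectory in a single update.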
Full-Text [PDF 581 kb]
Type of Article: Research paper | Subject: General
Received: 2019/05/13 | Accepted: 2020/01/09 | ePublished ahead of print: 2020/10/05 | Published: 2021/02/19


© 2021 CC BY-NC 4.0 | Journal of Control
