Volume 5, Issue 1 (Journal of Control, V.5, N.1 Spring 2011)                   JoC 2011, 5(1): 50-63 | Back to browse issues page

Derhami V, Mehrabi O. Action Value Function Approximation Based on Radial Basis Function Network for Reinforcement Learning. JoC. 2011; 5 (1) :50-63
URL: http://joc.kntu.ac.ir/article-1-95-en.html
Abstract:
One of the challenges encountered in applying classical reinforcement learning methods to real control problems is the curse of dimensionality. To overcome this difficulty, hybrid algorithms that combine reinforcement learning with various function approximators have attracted considerable research interest. In this paper, a novel Neural Reinforcement Learning (NRL) scheme based on Sarsa learning and a Radial Basis Function (RBF) network is proposed. The RBF network approximates the Action Value Function (AVF) on-line: its inputs are the state-action pairs of the system, and its output is the corresponding approximated AVF. As a necessary condition for the convergence of NRL to optimal task performance, the existence of stationary points of NRL that coincide with the fixed points of Approximate Action Value Iteration (AAVI) is proved. The validity of the proposed algorithm is tested on simulation examples: the mountain-car control task and the acrobot problem. The overall results demonstrate that the algorithm effectively improves convergence speed and the efficiency of experience exploitation.
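The core idea the abstract describes, Sarsa learning with an RBF network approximating the action-value function Q(s, a), can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy one-dimensional task, the Gaussian features, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a 1-D state space [0, 1] with Gaussian RBF centers.
n_centers = 20
centers = np.linspace(0.0, 1.0, n_centers)
width = 0.1                       # common Gaussian width (assumed)
n_actions = 2                     # 0 = move left, 1 = move right
alpha, gamma, eps = 0.1, 0.99, 0.1

# One weight vector per action: Q(s, a) = w[a] . phi(s)
w = np.zeros((n_actions, n_centers))

def phi(s):
    """Gaussian RBF features of a scalar state s."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

def q(s, a):
    return w[a] @ phi(s)

def epsilon_greedy(s):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([q(s, a) for a in range(n_actions)]))

def step(s, a):
    """Toy dynamics: +1 reward at the right edge, -1 at the left edge."""
    s2 = float(np.clip(s + (0.1 if a == 1 else -0.1), 0.0, 1.0))
    done = s2 <= 0.0 or s2 >= 1.0
    r = 1.0 if s2 >= 1.0 else (-1.0 if s2 <= 0.0 else 0.0)
    return s2, r, done

for episode in range(200):
    s, a, done, t = 0.5, None, False, 0
    a = epsilon_greedy(s)
    while not done and t < 200:
        s2, r, done = step(s, a)
        a2 = epsilon_greedy(s2)
        # Sarsa temporal-difference error using the RBF-approximated Q
        target = r if done else r + gamma * q(s2, a2)
        delta = target - q(s, a)
        w[a] += alpha * delta * phi(s)   # gradient step on the features
        s, a, t = s2, a2, t + 1
```

The RBF features make the update local: each weight change affects Q only near the visited state, which is one reason RBF networks are a common choice for on-line value-function approximation.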
Full-Text [PDF 538 kb]
Type of Article: Research paper | Subject: Special
Received: 2014/06/16 | Accepted: 2014/06/16 | Published: 2014/06/16

