

1- Sahand University of Technology
Abstract:
In this paper, we present a novel model-free, non-iterative, and adaptive approach to the online design of a discrete-time linear quadratic regulator (LQR) with output feedback. Reinforcement-learning methods have previously been applied to this problem, but these iterative methods require sampling a relatively large amount of input and output data from the system, which raises the design cost. We introduce a method that reformulates the LQR problem as a semidefinite program with linear matrix inequality constraints, sampling input and output data only once over a very short time interval and reconstructing the states in a model-free manner. Moreover, by incorporating the Bellman equation into the proposed algorithm, we enable redesign of the controller to adapt to possible changes in the system dynamics. Finally, simulations demonstrate that the proposed algorithm solves the problem with significantly fewer data samples and a lower design cost than Q-learning algorithms. Additionally, by applying the algorithm to a fourth-order two-input system, we illustrate its applicability to more complex systems.
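The paper's model-free algorithm itself is not reproduced here. For context, the sketch below shows the classical model-based discrete-time LQR design that such methods aim to replace: it iterates the Riccati recursion to a fixed point and extracts the state-feedback gain. It assumes the system matrices A and B are known, which is precisely the requirement the paper's model-free approach removes; the double-integrator example system is hypothetical.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Model-based discrete-time LQR: iterate the Riccati recursion
    P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA to a fixed point,
    then return the optimal state-feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        # K = (R + B'PB)^{-1} B'PA, solved without explicit inversion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Equivalent compact form of the Riccati update
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical discretized double integrator (sampling time 0.1 s)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = dlqr_gain(A, B, Q, R)
# The closed-loop matrix A - BK should be Schur stable
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

In contrast, the abstract's approach replaces this Riccati iteration with a single semidefinite program under LMI constraints, built from one short batch of input-output data rather than from A and B.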
Type of Article: Research paper | Subject: General
Received: 2025/03/08 | Accepted: 2025/11/24 | ePublished ahead of print: 2026/03/26



Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.