Authors
Tao Yu, Bin Zhou, Ka Wing Chan, Liang Chen, Bo Yang
Publication date
2011/2/4
Journal
IEEE Transactions on Power Systems
Volume
26
Issue
3
Pages
1272-1282
Publisher
IEEE
Description
This paper proposes a stochastic optimal relaxed control methodology based on reinforcement learning (RL) for automatic generation control (AGC) under NERC's control performance standards (CPS). A multi-step Q(λ) learning algorithm is introduced to handle the long time-delay control loop of AGC thermal plants in a non-Markov environment. The moving averages of CPS1/ACE are adopted as the state feedback input, and the CPS control and relaxed control objectives are formulated as a multi-criteria reward function via a linear weighted aggregate method. This optimal AGC strategy provides a customized platform for interactive self-learning rules that maximize the long-run discounted reward. Statistical experiments show that the RL-based Q(λ) controllers can effectively enhance the robustness and dynamic performance of AGC systems and reduce the number of pulses and pulse …
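The core technique named in the abstract, multi-step Q(λ) learning with a linearly weighted multi-criteria reward, can be sketched as follows. This is a minimal illustration on a toy discrete chain, not the paper's AGC controller: the environment dynamics, the two reward criteria, and the weight vector are all hypothetical stand-ins (the paper's actual state input is the moving average of CPS1/ACE, and its rewards encode CPS compliance and relaxed-control objectives).

```python
import random

def q_lambda_control(n_states=4, n_actions=3, episodes=200,
                     alpha=0.1, gamma=0.9, lam=0.8, eps=0.1,
                     weights=(0.7, 0.3), seed=0):
    """Watkins' Q(lambda) with eligibility traces on a toy chain.

    The reward is a linear weighted aggregate of two hypothetical
    criteria, mimicking the multi-criteria reward construction
    described in the abstract.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        # eligibility traces, reset at the start of each episode
        E = [[0.0] * n_actions for _ in range(n_states)]
        s = rng.randrange(n_states)
        for _ in range(50):
            # epsilon-greedy action selection
            greedy = max(range(n_actions), key=lambda x: Q[s][x])
            a = rng.randrange(n_actions) if rng.random() < eps else greedy
            # toy transition: the action nudges the state index by -1/0/+1
            s2 = max(0, min(n_states - 1, s + a - 1))
            # two toy reward criteria, linearly aggregated by `weights`
            r1 = -abs(s2 - (n_states - 1))   # track a target state
            r2 = -abs(a - 1)                 # penalize large control moves
            r = weights[0] * r1 + weights[1] * r2
            # multi-step (trace-based) Q-learning update
            a_star = max(range(n_actions), key=lambda x: Q[s2][x])
            delta = r + gamma * Q[s2][a_star] - Q[s][a]
            E[s][a] += 1.0
            # Watkins' variant: traces decay after greedy actions,
            # and are cut to zero after exploratory ones
            decay = gamma * lam if a == greedy else 0.0
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * E[si][ai]
                    E[si][ai] *= decay
            s = s2
    return Q
```

The eligibility traces are what give the method its "multi-step" character: a single temporal-difference error updates all recently visited state-action pairs, which helps propagate credit across the long delay between a control pulse and its observed effect.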