Problem solving with reinforcement learning by Gavin Adrian Rummery


Category: Other

ΠŸΠΎΠ΄Π΅Π»ΠΈΡ‚ΡŒΡΡ:
This thesis is concerned with practical issues surrounding the application of reinforcement learning techniques to tasks that take place in high-dimensional continuous state-space environments. In particular, the extension of on-line updating methods is considered, where the term refers to systems that learn as each experience arrives, rather than storing the experiences for use in a separate off-line learning phase. Firstly, the use of alternative update rules in place of standard Q-learning (Watkins 1989) is examined to provide faster convergence rates. Secondly, the use of multi-layer perceptron (MLP) neural networks (Rumelhart, Hinton and Williams 1986) is investigated to provide suitable generalising function approximators. Finally, consideration is given to the combination of Adaptive Heuristic Critic (AHC) methods and Q-learning to produce systems combining the benefits of real-valued actions and discrete switching.

The different update rules examined are based on Q-learning combined with the TD(Ξ») algorithm (Sutton 1988). Several new algorithms, including Modified Q-Learning and Summation Q-Learning, are examined, as well as alternatives such as Q(Ξ») (Peng and Williams 1994). In addition, algorithms are presented for applying these Q-learning updates to train MLPs on-line during trials, as opposed to the backward-replay method used by Lin (1993b), which requires waiting until the end of each trial before updating can occur.

The performance of the update rules is compared on the Race Track problem of Barto, Bradtke and Singh (1993) using a lookup-table representation for the Q-function. Some of the methods are found to perform almost as well as Real-Time Dynamic Programming, despite the fact that the latter has the advantage of a full world model. The performance of the connectionist algorithms is compared on a larger and more complex robot navigation problem, in which a simulated mobile robot is trained to guide itself to a goal position in the presence of obstacles. The robot must rely on limited sensory feedback from its surroundings and make decisions that can be generalised to arbitrary layouts of obstacles. These simulations show that the performance of on-line learning algorithms is less sensitive to the choice of training parameters than backward-replay, and that the alternative Q-learning rules of Modified Q-Learning and Q(Ξ») are more robust than standard Q-learning updates.

Finally, a combination of real-valued AHC and Q-learning, called Q-AHC learning, is presented, and various architectures are compared in performance on the robot problem. The resulting reinforcement learning system has the properties of providing on-line training, parallel computation, and generalising function approximation.
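Since the abstract names the specific update rules being compared, a small illustrative sketch may help: the code below contrasts the standard Q-learning update (Watkins 1989) with the Modified Q-Learning rule examined in the thesis (the on-policy update later widely known as SARSA), in a simple lookup-table setting. The parameter values, state and action names, and the toy usage at the end are assumptions made for illustration, not code or settings from the thesis.

    import random
    from collections import defaultdict

    ALPHA = 0.1   # learning rate (assumed value, not from the thesis)
    GAMMA = 0.95  # discount factor (assumed value, not from the thesis)

    def q_learning_update(Q, s, a, r, s_next, actions):
        """Standard Q-learning: bootstrap on the greedy next action."""
        target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

    def modified_q_learning_update(Q, s, a, r, s_next, a_next):
        """Modified Q-Learning: bootstrap on the action the policy actually takes next."""
        target = r + GAMMA * Q[(s_next, a_next)]
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

    # Toy usage on a two-state, two-action problem (purely illustrative).
    actions = [0, 1]
    Q_std, Q_mod = defaultdict(float), defaultdict(float)
    s, a, r, s_next = "A", 0, 1.0, "B"
    a_next = random.choice(actions)  # next action chosen by the behaviour policy
    q_learning_update(Q_std, s, a, r, s_next, actions)
    modified_q_learning_update(Q_mod, s, a, r, s_next, a_next)
    print(dict(Q_std), dict(Q_mod))

The only difference between the two rules is the bootstrap term: the maximum over the next state's action values versus the value of the action actually selected by the behaviour policy, which is what makes the modified rule on-policy.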


