


(Department of Mechanical Engineering and Materials Science, Duke University)
Received: November 09, 2010    Revised: March 23, 2011
Foundation item: This work was supported by the National Science Foundation (No. ECS 0925407).
A model-based approximate λ-policy iteration approach to online evasive path planning and the video game Ms. Pac-Man
Abstract: This paper presents a model-based approximate λ-policy iteration approach using temporal differences for optimizing paths online in a pursuit-evasion problem, where an agent must visit several target positions within a region of interest while avoiding one or more actively pursuing adversaries. The method is relevant to applications such as robotic path planning, mobile-sensor deployment, and path exposure. The methodology uses cell decomposition to construct a decision tree and implements a temporal-difference-based approximate λ-policy iteration to combine online learning with prior knowledge through modeling, with the objectives of minimizing the risk of capture by an adversary and maximizing the reward associated with visiting target locations. Online learning and frequent decision-tree updates allow the algorithm to adapt quickly to unexpected adversary movements or dynamic environments. The approach is illustrated through a modified version of the video game Ms. Pac-Man, which is shown to be a benchmark example of the pursuit-evasion problem. The results show that the approach presented in this paper outperforms several other methods as well as most human players.
Key words: approximate dynamic programming; reinforcement learning; path planning; pursuit-evasion games
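The abstract's core loop, alternating temporal-difference policy evaluation with greedy policy improvement, can be sketched in tabular form. The toy problem below (a one-dimensional corridor with a single target, no adversary) and all function names are illustrative assumptions, not the paper's actual decision-tree formulation; the sketch only shows how TD(λ) eligibility traces drive an approximate λ-policy iteration.

```python
import random

random.seed(0)

# Toy corridor: states 0..N-1 on a line, state N-1 is the target
# (terminal, reward 1). This stands in for the paper's cell-decomposed
# environment; constants and names are illustrative.
N = 6
GAMMA = 0.9    # discount factor
LAM = 0.8      # the lambda in TD(lambda)
ALPHA = 0.1    # learning rate
ACTIONS = (-1, +1)   # move left or right

def step(s, a):
    """Deterministic transition; reward 1 on reaching the target."""
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

def td_lambda_evaluate(policy, episodes=200, max_steps=50):
    """Estimate the value function V under `policy` with TD(lambda):
    eligibility traces spread each TD error back over recent states."""
    V = [0.0] * N
    for _ in range(episodes):
        e = [0.0] * N                 # eligibility trace per state
        s = random.randrange(N - 1)   # exploring starts
        for _ in range(max_steps):
            a = policy[s]
            s2, r, done = step(s, a)
            delta = r + (0.0 if done else GAMMA * V[s2]) - V[s]
            e[s] += 1.0
            for x in range(N):
                V[x] += ALPHA * delta * e[x]
                e[x] *= GAMMA * LAM   # decay all traces
            s = s2
            if done:
                break
    return V

def greedy_policy(V):
    """One policy-improvement step: one-step lookahead on V."""
    pol = [+1] * N
    for s in range(N - 1):
        pol[s] = max(ACTIONS,
                     key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
    return pol

# Approximate lambda-policy iteration: alternate TD(lambda) evaluation
# with greedy improvement, starting from a deliberately bad policy.
policy = [-1] * N
for _ in range(6):
    V = td_lambda_evaluate(policy)
    policy = greedy_policy(V)
```

After a handful of iterations the policy moves right from every state, even though the initial policy never reaches the target; the improvement propagates backward one state per iteration, which is the online-learning behavior the abstract relies on for adapting to changing conditions.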