Citation:
Ketaki KULKARNI, Abhijit GOSAVI, Susan MURRAY, Katie GRANTHAM. Semi-Markov adaptive critic heuristics with application to airline revenue management [J]. Control Theory and Technology, 2011, 9(3): 421-430.



Received: July 10, 2010    Revised: March 16, 2011
Funding: This work was supported by the National Science Foundation (No. ECS0841055).
Semi-Markov adaptive critic heuristics with application to airline revenue management
Ketaki KULKARNI, Abhijit GOSAVI, Susan MURRAY, Katie GRANTHAM
(Department of Engineering Management and Systems Engineering, Missouri University of Science and Technology)
Abstract:
The adaptive critic heuristic has been a popular algorithm in reinforcement learning (RL) and approximate dynamic programming (ADP) alike; it is one of the first RL and ADP algorithms. RL and ADP algorithms are particularly useful for solving Markov decision processes (MDPs) that suffer from the curses of dimensionality and modeling. Many real-world problems, however, are semi-Markov decision processes (SMDPs), in which the time spent in each transition of the underlying Markov chain is itself a random variable. Unfortunately, for the average reward case, unlike the discounted reward case, algorithms for the MDP do not extend easily to the SMDP. Examples of SMDPs arise in supply chain management, maintenance management, and airline revenue management. In this paper, we propose an adaptive critic heuristic for the SMDP under the long-run average reward criterion. We present a convergence analysis of the algorithm showing that, under certain mild conditions that can be ensured within a simulator, the algorithm converges to an optimal solution with probability 1. We test the algorithm extensively on a problem of airline revenue management in which the manager must set prices for airline tickets over the booking horizon. The problem is large-scale and suffers from the curse of dimensionality, so it is difficult to solve via classical methods of dynamic programming. Our numerical results are encouraging and show that the algorithm outperforms an existing heuristic widely used in the airline industry.
Key words:  Adaptive critics; Actor critics; Semi-Markov; Approximate dynamic programming; Reinforcement learning
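For readers who want a concrete picture of the approach outlined in the abstract, the sketch below shows one way an adaptive critic (actor-critic) update can be written for an SMDP under the long-run average reward criterion. It is a minimal illustrative sketch, not the authors' algorithm: the toy two-state simulator, the softmax actor, the step-size schedules, and the cumulative-ratio estimate of the average reward rate are all assumptions introduced here for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's exact pseudocode) of a semi-Markov
# adaptive critic under the long-run average reward criterion.
# The toy simulator, step sizes, and softmax actor are assumptions made
# purely for demonstration.

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2
V = np.zeros(N_STATES)               # critic: relative value function
H = np.zeros((N_STATES, N_ACTIONS))  # actor: action preferences
rho = 0.0                            # estimate of the average reward rate
total_reward, total_time = 0.0, 0.0

def simulate(s, a):
    """Toy SMDP simulator: returns (next_state, reward, sojourn_time)."""
    p_stay = 0.7 if a == 0 else 0.3
    s_next = s if rng.random() < p_stay else 1 - s
    reward = 5.0 if (s == 0 and a == 1) else 1.0
    sojourn = rng.exponential(1.0 + a)  # transition time is random in an SMDP
    return s_next, reward, sojourn

def policy(s):
    """Softmax over actor preferences."""
    p = np.exp(H[s] - H[s].max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p)

s = 0
for k in range(1, 200_000):
    a = policy(s)
    s_next, r, t = simulate(s, a)

    # Critic: the semi-Markov TD error charges rho * t for the random
    # sojourn time t spent in the transition.
    delta = r - rho * t + V[s_next] - V[s]
    V[s] += (100.0 / (1000.0 + k)) * delta

    # Actor: nudge the preference of the chosen action along the TD error.
    H[s, a] += (10.0 / (1000.0 + k)) * delta

    # Average reward rate estimated as cumulative reward over cumulative time.
    total_reward += r
    total_time += t
    rho = total_reward / total_time

    s = s_next

print("estimated average reward rate:", round(rho, 3))
print("greedy action per state:", H.argmax(axis=1))
```

The point of the sketch is the term rho * t in the temporal-difference error: unlike the MDP case, the critic accounts explicitly for the random transition time, which is where the semi-Markov structure enters the update.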