Markov decision processes (MDPs) are widely used in problems whose solutions can be represented as a sequence of actions. Many papers demonstrate successful application of MDPs to model problems, robotic control, planning, and similar tasks. Economic problems likewise exhibit multistep progression toward a goal. This paper applies MDPs to the problem of pricing policy management: the dynamic pricing problem is stated in terms of an MDP. Particular attention is paid to constructing the MDP model by means of data mining. Using sales data from an actual industrial plant, the construction of an MDP model, including the search for and generalization of regularities in the data, is demonstrated.
Automatic Control and Computer Sciences – Springer Journals
Published: Jan 5, 2012
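The abstract summarizes, but does not detail, the MDP formulation. As a rough, hypothetical sketch only (not the paper's actual model), dynamic pricing can be cast as a finite MDP in which discretized demand levels serve as states, candidate price points as actions, and expected revenue as the reward; the sketch below solves such an MDP by value iteration. All state/action sets, transition probabilities, and numbers are illustrative assumptions, standing in for quantities that would be estimated from historical sales data.

```python
# Hypothetical sketch: dynamic pricing stated as a finite MDP and solved by
# value iteration. States, actions, transitions, and rewards are illustrative
# placeholders, not taken from the paper.
import numpy as np

n_states = 3                  # e.g. discretized demand levels: low / medium / high
prices = [8.0, 10.0, 12.0]    # candidate price points (actions)
n_actions = len(prices)
gamma = 0.95                  # discount factor

# P[a, s, s']: probability of moving from demand state s to s' after charging
# prices[a]. In practice these would be estimated from sales data; here uniform.
P = np.full((n_actions, n_states, n_states), 1.0 / n_states)

# Expected units sold for each (price, demand state) pair -- made-up numbers.
expected_units = np.array([[5.0, 8.0, 12.0],   # units sold at price 8
                           [4.0, 7.0, 10.0],   # units sold at price 10
                           [2.0, 5.0,  8.0]])  # units sold at price 12
R = np.array(prices)[:, None] * expected_units  # R[a, s]: expected revenue

# Value iteration: V(s) = max_a [ R(a, s) + gamma * sum_s' P(a, s, s') V(s') ]
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P @ V              # Q[a, s], shape (n_actions, n_states)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)              # best price index for each demand state
print("optimal price per demand state:", [prices[a] for a in policy])
```

In a data-driven setting like the one the abstract describes, the transition matrix `P` and reward table `R` would be replaced by estimates mined from the plant's sales history rather than the hand-picked values used here.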