Markov Decision Processes with Applications to Finance

Book | Softcover
XVI, 388 pages
2011
Springer Berlin (publisher)
978-3-642-18323-2 (ISBN)
74.89 incl. VAT
The theory of Markov decision processes focuses on controlled Markov chains in discrete time. The authors establish the theory for general state and action spaces and at the same time illustrate its application through numerous examples, mostly drawn from finance and operations research. By using a structural approach, many measure-theoretic technicalities are avoided. They cover problems with finite and infinite horizons, as well as partially observable Markov decision processes, piecewise deterministic Markov decision processes, and stopping problems.
The book presents Markov decision processes in action and includes various state-of-the-art applications with a particular view towards finance. It is useful for upper-level undergraduates, Master's students, and researchers in both applied probability and finance, and provides exercises (without solutions).
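To give a flavor of the stochastic dynamic programming the book develops, here is a minimal, self-contained Python sketch of backward induction for a finite-horizon MDP. It is an illustration only, not code from the book: the state and action counts, horizon, rewards, and transition probabilities are all invented.

```python
import numpy as np

# Illustrative finite-horizon MDP solved by backward induction
# (stochastic dynamic programming). All numbers are made up:
# 3 states, 2 actions, horizon N = 5.
n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)

# P[a][s, s'] = transition probability of s -> s' under action a
# (each row drawn from a Dirichlet distribution, so rows sum to 1);
# r[s, a] = one-stage reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

V = np.zeros(n_states)                # terminal value V_N = 0
policy = np.zeros((horizon, n_states), dtype=int)
for n in reversed(range(horizon)):
    # Q[s, a] = r(s, a) + sum_{s'} P(s' | s, a) * V_{n+1}(s')
    Q = r + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    policy[n] = Q.argmax(axis=1)      # maximizing action at stage n
    V = Q.max(axis=1)                 # value function V_n

print("V_0 =", V)
print("optimal first-stage actions:", policy[0])
```

Running the loop backwards from the horizon recovers the optimal value function and a Markov policy stage by stage, which is the basic mechanism behind the finite-horizon theory in Part I.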

Nicole Bäuerle is full professor of Stochastics at the Karlsruhe Institute of Technology. She currently serves on the boards of the Fachgruppe Stochastik and the DGVFM (Deutsche Gesellschaft für Versicherungs- und Finanzmathematik). She is an editor of the journals "Stochastic Models" and "Mathematical Methods of Operations Research". Ulrich Rieder has been full professor of Optimization and Operations Research at the University of Ulm since 1980. He helped to establish a new applied mathematics program at Ulm, called Wirtschaftsmathematik. From 1990 to 2008 he was editor-in-chief of "Mathematical Methods of Operations Research". He is an editor of several journals in operations research and finance.

Preface
1. Introduction and First Examples
Part I Finite Horizon Optimization Problems and Financial Markets
2. Theory of Finite Horizon Markov Decision Processes
3. The Financial Markets
4. Financial Optimization Problems
Part II Partially Observable Markov Decision Problems
5. Partially Observable Markov Decision Processes
6. Partially Observable Markov Decision Problems in Finance
Part III Infinite Horizon Optimization Problems
7. Theory of Infinite Horizon Markov Decision Processes
8. Piecewise Deterministic Markov Decision Processes
9. Optimization Problems in Finance and Insurance
Part IV Stopping Problems
10. Theory of Optimal Stopping Problems
11. Stopping Problems in Finance
Part V Appendix
A. Tools from Analysis
B. Tools from Probability
C. Tools from Mathematical Finance
References
Index

From the reviews:

"This book presents Markov decision processes with general state and action spaces and includes various state-of-the-art applications that stem from finance and operations research. ... very helpful, not only for graduate students, but also for researchers working in the field of MDPs and finance. The authors do not focus only on discrete-time MDPs, but provide the description of different classes of Markov models ... . Each chapter ends with remarks, where the potential reader may find further hints concerning references." (Anna Jaskiewicz, Zentralblatt MATH, Vol. 1236, 2012)

Publication date (per publisher): 8 June 2011
Series: Universitext
Place of publication: Berlin
Language: English
Dimensions: 155 x 235 mm
Weight: 600 g
Binding: paperback
Subject areas: Mathematics / Computer Science → Mathematics → Financial / Business Mathematics
Mathematics / Computer Science → Mathematics → Probability / Combinatorics
Economics → Business Administration / Management
Keywords: 90C40, 93E20, 60J05, 91G10, 93E35, 60G40 • Markov Decision Processes • Markov processes • Partially Observable Markov Decision Processes • portfolio optimization • Quantitative Finance • stochastic dynamic programming
ISBN-10: 3-642-18323-9 / 3642183239
ISBN-13: 978-3-642-18323-2 / 9783642183232
Condition: New