Markov Chains

Markov chains are central to the understanding of random processes. This is not only because they pervade the applications of random processes, but also because many quantities of interest can be calculated explicitly. This textbook, aimed at advanced undergraduate or MSc students with some background in basic probability theory, focuses on Markov chains and quickly develops a coherent and rigorous theory, while also showing how to apply it in practice. Both discrete-time and continuous-time chains are studied. A distinguishing feature is an introduction to more advanced topics, such as martingales and potentials, in the established context of Markov chains. There are applications to simulation, economics, optimal control, genetics, queues and many other topics, with exercises and examples drawn from both theory and practice. It will therefore be an ideal text either for elementary courses on random processes or for those more oriented towards applications.
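The blurb mentions applications to simulation. As a minimal illustration (not taken from the book), the following sketch simulates a two-state discrete-time Markov chain and checks its invariant distribution empirically; the transition matrix and all names here are illustrative choices, not the book's.

```python
import random

def simulate_chain(P, x0, n_steps, seed=0):
    """Simulate a discrete-time Markov chain with row-stochastic
    transition matrix P over states 0..k-1, started at x0."""
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n_steps):
        u = rng.random()
        cum = 0.0
        for j, p in enumerate(P[x]):
            cum += p
            if u < cum:
                x = j
                break
        path.append(x)
    return path

# Example chain: from state 0 stay with prob. 0.9, jump with 0.1;
# from state 1 return with prob. 0.5. Solving pi P = pi gives the
# invariant distribution pi = (5/6, 1/6).
P = [[0.9, 0.1],
     [0.5, 0.5]]
path = simulate_chain(P, 0, 100_000)
freq1 = path.count(1) / len(path)
print(freq1)  # long-run fraction of time in state 1, near 1/6
```

By the ergodic theorem for irreducible positive recurrent chains, the empirical fraction of time spent in a state converges to its invariant probability, which is what the final comparison illustrates.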