Continuous-Time Markov Chains and Applications: A Two-Time-Scale Approach

This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures arising in real-world problems. This second edition has been updated throughout and includes two new chapters, one on asymptotic expansions of solutions of backward equations and one on hybrid LQG problems. The chapters on the analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten, and the notation has been streamlined and simplified. The book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.
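The two-time-scale structure the book studies can be illustrated with a small simulation. The sketch below (not taken from the book; the generator matrices, the value of the small parameter, and all names are illustrative assumptions) builds a singularly perturbed generator of the form Q(ε) = Q̃/ε + Q̂, where Q̃ is block-diagonal and drives fast transitions within two weakly irreducible blocks of states, while Q̂ produces slow transitions between the blocks, and then simulates the chain to record its occupation measure.

```python
import numpy as np

def two_scale_generator(eps):
    """Singularly perturbed generator Q(eps) = Q_tilde/eps + Q_hat.

    Q_tilde: fast part, block-diagonal over blocks {0,1} and {2,3},
    each block irreducible with zero row sums (illustrative values).
    Q_hat: slow part, rare jumps between the two blocks.
    """
    Q_tilde = np.array([[-1.0,  1.0,  0.0,  0.0],
                        [ 2.0, -2.0,  0.0,  0.0],
                        [ 0.0,  0.0, -3.0,  3.0],
                        [ 0.0,  0.0,  1.0, -1.0]])
    Q_hat = np.array([[-0.5,  0.0,  0.5,  0.0],
                      [ 0.0, -0.5,  0.0,  0.5],
                      [ 0.5,  0.0, -0.5,  0.0],
                      [ 0.0,  0.5,  0.0, -0.5]])
    return Q_tilde / eps + Q_hat

def simulate_occupation(Q, t_end, rng, x0=0):
    """Gillespie-style CTMC simulation; returns fraction of time per state."""
    n = Q.shape[0]
    occ = np.zeros(n)
    t, x = 0.0, x0
    while t < t_end:
        rate = -Q[x, x]                      # total exit rate of current state
        hold = rng.exponential(1.0 / rate)   # exponential holding time
        occ[x] += min(hold, t_end - t)       # truncate the last sojourn
        t += hold
        probs = np.maximum(Q[x], 0.0)        # off-diagonal rates only
        x = rng.choice(n, p=probs / probs.sum())
    return occ / t_end

rng = np.random.default_rng(0)
Q = two_scale_generator(eps=0.01)
occ = simulate_occupation(Q, t_end=50.0, rng=rng)
```

With ε small, the chain makes many fast within-block transitions between each rare block switch, so the occupation measure within each block settles near that block's quasi-stationary distribution, which is the phenomenon the book's asymptotic expansions quantify.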
Other editions
Continuous-Time Markov Chains and Applications: A Two-Time-Scale Approach, G. George Yin and Qing Zhang, 2012
Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, George Yin and Qing Zhang, 1998