Approximating Countable Markov Chains

A long time ago I started writing a book about Markov chains, Brownian motion, and diffusion. I soon had two hundred pages of manuscript and my publisher was enthusiastic. Some years and several drafts later, I had a thousand pages of manuscript, and my publisher was less enthusiastic. So we made it a trilogy:

Markov Chains
Brownian Motion and Diffusion
Approximating Countable Markov Chains

familiarly, MC, B & D, and ACM. I wrote the first two books for beginning graduate students with some knowledge of probability; if you can follow Sections 10.4 to 10.9 of Markov Chains, you're in. The first two books are quite independent of one another, and completely independent of this one, which is a monograph explaining one way to think about chains with instantaneous states. The results here are supposed to be new, except where there are specific disclaimers. This volume is written in the framework of Markov chains; we wanted to reprint here the chapters of MC needed for reference, but this proved impossible.

Most of the proofs in the trilogy are new, and I tried hard to make them explicit. The old ones were often elegant, but I seldom saw what made them go. With my own, I can sometimes show you why things work. And, as I will argue in a minute, my demonstrations are easier technically. If I wrote them down well enough, you may come to agree.
Contents

RESTRICTING THE RANGE
RESTRICTING THE RANGE: APPLICATIONS
CONSTRUCTING THE GENERAL MARKOV CHAIN