Finite Mixture and Markov Switching Models

The past decade has seen powerful new computational tools for modeling which combine a Bayesian approach with recent Monte Carlo simulation techniques based on Markov chains. This book is the first to offer a systematic presentation of the Bayesian perspective on finite mixture modeling. It is designed to show how finite mixture and Markov switching models are formulated, what structures they imply on the data, what their potential uses are, and how they are estimated. Presenting its concepts informally without sacrificing mathematical correctness, it will serve a wide readership, including statisticians as well as biologists, economists, engineers, and financial and market researchers.
From inside the book
Page 27
... posterior probability (Anderson, 1984, Chapter 6), because this minimizes the expected misclassification risk; see also Subsection 7.1.7. How well this classifier works depends on the difference between the parameters in the ...
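The snippet above describes the Bayes classifier: allocate each observation to the component with the largest posterior probability, which minimizes the expected misclassification risk. A minimal sketch, assuming a univariate normal mixture with known parameters; the function and variable names are illustrative, not from the book:

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_classify(y, means, sds, weights):
    """Assign each observation to the component with the largest posterior
    probability -- the Bayes classifier, minimizing expected
    misclassification risk."""
    y = np.asarray(y, dtype=float)
    # Unnormalized posterior weight of component k: eta_k * f(y | mu_k, sigma_k)
    dens = np.array([w * normal_pdf(y, m, s)
                     for w, m, s in zip(weights, means, sds)])
    post = dens / dens.sum(axis=0)   # normalize over components for each y_i
    return post.argmax(axis=0), post

labels, post = bayes_classify([0.1, 3.9],
                              means=[0.0, 4.0], sds=[1.0, 1.0],
                              weights=[0.5, 0.5])
```

As the snippet notes, how well this works depends on how far apart the component parameters are: well-separated means give posterior probabilities near 0 or 1, while overlapping components leave many observations ambiguous.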
Page 28
... probability of the event {S1 = k1, ..., SN = kN} for all possible allocations (k1, ..., kN) of the N ... posterior distribution p(S | ϑ, y). For known component parameters ϑ the joint posterior density p(S | ϑ, y) ...
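The snippet concerns the joint posterior of the allocation vector S given known component parameters ϑ, in which case the allocations are conditionally independent and the joint density factors into a product over observations. A hedged sketch under that assumption, again for a univariate normal mixture with illustrative names:

```python
import math

def class_probs(y_i, means, sds, weights):
    """Pr(S_i = k | theta, y_i), proportional to eta_k * f(y_i | theta_k)."""
    dens = [w * math.exp(-0.5 * ((y_i - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for w, m, s in zip(weights, means, sds)]
    total = sum(dens)
    return [d / total for d in dens]

def allocation_prob(y, alloc, means, sds, weights):
    """Posterior probability of one allocation (k_1, ..., k_N): for known
    theta the S_i are conditionally independent, so it is a product."""
    p = 1.0
    for y_i, k in zip(y, alloc):
        p *= class_probs(y_i, means, sds, weights)[k]
    return p
```

Summing `allocation_prob` over all K^N allocations returns 1, which is what makes the event probabilities in the snippet a proper distribution over allocations.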
Page 31
... likelihood p(y, S | ϑ), regarded as a function of ϑ as for maximum likelihood estimation, is combined with a prior distribution p(ϑ) on the parameter to obtain the complete-data posterior distribution p(ϑ | y, S) using Bayes' ...
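The snippet describes forming the complete-data posterior p(ϑ | y, S) by combining the complete-data likelihood with a prior via Bayes' theorem. Given the allocations, this reduces to standard conjugate updates per component. A minimal sketch for one component mean, assuming a normal prior and known component variance; the parameter names are illustrative, not the book's notation:

```python
def complete_data_posterior_mu(y, S, k, b0, B0, sigma2):
    """Conjugate complete-data update for a component mean mu_k:
    prior mu_k ~ N(b0, B0); observations allocated to group k ~ N(mu_k, sigma2).
    The posterior is N(bN, BN), a precision-weighted combination of
    the prior mean and the group-k sample information."""
    group = [y_i for y_i, s_i in zip(y, S) if s_i == k]
    N_k = len(group)
    BN = 1.0 / (1.0 / B0 + N_k / sigma2)        # posterior variance
    bN = BN * (b0 / B0 + sum(group) / sigma2)   # posterior mean
    return bN, BN

bN, BN = complete_data_posterior_mu(y=[1.0, 1.0, 5.0], S=[0, 0, 1],
                                    k=0, b0=0.0, B0=1.0, sigma2=1.0)
```

This is the sense in which, given S, inference is no harder than for a single-component model: each component's parameters are updated from "its" observations alone.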
Page 32
... posterior probabilities for the two values of ϑ, Pr(ϑ = (1, 2, .5) | S ... probability densities rather than probabilities of events, Bayes' rule may ... posterior density p(ϑ | S, y). Bayes' theorem (2.13) combines the ...
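The snippet applies Bayes' rule to a finite set of candidate parameter values, Pr(ϑ = t | data) ∝ p(data | t) Pr(ϑ = t), before generalizing to densities. A hedged sketch of the discrete case for a normal likelihood; the candidate values here are illustrative stand-ins, not the book's ϑ = (1, 2, .5) example:

```python
import math

def posterior_over_candidates(y, candidates, prior):
    """Bayes' rule over a finite set of candidate values t = (mu, sigma):
    Pr(theta = t | y) is the normalized product likelihood * prior."""
    def likelihood(t):
        mu, sigma = t
        return math.prod(
            math.exp(-0.5 * ((y_i - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
            for y_i in y)
    w = [likelihood(t) * p for t, p in zip(candidates, prior)]
    total = sum(w)
    return [w_i / total for w_i in w]

post = posterior_over_candidates([0.0, 0.2],
                                 candidates=[(0.0, 1.0), (4.0, 1.0)],
                                 prior=[0.5, 0.5])
```

Replacing the finite candidate set with a continuous prior density turns the normalizing sum into an integral and gives the posterior density p(ϑ | S, y) of the snippet.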
Page 35
... posterior p(μk | y, S) is rather skewed for the first data set whereas it is close to a normal distribution for the ... probability of the Bayesian credibility interval is much closer to the nominal value than the effective coverage ...
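The snippet compares the effective coverage of a Bayesian credibility interval with its nominal level when the posterior is skewed. One common construction, which the excerpt does not spell out, is the equal-tailed interval read off from sorted posterior draws; a minimal sketch under that assumption:

```python
def credible_interval(draws, alpha=0.05):
    """Equal-tailed (alpha/2, 1 - alpha/2) credibility interval computed
    from a list of posterior draws via empirical quantiles. Unlike a
    normal-approximation interval, it follows any skewness in the draws."""
    s = sorted(draws)
    n = len(s)
    lo = s[int(n * alpha / 2)]
    hi = s[int(n * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = credible_interval([i / 999 for i in range(1000)])
```

Because the interval adapts to the shape of the posterior, its effective coverage can stay near the nominal 1 − α even when, as in the snippet's first data set, p(μk | y, S) is markedly skewed.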
Contents
Practical Bayesian Inference for a Finite Mixture Model | 57 |
Finite Mixture Models with Normal Components | 169 |
Data Analysis Based on Finite Mixtures | 203 |
Finite Mixtures of Regression Models | 241 |
Finite Mixture Models with Nonnormal Components | 277 |
Finite Markov Mixture Modeling | 301 |
Statistical Inference for Markov Switching Models | 319 |
Switching State Space Models | 389 |
A Appendix | 431 |
References | 441 |
Index | 481 |