Today I got a good sense of two important topics in statistics: one is the Linear Mixed Model (LMM) and the other is Markov Chain Monte Carlo (MCMC).

LMM:

Linear mixed models (LMM) handle data where observations are not independent. That is, LMM correctly models correlated errors, whereas procedures in the general linear model family (GLM, which includes t-tests, analysis of variance, correlation, regression, and factor analysis) usually do not. LMM is a further generalization of GLM that better supports analysis of a continuous dependent variable with random effects, hierarchical effects, and repeated measures.
LMM is required whenever the OLS regression assumption of independent errors is violated, as it often is when data cluster by some grouping variable (e.g., scores nested within schools) or by some repeated measure (e.g., yearly scores nested by student id).
There are many varieties of LMM under diverse labels: random intercept models, random coefficients models, hierarchical linear models, variance components models, covariance components models, and multilevel models, to name a few. While most multilevel modeling is univariate (one dependent variable), multivariate multilevel modeling for two or more dependent variables is also available.
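To make the nesting example concrete, here is a minimal sketch of a random-intercept LMM in Python using statsmodels' mixedlm on simulated data. The column names (student_id, year, score) and the simulated numbers are my own assumptions, chosen to mirror the yearly-scores-by-student example above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, n_years = 50, 4
student_id = np.repeat(np.arange(n_students), n_years)
year = np.tile(np.arange(n_years), n_students)

# Each student gets their own random intercept, so errors within a
# student are correlated -- exactly the violation of OLS independence.
student_effect = rng.normal(0, 2.0, n_students)[student_id]
score = 60 + 1.5 * year + student_effect + rng.normal(0, 1.0, n_students * n_years)
df = pd.DataFrame({"student_id": student_id, "year": year, "score": score})

# groups= declares the clustering variable; the model adds a random
# intercept per student instead of treating all errors as independent.
model = smf.mixedlm("score ~ year", df, groups=df["student_id"])
result = model.fit()
print(result.summary())
```

The fixed effect on year estimates the average yearly trend, while the group variance in the summary captures how much students differ in their baseline scores.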

MCMC:

The big idea behind MCMC is this: rather than sampling directly from a hard-to-sample target distribution p(.), we sample from a Markov chain that has p(.) as its stationary distribution. The transition distribution of this Markov chain is chosen to be easy to sample from. Intuitively, once the chain has run long enough to converge, its samples are effectively draws from the target distribution.
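As a sketch of this idea, here is a small random-walk Metropolis sampler in Python. The target density (an unnormalized two-Gaussian mixture), the step size, and the burn-in length are all illustrative assumptions; the point is just that the accept/reject rule makes p(.) the chain's stationary distribution, and that p only needs to be evaluable up to a normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(1)

def p(x):
    # Unnormalized target density: mixture of N(-2, 1) and N(2, 0.5^2).
    return np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * ((x - 2) / 0.5) ** 2)

def metropolis(n_samples, step=1.0, x0=0.0):
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        # Easy-to-sample transition: propose a Gaussian step around x.
        proposal = x + step * rng.normal()
        # Accept with probability min(1, p(proposal) / p(x)); this rule
        # leaves p(.) invariant, so it is the chain's stationary distribution.
        if rng.random() < p(proposal) / p(x):
            x = proposal
        samples[i] = x
    return samples

draws = metropolis(50_000)
burned = draws[5_000:]  # discard early samples taken before the chain mixes
print(burned.mean(), burned.std())
```

Note that each draw depends only on the previous one (the Markov property), yet the histogram of the post-burn-in samples approximates the bimodal target.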
