
The first colloquium speaker of this semester, Professor Wei Zheng from IUPUI, will give a talk on “Universally optimal designs for two interference models“. In this data-explosive age, it is easy to collect big data sets, yet hard to draw valid inferences from such massive data. Since people usually believe that more data carry more useful information, many researchers are striving for methodological advances in this setting. This is a very challenging and very important research area, which in my opinion calls for a resurgence of mathematical statistics borrowing great ideas from various mathematical fields. However, another great and classical area of statistical research should also come back to help statistical inference at the very first stage of data analysis: collecting data by design of experiments, so that we can control the quality, usefulness, and size of the data. Thus it is worth knowing what an optimal design of experiments is. Here is an introduction to this interesting topic.

In statistics, we organize an experiment in order to gain information about an object of interest. Fragments of this information can be obtained by making observations within some elementary experiments called trials. The set of all trials which can be incorporated into a prepared experiment will be denoted by $\mathcal{X}$, which we shall call the design space. The problem to be solved in experimental design is how to choose, say, $N$ trials $x_i\in\mathcal{X}, i = 1, \cdots, N$, called the support points of the design, or possibly how to choose the size $N$ of the design, so as to gather enough information about the object of interest. Optimum experimental design corresponds to the maximization, in some sense, of this information. Specifically, the optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion related to the variance matrix of the estimator. Specifying an appropriate model and a suitable criterion function both require an understanding of statistical theory and practical knowledge of designing experiments.

We shall restrict our attention to the parametric situation in the case of a regression model, where the mean response function is parameterized as

$E(Y)=\eta(x, \theta)$

for a particular $x\in\mathcal{X}$, with unknown parameter vector $\theta\in\mathbb{R}^p$.

A design is specified by an initially arbitrary measure $\xi(\cdot)$ assigning weights to $n$ design points used to estimate the parameter vector. Here $\xi$ can be written as

$\xi=\Big\{(x_1,w_1), (x_2,w_2), \cdots, (x_n, w_n)\Big\}$

where the $n$ design support points $x_1, x_2, \cdots, x_n$ are elements of the design space $\mathcal{X}$, and the associated weights $w_1, w_2, \cdots, w_n$ are nonnegative real numbers which sum to one. We make the usual second moment error assumptions leading to the use of least squares estimates. Then the corresponding Fisher information matrix associated with $\theta$ is given by

$M=M(\xi,\theta)=\sum_{i=1}^nw_i\frac{\partial\eta(x_i)}{\partial\theta}\frac{\partial\eta(x_i)}{\partial\theta^\intercal}=V^\intercal\Omega V$

where $V=\partial\eta/\partial\theta$ and $\Omega=\mathrm{diag}\{w_1, w_2, \cdots, w_n\}$.
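To make the formula $M=V^\intercal\Omega V$ concrete, here is a minimal numerical sketch. The quadratic model $\eta(x,\theta)=\theta_0+\theta_1x+\theta_2x^2$ and the three-point design are my own illustrative assumptions, not from the talk:

```python
import numpy as np

# Hypothetical quadratic model eta(x, theta) = theta0 + theta1*x + theta2*x^2,
# so each row of the Jacobian V is (1, x_i, x_i^2).
def information_matrix(xs, ws):
    """M(xi) = V^T Omega V for the design xi = {(x_i, w_i)}."""
    V = np.column_stack([np.ones_like(xs), xs, xs**2])  # n x p Jacobian
    Omega = np.diag(ws)                                  # weight matrix
    return V.T @ Omega @ V

# An equally weighted three-point design on [-1, 1].
xs = np.array([-1.0, 0.0, 1.0])
ws = np.array([1/3, 1/3, 1/3])
M = information_matrix(xs, ws)
print(M)
```

Since the model is linear in $\theta$, the Jacobian rows do not depend on $\theta$; for a nonlinear response they would, and $M$ would too.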

Now we have to propose the statistical criteria for the optimum. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an (“efficient”) estimator is called the “Fisher information” for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information. When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the “information matrix”. Because the variance of the estimator of a parameter vector is a matrix, the problem of “minimizing the variance” is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these “information criteria” can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix.

• A-optimality (“average” or trace): seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
• D-optimality (determinant): seeks to maximize the determinant of the information matrix of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
• E-optimality (eigenvalue): maximizes the minimum eigenvalue of the information matrix.
• T-optimality (trace): maximizes the trace of the information matrix.
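All four criteria are functionals of the eigenvalues of $M$, so they are one-liners once $M$ is in hand. A sketch, using a made-up $2\times 2$ information matrix purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 information matrix for illustration.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

eigvals = np.linalg.eigvalsh(M)               # eigenvalues of symmetric M
A_value = np.trace(np.linalg.inv(M))          # A-optimality: minimize this
D_value = np.linalg.det(M)                    # D-optimality: maximize this
E_value = eigvals.min()                       # E-optimality: maximize this
T_value = np.trace(M)                         # T-optimality: maximize this
```

For instance, the A-value is $\sum_i 1/\lambda_i$ and the D-value is $\prod_i\lambda_i$, which makes the eigenvalue-invariance statement above explicit.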

Other optimality-criteria are concerned with the variance of predictions:

• G-optimality: seeks to minimize the maximum entry in the diagonal of the hat matrix. This has the effect of minimizing the maximum variance of the predicted values.
• I-optimality (integrated): seeks to minimize the average prediction variance over the design space.
• V-optimality (variance): seeks to minimize the average prediction variance over a set of $m$ specific points.
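These prediction-variance criteria are also easy to evaluate numerically. A sketch for straight-line regression on $[-1,1]$ with an arbitrary (deliberately non-optimal) three-point design; both the model and the design are illustrative assumptions of mine:

```python
import numpy as np

# Straight-line model eta(x, theta) = theta0 + theta1*x on [-1, 1],
# with an arbitrary illustrative design (not claimed to be optimal).
xs = np.array([-1.0, 0.0, 1.0])
ws = np.array([0.25, 0.5, 0.25])
V = np.column_stack([np.ones_like(xs), xs])
Minv = np.linalg.inv(V.T @ np.diag(ws) @ V)

def pred_var(x):
    """Standardized prediction variance f(x)^T M^{-1} f(x)."""
    f = np.array([1.0, x])
    return f @ Minv @ f

grid = np.linspace(-1, 1, 201)                    # discretized design space
G_value = max(pred_var(x) for x in grid)          # G: minimize the maximum
I_value = np.mean([pred_var(x) for x in grid])    # I: minimize the average
```

V-optimality would replace `grid` by the $m$ specific points of interest and average over those instead.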

Now back to our example: because the asymptotic covariance matrix associated with the LSE of $\theta$ is proportional to $M^{-1}$, the most popular regression design criterion is D-optimality, where designs are sought to minimize the determinant of $M^{-1}$. The standardized predicted variance function, corresponding to G-optimality, is

$d(x,\xi,\theta)=V^\intercal(x)M^{-1}(\xi,\theta)V(x)$

and G-optimality seeks to minimize $\delta(\xi,\theta)=\max_{x\in\mathcal{X}}d(x,\xi,\theta)$.

A central result in the theory of optimal design, the General Equivalence Theorem, asserts that the design $\xi^*$ that is D-optimal is also G-optimal and that

$\delta(\xi^*,\theta)=p$

the number of parameters.
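The theorem can be checked numerically in a simple case. For straight-line regression on $[-1,1]$, the D-optimal design is known to put weight $1/2$ at each endpoint, and the maximum standardized predicted variance should then equal $p=2$:

```python
import numpy as np

# D-optimal design for eta(x, theta) = theta0 + theta1*x on [-1, 1]:
# weight 1/2 at each endpoint (a classical result).
xs = np.array([-1.0, 1.0])
ws = np.array([0.5, 0.5])
V = np.column_stack([np.ones_like(xs), xs])
M = V.T @ np.diag(ws) @ V
Minv = np.linalg.inv(M)

# Standardized predicted variance d(x, xi) over a grid of the design space.
grid = np.linspace(-1, 1, 201)
d = np.array([np.array([1.0, x]) @ Minv @ np.array([1.0, x]) for x in grid])
print(d.max())  # equals p = 2, as the General Equivalence Theorem asserts
```

Here $M$ is the identity, so $d(x,\xi^*)=1+x^2$, which attains its maximum $2$ exactly at the two support points — another hallmark of an optimal design.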

Now, the optimal design for an interference model, which Professor Wei Zheng will talk about, considers the following model for block designs with neighbor effects:

$y_{i,j}=\mu+\tau_{d(i,j)}+\lambda_{d(i,j-1)}+\rho_{d(i,j+1)}+\beta_i+e_{i,j}$

where $d(i,j)\in\{1, 2, \cdots, t\}$ is the treatment assigned to the plot $(i,j)$ in the $j$-th position of the $i$-th block, and

1. $\mu$ is the general mean;
2. $\tau_{d(i,j)}$ is the direct effect of treatment $d(i,j)$;
3. $\lambda_{d(i,j-1)}$ and $\rho_{d(i,j+1)}$ are respectively the left and right neighbor effects; that is, the interference effects of the treatments assigned to the left and right neighbor plots $(i,j-1)$ and $(i,j+1)$, respectively;
4. $\beta_i$ is the effect of the $i$-th block; and
5. $e_{i,j}$ is the random error, $1\leq i\leq b, 1\leq j\leq k$.
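To make the model concrete, here is a sketch of the model matrix for a tiny made-up layout. I am assuming no guard plots, so boundary plots simply lack the corresponding neighbor effect; the layout, sizes, and 0-based labels are all illustrative assumptions, not from the talk:

```python
import numpy as np

# Hypothetical tiny layout: b = 2 blocks of size k = 3, t = 2 treatments.
# d[i, j] is the (0-based) treatment on plot (i, j).
d = np.array([[0, 1, 0],
              [1, 0, 1]])
b, k = d.shape
t = 2

rows = []
for i in range(b):
    for j in range(k):
        mu = [1.0]                                    # general mean
        tau = [0.0] * t; tau[d[i, j]] = 1.0           # direct effect
        lam = [0.0] * t                               # left-neighbor effect
        if j > 0: lam[d[i, j - 1]] = 1.0
        rho = [0.0] * t                               # right-neighbor effect
        if j < k - 1: rho[d[i, j + 1]] = 1.0
        beta = [0.0] * b; beta[i] = 1.0               # block effect
        rows.append(mu + tau + lam + rho + beta)
X = np.array(rows)  # b*k rows, 1 + 3t + b columns
```

Each row of `X` encodes one observation $y_{i,j}$; the information matrix $C_d$ for $\tau$ mentioned below is then obtained from `X` by the usual projection that adjusts for the nuisance effects.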

We seek the optimal design among designs $d\in\Omega_{t,b,k}$, the set of all designs with $b$ blocks of size $k$ and with $t$ treatments.

I am not going into the details of the derivation of the optimal design for the above interference model; I will just sketch the outline here. First of all, we can write down the information matrix for the direct treatment effects $\tau=(\tau_1,\tau_2,\cdots, \tau_t)^\intercal$, say $C_d$. Let $S$ be the set of all $t^k$ possible block sequences with replacement, which is the design space. Then we try to find the optimal measure $\xi$ in the set $P=\{p_s: s\in S, \sum_sp_s=1, p_s\geq 0\}$ maximizing $\Phi(C_{\xi})$ for a given function $\Phi$ satisfying the following three conditions:

1. $\Phi$ is concave;
2. $\Phi(M^\intercal CM)=\Phi(C)$ for any permutation matrix $M$;
3. $\Phi(bC)$ is nondecreasing in the scalar $b>0$.

A measure $\xi$ which achieves the maximum of $\Phi(C_{\xi})$ over $P$ for every $\Phi$ satisfying the above three conditions is said to be universally optimal. Such a measure is optimal under the A-, D-, E-, T-, etc. criteria. Thus we could imagine that all of the analysis boils down to linear algebra.
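As a small sanity check, conditions 2 and 3 are easy to verify numerically for a concrete choice such as the D-criterion $\Phi(C)=\log\det C$ (condition 1, concavity on positive definite matrices, is a classical fact); the random matrix below is purely illustrative:

```python
import numpy as np

# Spot-check conditions 2 and 3 for Phi(C) = log det(C) (the D-criterion)
# on a random positive definite matrix C.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
C = A @ A.T + 3 * np.eye(3)          # positive definite by construction

def phi(C):
    return np.log(np.linalg.det(C))

# Condition 2: invariance under simultaneous row/column permutation.
P = np.eye(3)[[2, 0, 1]]             # a permutation matrix
assert np.isclose(phi(P.T @ C @ P), phi(C))

# Condition 3: nondecreasing in the scalar b > 0.
assert phi(2.0 * C) >= phi(C)        # log det(bC) = p*log(b) + log det(C)
```

The same checks pass for, e.g., $\Phi(C)=-\operatorname{tr}(C^{-1})$ (the A-criterion), which is why a universally optimal measure is simultaneously A-, D-, E-, and T-optimal.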