
. . . the objective of statistical methods is the reduction of data. A quantity of data. . . is to be replaced by relatively few quantities which shall adequately represent. . . the
relevant information contained in the original data.

Since the number of independent facts supplied in the data is usually far greater than the number of facts sought, much of the information supplied by an actual sample is irrelevant. It is the object of the statistical process employed in the reduction of data to exclude this irrelevant information, and to isolate the whole of the relevant information contained in the data.

—Fisher’s 1922 article “On the mathematical foundations of theoretical statistics”

Sufficiency is the concept of retaining all the information in the data that is relevant for estimating the target parameter. Since the raw data are trivially sufficient, we look for a statistic that is both sufficient and minimal (i.e., achieves the maximal reduction). A minimal sufficient statistic may still contain some redundancy, in the sense that there may be more than one way to estimate the same quantity from it. Essentially, completeness says that the only way to estimate 0 unbiasedly with T is with the zero function. If T is not complete, then it can somehow be used to estimate the same quantity in two different ways.

Note that any further reduction (i.e., a function) of a complete statistic is also complete. Hence the key point of completeness is that it indicates a reduction of the data to the point where there can be at most one unbiased estimator of any \tau(\theta):

E_{\theta}[g_j(T)]=\tau(\theta),\ j=1,2\ \Rightarrow\ E_{\theta}[g_1(T)-g_2(T)]=0,\ \forall\theta\ \Rightarrow\ g_1=g_2\ a.e.

Thus, as we reduce the data while preserving sufficiency, once completeness is reached we know that this complete sufficient statistic is minimal sufficient, provided a minimal sufficient statistic exists.
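For a concrete textbook example, let X_1, \cdots, X_n be i.i.d. Bernoulli(\theta) and T=\sum_{i=1}^nX_i. Then T is complete: if

E_{\theta}g(T)=\sum_{t=0}^n g(t)\binom{n}{t}\theta^t(1-\theta)^{n-t}=0,\ \forall\theta\in(0,1),

then this polynomial identity forces g(t)=0 for every t=0,1,\cdots,n. Consequently, T/n is the unique unbiased estimator of \theta that is a function of T.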

Here is a very nice geometric interpretation of completeness: https://stats.stackexchange.com/q/285503

 

 

  1. A nice blog on CS with plenty to learn from: “the morning paper” (https://blog.acolyer.org/), which covers an interesting/influential/important paper from the world of CS every weekday morning, as selected by Adrian Colyer. I hope there will be a similar blog on Statistics, reviewing and recommending an interesting/influential/important paper from the world of Statistics.
  2. A wonderful summary of Mathematical Tricks Commonly Used in Machine Learning and Statistics with examples
  3. I just realized that when I teach ridge regression I should have used A Useful Matrix Inverse Equality for Ridge Regression
  4. GANs should gain much more attention in the stats community: Understanding Generative Adversarial Networks is a nice post about GANs based on “probably the highest-quality general overview available nowadays: Ian Goodfellow’s tutorial on arXiv, which he then presented in some form at NIPS 2016.”
  5. R or Python? Why not both? Using Anaconda Python within R with {reticulate}
  6. “A heatmap is basically a table that has colors in place of numbers. Colors correspond to the level of the measurement.” (A minimal matplotlib sketch follows this list.)
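As a quick illustration of that last item, here is a minimal sketch with hypothetical data: each cell of the table is drawn as a colored square whose color encodes the value.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(5, 4)              # a small table of numbers (hypothetical)
plt.imshow(data, cmap="viridis")         # colors in place of numbers
plt.colorbar(label="measurement level")  # legend mapping colors back to values
plt.show()
```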

You can install the StatRep package by downloading statrep.zip from support.sas.com/StatRepPackage, which contains:

  • doc/statrepmanual.pdf – The StatRep User’s Guide (this manual)
  • doc/quickstart.tex – A template and tutorial sample LaTeX file
  • sas/statrep_macros.sas – The StatRep SAS macros
  • sas/statrep_tagset.sas – The StatRep SAS tagset for LaTeX tabular output
  • statrep.ins – The LaTeX package installer file
  • statrep.dtx – The LaTeX package itself

Unzip the file statrep.zip to a temporary directory and perform the following steps:

  • Step 1: Install the StatRep SAS Macros: Copy the file statrep_macros.sas to a local directory. If you have a folder where you keep your personal set of macros, copy the file there. Otherwise, create a directory such as C:\mymacros and copy the file into that directory.
  • Step 2: Install the StatRep LaTeX Package: These instructions show how to install the StatRep package in your LaTeX distribution for your personal use.
    • a. For MiKTeX users: If you do not have a directory for your own packages, choose a directory name to contain your packages (for example, C:\localtexmf). In the following instructions, this directory is referred to as the “root directory”.
    • b. Create the additional subdirectories under the above root directory: tex/latex/statrep. Your directory tree will have the following structure: root directory/tex/latex/statrep.
    • c. Copy the files statrep.dtx, statrep.ins, statrepmanual.pdf, and statrepmanual.tex to the statrep subdirectory.
    • d. In the command prompt, cd to the statrep directory and enter the command pdftex statrep.ins; this creates several files, one of which is the configuration file, statrep.cfg.
  • Step 3: Tell the StatRep Package the Location of the StatRep SAS Macros. Edit the statrep.cfg file that was generated in Step 2d so that the macro \SRmacropath contains the correct location of the macro file from Step 1. For example, if you copied the statrep_macros.sas file to a directory named C:\mymacros, then you define the macro \SRmacropath as follows: \def\SRmacropath{C:/mymacros/statrep_macros.sas} Use the forward slash as the directory name delimiter instead of the backslash, which is a special character in LaTeX.

You can now test and experiment with the package. Create a working directory, and copy the file quickstart.tex into it. To generate the quick-start document:

  1. Compile the document with pdfLaTeX. You can use a LaTeX-aware editor such as TeXworks, or use the command-line command pdflatex. This step generates the SAS program that is needed to produce the results.
  2. Execute the SAS program quickstart_SR.sas, which was automatically created in the preceding step. This step generates the SAS results that are requested in the quick-start document.
  3. Recompile the document with pdfLaTeX. This step compiles the quick-start document to PDF, this time including the SAS results that were generated in the preceding step. In some cases listing outputs may not be framed properly after this step. If your listing outputs are not framed properly, repeat this step so that LaTeX can remeasure the listing outputs.

Please refer to statrepmanual.pdf (the StatRep User’s Guide) for detailed information.

 

Yesterday I learned something interesting from a talk given by Professor Bikas K Sinha. The following excerpt from reference [1] captures exactly what makes the problem interesting.

“A population consisting of an unknown number of distinct species is searched by selecting one member at a time. No a priori information is available concerning the probability that an object selected from this population will represent a particular species. Based on the information available after an n-stage search it is desired to predict the conditional probability that the next selection will represent a species not represented in the n-stage sample.”

Searcher: “I am contemplating extending my initial search an additional m stages, and will so do if the expected number of individuals I will select in the second search who are new species is large. What do you recommend?”

Statistician: “Make one more search and then I will tell you.”

Refer to the Annals of Statistics paper:

[1] Starr, Norman. “Linear estimation of the probability of discovering a new species.” The Annals of Statistics (1979): 644-652.
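As a concrete (and much simpler) point of reference, the classical Good-Turing estimate of the probability that the next selection represents a new species is the proportion of species observed exactly once in the sample. This is not Starr's linear estimator, which [1] develops and studies, but it illustrates the quantity being predicted; the sketch below uses a hypothetical toy sample.

```python
from collections import Counter

def prob_new_species(sample):
    """Good-Turing estimate of the probability that the next draw is a new species:
    the number of species seen exactly once, divided by the sample size n."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)  # singletons
    return n1 / len(sample)

# toy n-stage sample of species labels (hypothetical)
sample = ["a", "b", "a", "c", "d", "a", "b", "e"]
print(prob_new_species(sample))  # 3 singletons (c, d, e) out of 8 draws -> 0.375
```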

Recently I was referred to a nice article talking about the relationship between Statistics and data science. Here is my feedback to share with you:

  1. First of all, Statistics is a science dealing with data, comprising five main components: data collection (design of experiments, sampling), data preparation (storage, reading, organization, cleaning), exploratory data analysis (numerical summarization, visualization), statistical inference (frequentist and Bayesian), and communication (interpretation).
  2. It has been statisticians’ mistake to put extremely unequal weights on the development of these five components over the past 50 years, focusing mostly on the fourth.
  3. Fortunately, the first component is now seeing a resurgence in the massive-data setting. How to sample the “influential” data points from massive samples is a big and important research topic.
  4. People outside the traditional statistics community have been picking up the second and third components, as if adopting statistics’ two neglected children. And the adoptive parents are saying that the two children are not statistics; instead, they call them data science.
  5. But Statistics is really about all of the five equally important components.
  6. And our goal as statisticians is to bring the two children back into the statistics community. We are all statisticians!

 

 

The first colloquium speaker this semester, Professor Wei Zheng from IUPUI, will give a talk on “Universally optimal designs for two interference models”. In this age of data explosion, it is easy to obtain big data sets, but their sheer size makes it difficult to draw inferences from them. Since people usually think that more data bring more chances to extract useful information, many researchers are striving for methodological advances in this setting. This is a very challenging and of course very important research area, which in my opinion calls for a resurgence of mathematical statistics that borrows great ideas from various mathematical fields. However, another great and classical statistical research area should also come back to help statistical inference from the very beginning of data analysis: collecting data by design of experiments, so that we can control the quality, usefulness, and size of the data. Thus it is necessary for us to know what an optimal design of experiments is. Here is an introduction to this interesting topic.

In statistics, we have to organize an experiment in order to gain some information about an object of interest. Fragments of this information can be obtained by making observations within some elementary experiments called trials. The set of all trials which can be incorporated in a prepared experiment will be denoted by \mathcal{X}, which we shall call the design space. The problem to be solved in experimental design is how to choose, say, N trials x_i\in\mathcal{X}, i = 1, \cdots, N, called the support points of the design, or eventually how to choose the size N of the design, to gather enough information about the object of interest. Optimum experimental design corresponds to the maximization, in some sense, of this information. Specifically, the optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require an understanding of statistical theory and practical knowledge of designing experiments.

We shall restrict our attention to the parametric situation of a regression model, where the mean response function is parameterized as

E(Y)=\eta(x, \theta)

for a particular x\in\mathcal{X}, with unknown parameter vector \theta\in{R}^p.

A design is specified by an initially arbitrary measure \xi(\cdot) assigning n design points to estimate the parameter vector. Here \xi can be written as

\xi=\Big\{(x_1,w_1), (x_2,w_2), \cdots, (x_n, w_n)\Big\}

where the n design support points x_1, x_2, \cdots, x_n are elements of the design space \mathcal{X}, and the associated weights w_1, w_2, \cdots, w_n are nonnegative real numbers which sum to one. We make the usual second moment error assumptions leading to the use of least squares estimates. Then the corresponding Fisher information matrix associated with \theta is given by

M=M(\xi,\theta)=\sum_{i=1}^nw_i\frac{\partial\eta(x_i)}{\partial\theta}\frac{\partial\eta(x_i)}{\partial\theta^\intercal}=V^\intercal\Omega V

where V is the n\times p matrix whose i-th row is \partial\eta(x_i)/\partial\theta^\intercal and \Omega=diag\{w_1, w_2, \cdots, w_n\}.
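As a small sketch of this formula, assume the simple linear mean function \eta(x,\theta)=\theta_0+\theta_1 x, so the gradient (1, x)^\intercal does not depend on \theta; the design points and weights below are hypothetical.

```python
import numpy as np

def information_matrix(xs, ws):
    # rows of V are the gradients d eta / d theta = (1, x) at the support points
    V = np.column_stack([np.ones_like(xs), xs])
    Omega = np.diag(ws)
    return V.T @ Omega @ V          # M = V^T Omega V

xs = np.array([-1.0, 0.0, 1.0])     # support points x_1, ..., x_n
ws = np.array([0.25, 0.5, 0.25])    # weights w_i summing to one
M = information_matrix(xs, ws)
print(M)
```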

Now we have to propose the statistical criteria for the optimum. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an (“efficient”) estimator is called the “Fisher information” for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information. When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the “information matrix”. Because the variance of the estimator of a parameter vector is a matrix, the problem of “minimizing the variance” is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these “information criteria” can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix.

  • A-optimality (“average” or trace)
    • One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
  • D-optimality (determinant)
    • A popular criterion is D-optimality, which seeks to maximize the determinant of the information matrix of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
  • E-optimality (eigenvalue)
    • Another design is E-optimality, which maximizes the minimum eigenvalue of the information matrix.
  • T-optimality
    • This criterion maximizes the trace of the information matrix.

Other optimality-criteria are concerned with the variance of predictions:

  • G-optimality
    • A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix. This has the effect of minimizing the maximum variance of the predicted values.
  • I-optimality (integrated)
    • A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space.
  • V-optimality (variance)
    • A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points.

Now back to our example, because the asymptotic covariance matrix associated with the LSE of \theta is proportional to M^{-1}, the most popular regression design criterion is D-optimality, where designs are sought to minimize the determinant of M^{-1}. And the standardized predicted variance function, corresponding to the G-optimality, is

d(x,\xi,\theta)=V^\intercal(x)M^{-1}(\xi,\theta)V(x)

and G-optimality seeks to minimize \delta(\xi,\theta)=\max_{x\in\mathcal{X}}d(x,\xi,\theta).

A central result in the theory of optimal design, the General Equivalence Theorem, asserts that the design \xi^* that is D-optimal is also G-optimal and that

\delta(\xi^*,\theta)=p

the number of parameters.
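Here is a small numerical check of the General Equivalence Theorem in the textbook case of the simple linear model \eta(x,\theta)=\theta_0+\theta_1 x on the design space [-1,1], where the design putting weight 1/2 at x=-1 and x=+1 is known to be D-optimal; the maximum of the standardized predicted variance over the design space should then equal p=2.

```python
import numpy as np

def d_var(x, M_inv):
    # standardized predicted variance d(x, xi) = f(x)^T M^{-1} f(x), with f(x) = (1, x)
    f = np.array([1.0, x])
    return f @ M_inv @ f

xs, ws = np.array([-1.0, 1.0]), np.array([0.5, 0.5])   # the D-optimal design
V = np.column_stack([np.ones_like(xs), xs])
M = V.T @ np.diag(ws) @ V
M_inv = np.linalg.inv(M)

grid = np.linspace(-1, 1, 201)
print(max(d_var(x, M_inv) for x in grid))              # approximately 2 = p
```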

Now the optimal design for an interference model, which Professor Wei Zheng will talk about, considers the following model for block designs with neighbor effects:

y_{i,j}=\mu+\tau_{d(i,j)}+\lambda_{d(i,j-1)}+\rho_{d(i,j+1)}+\beta_i+e_{i,j}

where d(i,j)\in\{1, 2, \cdots, t\} is the treatment assigned to the plot (i,j) in the j-th position of the i-th block, and

  1. \mu is the general mean;
  2. \tau_{d(i,j)} is the direct effect of treatment d(i,j);
  3. \lambda_{d(i,j-1)} and \rho_{d(i,j+1)} are the left and right neighbor effects, respectively; that is, the interference effects of the treatments assigned to the left and right neighbor plots (i,j-1) and (i,j+1);
  4. \beta_i is the effect of the i-th block; and
  5. e_{i,j} is the random error, 1\leq i\leq b, 1\leq j\leq k.

We seek the optimal design among designs d\in\Omega_{t,b,k}, the set of all designs with b blocks of size k and with t treatments.

I am not going into the details of the derivation of the optimal design for the above interference model; I will just sketch the outline here. First of all, we can write down the information matrix for the direct treatment effects \tau=(\tau_1,\tau_2,\cdots, \tau_t)^\intercal, say C_d. Let S be the set of all possible t^k block sequences with replacement, which is the design space. Then we try to find the optimal measure \xi among the set P=\{p_s: s\in S, \sum_sp_s=1, p_s\geq 0\} to maximize \Phi(C_{\xi}) for a given function \Phi satisfying the following three conditions:

  1. \Phi is concave;
  2. \Phi(M^\intercal CM)=\Phi(C) for any permutation matrix M;
  3. \Phi(bC) is nondecreasing in the scalar b>0.

A measure \xi which achieves the maximum of \Phi(C_{\xi}) over P for any \Phi satisfying the above three conditions is said to be universally optimal. Such a measure is optimal under the A-, D-, E-, and T-criteria, among others. Thus we could imagine that all of the analysis is essentially linear algebra.

There has been a Machine Learning (ML) reading list of books on Hacker News for a while, where Professor Michael I. Jordan recommends some books to start on ML for people who are going to devote many decades of their lives to the field and who want to get to the research frontier fairly quickly. Recently he articulated the relationship between CS and Stats amazingly well in his reddit AMA, in which he also added some books that dig still further into foundational topics. I just list them here for people’s convenience and my own reference.

  • Frequentist Statistics
    1. Casella, G. and Berger, R.L. (2001). “Statistical Inference” Duxbury Press.—Intermediate-level statistics book.
    2. Ferguson, T. (1996). “A Course in Large Sample Theory” Chapman & Hall/CRC.—For a slightly more advanced book that’s quite clear on mathematical techniques.
    3. Lehmann, E. (2004). “Elements of Large-Sample Theory” Springer.—About asymptotics, which is a good starting place.
    4. Vaart, A.W. van der (1998). “Asymptotic Statistics” Cambridge.—A book that shows how many ideas in inference (M estimation, the bootstrap, semiparametrics, etc) repose on top of empirical process theory.
    5. Tsybakov, Alexandre B. (2008) “Introduction to Nonparametric Estimation” Springer.—Tools for obtaining lower bounds on estimators.
    6. Efron, B. (2010). “Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction” Cambridge.—A thought-provoking book.
  • Bayesian Statistics
    1. Gelman, A. et al. (2003). “Bayesian Data Analysis” Chapman & Hall/CRC.—About Bayesian data analysis.
    2. Robert, C. and Casella, G. (2005). “Monte Carlo Statistical Methods” Springer.—About Bayesian computation.
  • Probability Theory
    1. Grimmett, G. and Stirzaker, D. (2001). “Probability and Random Processes” Oxford.—Intermediate-level probability book.
    2. Pollard, D. (2001). “A User’s Guide to Measure Theoretic Probability” Cambridge.—More advanced level probability book.
    3. Durrett, R. (2005). “Probability: Theory and Examples” Duxbury.—Standard advanced probability book.
  • Optimization
    1. Bertsimas, D. and Tsitsiklis, J. (1997). “Introduction to Linear Optimization” Athena.—A good starting book on linear optimization that will prepare you for convex optimization.
    2. Boyd, S. and Vandenberghe, L. (2004). “Convex Optimization” Cambridge.
    3. Nesterov, Y. (2003). “Introductory Lectures on Convex Optimization” Springer.—A start to understand lower bounds in optimization.
  • Linear Algebra
    1. Golub, G., and Van Loan, C. (1996). “Matrix Computations” Johns Hopkins.—Getting a full understanding of algorithmic linear algebra is also important.
  • Information Theory
    1. Cover, T. and Thomas, J. “Elements of Information Theory” Wiley.—Classic information theory.
  • Functional Analysis
    1. Kreyszig, E. (1989). “Introductory Functional Analysis with Applications” Wiley.—Functional analysis is essentially linear algebra in infinite dimensions, and it’s necessary for kernel methods, for nonparametric Bayesian methods, and for various other topics.

Remarks from Professor Jordan: “not only do I think that you should eventually read all of these books (or some similar list that reflects your own view of foundations), but I think that you should read all of them three times—the first time you barely understand, the second time you start to get it, and the third time it all seems obvious.”

In mathematics, a general principle for studying an object is to move from the study of the object itself to the study of relationships between objects. In functional data analysis, the most important tool for studying the object itself, i.e. one functional data set, is functional principal component analysis (FPCA). For studying the relationship between two functional data sets, one popular approach is various types of regression analysis. In this post, I only focus on FPCA. The central idea of FPCA is dimension reduction by means of a spectral decomposition of the covariance operator, which yields functional principal components as coefficient vectors to represent the random curves in the sample.

First of all, let us define what FPCA is. Suppose we observe functions X_1(\cdot), X_2(\cdot), \cdots, X_n(\cdot). We want to find an orthonormal basis \phi_1(\cdot), \cdots, \phi_K(\cdot) such that

\sum_{i=1}^n\|X_i-\sum_{k=1}^K<X_i, \phi_k>\phi_k\|^2

is minimized. Once such a basis is found, we can replace each curve X_i by \sum_{k=1}^K<X_i, \phi_k>\phi_k to a good approximation. This means that instead of working with infinite-dimensional curves X_i, we can work with K-dimensional vectors (<X_i, \phi_1>, \cdots, <X_i, \phi_K>)^\intercal. The functions \phi_k are collectively called the optimal empirical orthonormal basis, or empirical functional principal components. Note that once we have the functional principal components, we can compute the so-called FPC scores <X_i,\phi_k> to approximate the curves.

For FPCA, we usually adopt the so-called “smooth-first-then-estimate” approach: we first pre-process the discrete observations into smoothed functional data, and then use the empirical estimators of the mean and covariance based on the smoothed functional data to conduct FPCA.

For the smoothing step, we smooth each individual curve separately. For each realization, we can use basis expansion (a polynomial basis is unstable; a Fourier basis is suitable for periodic functions; a B-spline basis is flexible and useful), smoothing penalties (which lead to smoothing splines by the smoothing spline theorem), or local polynomial smoothing (a small code sketch follows this list):

  • Basis expansion: assuming one realization of the underlying true process is X_i(t)=\sum_{k=1}^Kc_{ik}B_k(t), where \{B_k(\cdot), k=1,2,\cdots, K\} are the basis functions, we estimate the coefficients by solving

min_{\{c_{ik}\}_{k=1}^K}\sum_{j=1}^{m_i}\big(Y_{ij}-\sum_{k=1}^Kc_{ik}B_k(t_{ij})\big)^2.

  • Smoothing penalties: min\sum_{j=1}^{m_i}\big(Y_{ij}-X_i(t_{ij})\big)^2+\lambda J(X_i(\cdot)), where J(\cdot) is a measure for the roughness of functions.
  • Local linear smoothing: assume that near a point t_0 we have X_i(t)\approx X_i(t_0)+X_i'(t_0)(t-t_0); then X_i(t_0) is estimated by the \hat{a} solving

min_{a,b}\sum_{j=1}^{m_i}\big(Y_{ij}-a-b(t_{ij}-t_0)\big)^2K_h(t_{ij}-t_0).
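Here is a minimal sketch of the smoothing step for a single subject, using a spline smoother (scipy's UnivariateSpline, whose smoothing factor s controls the fit/smoothness trade-off); the observation times, noise level, and smoothing parameter are all hypothetical.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t_ij = np.linspace(0, 1, 50)                                     # observation times for subject i
y_ij = np.sin(2 * np.pi * t_ij) + 0.1 * rng.standard_normal(50)  # noisy discrete observations Y_ij

spline = UnivariateSpline(t_ij, y_ij, s=0.5)   # larger s gives a smoother fit
grid = np.linspace(0, 1, 101)                  # common evaluation grid for all subjects
x_i = spline(grid)                             # smoothed functional datum X_i(t) on the grid
```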

Once we have the smoothed functional data, denoted as X_i(t), i=1,2,\cdots,n, we obtain the empirical estimators of the mean and covariance as

\bar{X}(t)=\frac{1}{n}\sum_{i=1}^nX_i(t), \hat{C}(s,t)=\frac{1}{n}\sum_{i=1}^n\big(X_i(t)-\bar{X}(t)\big)\big(X_i(s)-\bar{X}(s)\big).
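Assuming the smoothed curves have been evaluated on a common grid and stacked into an n × m array, these two estimators are a few lines of numpy (a sketch; the names are hypothetical).

```python
import numpy as np

def mean_and_cov(X):
    """X is an (n x m) array whose i-th row is the smoothed curve X_i on a common grid."""
    Xbar = X.mean(axis=0)               # \bar{X}(t) on the grid
    Xc = X - Xbar                       # centered curves
    Chat = Xc.T @ Xc / X.shape[0]       # \hat{C}(s, t) evaluated on the grid x grid
    return Xbar, Chat
```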

We can then obtain the empirical functional principal components as the eigenfunctions of the above sample covariance operator \hat{C} (for the proof, refer to page 39 of the book “Inference for Functional Data with Applications”). Note that the above estimation procedure for the mean and covariance functions needs densely observed functional data, since otherwise the smoothing step will not be stable. Thus people have also proposed other estimators of the mean and covariance functions, such as the local linear estimators proposed by Professor Yehua Li from ISU, which have the advantage of covering all types of functional data: sparse (i.e. longitudinal), dense, or in-between. Now the problem is how to conduct FPCA based on \hat{C}(s,t) in practice. Actually it is the following classic mathematical problem:

\int \hat{C}(s,t)\xi(s)ds=\lambda\xi(t), i.e. \hat{C}\xi=\lambda\xi,

where \hat{C} is the integral operator with the symmetric kernel \hat{C}(s,t). Computing the eigenvalues and eigenfunctions of an integral operator with a symmetric kernel is a well-studied problem in applied mathematics, so one can refer to those numerical methods to solve the above problem.

However, two common methods used in Statistics are described in Section 8.4 of the fundamental functional data analysis book written by Professors J. O. Ramsay and B. W. Silverman. One is the discretizing method and the other is the basis function method. For the discretizing method, we essentially discretize the smoothed functions X_i(t) on a fine grid of N equally spaced values that span the interval and then apply traditional PCA, followed by some interpolation method for points not among the selected grid points. For the basis function method, we illustrate it by assuming the mean function is equal to 0:

  1. Basis expansion: X_i(t)=\sum_{k=1}^Ka_{ik}\phi_k(t), and then X=(X_1,X_2,\cdots, X_n)^\intercal=A\phi, where A=((a_{ik}))\in{R}^{n\times K} and \phi=(\phi_1,\phi_2,\cdots,\phi_K)^\intercal\in{R}^{K};
  2. Covariance function: \hat{C}(s,t)=\frac{1}{n}X^\intercal(s) X(t)=\frac{1}{n}\phi(s)^\intercal A^\intercal A\phi(t);
  3. Eigenfunction expansion: assume the eigenfunction \xi(t)=\sum_{k=1}^Kb_k\phi_k(t);
  4. Problem Simplification: The above basis expansions lead to

\hat{C}\xi=\int\frac{1}{n}\phi(s)^\intercal A^\intercal A\phi(t)\xi(t)dt= \frac{1}{n}\phi(s)^\intercal A^\intercal A\int\phi(t)\xi(t)dt

=\frac{1}{n}\phi(s)^\intercal A^\intercal A\sum_{k=1}^Kb_k\int\phi(t)\phi_k(t)dt=\frac{1}{n}\phi(s)^\intercal A^\intercal AWb,

where W=\int\phi(t)\phi^\intercal(t)dt\in{R}^{K\times K} and b=(b_1, b_2, \cdots, b_K)^\intercal\in{R}^{K}. Hence the eigen problem boils down to \frac{1}{n}\phi(s)^\intercal A^\intercal AWb=\lambda\phi(s)^\intercal b, \forall s, which is equivalent to

\frac{1}{n}A^\intercal AWb=\lambda b.

Note that the assumptions for the eigenfunctions to be orthonormal are equivalent to b_i^\intercal W b_i=1, b_i^\intercal W b_j=0, i\neq j. Let u=W^{1/2}b, and then we have the above problem as

n^{-1}W^{1/2}A^\intercal AW^{1/2}W^{1/2}b=\lambda W^{1/2}b, i.e. n^{-1}W^{1/2}A^\intercal AW^{1/2}u=\lambda u

which is a traditional eigen problem for symmetric matrix n^{-1}W^{1/2}A^\intercal AW^{1/2}.

Two special cases deserve particular attention. One is an orthonormal basis, which leads to W=I; the other is taking the smoothed functional data X_i(t) themselves as the basis functions, which leads to A=I.
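The final symmetric eigenproblem is straightforward to code; here is a sketch assuming the coefficient matrix A (n × K) and the Gram matrix W=\int\phi(t)\phi^\intercal(t)dt are available (both are hypothetical inputs here).

```python
import numpy as np
from scipy.linalg import sqrtm, eigh

def fpca_basis(A, W):
    """Basis-function FPCA: solve n^{-1} W^{1/2} A^T A W^{1/2} u = lambda u."""
    n = A.shape[0]
    W_half = np.real(sqrtm(W))             # symmetric square root W^{1/2}
    S = W_half @ A.T @ A @ W_half / n
    lam, U = eigh(S)                       # eigenvalues in ascending order
    lam, U = lam[::-1], U[:, ::-1]         # reorder so the largest comes first
    B = np.linalg.solve(W_half, U)         # b = W^{-1/2} u: basis coefficients of eigenfunctions
    return lam, B                          # column k of B gives xi_k(t) = sum_j B[j, k] phi_j(t)

# Special case of an orthonormal basis: W = I, so this reduces to the eigendecomposition of A^T A / n.
```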

Note that the empirical functional principal components are proved to be the eigenfunctions of the sample covariance operator. This fact connects FPCA with the so-called Karhunen-Loève (KL) expansion:

X_i(t)=\mu(t)+\sum_{k}\xi_{ik}\phi_k(t), Cov(X_i(s), X_i(t))=\sum_k\lambda_k\phi_k(s)\phi_k(t)

where \xi_{ik} are uncorrelated random variables with mean 0 and variance \lambda_k, where \sum_k\lambda_k<\infty and \lambda_1\geq\lambda_2\geq\cdots. For simplicity we assume \mu(t)=0. Then we can easily see the connection between the KL expansion and FPCA: \{\phi_k(\cdot)\}_{k} is the series of orthonormal basis functions, and \{<X_i, \phi_k>=\xi_{ik}, i=1, 2, \cdots, n, k=1, 2, \cdots\} are the FPC scores.

So far, we have only discussed how to get the empirical functional principal components, i.e. the eigenfunctions/orthonormal basis functions. But to represent the functional data, we also have to get the coefficients, which are called the FPC scores \{<X_i, \phi_k>=\xi_{ik}, i=1, 2, \cdots, n, k=1, 2, \cdots\}. The simplest way is numerical integration:

\hat\xi_{ik}=\int X_i(t)\hat\phi_k(t)dt.

Note that the above estimation of the FPC scores via numerical integration first requires the smoothed functional data X_i(t). So if we only have sparsely observed functional data, this method will not provide reasonable approximations. Professor Fang Yao et al. proposed the so-called PACE (principal component analysis through conditional expectation) approach to deal with such longitudinal data.
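Under the dense-data assumption, the numerical-integration scores are essentially a one-liner with the trapezoidal rule; here is a sketch (the array names are hypothetical).

```python
import numpy as np

def fpc_scores(X, phi, t):
    """X: (n x m) smoothed curves on grid t; phi: (m x K) estimated eigenfunctions on the same grid.
    Returns the (n x K) matrix of scores xi_ik = integral of (X_i(t) - Xbar(t)) phi_k(t) dt."""
    Xc = X - X.mean(axis=0)                               # center at the sample mean
    return np.trapz(Xc[:, :, None] * phi[None, :, :], t, axis=1)
```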

Degrees of freedom and information criteria are two fundamental concepts in statistical modeling, which are also taught in introductory statistics courses. But what are the exact abstract definitions that can be used to derive specific calculation formulas in different situations?

I often use fit criteria like AIC and BIC to choose between models. I know that they try to balance good fit with parsimony, but beyond that I’m not sure what exactly they mean. What are they really doing? Which is better? What does it mean if they disagree?
Signed, Adrift on the IC’s

Intuitively, the degrees of freedom of a fitting procedure reflects the effective number of parameters used by the fitting procedure. Thus to most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. Is this really true? Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors, which is often captured through decreased degrees of freedom. Is this always the case? When does more regularization imply fewer degrees of freedom?

For the above two questions, I think the most important thing is to first answer the following “what”-type question:

What are AIC and BIC? What are degrees of freedom?

Akaike’s Information Criterion (AIC) estimates the relative Kullback-Leibler (KL) distance of the likelihood function specified by a fitted candidate model, from the unknown true likelihood function that generated the data:

D_{KL}(L_0(y)\|L(y))=\int L_0(y)\log \frac{L_0(y)}{L(y)}dy=E_0(l_0(y))-E_0(l(y))

where L(y) is the likelihood function specified by a fitted candidate model, L_0(y) is the unknown true likelihood function, and the expectation E_0 is taken under the true model. Note that the fitted model closest to the truth in the KL sense would not necessarily be the model that best fits the observed sample, since the observed sample can often be fit arbitrarily well by making the model more and more complex. Since E_0(l_0(y)) is the same for all models being considered, the KL distance is minimized by choosing the model with the highest E_0(l(y)), which can be estimated by an approximately unbiased estimator (up to a constant)

l-tr(\hat{J}^{-1}\hat{K})

where \hat{J} is an estimator of the covariance matrix of the parameters based on the matrix of second derivatives of l with respect to the parameters, and \hat{K} is an estimator based on the cross products of the first derivatives. Akaike showed that \hat{J} and \hat{K} are asymptotically equal for the true model, so that tr(\hat{J}^{-1}\hat{K})=tr(I), which is the number of parameters. This results in the usual definition of AIC

AIC=-2l+2p.

Schwarz’s Bayesian Information Criterion (BIC) is just comparing the posterior probability with the same prior and hence just comparing the likelihoods under different models:

B=\frac{Pr(M_1|y)}{Pr(M_2|y)}=\frac{Pr(y|M_1)}{Pr(y|M_2)}

which is just the Bayes factor. Schwarz showed that in many kinds of models B can be roughly approximated by \exp(l_1-\frac{1}{2}ln(n)p_1-l_2+\frac{1}{2}ln(n)p_2)

which leads to the definition of BIC

BIC=-2l+ln(n) p.

In summary, AIC and BIC are both penalized-likelihood criteria. AIC is an estimate of a constant plus the relative distance between the unknown true likelihood function of the data and the fitted likelihood function of the model, so a lower AIC means a model is considered to be closer to the truth. BIC is an estimate of a function of the posterior probability of a model being true, under a certain Bayesian setup, so a lower BIC means a model is considered more likely to be the true model. Both criteria are based on various assumptions and asymptotic approximations. Despite various subtle theoretical differences, their only difference in practice is the size of the penalty: BIC penalizes model complexity more heavily. The only way they can disagree is when AIC chooses a larger model than BIC. Thus AIC always has a chance of choosing too big a model, regardless of n. BIC has very little chance of choosing too big a model if n is sufficiently large, but for any given n it has a larger chance than AIC of choosing too small a model.
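To make the two formulas concrete, here is a minimal sketch computing AIC = -2l + 2p and BIC = -2l + ln(n)p for an ordinary least-squares fit with Gaussian errors; in this sketch p counts the regression coefficients plus the error variance, a convention that differs across software packages.

```python
import numpy as np

def gaussian_aic_bic(y, X):
    """AIC and BIC for a Gaussian linear model fit by least squares (maximum likelihood)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                            # MLE of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # maximized Gaussian log-likelihood l
    p = k + 1                                             # coefficients plus sigma^2
    aic = -2 * loglik + 2 * p
    bic = -2 * loglik + np.log(n) * p
    return aic, bic
```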

The effective degrees of freedom for an arbitrary modelling approach is defined based on the concept of expected optimism:

df(\mu, \sigma^2, FIT_{\lambda})=\frac{1}{2\sigma^2}\Big\{E(\|y^*-\hat{y}^{(FIT_{\lambda})}\|_2^2)-E(\|y-\hat{y}^{(FIT_{\lambda})}\|^2_2)\Big\}

where \sigma^2 is the variance of the error term, y^* is an independent copy of data vector y with mean \mu, and FIT_{\lambda} is a fitting procedure with tuning parameter \lambda. Note that the expected optimism is defined as w:=\frac{1}{n}\Big\{E(\|y^*-\hat{y}^{(FIT_{\lambda})}\|_2^2)-E(\|y-\hat{y}^{(FIT_{\lambda})}\|^2_2)\Big\}. And by the optimism theorem, we have that

df(\mu, \sigma^2, FIT_{\lambda})=\frac{1}{\sigma^2}\sum_{i=1}^n cov(\hat\mu_i, y_i).

Why does this definition make sense? In fact, under some regularity conditions, Stein proved that

df=E(\sum_{i=1}^n\frac{\partial\hat\mu_i}{\partial y_i})

which can be regarded as a sensitivity measure of the fitted values to the observations.

In the linear model, we know from Mallows that the relationship between the expected prediction error (EPE) and the residual sum of squares (RSS) is

EPE=E(RSS)+2\sigma^2 p,

which leads to df=p.
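The covariance definition above is easy to check by simulation. The sketch below estimates df=\frac{1}{\sigma^2}\sum_i cov(\hat\mu_i, y_i) for OLS on a fixed, hypothetical design with p=5 predictors; it should recover df \approx p, which for OLS equals the trace of the hat matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 5, 1.0
X = rng.standard_normal((n, p))                  # fixed design (hypothetical)
mu = X @ np.ones(p)                              # true mean vector
H = X @ np.linalg.solve(X.T @ X, X.T)            # hat matrix; trace(H) = p exactly

reps = 2000
Y = mu + sigma * rng.standard_normal((reps, n))  # repeated data sets y ~ N(mu, sigma^2 I)
Yhat = Y @ H.T                                   # fitted values mu-hat for each replicate
cov_terms = ((Y - mu) * (Yhat - Yhat.mean(axis=0))).mean(axis=0)  # cov(mu-hat_i, y_i)
print(cov_terms.sum() / sigma**2, "vs p =", p)   # Monte Carlo df, approximately 5
```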

Here are some references on this topic:

  1. Dziak, John J., et al. “Sensitivity and specificity of information criteria.” The Methodology Center and Department of Statistics, Penn State, The Pennsylvania State University (2012).
  2. Janson, Lucas, Will Fithian, and Trevor Hastie. “Effective degrees of freedom: A flawed metaphor.” arXiv preprint arXiv:1312.7851 (2013).
