...the objective of statistical methods is the reduction of data. A quantity of data... is to be replaced by relatively few quantities which shall adequately represent... the relevant information contained in the original data.

Since the number of independent facts supplied in the data is usually far greater than the number of facts sought, much of the information supplied by an actual sample is irrelevant. It is the object of the statistical process employed in the reduction of data to exclude this irrelevant information, and to isolate the whole of the relevant information contained in the data.

—Fisher’s 1922 article “On the mathematical foundations of theoretical statistics”

**Sufficiency** is the concept of retaining all of the information in the data that is relevant for estimating the target parameter. Since the raw data are of course sufficient, we look for a **minimal sufficient** statistic, i.e. one achieving the maximal reduction. A minimal sufficient statistic may still contain some redundancy; in other words, there may be more than one way to use it to estimate the same quantity. Essentially, **completeness** says that the only unbiased estimator of 0 based on the statistic is 0 itself. If T is not complete, then it can somehow be used to estimate the same quantity in two different ways.

Note that a further reduction of a complete statistic is also complete. Hence the key point of completeness is that it indicates a reduction of the data to the point where there can be at most one unbiased estimator of any function of the parameter.

Thus, as we keep reducing the data while preserving sufficiency, once the statistic is both sufficient and complete, we know it is minimal sufficient, provided a minimal sufficient statistic exists.

Here is a very nice geometric interpretation of completeness: https://stats.stackexchange.com/q/285503
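As a concrete illustration (not from the text): for i.i.d. Bernoulli(p) data, T = ΣXᵢ is complete and sufficient, and Rao-Blackwellizing any unbiased estimator by conditioning on T yields E[X₁ | T] = T/n, the sample mean. So once we condition on the complete sufficient statistic, distinct unbiased estimators collapse into one. A quick simulation sketch:

```python
import random

# Both X_1 and the sample mean T/n are unbiased for p, but conditioning
# X_1 on the complete sufficient statistic T = sum(X) gives exactly T/n,
# so "the only way to estimate 0 is with 0": their difference has mean 0
# and, as a function of T, must be identically 0.
random.seed(0)
n, p, reps = 10, 0.3, 200_000
est_first, est_mean = 0.0, 0.0
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    est_first += x[0]          # unbiased estimator: just the first observation
    est_mean += sum(x) / n     # unbiased estimator: function of T = sum(x)
print(round(est_first / reps, 2), round(est_mean / reps, 2))  # both ≈ 0.30
```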


- doc/statrepmanual.pdf – The StatRep User’s Guide (this manual)
- doc/quickstart.tex – A template and tutorial sample LaTeX file
- sas/statrep_macros.sas – The StatRep SAS macros
- sas/statrep_tagset.sas – The StatRep SAS tagset for LaTeX tabular output
- statrep.ins – The LaTeX package installer file
- statrep.dtx – The LaTeX package itself

Unzip the file statrep.zip to a temporary directory and perform the following steps:

- Step 1: Install the StatRep SAS Macros: Copy the file statrep_macros.sas to a local directory. If you have a folder where you keep your personal set of macros, copy the file there. Otherwise, create a directory such as C:\mymacros and copy the file into that directory.
- Step 2: Install the StatRep LaTeX Package: These instructions show how to install the StatRep package in your LaTeX distribution for your personal use.
- a. For MiKTeX users: If you do not have a directory for your own packages, choose a directory name to contain your packages (for example, C:\localtexmf). In the following instructions, this directory is referred to as the “root directory”.
- b. Create the additional subdirectories under the above root directory: tex/latex/statrep. Your directory tree will have the following structure: root directory/tex/latex/statrep.
- c. Copy the files statrep.dtx, statrep.ins, statrepmanual.pdf, and statrepmanual.tex to the statrep subdirectory.
- d. At the command prompt, cd to the statrep directory and enter the following command: pdftex statrep.ins. The command creates several files, one of which is the configuration file, statrep.cfg.

- Step 3: Tell the StatRep Package the Location of the StatRep SAS Macros. Edit the statrep.cfg file that was generated in Step 2d so that the macro \SRmacropath contains the correct location of the macro file from Step 1. For example, if you copied the statrep_macros.sas file to a directory named C:\mymacros, then you define the macro \SRmacropath as follows: \def\SRmacropath{C:/mymacros/statrep_macros.sas} Use the forward slash as the directory name delimiter instead of the backslash, which is a special character in LaTeX.

You can now test and experiment with the package. Create a working directory, and copy the file quickstart.tex into it. To generate the quick-start document:

- Compile the document with pdfLaTeX. You can use a LaTeX-aware editor such as TeXworks, or use the command-line command pdflatex. This step generates the SAS program that is needed to produce the results.
- Execute the SAS program quickstart_SR.sas, which was automatically created in the preceding step. This step generates the SAS results that are requested in the quick-start document.
- Recompile the document with pdfLaTeX. This step compiles the quick-start document to PDF, this time including the SAS results that were generated in the preceding step. In some cases, listing outputs may not be framed properly after this step; if so, repeat this step so that LaTeX can remeasure the listing outputs.
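The compile–run–recompile workflow above can be scripted. Here is a minimal sketch, assuming pdflatex and sas are available on your PATH (the helper names `statrep_commands` and `build` are hypothetical, not part of StatRep):

```python
import subprocess

def statrep_commands(texfile):
    """Return the command sequence for the StatRep three-pass build.
    Assumes 'pdflatex' and 'sas' are on the PATH; adjust as needed."""
    stem = texfile[:-4] if texfile.endswith(".tex") else texfile
    return [
        ["pdflatex", texfile],        # pass 1: writes <stem>_SR.sas
        ["sas", stem + "_SR.sas"],    # run SAS to generate the results
        ["pdflatex", texfile],        # pass 2: pull the results into the PDF
        ["pdflatex", texfile],        # extra pass if listings are framed badly
    ]

def build(texfile):
    for cmd in statrep_commands(texfile):
        subprocess.run(cmd, check=True)

# build("quickstart.tex")  # uncomment to run the full build
```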

Please refer to the following file for detailed information:

http://support.sas.com/rnd/app/papers/statrep/statrepmanual.pdf


“A population consisting of an unknown number of distinct species is searched by selecting one member at a time. No a priori information is available concerning the probability that an object selected from this population will represent a particular species. Based on the information available after an n-stage search it is desired to predict the conditional probability that the next selection will represent a species not represented in the n-stage sample.”

Searcher: “I am contemplating extending my initial search an additional m stages, and will do so if the expected number of individuals I will select in the second search who are new species is large. What do you recommend?”

Statistician: “Make one more search and then I will tell you.”

Refer to the Annals of Statistics paper:

[1] Starr, Norman. “Linear estimation of the probability of discovering a new species.” *The Annals of Statistics* (1979): 644-652.
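To give a flavor of how such predictions work, Turing's classical estimator (a relative of Starr's linear estimator, used here purely as an illustration) estimates the probability that the next draw represents an unseen species by the fraction of species observed exactly once so far:

```python
from collections import Counter

def turing_new_species_prob(sample):
    """Turing's estimator of the probability that the next selection is a
    new species: (number of species seen exactly once) / (sample size).
    An illustrative sketch, not Starr's exact linear estimator."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# 10 draws: species 'a' seen 3 times, 'b' twice, and c, d, e, f, g once each.
print(turing_new_species_prob(list("aaabbcdefg")))  # 5 singletons / 10 draws = 0.5
```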

- First of all, Statistics is a science dealing with data, comprising five main components: **data collection** (design of experiments, sampling), **data preparation** (storage, reading, organization, cleaning), **exploratory data analysis** (numerical summarization, visualization), **statistical inference** (frequentist and Bayesian), and **communication** (interpretation).
- Statisticians made the mistake of weighting the development of these five components extremely unequally over the past 50 years, focusing mostly on the fourth.
- Fortunately, the first component is now showing a resurgence in the massive-data setting. How to sample the “influential” data points from massive samples is a big and important research topic.
- People outside the traditional statistics community have been picking up the second and third components, as if adopting the two undeveloped statistics children. And the adoptive parents are saying that the two children are not statistics; instead they call them data science.
- But Statistics is really about all five equally important components.
- And our goal as statisticians is to bring the two children back to the statistics community. We are all statisticians!

- Deep Learning Master Class
- Advances in Variational Inference
- Numerical Optimization: Understanding L-BFGS
- An exact mapping between the Variational Renormalization Group and Deep Learning
- New ASA Guidelines for Undergraduate Statistics Programs
- Singular Value Decomposition (We Recommend a Singular Value Decomposition)
- How to explain what a neural network is in a simple, vivid, and interesting way?
- Academic vs. Industry Careers
- Hadley Wickham: Impact the world by being useful
- Statisticians in World War II: They also served
- A Brief Overview of Deep Learning
- Advice for applying Machine Learning
- Deep Learning Tutorial
- Gibbs Sampling in Haskell
- How-to go parallel in R – basics + tips


In statistics, we have to organize an experiment in order to gain some information about an object of interest. Fragments of this information can be obtained by making observations within some elementary experiments called trials. The set of all trials which can be incorporated in a prepared experiment will be denoted by $\mathcal{X}$, which we shall call the **design space**. The problem to be solved in experimental design is how to choose, say, $n$ trials $x_1, \dots, x_n \in \mathcal{X}$, called the support points of the design, and eventually how to choose the size $n$ of the design, to gather enough information about the object of interest. Optimum experimental design corresponds to the maximization, in some sense, of this information. Specifically, the optimality of a design depends on the **statistical model** and is assessed with respect to a **statistical criterion**, which is related to the variance–covariance matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require an understanding of statistical theory and practical knowledge of designing experiments.

We shall restrict our attention to the parametric situation in the case of a **regression model**; the mean response function is then parameterized as

$$\mathrm{E}[y \mid x] = \eta(x, \theta),$$

specified for a particular $x \in \mathcal{X}$ with unknown parameter $\theta \in \Theta$.

A design is specified by an initially arbitrary measure $\xi$ assigning $n$ design points to estimate the parameter vector $\theta$. Here $\xi$ can be written as

$$\xi = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ w_1 & w_2 & \cdots & w_n \end{pmatrix},$$

where the design support points $x_1, \dots, x_n$ are elements of the design space $\mathcal{X}$, and the associated weights $w_1, \dots, w_n$ are nonnegative real numbers which sum to one. We make the usual second-moment error assumptions leading to the use of least squares estimates. Then the corresponding Fisher **information matrix** associated with $\theta$ is given by

$$M(\xi, \theta) = \sum_{i=1}^{n} w_i\, f(x_i) f(x_i)^{\top},$$

where $f(x) = \partial \eta(x, \theta) / \partial \theta$ is the vector of partial derivatives of the mean response.

Now we have to propose the **statistical criteria for the optimum**. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an (“efficient”) estimator is called the “Fisher information” for that estimator. Because of this reciprocity, **minimizing the variance** corresponds to **maximizing the information**.

- **A-optimality** (“**average**” or “**trace**”): one criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
- **D-optimality** (“**determinant**”): a popular criterion is D-optimality, which seeks to maximize the determinant of the information matrix of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
- **E-optimality** (“**eigenvalue**”): another design criterion is E-optimality, which maximizes the minimum eigenvalue of the information matrix.
- **T-optimality**: this criterion maximizes the trace of the information matrix.

Other optimality-criteria are concerned with the variance of predictions:

- **G-optimality**: a popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix. This has the effect of minimizing the maximum variance of the predicted values.
- **I-optimality** (“**integrated**”): a second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance *over the design space*.
- **V-optimality** (“**variance**”): a third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of $m$ specific points.
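To see how the alphabetic criteria compare in practice, here is a small sketch (using two hypothetical 2×2 information matrices, not taken from the text) computing the A-, D-, E-, and T-criterion values. Note that different criteria can rank the same pair of designs differently:

```python
import numpy as np

# Two hypothetical 2x2 information matrices for competing designs.
M1 = np.array([[2.0, 0.0], [0.0, 0.5]])
M2 = np.array([[1.0, 0.0], [0.0, 1.0]])

def criteria(M):
    """Values of the alphabetic optimality criteria for an information matrix M."""
    return {
        "A": np.trace(np.linalg.inv(M)),   # A-optimality: minimize trace of M^{-1}
        "D": np.linalg.det(M),             # D-optimality: maximize det of M
        "E": np.linalg.eigvalsh(M).min(),  # E-optimality: maximize smallest eigenvalue
        "T": np.trace(M),                  # T-optimality: maximize trace of M
    }

c1, c2 = criteria(M1), criteria(M2)
# Both matrices have determinant 1, so D-optimality ties them; but A and E
# prefer M2, while T prefers M1 -- the criteria need not agree.
print(c1)
print(c2)
```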

Now back to our example: because the asymptotic covariance matrix associated with the LSE of $\theta$ is proportional to $M^{-1}(\xi, \theta)$, the most popular regression design criterion is D-optimality, where designs are sought to minimize the determinant of $M^{-1}(\xi, \theta)$. And the standardized predicted variance function, corresponding to G-optimality, is

$$d(x, \xi, \theta) = f(x)^{\top} M^{-1}(\xi, \theta)\, f(x),$$

and G-optimality seeks to minimize $\max_{x \in \mathcal{X}} d(x, \xi, \theta)$.

A central result in the theory of optimal design, the General Equivalence Theorem, asserts that the design $\xi^*$ that is D-optimal is also G-optimal, and that

$$\max_{x \in \mathcal{X}} d(x, \xi^*, \theta) = p,$$

the number of parameters.
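The theorem can be checked numerically in a classical case: for quadratic regression on [-1, 1], the D-optimal design is known to put weight 1/3 at each of -1, 0, and 1, and the maximum standardized predicted variance should equal p = 3. A sketch:

```python
import numpy as np

def info_matrix(points, weights):
    """Fisher information M(xi) = sum_i w_i f(x_i) f(x_i)^T for the
    quadratic regression model with f(x) = (1, x, x^2)."""
    f = lambda x: np.array([1.0, x, x * x])
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(points, weights))

def d_var(x, M_inv):
    """Standardized predicted variance d(x, xi) = f(x)^T M^{-1} f(x)."""
    f = np.array([1.0, x, x * x])
    return f @ M_inv @ f

# Classical D-optimal design on [-1, 1]: weight 1/3 at each of -1, 0, 1.
M = info_matrix([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3])
M_inv = np.linalg.inv(M)
grid = np.linspace(-1, 1, 401)
d_max = max(d_var(x, M_inv) for x in grid)
print(round(d_max, 6))  # ≈ 3, the number of parameters, as the theorem predicts
```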

Now the optimal design for an **interference model**, which Professor Wei Zheng will talk about, considers the following model in block designs with neighbor effects:

$$y_{ij} = \mu + \tau_{d(i,j)} + \lambda_{d(i,j-1)} + \rho_{d(i,j+1)} + \beta_i + e_{ij},$$

where $d(i,j)$ is the treatment assigned to the plot in the $j$-th position of the $i$-th block, and

- $\mu$ is the general mean;
- $\tau_{d(i,j)}$ is the direct effect of treatment $d(i,j)$;
- $\lambda_{d(i,j-1)}$ and $\rho_{d(i,j+1)}$ are respectively the left and right neighbor effects; that is, the interference effects of the treatments assigned to the left and right neighbor plots $(i, j-1)$ and $(i, j+1)$;
- $\beta_i$ is the effect of the $i$-th block; and
- $e_{ij}$ is the random error, with mean zero.

We seek the optimal design among $\mathcal{D}_{t,b,k}$, the set of all designs with $b$ blocks of size $k$ and with $t$ treatments.

I am not going into the details of the derivation of the optimal design for the above interference model; I just sketch the outline here. First of all, we can write down the **information matrix** for the direct treatment effects $\tau$, say $C_d$. Let $\mathcal{S}$ be the set of all possible block sequences with replacement, which is the design space. Then we try to find the optimal measure $\xi$ on $\mathcal{S}$ to maximize $\Phi(C_\xi)$ for a given function $\Phi$ satisfying the following three conditions:

- $\Phi$ is concave;
- $\Phi(S^{\top} C S) = \Phi(C)$ for any permutation matrix $S$;
- $\Phi(bC)$ is nondecreasing in the scalar $b > 0$.

A measure $\xi^*$ which achieves the maximum of $\Phi(C_\xi)$ for every $\Phi$ satisfying the above three conditions is said to be **universally optimal**. Such a measure is optimal under the A, D, E, and T criteria, among others. Thus we could imagine that all of the analysis is just linear algebra.

As a CS prof at MIT, I have had the privilege of working with some of the very best PhD students anywhere. But even here there are some PhDs that clearly stand out as *great*. I’m going to give two answers, depending on your interpretation of “great”.

For my first answer I’d select four indispensable qualities:

**0. intelligence**

**1. curiosity**

**2. creativity**

**3. discipline and productivity**

(interestingly, I’d say the same four qualities characterize great artists).

In the “nice to have but not essential” category, I would add

**4. ability to teach/communicate with an audience**

**5. ability to communicate with peers**

The primary purpose of PhD work is to advance human knowledge. Since you’re working at the edge of what we know, the material you’re working with is hard—you have to be smart enough to master it (intelligence). This is what qualifying exams are about. But you only need to be smart *enough*—I’ve met a few spectacularly brilliant PhD students, and plenty of others who were just smart enough. This didn’t really make a difference in the quality of their PhDs (though it does affect their choice of area—more of the truly brilliant go into the theoretical areas).

But intelligence is just a starting point. The first thing you actually have to *do* to advance human knowledge is ask questions about why things are the way they are and how they could be made better (curiosity). PhD students spend lots of time asking questions to which they don’t know the answer, so you’d better really enjoy this. Obviously, after you ask the questions you have to come up with the answers. And you have to be able to think in new directions to answer those questions (creativity). For if you can answer those questions using tried and true techniques, then they really aren’t research questions—they’re just things we already know for which we just haven’t gotten around to filling in the detail.

These two qualities are critical for a great PhD, but also lead to one of the most common failure modes: **students who love asking questions and thinking about cool ways to answer them, but never actually *do* the work necessary to try out the answer.** Instead, they flutter off to the next cool idea. So this is where discipline comes in: you need to be willing to bang your head against the wall for months (theoretician) or spend months hacking code (practitioner), in order to flesh out your creative idea and validate it. You need a long-term view that reminds you why you are doing this even when the fun parts (brainstorming and curiosity-satisfying) aren’t happening.

Communication skills are really valuable but sometimes dispensable. Your work can have a lot more impact if you are able to spread it to others who can incorporate it in their work. And many times you can achieve more by collaborating with others who bring different skills and insights to a problem. On the other hand, some of the greatest work (especially theoretical work) has been done by lone figures locked in their offices who publish obscure, hard-to-read papers; when that work is great enough, it eventually spreads into the community even if the originator isn’t trying to make it do so.

My second answer is more cynical. If you think about it, someone coming to do a PhD is entering an environment filled with people who excel at items 0-5 in my list. And most of those items are talents that faculty can continue to exercise as faculty, because really curiosity, creativity, and communication don’t take that much time to do well. The one place where faculty really need help is on productivity: they’re trying to advance a huge number of projects simultaneously and really don’t have the cycles to carry out the necessary work. So another way to characterize what makes a great PhD student is

**0. intelligence**

**1. discipline and productivity**

If you are off the scale in your productivity (producing code, running interviews, or working at a lab bench) and smart enough to understand the work you get asked to do, then you can be the extra pair of productive hands that the faculty member desperately needs. Your advisor can generate questions and creative ways to answer them, and you can execute. After a few years of this, they’ll thank you with a PhD.

If all you want is the PhD, this second approach is a fine one. But you should recognize that in this case that advisor is *not* going to write a recommendation letter that will get you a faculty position (though they’ll be happy to praise you to Google). There’s only one way to be a successful *faculty member*, and that’s my first answer above.

**Update**: Here is another article from Professors Mark Dredze (Johns Hopkins University) and Hanna M. Wallach (University of Massachusetts Amherst).

- Frequentist Statistics
- Casella, G. and Berger, R.L. (2001). “Statistical Inference” Duxbury Press.—An intermediate-level statistics book.
- Ferguson, T. (1996). “A Course in Large Sample Theory” Chapman & Hall/CRC.—A slightly more advanced book that’s quite clear on mathematical techniques.
- Lehmann, E. (2004). “Elements of Large-Sample Theory” Springer.—A good starting place for asymptotics.
- Vaart, A.W. van der (1998). “Asymptotic Statistics” Cambridge.—A book that shows how many ideas in inference (M-estimation, the bootstrap, semiparametrics, etc.) repose on top of empirical process theory.
- Tsybakov, Alexandre B. (2008). “Introduction to Nonparametric Estimation” Springer.—Tools for obtaining lower bounds on estimators.
- Efron, B. (2010). “Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction” Cambridge.—A thought-provoking book.

- Bayesian Statistics
- Gelman, A. et al. (2003). “Bayesian Data Analysis” Chapman & Hall/CRC.—A comprehensive treatment of Bayesian methods.
- Robert, C. and Casella, G. (2005). “Monte Carlo Statistical Methods” Springer.—About Bayesian computation.

- Probability Theory
- Grimmett, G. and Stirzaker, D. (2001). “Probability and Random Processes” Oxford.—Intermediate-level probability book.
- Pollard, D. (2001). “A User’s Guide to Measure Theoretic Probability” Cambridge.—More advanced level probability book.
- Durrett, R. (2005). “Probability: Theory and Examples” Duxbury.—Standard advanced probability book.

- Optimization
- Bertsimas, D. and Tsitsiklis, J. (1997). “Introduction to Linear Optimization” Athena.—A good starting book on linear optimization that will prepare you for convex optimization.
- Boyd, S. and Vandenberghe, L. (2004). “Convex Optimization” Cambridge.
- Nesterov, Y. (2003). “Introductory Lectures on Convex Optimization” Springer.—A start to understanding lower bounds in optimization.

- Linear Algebra
- Golub, G., and Van Loan, C. (1996). “Matrix Computations” Johns Hopkins.—Getting a full understanding of algorithmic linear algebra is also important.

- Information Theory
- Cover, T. and Thomas, J. “Elements of Information Theory” Wiley.—Classic information theory.

- Functional Analysis
- Kreyszig, E. (1989). “Introductory Functional Analysis with Applications” Wiley.—Functional analysis is essentially linear algebra in infinite dimensions, and it’s necessary for kernel methods, for nonparametric Bayesian methods, and for various other topics.

Remarks from Professor Jordan: “not only do I think that you should eventually read all of these books (or some similar list that reflects your own view of foundations), but I think that you should read all of them three times—**the first time you barely understand, the second time you start to get it, and the third time it all seems obvious**.”