
General philosophy of probability theory
Probability is central to science, more than any other part of math. It enters statistics, physics, biology, and even medicine as we will see when and if we discuss tomography. This is the broad view.
There is also a narrow view: one needs to understand probability before one can apply it effectively, and it has many subtleties. Possibly this is due to the fact that probability, stochasticity, or randomness may not actually exist! I think it mostly exists in our uncertainty about the world. The real world seems to be deterministic (of course one can never test this hypothesis). It is chaotic, and one uses probabilistic models to study it mainly because we don’t know the initial conditions. Einstein said that “God does not play dice”. My own view is that the world may be deterministic, but I like to think I have free will. I believe that probability should be regarded only as a model of reality.

From the notes of Lawrence A. Shepp
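
A small aside of my own (not from Shepp’s notes): the point about deterministic chaos is easy to see with the logistic map. The sketch below iterates the fully deterministic recursion x_{n+1} = 4 x_n (1 - x_n) from two initial conditions that differ by one part in a billion; within a few dozen steps the trajectories look unrelated, which is exactly why one falls back on probabilistic models when the initial conditions are not known exactly.

    # Toy illustration (my own, not Shepp's): a deterministic but chaotic system.
    # Two starting points that differ by 1e-9 diverge to order 1 within ~30 steps.

    def logistic_map(x0, r=4.0, steps=30):
        """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the whole trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_map(0.2)          # "true" trajectory
    b = logistic_map(0.2 + 1e-9)   # same law, initial condition known only approximately

    for n in (0, 10, 20, 30):
        print(n, abs(a[n] - b[n]))  # the gap grows from 1e-9 to order 1

Nothing in the recursion is random; the apparent randomness comes entirely from our imperfect knowledge of the starting point, which is Shepp’s point.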

Today I came across a nice list on xi’an’s blog of the top 15 papers for his graduate students to read:

  1. B. Efron (1979) Bootstrap methods: another look at the jackknife. Annals of Statistics
  2. R. Tibshirani (1996) Regression shrinkage and selection via the lasso J. Royal Statistical Society
  3. A.P. Dempster, N.M. Laird and D.B. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm J. Royal Statistical Society
  4. Y. Benjamini & Y. Hochberg (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Royal Statistical Society
  5. W.K. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
  6. J. Neyman & E.S. Pearson (1933) On the problem of the most efficient tests of statistical hypotheses. Philosophical Trans. Royal Society of London
  7. D.R. Cox (1972) Regression models and life-tables. J. Royal Statistical Society
  8. A. Gelfand & A.F.M. Smith (1990) Sampling-based approaches to calculating marginal densities J. American Statistical Assoc.
  9. C. Stein (1981) Estimation of the mean of a multivariate normal distribution Annals of Statistics
  10. J.O. Berger & T. Sellke (1987) Testing a point null hypothesis: the irreconcilability of p-values and evidence. J. American Statistical Assoc.

Which ones should I now add? First, Steve Fienberg pointed me to the reading list he wrote in 2005 for the ISBA Bulletin, from which I must select a few:

  1. A. Birnbaum (1962) On the Foundations of Statistical Inference J. American Statistical Assoc.
  2. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model. J. Royal Statistical Society
  3. J.W. Tukey (1962) The future of data analysis. Annals of Mathematical Statistics
  4. L. Savage (1976) On Rereading R.A. Fisher Annals of Statistics

And then from other readers, including Andrew, I must also pick:

  1. H. Akaike (1973). Information theory and an extension of the maximum likelihood principle. Proc. Second Intern. Symp. Information Theory, Budapest
  2. D.B. Rubin (1976). Inference and missing data. Biometrika
  3. G. Wahba (1978). Improper priors, spline smoothing and the problem of guarding against model errors in regression. J. Royal Statistical Society
  4. G.W. Imbens and J.D. Angrist (1994). Identification and estimation of local average treatment effects. Econometrica.
  5. G.E.P. Box and H.L. Lucas (1959) Design of experiments in nonlinear situations. Biometrika
  6. S. Fienberg (1972) The multiple recapture census for closed populations and incomplete 2^k contingency tables. Biometrika

Of course, there are others that come close to the above, like Besag’s 1975 Series B paper. Or Fisher’s 1922 foundational paper. But the list is already quite long. (In case you wonder, I would not include Bayes’ 1763 paper in the list, as it is just too remote from statistics.)

And this year some of his students are reading the following papers:

  1. W.K. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
  2. G. Casella & W. Strawderman (1981) Estimation of a bounded mean Annals of Statistics
  3. A.P. Dawid, M. Stone & J. Zidek (1973) Marginalisation paradoxes in Bayesian and structural inference J. Royal Statistical Society
  4. C. Stein (1981) Estimation of the mean of a multivariate normal distribution Annals of Statistics
  5. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model. J. Royal Statistical Society
  6. A. Birnbaum (1962) On the Foundations of Statistical Inference J. American Statistical Assoc.

I think it is also a good list for my own reading.
