Recently, several papers discussed in our journal club have focused on integrative clustering of multiple omics data sets. I found that they all originate from factor analysis and exploit the advantage of factor analysis over principal component analysis.

Let’s recall the model for factor analysis:

X=\mu+LF+\epsilon,

where X\in{R}^p, \mu\in{R}^p, L\in{R}^{p\times r}, F\in{R}^r (r<p) and \epsilon\in{R}^p, with the mean \mu and the loading matrix L fixed, and factors F\sim\text{N}(0, I_r), \epsilon\sim\text{N}(0, \Psi) with \Psi diagonal. We also assume that F and \epsilon are uncorrelated. Note that this model simply characterizes the covariance structure of the Gaussian random vector X\sim\text{N}(\mu, LL^\intercal+\Psi). Given an observed sample of size n,

X_i=\mu+LF_i+\epsilon_i, i=1, 2, \cdots, n,

we can use the EM algorithm to get the MLE of the parameters \mu, L, \Psi (you will find that maximizing the likelihood directly is hard).
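
To make the E and M steps concrete, here is a minimal numpy sketch of the EM updates for this factor analysis model. The function name, initialization, and fixed iteration count are my own choices, not taken from any of the papers discussed; treat it as an illustration rather than a reference implementation.

```python
import numpy as np

def fa_em(X, r, n_iter=200, seed=0):
    """EM for the factor analysis model x_i = mu + L f_i + eps_i,
    f_i ~ N(0, I_r), eps_i ~ N(0, Psi) with Psi diagonal.
    X is an n x p data matrix."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mu = X.mean(axis=0)                  # MLE of mu is the sample mean
    Xc = X - mu
    L = rng.normal(size=(p, r))          # random initialization of the loadings
    psi = np.var(Xc, axis=0)             # diagonal of Psi
    for _ in range(n_iter):
        # E-step: posterior of f_i given x_i is N(G L' Psi^{-1}(x_i - mu), G)
        Psi_inv_L = L / psi[:, None]
        G = np.linalg.inv(np.eye(r) + L.T @ Psi_inv_L)
        EF = Xc @ Psi_inv_L @ G          # n x r matrix of E[f_i | x_i]
        S_FF = n * G + EF.T @ EF         # sum_i E[f_i f_i' | x_i]
        # M-step: closed-form updates for L and Psi
        S_XF = Xc.T @ EF                 # sum_i (x_i - mu) E[f_i | x_i]'
        L = S_XF @ np.linalg.inv(S_FF)
        psi = np.mean(Xc**2, axis=0) - np.sum(L * (S_XF / n), axis=1)
        psi = np.maximum(psi, 1e-8)      # numerical safeguard for the diagonal
    return mu, L, psi
```

Note that the E step only requires inverting the r x r matrix G, which is what keeps EM cheap even when p is large.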

Now, for principal component analysis, we should for clarity distinguish between classical (non-probabilistic) principal component analysis and probabilistic principal component analysis. Classical principal component analysis actually has no statistical model. The probabilistic principal component model is defined as the above factor analysis model with \Psi=\sigma^2 I_p and L orthonormal, and one can show that as \sigma^2\to 0 it reduces to classical principal component analysis. We know that PCA maximizes the data variance captured by the low-dimensional projection, or equivalently minimizes the L_2 reconstruction error between the projected data points and the original data, namely

\min_{L, Z}\|X-LZ^\intercal\|_F^2, \text{ subject to } L\in{R}^{p\times r} \text{ orthonormal},

where X\in{R}^{p\times n} here is the (centered) data matrix and Z\in{R}^{n\times r}. The solution to this problem comes from the eigendecomposition of the sample covariance matrix: \hat{L} contains the eigenvectors corresponding to the r largest eigenvalues, and \hat{Z}^\intercal=\hat{L}^\intercal X gives the projected data points. From this analysis, we can see that the difference between factor analysis and classical principal component analysis is that PCA treats covariance and variance identically, while factor analysis models covariance and variance separately. In fact, the r principal components are chosen to capture as much variance as possible, while the r latent variables in a factor analysis model are chosen to explain as much covariance as possible. (Note that all the correlations amongst the variables must be explained by the common factors; if we assume joint normality, the observed variables are conditionally independent given F.)
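
For comparison, here is a classical PCA sketch via the eigendecomposition of the sample covariance (using the n x p data-matrix convention common in software, i.e. the transpose of the X above; the function name is mine):

```python
import numpy as np

def classical_pca(X, r):
    """Classical PCA of an n x p data matrix via the sample covariance."""
    Xc = X - X.mean(axis=0)               # center the columns
    S = Xc.T @ Xc / Xc.shape[0]           # sample covariance matrix (p x p)
    eigval, eigvec = np.linalg.eigh(S)    # eigenvalues in ascending order
    L = eigvec[:, ::-1][:, :r]            # top-r eigenvectors (orthonormal loadings)
    Z = Xc @ L                            # projected data points (scores)
    return L, Z, eigval[::-1][:r]
```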

Now think about the difference between factor analysis and probabilistic principal component analysis (PPCA). From the above definitions, we see that the main difference is that factor analysis allows individual characteristics through the error term, \epsilon\sim\text{N}(0, \Psi), instead of requiring \Psi=\sigma^2 I_p. From this perspective, we have

X=\mu+LF+\epsilon,

with a common structure \mu+LF across all components of X and individual characteristics \epsilon_j\sim\text{N}(0, \psi_j), while PPCA does not allow any individual characteristics since it assumes \psi_j=\sigma^2 for all j. This essential difference makes factor analysis more useful than PCA in integrative data analysis, since it has more flexibility.

The AOAS 2013 paper uses exactly this idea to model integrative clustering:

X_t=L_tF+\epsilon_t, t=1, 2, \cdots, T,

where X_t\in{R}^{p_t} is the t-th of T data sources. By stacking all the data sources together, we have

X=LF+\epsilon,

which is exactly a simple factor analysis model. This factor analysis model is more useful than PCA in the data-integration setup precisely because it allows individual characteristics for the different data sources through \epsilon. Their paper also deals with sparsity in L_t.

The 2014 arXived paper generalizes the above paper by allowing another layer of individual characteristics:

X_t=L_tF+W_tZ_t+\epsilon_t, t=1, 2, \cdots, T,

The problem for this one is how to do the estimation. Instead of using the EM algorithm as in the AOAS 2013 paper, they estimate the model the way PCA does, by minimizing the reconstruction error.

This Tuesday, Professor Xuming He presented their recent work on subgroup analysis, which is very interesting and useful in practice. Think about the following very practical problem (relevant because the drug may be expensive or have a certain amount of side effects):

If you are given the drug response, some baseline covariates which have nothing to do with the treatment, the treatment indicator, and some post-treatment measurements, how could you come up with a statistical model to tell whether there exist subgroups which respond to the treatment differently?

Think about it for five minutes, then continue reading!

Dr. He borrowed a very traditional model in statistics, the logistic-normal mixture model, to study the above problem. Membership in the two subgroups is characterized by the observed baseline covariates, which have nothing to do with the treatment:

P(\delta_i=1)=\pi(X_i^\intercal\gamma),

where \delta_i is the unobserved membership indicator. The observed response then follows a normal mixture model

Y_i=Z_i^\intercal(\beta_1+\beta_2\delta_i)+\epsilon_i,

with different means Z_i^\intercal\beta_1 and Z_i^\intercal(\beta_1+\beta_2), where Z_i usually contains X_i but also includes the treatment indicator as well as any post-treatment measurements. Given that there are two subgroups characterized by the baseline covariates (which makes the testing problem regular), they test whether the two groups respond to the treatment differently, that is, they test the component of \beta_2 corresponding to the treatment indicator.
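
To make the model concrete, here is a small simulation sketch of this logistic-normal mixture. The dimensions and parameter values are made up for illustration and are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Baseline covariates X_i (with intercept) and a treatment indicator
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # n x 2
treat = rng.integers(0, 2, size=n)                      # treatment indicator
Z = np.column_stack([X, treat])                         # Z_i contains X_i and the treatment

# Hypothetical parameter values
gamma = np.array([-0.5, 1.0])         # drives subgroup membership via the logistic link
beta1 = np.array([1.0, 0.5, 0.0])     # baseline subgroup: no treatment effect
beta2 = np.array([0.0, 0.0, 2.0])     # the other subgroup responds to the treatment

# Latent membership delta_i ~ Bernoulli(pi(X_i' gamma)) with logistic pi
pi = 1.0 / (1.0 + np.exp(-X @ gamma))
delta = rng.binomial(1, pi)

# Observed response: normal mixture with subgroup-specific mean
Y = Z @ beta1 + delta * (Z @ beta2) + rng.normal(scale=1.0, size=n)
```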

Nice work demonstrating how to come up with a statistical model to study an interesting and practical problem!

But the above has nothing to do with the title of this post, the EM algorithm. As you might imagine, they use EM as the basic tool to fit the above mixture model. That is why I came back to revisit this great idea in statistics.

Given a complete-data random vector Y=(X, Z) with X observed and Z unobserved, we have the likelihood function p(y;\theta). The log marginal likelihood then has the following property:

\log p(x;\theta)=\log \int p(x,z;\theta) dz=\log \int \frac{p(x,z;\theta)}{f(z)}f(z)dz

\geq\int \log\Big\{\frac{p(x,z;\theta)}{f(z)}\Big\}f(z)dz,

where the last inequality follows from Jensen's inequality, and f(\cdot) is any density placed on Z. To make the bound tight, i.e. to turn the inequality into an equality, one choice is f(z)\propto p(x,z;\theta), which leads to

f(z)=p(x,z;\theta)/\int p(x,z;\theta) dz=p(z|x;\theta).

Then we have

\hat\theta=\arg\max\log p(x;\theta)=\arg\max E_f\Big(\log\{p(x,z;\theta)/f(z)\} \Big)

=\arg\max E_f\Big(\log p(x,z;\theta)\Big).

In summary, we have the following EM procedure:

  1. E step: get the conditional distribution f(z)=p(z|x;\theta);
  2. M step: \hat\theta=\arg\max E_f\Big(\log p(x,z;\theta)\Big)

And the corresponding EM algorithm can be described as the following iterative procedure:

  1. E step: get the conditional distribution f^{(k)}(z)=p(z|x;\hat\theta^{(k)});
  2. M step: \hat\theta^{(k+1)}=\arg\max E_{f^{(k)}}\Big(\log p(x,z;\theta)\Big)

To make this procedure effective, the conditional expectation in the M step should be easy to calculate. In fact, since the expectation is taken under the current \hat\theta^{(k)}, which does not introduce any new \theta, we often first get \hat\theta(x,z)=\arg\max\log p(x,z;\theta) and then plug in to obtain \hat\theta=\hat\theta\big(x,E_{f^{(k)}}(z)\big) (this plug-in shortcut is valid when \hat\theta(x,z) depends on z only through statistics that are linear in z; more generally one plugs in the expected complete-data sufficient statistics).

And this procedure guarantees the following monotonicity, which ensures the convergence of the likelihood values:

\ell(\hat\theta^{(k+1)})\geq \int \log\Big\{\frac{p(x,z;\hat\theta^{(k+1)})}{f^{(k)}(z)}\Big\}f^{(k)}(z)dz

\geq \int \log\Big\{\frac{p(x,z;\hat\theta^{(k)})}{f^{(k)}(z)}\Big\}f^{(k)}(z)dz =\ell(\hat\theta^{(k)}).

In summary, the EM algorithm is useful when the marginal problem \arg\max\log p(x;\theta) is difficult while the joint problem \arg\max\log p(x, z;\theta) is easy. However, Z is unobservable, so the EM algorithm maximizes \log p(x, z;\theta) iteratively, replacing it with its conditional expectation given the observed data. This expectation is computed with respect to the conditional distribution of Z given the observed data, evaluated at the current estimate of \theta.
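
As a toy instance of these two steps (a plain two-component Gaussian mixture, not the logistic-normal mixture from the talk), here is a minimal sketch in which both the E step and the M step are available in closed form; all names and the simulated data are mine.

```python
import numpy as np

def gmm2_em(x, n_iter=100):
    """EM for a two-component univariate Gaussian mixture:
    z_i ~ Bernoulli(w), x_i | z_i ~ N(mu_{z_i}, sigma_{z_i}^2)."""
    # crude initialization
    w, mu, sigma = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities f^{(k)}(z_i=1) = p(z_i=1 | x_i; current parameters)
        d0 = (1 - w) * np.exp(-0.5 * ((x - mu[0]) / sigma[0])**2) / sigma[0]
        d1 = w * np.exp(-0.5 * ((x - mu[1]) / sigma[1])**2) / sigma[1]
        r = d1 / (d0 + d1)
        # M-step: maximize E_f[log p(x, z; theta)]; closed form for w, mu, sigma
        w = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sigma = np.sqrt(np.array([np.average((x - mu[0])**2, weights=1 - r),
                                  np.average((x - mu[1])**2, weights=r)]))
    return w, mu, sigma

# quick check on simulated data
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])
print(gmm2_em(x))   # roughly w ~ 0.7, mu ~ (0, 4), sigma ~ (1, 1)
```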

In the talk, Professor Xuming He mentioned a rule of thumb from practical experience: the EM algorithm usually produces a good enough estimator within the first few steps.

The core idea of Empirical Likelihood (EL) is to use a maximum-entropy discrete distribution supported on the data points and constrained by estimating equations related to the parameters of interest. As such, it is a non-parametric approach in the sense that the distribution of the data does not need to be specified, only some of its characteristics, usually via moments. In short, it is a non-parametric likelihood, which is fundamental for likelihood-based statistical methodology.

Bayesian analysis is a very popular and useful method in applications. As we discussed in the last post, it is essentially a belief-updating procedure through data, which is very natural for modeling. Last time, I said I did not understand why there is such a severe debate between frequentists and Bayesians. Yesterday, I had a nice talk with Professor Xuming He from the University of Michigan. When we talked about Bayesian analysis, he made a nice point: in frequentist analysis, model mis-specification can be addressed in a rigorous way to conduct valid statistical inference, while Bayesian analysis is very sensitive to the likelihood as well as the prior, and how to make the corresponding adjustment is a big problem.

Before the discussion with Dr. Xuming He, I intuitively thought it would be very natural and potentially very useful to combine empirical likelihood with Bayesian analysis by using the empirical likelihood as the likelihood in the Bayesian framework. But now I understand why Professor Nicole Lazar from the University of Georgia wrote the paper "Bayesian Empirical Likelihood" to discuss the validity of posterior inference: "…can likelihoods other than the density from which the data are assumed to be generated be used as the likelihood portion in a Bayesian analysis?" And the paper concluded that "…while they indicate that it is feasible to consider a Bayesian inferential procedure based on replacing the data likelihood with empirical likelihood, the validity of the posterior inference needs to be established for each case individually."

But Professor Xuming He made a nice comment that the Bayesian framework can be used to avoid computing the maximum empirical likelihood estimator, by proving that the posterior distribution is asymptotically normal with mean around the maximum empirical likelihood estimator. The original idea of their AOS paper was indeed to use the computational advantage of the Bayesian side to get around the optimization difficulty in computing the maximum empirical likelihood estimator. This reminded me of another paper, "Approximate Bayesian Computation (ABC) via Empirical Likelihood", which used empirical likelihood to improve the approximation at an overall computing cost that is negligible compared with ABC.

We know that in general Bayesian analysis, the goal is to simulate from the posterior distribution, for example by using MCMC. But in order to use MCMC to simulate from the posterior, we need to be able to evaluate the likelihood. Sometimes, however, the likelihood is hard to evaluate due to the complexity of the model. For example, the laundry socks problem was recently a hit online. Since it is not trivial to write down the likelihood of the process, although we have a simple generative model from which we can easily simulate samples, Professor Rasmus Bååth presented a Bayesian analysis using ABC. Later Professor Christian Robert presented exact probability calculations, pointing out that Feller had posed a similar problem. And here is another post from Professor Saunak Sen. The basic idea of the ABC approximation is to accept a parameter value provided the simulated sample is sufficiently close to the observed data:

  1. Simulate \theta\sim\pi(\cdot), where \pi is the prior;
  2. Simulate x from the generative model;
  3. If \|x-x^*\| is small, keep \theta, where x^* is the observed data point. Otherwise reject.
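
Here is a minimal rejection-ABC sketch for a toy problem, estimating a normal mean; the prior, tolerance, and summary statistic (the sample mean) are arbitrary choices of mine rather than anything from the posts above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed data from a normal model with unknown mean (toy setup)
x_obs = rng.normal(loc=3.0, scale=1.0, size=50)

def abc_rejection(x_obs, n_draws=100_000, tol=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 10.0)                      # 1. simulate theta from the prior
        x_sim = rng.normal(theta, 1.0, size=len(x_obs))    # 2. simulate data from the generative model
        # 3. keep theta if the simulated data are close to the observed data
        #    (distance between sample means used as the summary statistic)
        if abs(x_sim.mean() - x_obs.mean()) < tol:
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(x_obs)
print(post.mean(), post.std())   # approximate posterior mean and sd for theta
```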

Now, how can empirical likelihood help ABC? Although the original motivation is the same, namely to approximate the likelihood (ABC approximates the likelihood via simulation, while the empirical likelihood version of ABC uses empirical likelihood to approximate the true likelihood), it is more natural to start from basic Bayesian computation (this is also why Professor Christian Robert changed the title of their paper). From the importance sampling perspective, we can generate a posterior sample as follows:

  1. Simulate \theta\sim\pi(\cdot), where \pi is the prior;
  2. Get the corresponding importance weight as w=L(\theta|x), where L(\theta|x)=p(x;\theta) is the likelihood of \theta given the observed data.

Now if we do not know the likelihood, we can do the following:

  1. Simulate \theta\sim\pi(\cdot), where \pi is the prior;
  2. Get the corresponding importance weight as w=EL(\theta|x) where EL(\theta|x) is the empirical likelihood.

This is the way of doing Bayesian computation via empirical likelihood.
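
Here is a minimal sketch of that recipe for a mean parameter, with the empirical likelihood evaluated through its usual Lagrange-multiplier dual. The function names and all tuning choices are mine, and this is only an illustration of the idea, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_el_mean(theta, x):
    """Log empirical likelihood of a candidate mean theta:
    maximize sum_i log(n w_i) over w_i >= 0, sum w_i = 1, sum w_i (x_i - theta) = 0."""
    g = x - theta
    if g.min() >= 0 or g.max() <= 0:     # theta outside the convex hull of the data
        return -np.inf
    # dual problem: lambda minimizes -sum log(1 + lambda * g_i) on its feasible interval
    lo, hi = -1.0 / g.max(), -1.0 / g.min()
    eps = 1e-8 * (hi - lo)
    res = minimize_scalar(lambda lam: -np.sum(np.log1p(lam * g)),
                          bounds=(lo + eps, hi - eps), method="bounded")
    n = len(x)
    return -n * np.log(n) + res.fun      # = sum_i log(w_i) at the optimum

# Bayesian computation with empirical likelihood importance weights
rng = np.random.default_rng(3)
x = rng.normal(3.0, 1.0, size=100)             # observed data (toy example)
thetas = rng.normal(0.0, 10.0, size=5000)      # 1. simulate theta from the prior
logw = np.array([log_el_mean(t, x) for t in thetas])   # 2. weight w = EL(theta | x)
w = np.exp(logw - logw.max())                  # rescale for numerical stability
print(np.average(thetas, weights=w))           # weighted posterior mean, close to x.mean()
```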

The main difference between Bayesian computation via empirical likelihood and empirical likelihood Bayesian inference is that the first uses empirical likelihood to approximate the likelihood in the Bayesian computation and is followed by Bayesian inference, while the second uses Bayesian computation to overcome the optimization difficulty and is followed by a study of the frequentist properties.

Last night, I had a discussion about integrative data analysis (closely related to the AOAS 2014 paper from Dr. Xihong Lin's group and the JASA 2014 paper from Dr. Hongzhe Li's group) with my friend. Suppose a biologist gave you genetic variant (e.g. SNP) data and phenotype (e.g. some trait) data, and you were asked to do an association analysis to identify the genetic variants that are significantly associated with the trait. One year later, the biologist obtained additional data, such as gene expression data, related to the two data sets given before, and you are now asked to calibrate your analysis to detect the association more efficiently and powerfully by integrating the three data sources. In this data-rich age, it is quite natural to run into this situation in practice. The question is how to come up with a natural and useful statistical framework to deal with such data integration.

For simplicity, we consider the problem where you are first given two random variables, X and Y, to study the association between them. Later on, you are given another random variable Z to help detect a significant association between X and Y. We assume the following true model:

Y=\beta X+\epsilon,

where X is independent of \epsilon. Now the question is: what characteristics must Z have in order to help raise the power of the detection?

  • What if X and Z are uncorrelated? If they are uncorrelated, then what if Y and Z are uncorrelated?
  • What if X and Z are correlated?

After thinking about these questions, you will find that for Z to be useful, it is ideal that Z is uncorrelated with X and highly correlated with Y, i.e. highly correlated with the error term \epsilon, so that it can be used to explain more of the variation contained in Y and thereby reduce the noise level.

To see why, first notice that the problem depends exactly on how to understand the following multiple linear regression problem:

Y=\alpha X+ \gamma Z+\varepsilon.

Now, from multiple linear regression theory, we have

\beta=\alpha+\gamma\times\delta

where Z=\delta X+\eta (see below for the proof). Thus, in order to keep the signal large, we hope that \alpha=\beta, i.e. \gamma=0 or \delta=0. But in order to reduce the noise, we need \gamma\neq 0. In summary, we need \delta=0, which means that X and Z are uncorrelated, and \gamma\neq 0, which means that Z can be used to explain some of the variability contained in the noise.
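
A quick numerical sanity check of the relation \beta=\alpha+\gamma\times\delta, with arbitrary simulated coefficients of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Generate Z = delta * X + eta and Y = alpha X + gamma Z + noise (arbitrary coefficients)
X = rng.normal(size=n)
eta = rng.normal(size=n)
Z = 0.7 * X + eta
Y = 1.0 * X + 2.0 * Z + rng.normal(size=n)

# Multiple regression of Y on (X, Z) recovers alpha and gamma
alpha, gamma = np.linalg.lstsq(np.column_stack([X, Z]), Y, rcond=None)[0]

# Univariate regression of Y on X alone gives beta ~= alpha + gamma * delta
beta = (X @ Y) / (X @ X)
delta = (X @ Z) / (X @ X)
print(beta, alpha + gamma * delta)   # the two numbers should nearly coincide
```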

Now please think about the question:

What is the difference between doing univariate regression one by one and doing multiple linear regression all at once?

Here is a hint: first regress both Y and Z onto X,

E(Y|X)=\alpha X+\gamma\delta X, E(Z|X)=\delta X.

On one hand, we find that \beta=\alpha+\gamma\delta; on the other hand, we regress the residual Y-E(Y|X)=\gamma\eta+\varepsilon onto the residual Z-E(Z|X)=\eta to get \gamma via

Y-E(Y|X)=\gamma [Z-E(Z|X)]+\varepsilon.

This procedure actually explains what multiple linear regression is and what the coefficients mean (think about the meaning of \gamma from the above explanation).
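
The residual-on-residual step can also be checked numerically (this is the Frisch–Waugh–Lovell idea; the simulated data below mirror the previous sketch and are again arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

X = rng.normal(size=n)
Z = 0.7 * X + rng.normal(size=n)
Y = 1.0 * X + 2.0 * Z + rng.normal(size=n)

# Regress Y and Z onto X, then regress residual on residual to recover gamma
resid_Y = Y - (X @ Y) / (X @ X) * X        # Y - E(Y|X) under the linear model
resid_Z = Z - (X @ Z) / (X @ X) * X        # Z - E(Z|X)
gamma_fwl = (resid_Z @ resid_Y) / (resid_Z @ resid_Z)

# It matches the coefficient of Z from the full multiple regression
gamma_full = np.linalg.lstsq(np.column_stack([X, Z]), Y, rcond=None)[0][1]
print(gamma_fwl, gamma_full)               # both close to 2.0
```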

p-value and Bayes are two of the hottest words in statistics. I still cannot understand why the debate between frequentist statistics and Bayesian statistics has lasted so long. What are the essential arguments behind it? (Can anyone help me with this?) In my view, they are just two ways of solving practical problems. Frequentists use a random version of the proof-by-contradiction argument (i.e. a small p-value indicates that the null hypothesis is less likely to be true), while Bayesians use a learning argument to update their beliefs through data. Besides, mathematicians use partial differential equations (PDEs) to model the real underlying processes. These are just different methodologies for dealing with practical problems. What is the point of the long-lasting debate between frequentist statistics and Bayesian statistics, then?

Although my current research is mostly in the frequentist statistics domain, I am becoming more and more of a Bayesian lover, since it is so natural. When I was teaching introductory statistics courses for undergraduate students at Michigan State University, I divided the whole course into three parts: Exploratory Data Analysis (EDA) using R, Bayesian Reasoning, and Frequentist Statistics. I found that at the end of the semester, the most memorable example in my students' minds was one from the second part (Bayesian Reasoning): the Monty Hall problem, which was mentioned in the article that just came out in the NYT. (Regarding the argument involving Professor Andrew Gelman, please also check out the response from Professor Gelman.) "Mr. Hall, longtime host of the game show “Let’s Make a Deal,” hides a car behind one of three doors and a goat behind each of the other two. The contestant picks Door No. 1, but before opening it, Mr. Hall opens Door No. 2 to reveal a goat. Should the contestant stick with No. 1 or switch to No. 3, or does it matter?" And the Bayesian approach to this problem "would start with one-third odds that any given door hides the car, then update that knowledge with the new data: Door No. 2 had a goat. The odds that the contestant guessed right — that the car is behind No. 1 — remain one in three. Thus, the odds that she guessed wrong are two in three. And if she guessed wrong, the car must be behind Door No. 3. So she should indeed switch." What a natural argument! Bayesian babies and Google's untrained search for YouTube cats (the methods of deep learning) are both excellent examples showing that Bayesian statistics is a remarkable way of solving problems.
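
A quick Monty Hall simulation (my own toy code, not from the course notes) confirms the two-thirds advantage of switching:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

car = rng.integers(0, 3, size=n)        # door hiding the car
pick = rng.integers(0, 3, size=n)       # contestant's initial pick

# Sticking wins only if the first pick was right; switching wins otherwise,
# because the host always reveals a goat behind one of the remaining doors.
print("stick :", np.mean(pick == car))   # about 1/3
print("switch:", np.mean(pick != car))   # about 2/3
```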

What about p-values? This random version of the proof-by-contradiction argument is also a great way of solving problems, as shown by the fact that it has helped solve so many problems in various scientific areas, especially in the bio-world. Check out today's post from Simply Statistics: "You think P-values are bad? I say show me the data," and also the earlier one: "On the scalability of statistical procedures: why the p-value bashers just don't get it."

The classical p-value does exactly what it says. But it is a statement about what would happen if there were no true effect. It cannot tell you about your long-term probability of making a fool of yourself, simply because sometimes there really is an effect. You make a fool of yourself if you declare that you have discovered something when all you are observing is random chance. From this point of view, what matters is the probability that, when you find a result to be "statistically significant", there is actually a real effect. If you find a "significant" result when there is nothing but chance at play, your result is a false positive, and the chance of getting a false positive is often alarmingly high. This probability is sometimes called the "false discovery rate" (or error rate), which differs from the concept used in multiple comparisons. One common misinterpretation of the p-value is to read it as this false discovery rate, which may be much higher than the p-value. Think about the Bayes formula and the tree diagram you learned in an introductory statistics course to figure out the relationship between the p-value and this "false discovery rate".
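
As a concrete tree-diagram calculation (the prevalence of real effects, the power, and the significance level below are numbers I assumed purely for illustration):

```python
# Bayes-formula / tree-diagram calculation of the chance that a "significant"
# result is a false positive (all three numbers below are assumed for illustration)
prior_real = 0.10   # fraction of tested hypotheses with a real effect
power      = 0.80   # P(significant | real effect)
alpha      = 0.05   # P(significant | no effect), the significance threshold

p_sig = prior_real * power + (1 - prior_real) * alpha
false_discovery = (1 - prior_real) * alpha / p_sig
print(false_discovery)   # = 0.045 / 0.125 = 0.36, far larger than the p-value cutoff 0.05
```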

I collected the following series on applying for faculty positions in 2011, when I was in the second year of my PhD. Now it is my turn to apply for jobs. I will share the following useful materials with all of you who want to apply for jobs this year.

  1. Applying for Jobs: Application Materials
  2. Applying for Jobs : Sending out Applications
  3. Applying for Jobs : Phone Interviews
  4. Applying for Jobs: On-Site Interviews
  5. Applying for Jobs: the Job Talk

My academic homepage has just been launched. Welcome to visit: Honglang Wang’s Homepage.

  1. Interview with Nick Chamandy, statistician at Google
  2. You and Your Research (video)
  3. Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained
  4. A Survival Guide to Starting and Finishing a PhD
  5. Six Rules For Wearing Suits For Beginners
  6. Why I Created C++
  7. More advice to scientists on blogging
  8. Software engineering practices for graduate students
  9. Statistics Matter
  10. What statistics should do about big data: problem forward not solution backward
  11. How signals, geometry, and topology are influencing data science
  12. The Bounded Gaps Between Primes Theorem has been proved
  13. A non-comprehensive list of awesome things other people did this year.
  14. Jake VanderPlas writes about the Big Data Brain Drain from academia.
  15. Tomorrow’s Professor Postings
  16. Best Practices for Scientific Computing
  17. Some tips for new research-oriented grad students
  18. 3 Reasons Every Grad Student Should Learn WordPress
  19. How to Lie With Statistics (in the Age of Big Data)
  20. The Geometric View on Sparse Recovery
  21. The Mathematical Shape of Things to Come
  22. A Guide to Python Frameworks for Hadoop
  23. Statistics, geometry and computer science.
  24. How to Collaborate On GitHub
  25. Step by step to build my first R Hadoop System
  26. Open Sourcing a Python Project the Right Way
  27. Data Science MD July Recap: Python and R Meetup
  28. Recent thoughts on git
  29. 10 Reasons Python Rocks for Research (And a Few Reasons it Doesn’t)
  30. Effective Presentations – Part 2 – Preparing Conference Presentations
  31. Doing Statistical Research
  32. How to Do Statistical Research
  33. Learning new skills
  34. How to Stand Out When Applying for An Academic Job
  35. Maturing from student to researcher
  36. False discovery rate regression (cc NSA’s PRISM)
  37. Job Hunting Advice, Pt. 3: Networking
  38. Getting Started with Git

This post is for JSM2013. I will put useful links here and I will update this post during the meeting.

  1. Big Data Sessions at JSM
  2. Nate Silver addresses assembled statisticians at this year’s JSM
  3. Data scientist is just a sexed up word for statistician

What I have learned from this meeting (Key words of this meeting):

Big Data, Bayesian, Statistical Efficiency vs Computational Efficiency

I was in Montreal from Aug 1st to Aug 8th for JSM2013 and traveling.

(Traveling in Quebec: Olympic Stadium; Underground City; Quebec City; Montreal City; Basilique Notre-Dame; Chinatown)

(Talks at JSM2013: Jianqing Fan; Jim Berger; Nate Silver; Tony Cai; Han Liu; Two Statistical Peters)

(My Presentation at JSM2013)

The following is the list of talks I attended:


  • Aug 4th
    • 2:05 PM Analyzing Large Data with R and MonetDB — Thomas Lumley, University of Auckland
    • 2:25 PM Empirical Likelihood and U-Statistics in Survival Analysis — Zhigang Zhang, Memorial Sloan-Kettering Cancer Center ; Yichuan Zhao, Georgia State University
    • 2:50 PM Joint Unified Confidence Region for the Parameters of Branching Processes with Immigration — Pin Ren ; Anand Vidyashankar, George Mason University
    • 3:05 PM Time-Varying Additive Models for Longitudinal Data — Xiaoke Zhang, University of California Davis ; Byeong U. Park, Seoul National University ; Jane-Ling Wang, UC Davis
    • 3:20 PM Leveraging as a Paradigm for Statistically Informed Large-Scale Computation — Michael W. Mahoney, Stanford University
    • 4:05 PM Joint Estimation of Multiple Dependent Gaussian Graphical Models — Yuying Xie, The University of North Carolina at Chapel Hill ; Yufeng Liu, The University of North Carolina ; William Valdar, UNC-CH Genetics
    • 4:30 PM Computational Strategies in Regression of Big Data — Ping Ma, University of Illinois at Urbana-Champaign
    • 4:55 PM Programming with Big Data in R — George Ostrouchov, Oak Ridge National Laboratory ; Wei-Chen Chen, Oak Ridge National Laboratory ; Drew Schmidt, University of Tennessee ; Pragneshkumar Patel, University of Tennessee
    • 5:20 PM Inference and Optimalities in Estimation of Gaussian Graphical Model — Harrison Zhou, Yale University
  • Aug 5th
    • 99 Mon, 8/5/2013, 8:30 AM – 10:20 AM CC-710a
      • Introductory Overview Lecture: Twenty Years of Gibbs Sampling/MCMC — Other Special Presentation
      • 8:35 AM Gibbs Sampling and Markov Chain Monte Carlo: A Modeler’s Perspective — Alan E. Gelfand, Duke University
      • 9:25 AM The Theoretical Underpinnings of MCMC — Jeffrey S. Rosenthal, University of Toronto
      • 10:15 AM Floor Discussion
    • 166 * Mon, 8/5/2013, 10:30 AM – 12:20 PM CC-520c
      • Statistical Learning and Data Mining: Winners of Student Paper Competition — Topic Contributed Papers
      • 10:35 AM Multicategory Angle-Based Large Margin Classification — Chong Zhang, UNC-CH ; Yufeng Liu, The University of North Carolina
      • 10:55 AM Discrepancy Pursuit: A Nonparametric Framework for High-Dimensional Variable Selection — Li Liu, Carnegie Mellon University ; Kathryn Roeder, CMU ; Han Liu, Princeton University
      • 11:15 AM PenPC: A Two-Step Approach to Estimate the Skeletons of High-Dimensional Directed Acyclic Graphs — Min Jin Ha ; Wei Sun, UNC Chapel Hill ; Jichun Xie, Temple University
      • 11:35 AM An Underdetermined Peaceman-Rachford Splitting Algorithm with Application to Highly Nonsmooth Sparse Learning Problems— Zhaoran Wang, Princeton University ; Han Liu, Princeton University ; Xiaoming Yuan, Hong Kong Baptist University
      • 11:55 AM Latent Supervised Learning — Susan Wei, UNC
      • 12:15 PM Floor Discussion
    • 220 Mon, 8/5/2013, 2:00 PM – 3:50 PM CC-710b
      • 2:05 PM Statistics Meets Computation: Efficiency Trade-Offs in High Dimensions — Martin Wainwright, UC Berkeley
      • 3:35 PM Floor Discussion
    • 267 Mon, 8/5/2013, 4:00 PM – 5:50 PM CC-517ab
      • 4:05 PM JSM Welcomes Nate Silver — Nate Silver, FiveThirtyEight.com
    • 209305 Mon, 8/5/2013, 6:00 PM – 8:00 PM I-Maisonneuve, JSM Student Mixer, Sponsored by Pfizer — Other Cmte/Business, ASA , Pfizer, Inc.
    • 268 Mon, 8/5/2013, 8:00 PM – 9:30 PM CC-517ab
      • 8:05 PM Ars Conjectandi: 300 Years Later — Hans Rudolf Kunsch, Seminar fur Statistik, ETH Zurich
  • Aug 6th
    • 280 * Tue, 8/6/2013, 8:30 AM – 10:20 AM CC-510a
      • Statistical Inference for Large Matrices — Invited Papers
      • 8:35 AM Conditional Sparsity in Large Covariance Matrix Estimation — Jianqing Fan, Princeton University ; Yuan Liao, University of Maryland ; Martina Mincheva, Princeton University
      • 9:05 AM Multivariate Regression with Calibration — Lie Wang, Massachusetts Institute of Technology ; Han Liu, Princeton University ; Tuo Zhao, Johns Hopkins University
      • 9:35 AM Principal Component Analysis for High-Dimensional Non-Gaussian Data — Fang Han, Johns Hopkins University ; Han Liu, Princeton University
      • 10:05 AM Floor Discussion
    • 325 * ! Tue, 8/6/2013, 10:30 AM – 12:20 PM CC-520b
      • Modern Nonparametric and High-Dimensional Statistics — Invited Papers
      • 10:35 AM Simple Tiered Classifiers — Peter Gavin Hall, University of Melbourne ; Jinghao Xue, University College London ; Yingcun Xia, National University of Singapore
      • 11:05 AM Sparse PCA: Optimal Rates and Adaptive Estimation — Tony Cai, University of Pennsylvania
      • 11:35 AM Statistical Inference in Compound Functional Models — Alexandre Tsybakov, CREST-ENSAE
      • 12:05 PM Floor Discussion
    • 392 Tue, 8/6/2013, 2:00 PM – 3:50 PM CC-710a
      • Introductory Overview Lecture: Big Data — Other Special Presentation
      • 2:05 PM The Relative Size of Big Data — Bin Yu, Univ of California at Berkeley
      • 2:55 PM Divide and Recombine (D&R) with RHIPE for Large Complex Data — William S. Cleveland, Purdue University
      • 3:45 PM Floor Discussion
    • 445 Tue, 8/6/2013, 4:00 PM – 5:50 PM CC-517ab
      • Deming Lecture — Invited Papers
      • 4:05 PM Industrial Statistics: Research vs. Practice — Vijay Nair, University of Michigan
  • Aug 7th
    • 10:35 AM Bayesian and Frequentist Issues in Large-Scale Inference — Bradley Efron, Stanford University
    • 11:20 AM Criteria for Bayesian Model Choice with Application to Variable Selection — Jim Berger, Duke University ; Susie Bayarri, University of Valencia ; Anabel Forte, Universitat Jaume I ; Gonzalo Garcia-Donato, Universidad de Castilla-La Mancha
    • 571 Wed, 8/7/2013, 2:00 PM – 3:50 PM CC-511c
      • Statistical Methods for High-Dimensional Sequence Data — Invited Papers
      • 2:05 PM Linkage Disequilibrium in Sequencing Data: A Blessing or a Curse? — Alkes L. Price, Harvard School of Public Health
      • 2:25 PM Statistical Prioritization of Sequence Variants — Lisa Joanna Strug, The Hospital for Sick Children and University of Toronto ; Weili Li, The Hospital for Sick Children and University of Toronto
      • 2:45 PM On Some Statistical Issues in Analyzing Whole-Genome Sequencing Data — Dan Liviu Nicolae, The University of Chicago
      • 3:05 PM Statistical Methods for Studying Rare Variant Effects in Next-Generation Sequencing Association Studies — Xihong Lin, Harvard School of Public Health
      • 3:25 PM Adjustment for Population Stratification in Association Analysis of Rare Variants — Wei Pan, University of Minnesota ; Yiwei Zhang, University of Minnesota ; Binghui Liu, University of Minnesota ; Xiaotong Shen, University of Minnesota
      • 3:45 PM Floor Discussion
    • 612 Wed, 8/7/2013, 4:00 PM – 5:50 PM CC-517ab
      • COPSS Awards and Fisher Lecture — Invited Papers
      • 4:05 PM From Fisher to Big Data: Continuities and Discontinuities — Peter Bickel, University of California – Berkeley
      • 5:45 PM Floor Discussion
  • Aug 8th
    • 621 Thu, 8/8/2013, 8:30 AM – 10:20 AM CC-516d
      • Recent Advances in Bayesian Computation — Invited Papers
      • 8:35 AM An Adaptive Exchange Algorithm for Sampling from Distribution with Intractable Normalizing Constants — Faming Liang, Texas A&M University
      • 9:00 AM Efficiency of Markov Chain Monte Carlo for Bayesian Computation — Dawn B Woodard, Cornell University
      • 9:25 AM Scalable Inference for Hierarchical Topic Models — John W. Paisley, University of California, Berkeley
      • 9:50 AM Augmented Particle Filters — Yuguo Chen, University of Illinois at Urbana-Champaign
      • 10:15 AM Floor Discussion
    • 661 * ! Thu, 8/8/2013, 10:30 AM – 12:20 PM CC-710b
      • Patterns and Extremes: Developments and Review of Spatial Data Analysis — Invited Papers
      • 10:35 AM Multivariate Max-Stable Spatial Processes — Marc G. Genton, KAUST ; Simone Padoan, Bocconi University of Milan ; Huiyan Sang, TAMU
      • 10:55 AM Approximate Bayesian Computing for Spatial Extremes — Robert James Erhardt, Wake Forest University ; Richard Smith, The University of North Carolina at Chapel Hill
