You are currently browsing the monthly archive for April 2012.
- LDA explained
- Counting the total number of…
- Significance Test for Kendall’s Tau-b
- dimension reduction in ABC [a review’s review]
- 9 essential LaTeX packages everyone should use
- Linguistic Notation Inside of R Plots! about knitr
- knitr Elegant, flexible and fast dynamic report generation with R
- knitr Performance Report-Attempt 1
- knitr Performance Report-Attempt 2
- Question: Why you need perl/python if you know R/Shell [NGS data analysis]
- SPAMS (SPArse Modeling Software) now with Python and R
- Large-scale Inference and empirical Bayes, both related to multiple testing
- My setup about some softwares and editors
- Fancy HTML5 Slides with knitr and pandoc
- John talks about Random is as random does
- MCMC at ICMS (1)
- MCMC at ICMS (2)
- MCMC at ICMS (3)
- John Cook: Why and How People Use R
- An Introduction to 6 Machine Learning Models
- Machine Learning: Algorithms that Produce Clusters
- Dirichlet Process for dummies
I just came back from the talk “Statistical Methods for Analysis of Gut Microbiome Data”, given by Professor Hongzhe Li from the University of Pennsylvania.
I learned a new biological term: microbiome, sometimes described as an extended human genome.
A microbiome is the totality of microbes, their genetic elements (genomes), and environmental interactions in a particular environment. The term “microbiome” was coined by Joshua Lederberg, who argued that microorganisms inhabiting the human body should be included as part of the human genome, because of their influence on human physiology. The human body contains over 10 times more microbial cells than human cells.
There are several research methods:
Targeted amplicon sequencing
Targeted amplicon sequencing relies on having some expectations about the composition of the community that is being studied. In target amplicon sequencing a phylogenetically informative marker is targeted for sequencing. Such a marker should be present in ideally all the expected organisms. It should also evolve in such a way that it is conserved enough that primers can target genes from a wide range of organisms while evolving quickly enough to allow for finer resolution at the taxonomic level. A common marker for human microbiome studies is the gene for bacterial 16S rRNA (i.e. “16S rDNA”, the sequence of DNA which encodes the ribosomal RNA molecule). Since ribosomes are present in all living organisms, using 16S rDNA allows for DNA to be amplified from many more organisms than if another marker were used. The 16S rDNA gene contains both slowly evolving regions and fast evolving regions; the former can be used to design broad primers while the latter allow for finer taxonomic distinction. However, species-level resolution is not typically possible using the 16S rDNA. Primer selection is an important step, as anything that cannot be targeted by the primer will not be amplified and thus will not be detected. Different sets of primers have been shown to amplify different taxonomic groups due to sequence variation.
Targeted studies of eukaryotic and viral communities are limited and subject to the challenge of excluding host DNA from amplification and the reduced eukaryotic and viral biomass in the human microbiome.
After the amplicons are sequenced, molecular phylogenetic methods are used to infer the composition of the microbial community. This is done by clustering the amplicons into operational taxonomic units (OTUs) and inferring phylogenetic relationships between the sequences. An important point is that the scale of data is extensive, and further approaches must be taken to identify patterns from the available information. Tools used to analyze the data include VAMPS, QIIME and mothur.
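The OTU-clustering step described above can be sketched in a few lines. This is only a toy illustration, not what VAMPS, QIIME or mothur actually do internally: the greedy centroid strategy, the naive per-position identity function, and the equal-length reads are simplifying assumptions.

```python
# Toy sketch of greedy OTU clustering by sequence similarity.
# Real pipelines use optimized aligners; the identity function here is
# a naive per-position match rate on equal-length reads.

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(reads, threshold=0.97):
    """Greedily assign each read to the first centroid within the
    threshold, otherwise open a new OTU with the read as its centroid."""
    centroids, otus = [], []
    for r in reads:
        for i, c in enumerate(centroids):
            if identity(r, c) >= threshold:
                otus[i].append(r)
                break
        else:
            centroids.append(r)
            otus.append([r])
    return otus

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAT", "TTTTGGGGCC"]
otus = cluster_otus(reads, threshold=0.9)
print(len(otus))  # the first three reads cluster together; the fourth is its own OTU
```

After clustering, each OTU's representative sequence would then be placed on a phylogeny; that step is omitted here.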
Metagenomic sequencing
Metagenomics is also used extensively for studying microbial communities. In metagenomic sequencing, DNA is recovered directly from environmental samples in an untargeted manner with the goal of obtaining an unbiased sample from all genes of all members of the community. Recent studies use shotgun Sanger sequencing or pyrosequencing to recover the sequences of the reads. The reads can then be assembled into contigs. To determine the phylogenetic identity of a sequence, it is compared to available full genome sequences using methods such as BLAST. One drawback of this approach is that many members of microbial communities do not have a representative sequenced genome.
Despite the fact that metagenomics is limited by the availability of reference sequences, one significant advantage of metagenomics over targeted amplicon sequencing is that metagenomics data can elucidate the functional potential of the community DNA. Targeted gene surveys cannot do this as they only reveal the phylogenetic relationship between the same gene from different organisms. Functional analysis is done by comparing the recovered sequences to databases of metagenomic annotations such as KEGG. The metabolic pathways that these genes are involved in can then be predicted with tools such as MG-RAST, CAMERA and IMG/M.
RNA and protein-based approaches
Metatranscriptomics studies have been performed to study the gene expression of microbial communities through methods such as the pyrosequencing of extracted RNA. Structure-based studies have also identified non-coding RNAs (ncRNAs) such as ribozymes from microbiota. Metaproteomics is a new approach that studies the proteins expressed by microbiota, giving insight into its functional potential.
He presented two statistical methods based on the first technology listed above (targeted amplicon sequencing):
- Kernel-based regression to test the effect of Microbiome composition on an outcome
- Sparse Dirichlet-Multinomial regression for Taxon-level analysis
The following is the abstract of this talk:
With the development of next generation sequencing technology, researchers have now been able to study the microbiome composition using direct sequencing, whose output are taxa counts for each microbiome sample. One goal of microbiome study is to associate the microbiome composition with environmental covariates. In some cases, we may have a large number of covariates and identification of the relevant covariates and their associated bacterial taxa becomes important. In this talk, I present several statistical methods for analysis of the human microbiome data, including exploratory analysis methods such as generalized UniFrac distances and graph-constrained canonical correlations and statistical models for the count data and simplex data. In particular, I present a sparse group variable selection method for Dirichlet-multinomial regression to account for overdispersion of the counts and to impose a sparse group L1 penalty to encourage both group-level and within-group sparsity. I demonstrate the application of these methods with an on-going human gut microbiome study to investigate the association between nutrient intake and microbiome composition. Finally, I present several challenging statistical and computational problems in analysis of shotgun metagenomics data.
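The Dirichlet-multinomial model at the core of the talk's regression method can be sketched numerically. This is only the count model itself: the covariate link and the sparse group L1 penalty from the abstract are not implemented, and the example counts and concentration parameters are invented.

```python
import math

def dm_loglik(counts, alpha):
    """Log-likelihood of one sample's taxa counts under a
    Dirichlet-multinomial with concentration parameters alpha.
    Overdispersion grows as sum(alpha) shrinks."""
    n, A = sum(counts), sum(alpha)
    ll = math.lgamma(n + 1) + math.lgamma(A) - math.lgamma(n + A)
    for x, a in zip(counts, alpha):
        ll += math.lgamma(x + a) - math.lgamma(a) - math.lgamma(x + 1)
    return ll

counts = [30, 10, 5]                            # taxa counts for one sample
tight = dm_loglik(counts, [60.0, 20.0, 10.0])   # large alpha: close to multinomial
loose = dm_loglik(counts, [6.0, 2.0, 1.0])      # small alpha: overdispersed
print(tight, loose)
```

Both parameter vectors imply the same mean proportions; the smaller concentration spreads mass toward extreme compositions, which is how the model accounts for overdispersion relative to the multinomial.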
- Gaussian free fields for mathematicians
- Gaussian free field and conformal field theory: these expository lectures give an elementary introduction to conformal field theory in the context of probability theory and complex analysis. They consider statistical fields, and define Ward functionals in terms of their Lie derivatives. Based on this approach, they explain some equations of conformal field theory and outline their relation to SLE theory.
- SLE and the free field: Partition functions and couplings
- Schramm-Loewner evolution (SLE). See slides by Tom Alberts, 2006 ICM slides by Oded Schramm, and St. Flour Lecture Notes by Wendelin Werner. See also Ito’s lemma notes.
The meaning of the term “Biological Replicate” unfortunately often does not get adequately addressed in many publications. “Biological Replicate” can have multiple meanings, depending upon the context of the study. A general definition could be that biological replicates are when the same type of organism is grown/treated under the same conditions. For example, if one was performing a cell-based study, then different flasks containing the same type of cell (and preferably the exact same lineage and passage number) which have been grown under the same conditions could be considered biological replicates of one another. The definition becomes a bit trickier when dealing with higher-order organisms, especially humans. This may be an entire discussion in and of itself, but in this case, it is important to note that one does not have a well-defined lineage or passage number for humans. Indeed, it is basically impossible to ensure that all of your samples for one treatment or control have been exposed to the same external factors. In this case, one must do all that is possible to accurately portray and group these organisms; thus, one should group according to such traits as gender, age, and other well-established cause-effect traits (smokers, heavy drinkers, etc.).
Also, it may be helpful to outline the contrast between biological and technical replicates. Though people have varying definitions of technical replicates, perhaps the purest form of technical replicate would be when the exact same sample (after all preparatory techniques) is analyzed multiple times. The point of such a technical replicate would be to establish the variability (experimental error) of the analysis technique (mass spectrometry, LC, etc.), thus allowing one to set confidence limits for what is significant data. This is in contrast to the reasoning behind a biological replicate, which is to establish the biological variability which exists between organisms which should be identical. Knowing the inherent variability between “identical” organisms allows one to decide whether observed differences between groups of organisms exposed to different treatments are simply random or represent a “true” biological difference induced by such treatment.
Biological Factor: Single biological parameter controlled by the investigator. For example, genotype, diet, environmental stimulus, age, etc.
Treatment or Treatment Level: An exact value for a biological factor; for example, stress, no-stress, young, old, drug-treated, placebo, etc.
Condition: A single combination of treatments; for example, strain1/stressed/time10, young/drug-treated, etc.
Sample: An entity which has a single condition and is measured experimentally; for example serum from a single mouse, a sample drawn from a pool of yeast, a sample of pancreatic beta cells pooled from 5 diabetic animals, the third blood sample taken from a participant in a drug study.
Biological Measurement: A value measured on a collection of samples; for example, abundance of protein x, abundance of phospho-protein y, abundance of transcript z.
Experiment: A collection of biological measurements on two or more samples.
Replicate: Two sets of measurements, either within a single experiment or in two different experiments, where measurements are made on samples in the same condition.
Technical Replicates: Replicates that share the same sample; i.e. the measurements are repeated.
- Biological Replicates: Replicates where a different sample is used for each replicate.
Question: Technical/Biological Replicates in RNA-Seq For Two Cell Lines
I have a question about the meaning of “biological replicate” in the context of applying RNA-seq to compare two cell lines. Apologies if this is an overly naive question.
We have two human cell lines, one of which was derived from the other. Both have different phenotypes, and we want to use RNA-seq to explore the genetic underpinnings of the difference.
If we generate one cDNA library for each sample, and sequence each library on two lanes of an Illumina GA flowcell, I understand we will have “technical replicates”. In this scenario, we can expect very little difference between the two replicates in a sample. If we were to use something like DESeq to call differential expression, it would be inappropriate to treat our technical replicates as replicates in DESeq, since that would likely lead to a large list of DE calls that don’t reflect biological differences.
So, I’d like to know if it is possible within our model to have “biological replicates” with which we can use DESeq to call biologically meaningful differential expression.
So, two questions:
(1) If we grow up two sets of cells from each of our two cell lines, generate separate cDNA libraries (4 in total), and sequence them on separate lanes, would these be considered “biological replicates” in the sense that it would be appropriate to treat them as replicates within something like DESeq. I suspect not, since the fact that both replicates in a sample derive from a single cell line within a short period of time will mean that they will be very similar anyway, almost as similar as the technical replicate scenario. Perhaps we would need entirely separate cell lines to be considered biological replicates.
(2) In general, how would others address this – does it seem a better approach to go with separate cells and separate libraries, or would this entail extra effort for effectively no benefit?
Two “biological replicates” are two samples that should be identical (as far as you can/want to control) but are biologically separate (different cells, different organisms, different populations, colonies…).
You want to check the difference between cell line A and cell line B. Let’s start by assuming they are identical. Even if they are, random fluctuation, technical issues, and intrinsically slightly different environments mean you will never observe exactly the same expression for all genes. You will find differences, but you can’t conclude whether they are inevitable fluctuations or the result of an actual difference.
So you want two independent populations from A and two independent populations from B, and then see how the variability WITHIN A1 and A2 (and within B1 and B2) compares to the variability BETWEEN the two lines. The RNA levels from A1 and A2 WILL NOT be the same, because biological systems are far from deterministic. They might be very similar, but still different.
Because A and B would otherwise sit on different plates (their environment), I would seed A1, A2, B1 and B2 on the same day on 4 distinct (but as similar as possible) dishes, grow them together under the same conditions to minimize external influences, then collect from all 4 at once and extract RNA…
Since the cost is not in growing cell lines but in sequencing, I would recommend doing 4 independent replicates for A and for B (or any other cell lines you may be interested in) in ONE GO, and then freezing the samples or the RNA. Even better: have somebody relabel the lines alfa, bravo, charlie, delta… (make sure they keep track of what they are in a safe place 😉 ) so that you are not biased while seeding, growing and manipulating the lines.
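The within-line versus between-line comparison in this answer can be illustrated with a toy calculation. This is not DESeq (which fits negative-binomial dispersions per gene); the counts below are invented, with two biological replicates per cell line.

```python
import statistics

# Toy illustration: for each gene, compare the difference between the
# cell-line means to the spread among biological replicates within a line.
expr = {
    "gene1": {"A": [100, 110], "B": [105, 112]},   # within-line noise only
    "gene2": {"A": [100, 104], "B": [300, 310]},   # real between-line difference
}

ratios = {}
for gene, groups in expr.items():
    between = abs(statistics.mean(groups["A"]) - statistics.mean(groups["B"]))
    within = statistics.mean([statistics.stdev(groups["A"]),
                              statistics.stdev(groups["B"])])
    ratios[gene] = between / within  # crude signal-to-noise ratio

print(ratios)  # gene2's between/within ratio dwarfs gene1's
```

Without the replicates there would be no `within` term at all, which is exactly why differences between single samples of A and B cannot be called biological.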
- M. A. Álvarez, L. Rosasco and N. D. Lawrence, Kernels for vector-valued functions: a review, tech report, 2011.
- A. Argyriou, M. Pontil, and C.A. Micchelli, When is there a representer theorem? Vector versus matrix regularizers, Journal of Machine Learning Research, 10:2507-2529, 2009.
- G. Bakir, T. Hofmann, B. Schölkopf, A. Smola, B. Taskar and S. Vishwanathan (Eds.), Predicting Structured Data, MIT Press, 2007.
- C. Brouard, F. d’Alché-Buc and M. Szafranski, Semi-supervised Penalized Output Kernel Regression for Link Prediction, in Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 2011.
- A. Caponnetto, M. Pontil, C.A. Micchelli and Y. Ying, Universal multi-task kernels, Journal of Machine Learning Research, 9:1615-1646, 2008.
- S. Dabo-Niang and F. Ferraty (Eds.), Functional and Operatorial Statistics, Springer-Verlag, New-York, 2008.
- P. Geurts, L. Wehenkel and F. d’Alché-Buc, Kernelizing outputs of tree-based methods, in Proceedings of the 23rd International Conference on Machine Learning (ICML 2006), Pittsburgh, PA, USA, 2006. ACM 2006, pp. 345-352.
- P. Geurts, L. Wehenkel and F. d’Alché-Buc, Gradient Boosting for Kernelized Output Spaces, in Proceedings of the 24th International Conference on Machine Learning (ICML 2007), Corvallis, Oregon, USA, 2007.
- F. Ferraty, A. Laksaci, A. Tadj and P. Vieu, Kernel regression with functional response, Electronic Journal of Statistics, 5, 159-171, 2011.
- S. Jung, M. Foskey and J. S. Marron, Principal Arc Analysis on direct product manifolds, The Annals of Applied Statistics, 5, 578-603,2011.
- H. Kadri, A. Rabaoui, P. Preux, E. Duflos and A. Rakotomamonjy, Functional Regularized Least Squares Classification with Operator-valued Kernels, in Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 2011.
- H. Kadri, E. Duflos, P. Preux, S. Canu and M. Davy, Nonlinear functional regression: a functional RKHS approach. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), Italy, 2010.
- T. Kato, Perturbation theory for linear operators, Springer-Verlag, Berlin, 1966.
- C.A. Micchelli and M. Pontil, On learning vector-valued functions, Neural Computation, 17:177-204, 2005.
- J. O. Ramsay and B. W. Silverman, Functional Data Analysis, Springer-Verlag, 2nd ed., 2005.
There is a workshop for this:
Information geometry applies differential geometry to families of probability distributions, and hence to statistical models. Information plays two roles in it: Kullback-Leibler information, or relative entropy, features as a measure of divergence (not quite a metric, because it is asymmetric), and Fisher information takes the role of curvature.
One very nice thing about information geometry is that it gives us very strong tools for proving results about statistical models, simply by considering them as well-behaved geometrical objects. Thus, for instance, it’s basically a tautology to say that a manifold is not changing much in the vicinity of points of low curvature, and changing greatly near points of high curvature. Stated more precisely, and then translated back into probabilistic language, this becomes the Cramer-Rao inequality, that the variance of a parameter estimator is at least the reciprocal of the Fisher information. As someone who likes differential geometry, and now is interested in statistics, I find this very pleasing.
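The Cramér-Rao statement above can be checked numerically for the simplest case, a Bernoulli(p) model, where the per-sample Fisher information is 1/(p(1-p)) and the sample mean attains the bound. The values of p, n and the number of trials are arbitrary choices for this sketch.

```python
import random

# Monte Carlo check of the Cramer-Rao bound for Bernoulli(p):
# per-sample Fisher information is I(p) = 1/(p(1-p)), so any unbiased
# estimator over n draws has variance >= 1/(n*I(p)) = p(1-p)/n,
# and the sample mean attains this bound.
random.seed(0)
p, n, trials = 0.3, 200, 5000
estimates = [sum(random.random() < p for _ in range(n)) / n
             for _ in range(trials)]
mean = sum(estimates) / trials
var = sum((e - mean) ** 2 for e in estimates) / trials
bound = p * (1 - p) / n          # = 1 / (n * I(p))
print(var, bound)                # the two agree closely
```

Here the geometric picture is that the Bernoulli family, viewed as a one-dimensional manifold, has Fisher metric 1/(p(1-p)), largest near the endpoints, where nearby distributions are easiest to tell apart.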
As a physicist, I have always been somewhat bothered by the way statisticians seem to accept particular parametrizations of their models as obvious and natural, and build those parameterizations into their procedures. In linear regression, for instance, it’s reasonably common for them to want to find models with only a few non-zero coefficients. This makes my thumbs prick, because it seems to me obvious that if I regressed on arbitrary linear combinations of my covariates, I have exactly the same information (provided the transformation is invertible), and so I’m really looking at exactly the same model — but in general I’m not going to have a small number of non-zero coefficients any more. In other words, I want to be able to do coordinate-free statistics. Since differential geometry lets me do coordinate-free physics, information geometry seems like an appealing way to do this. There are various information-geometric model selection criteria, which I want to know more about; I suspect, based purely on this disciplinary prejudice, that they will out-perform coordinate-dependent criteria.
[From Information Geometry]
The following is from the abstract of the tutorial given by Shun-ichi Amari (RIKEN Brain Science Institute) at Algebraic Statistics 2012.
We give fundamentals of information geometry and its applications. We often treat a family of probability distributions for understanding stochastic phenomena in the world. When such a family includes n free parameters, it is parameterized by a real vector of n dimensions. This is regarded as a manifold, where the parameters play the role of a coordinate system. A natural question arises: what is the geometrical structure to be introduced in such a manifold? Geometrical structure gives, for example, a distance measure between two distributions and a geodesic line connecting two distributions. The second question is how useful the geometry is for understanding properties of statistical inference and designing new algorithms for inference.
The first question is answered by the invariance principle such that the geometry should be invariant under coordinate transformations of random variables. More precisely, it should be invariant by using sufficient statistics as random variables. It is surprising that this simple criterion gives a unique geometrical structure, which consists of a Riemannian metric and a family of affine connections which define geodesics.
The unique Riemannian metric is proved to be the Fisher information matrix. The invariant affine connections are not limited to the Riemannian (Levi-Civita) connection but include the exponential and mixture connections, which are dually coupled with respect to the metric. The connections are dually flat in typical cases such as exponential and mixture families.
A dually flat manifold has a canonical divergence function, which in our case is the Kullback-Leibler divergence. This implies that the KL-divergence is induced from the geometrical flatness. Moreover, there exist two affine coordinate systems, one is the natural or canonical parameters and the other is the expectation parameters in the case of an exponential family. They are connected by the Legendre transformation. A generalized Pythagorean theorem holds with respect to the canonical divergence and the pair of dual geodesics. A generalized projection theorem is derived from it.
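The Legendre duality between natural and expectation parameters described above can be checked for the simplest exponential family, the Bernoulli distribution. This is a minimal numeric sketch of the general statement.

```python
import math

# Bernoulli as an exponential family: natural parameter
# theta = log(p/(1-p)), log-partition psi(theta) = log(1 + e^theta),
# expectation parameter eta = psi'(theta) = p. The dual potential is
# the negative entropy phi(eta), and Legendre duality says
#     psi(theta) + phi(eta) = theta * eta.
p = 0.3
theta = math.log(p / (1 - p))                     # natural parameter
psi = math.log(1 + math.exp(theta))               # log-partition function
eta = math.exp(theta) / (1 + math.exp(theta))     # expectation parameter (= p)
phi = eta * math.log(eta) + (1 - eta) * math.log(1 - eta)  # negative entropy
print(abs(eta - p) < 1e-12, abs(psi + phi - theta * eta) < 1e-12)
```

The KL divergence between two Bernoullis is exactly the Bregman divergence generated by psi, which is the sense in which the canonical divergence of the dually flat structure recovers Kullback-Leibler.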
These properties are useful for elucidating and designing algorithms of statistical inference. They are used not only for evaluating the higher-order characteristics of estimation and testing, but also for elucidating machine learning, pattern recognition and computer vision. We further study the procedures of semi-parametric estimation together with estimating functions. It is also applied to the analysis of neuronal spiking data by decomposing the firing rates of neurons and their higher-order correlations orthogonally.
The dually flat structure is useful for optimization of various kinds. A manifold need not be connected with probability distributions; the invariance criterion does not work in such cases. However, a convex function plays a fundamental role in such a case, and we obtain a Riemannian metric together with two dually coupled flat affine connections, connected by the Legendre transformation. The Pythagorean theorem and the projection theorem again play a fundamental role in applications of information geometry.
Dropbox is an efficient way to synchronize folders between various computers (Windows, Linux, Mac…). It is free up to 2 GB. I use it. If you want to try it using the following link, we both get an extra 0.5 GB free…
- Martingales which are not Markov chains ( “is there an elementary way to construct martingales which are not Markov chains?”)
- A random walk on the unitary group (an interesting random walk to keep in mind)
- What does a real Brownian motion conditioned to stay inside the segment look like? (it can help to better understand BM)
- Curvature for Markov Chains (it can help to better understand MC)
In the morning, there was a talk given by Subhadeep Mukhopadhyay (Deep) about “LP Comoments: Concepts and Applications—Finding Patterns in Large Data Sets”. It was a pretty interesting talk. Two things I want to share here:
One is: “Noise has no pattern, whatever the noise is.” Since we are looking for patterns in the data, a mechanism that can identify the pattern correctly regardless of the noise is absolutely a good mechanism. The speaker claimed that the method they proposed achieves this, which is pretty cool.
The other is about the two cultures Breiman (2001) urged statisticians to be aware of:
1. Parametric modeling culture, pioneered by R. A. Fisher and Jerzy Neyman;
2. Algorithmic predictive culture, pioneered by machine learning research.
Now the speaker claimed that their method represents a third culture: nonparametric, quantile-based, information-theoretic modeling.
Thus based on the above two things, I am really interested in their method. The following is what I want to study:
Emanuel Parzen wrote many papers about this, related to quantile theory.
[Update] There is a paper on this topic written by the speaker.
- Social Network Analysis with R
- Publicly available large data sets for database research
- Around the blogs in 80 hours and Random Thoughts (some are about sequencing data)
- Change margins of a single page (latex)
- Bootstrap example
- Exciting News on Three Dimensional Manifolds
- Dr. Perou on Next Generation Sequencing Technology
- RNA-Seq Methods & March Twitter Roundup
- Introduction to Statistical Thought
- An R programmer looks at Julia
- The slides and video can help you get a flavor of the language Julia.
- Why and How People Use R
- Wang, Landau, Markov, and others…
- Linear mixed models in R
- Least Absolute Gradient Selector: Statistical Regression via Pseudo-Hard Thresholding
- Sparse and Unique Nonnegative Matrix Factorization Through Data Preprocessing
- C++ at Facebook
- Calling C++ from R
- C++ Renaissance
- Why haven’t we cured cancer yet? (Revisited): Personalized medicine versus evolution
- Getting ppt figures into LaTeX
- Latex Allergy Cured by knitr
- Melbourne R Users
- sixty two-minute r twotorials