
Today I came across an interesting question on MathOverflow: what are the biggest open problems in probability theory? One of the answers is about self-avoiding walks, and the most famous researcher in this field, as far as I know, is Gordon Slade. A few days ago I also saw a post on this subject; at the time I knew nothing about the area, so I had no particular reaction and simply skipped it. Now I think I have realized how important this field is within probability theory, so I want to learn at least what it is about. Here I want to share the materials I have collected.

http://chromotopy.org/?p=402 (a recent post about the talk given by Professor Slade)

http://gowers.wordpress.com/2010/08/22/icm2010-smirnov-laudatio/ (a post about this area)

http://terrytao.wordpress.com/2010/08/19/lindenstrauss-ngo-smirnov-villani/ (a post about the ICM 2010 prize winners, including this area)
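To make the object itself concrete, here is a small brute-force sketch (my own illustration, not taken from the posts above) that enumerates n-step self-avoiding walks on the square lattice Z²; the counts for n = 1..4 match the known values 4, 12, 36, 100.

```python
def count_saws(n):
    """Count n-step self-avoiding walks on Z^2 starting at the origin."""
    def extend(path, visited, steps_left):
        if steps_left == 0:
            return 1
        x, y = path[-1]
        total = 0
        # try all four lattice neighbors of the current endpoint
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in visited:  # self-avoidance constraint
                visited.add(nxt)
                path.append(nxt)
                total += extend(path, visited, steps_left - 1)
                path.pop()
                visited.remove(nxt)
        return total
    return extend([(0, 0)], {(0, 0)}, n)

counts = [count_saws(n) for n in range(1, 5)]  # → [4, 12, 36, 100]
```

The exponential growth of these counts (and the conjectured behavior of their growth constant) is exactly what makes the model hard; brute force like this is only feasible for very small n.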

Q6: Do “Imaginary Numbers” Really Exist?

A6: [From: http://www.math.toronto.edu/mathnet/plain/answers/imaginary.html]

An “imaginary number” is a multiple of a quantity called “i”, which is defined by the property that i squared equals -1. This is puzzling to most people, because it is hard to imagine any number having a negative square. As a result, it is tempting to believe that i doesn’t really exist and is just a convenient mathematical fiction.

This isn’t the case. Imaginary numbers do exist. Despite their name, they are not really imaginary at all. (The name dates back to when they were first introduced, before their existence was really understood. At that point in time, people were imagining what it would be like to have a number system that contained square roots of negative numbers, hence the name “imaginary”. Eventually it was realized that such a number system does in fact exist, but by then the name had stuck.)

Before discussing why imaginary numbers exist, it helps to ask why the question arises at all: why is it so hard to accept that there could be numbers with negative squares? Before one can accept the existence of imaginary numbers, one has to come to terms with the things that seem puzzling and confusing about the concept and see that they are not really so unreasonable after all. Having done that, we can move on to why they exist and what relevance they have.

Therefore, we will address the following questions:

  • Imaginary Numbers: More Reasonable than they First Appear
  • Imaginary Numbers: How To Show They Exist
  • Imaginary Numbers: Relevance to the Real World

    Another very insightful article is A Visual, Intuitive Guide to Imaginary Numbers.
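The standard way to show imaginary numbers exist (the idea behind the second item above) is to construct complex numbers as ordered pairs of real numbers with a suitable multiplication rule, and then check that the pair playing the role of i really squares to -1. A minimal sketch of that construction:

```python
class Complex:
    """Complex numbers built as ordered pairs of reals."""

    def __init__(self, re, im):
        self.re, self.im = re, im

    def __mul__(self, other):
        # defined rule: (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

i = Complex(0, 1)   # the pair (0, 1) plays the role of i
sq = i * i
# sq.re == -1 and sq.im == 0: a perfectly concrete object with square -1
```

Nothing here is "imaginary": the pairs are ordinary real numbers, and i² = -1 is a consequence of the multiplication rule, not an article of faith.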

    Why are some theorems important? At least several reasons come to mind:

    • Some theorems are important because of their intrinsic nature. They may not have applications, but they are simply beautiful, or they have an interesting proof.

    • Some theorems solve open problems. Any such theorem is automatically important, since the field has already decided that the question is interesting.

    • Some theorems create whole new directions for mathematics and theory. These are sometimes, though not always, relatively easy theorems to prove, but it may be very hard to realize that they should be proved. Their importance is that they show us that something new is possible.

    • Some theorems are important because they introduce new proof techniques, or because they contain a new lemma that is more useful than the theorem itself.

    • Some theorems are important because of their “promise.” This is a subjective reason: a theorem may be important because people feel it could become even more important. Here, both the relation to group equations and the constraints-on-interval-graphs view make us feel the Klyachko Car Crash Theorem has some hidden possibilities.

    From: http://rjlipton.wordpress.com/2010/12/04/what-makes-a-theorem-important/

    There is also a paper by Terry Tao on what good mathematics is.

    http://www.springer.com/librarians/e-content/ebooks?SGWID=0-40791-12-784104-0

    • The Elements of Statistical Learning
    • Numerical Optimization
    • A Modern Introduction to Probability and Statistics
    • Time Series Analysis: With Applications in R
    • Applied Statistics Using SPSS, STATISTICA, MATLAB and R
    • An Introduction to Programming and Numerical Methods in MATLAB
    • Graph Theory
    • Lattice: Multivariate Data Visualization with R
    • The Concise Encyclopedia of Statistics
    • Handbook of Financial Time Series
    • Asymptotic Theory of Statistics and Probability
    • An Introduction to Ordinary Differential Equations
    • Data Manipulation with R
    • Ordinary and Partial Differential Equations
    • Bayesian Computation with R

    I had noticed this concept before. Since I am new to the field of probability, forgive me that I only came across this area a few months ago and did not realize its importance. Today I attended my department's regular colloquium, where the speaker, Zbigniew J. Jurek, gave a lecture on the Random Integral Representation Conjecture. In this talk he mentioned free probability, and he joked that "free statistics" will come into being as well.

    I also found a useful link to a survey of free probability. I hope it will be useful for you. Terry Tao also has a post about this.
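As a small taste of the subject (a standard fact covered in most free-probability surveys, not something specific to the talk): the semicircle law plays the role of the Gaussian in free probability, and its even moments are the Catalan numbers. A quick numerical check of that identity:

```python
import math

def catalan(k):
    # Catalan numbers: C_k = (2k choose k) / (k + 1)
    return math.comb(2 * k, k) // (k + 1)

def semicircle_moment(p, steps=100000):
    # Midpoint-rule integral of x^p * sqrt(4 - x^2) / (2*pi) over [-2, 2],
    # i.e. the p-th moment of the standard semicircle law.
    h = 4.0 / steps
    total = 0.0
    for i in range(steps):
        x = -2.0 + (i + 0.5) * h
        total += x ** p * math.sqrt(4.0 - x * x)
    return total * h / (2.0 * math.pi)

# even moments match the Catalan numbers 1, 1, 2, 5, ...
moments = [round(semicircle_moment(2 * k), 3) for k in range(4)]  # → [1.0, 1.0, 2.0, 5.0]
```

The same Catalan numbers count non-crossing pair partitions, which is exactly the combinatorics that replaces all partitions when one passes from classical to free independence.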

    General philosophy of probability theory
    Probability is central to science, more than any other part of math. It enters statistics, physics, biology, and even medicine as we will see when and if we discuss tomography. This is the broad view.
    There is also a narrow view: one needs to understand probability before one can effectively apply it, and it has many subtleties. Possibly this is because probability, stochasticity, or randomness may not actually exist! I think it mostly lives in our uncertainty about the world. The real world seems to be deterministic (of course one can never test this hypothesis). It is chaotic, and one uses probabilistic models to study it mainly because we don't know the initial conditions. Einstein said that “God does not play dice.” My own view is that the world may be deterministic, but I like to think I have free will. I believe that probability should be regarded only as a model of reality.

    From the notes of Lawrence A. Shepp
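Shepp's point about chaos and unknown initial conditions can be seen in a toy example (my own illustration, not from his notes): the logistic map x → 4x(1 − x) is fully deterministic, yet a 10⁻⁹ error in the initial condition is amplified until the two orbits are effectively unrelated, which is why a probabilistic description becomes the natural one.

```python
def logistic_orbit(x0, steps, r=4.0):
    # Iterate the deterministic logistic map x -> r * x * (1 - x).
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-9, 50)  # nearly identical starting point
# the two orbits agree early on, but the tiny initial error roughly
# doubles each step and the trajectories soon diverge completely
```

Everything above is deterministic; the "randomness" only appears because we cannot know x0 to infinite precision.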

    Today I found a nice list on Xi'an's blog of the top 15 papers for his graduate students' reading:

    1. B. Efron (1979) Bootstrap methods: another look at the jackknife. Annals of Statistics
    2. R. Tibshirani (1996) Regression shrinkage and selection via the lasso. J. Royal Statistical Society
    3. A.P. Dempster, N.M. Laird and D.B. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society
    4. Y. Benjamini & Y. Hochberg (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Royal Statistical Society
    5. W.K. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika
    6. J. Neyman & E.S. Pearson (1933) On the problem of the most efficient tests of statistical hypotheses. Philosophical Trans. Royal Society London
    7. D.R. Cox (1972) Regression models and life-tables. J. Royal Statistical Society
    8. A. Gelfand & A.F.M. Smith (1990) Sampling-based approaches to calculating marginal densities. J. American Statistical Assoc.
    9. C. Stein (1981) Estimation of the mean of a multivariate normal distribution. Annals of Statistics
    10. J.O. Berger & T. Sellke (1987) Testing a point null hypothesis: the irreconcilability of p-values and evidence. J. American Statistical Assoc.
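Since Efron's bootstrap tops the list, here is a minimal sketch of the idea (my own illustration of plain nonparametric resampling for a standard error, not the paper's full treatment):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, B=2000, seed=0):
    # Efron's bootstrap: resample the data with replacement B times,
    # recompute the statistic on each resample, and use the spread of
    # the replicates as an estimate of the statistic's standard error.
    rng = random.Random(seed)
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(B)]
    return statistics.stdev(reps)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7, 2.2, 3.3]
se = bootstrap_se(data)  # for the mean, close to stdev(data)/sqrt(len(data))
```

The appeal is that the same few lines work for statistics whose sampling distribution has no closed form, which is why the 1979 paper was so influential.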

    Which ones should I now add? First, Steve Fienberg pointed me to the reading list he wrote in 2005 for the ISBA Bulletin, from which I must select a few:

    1. A. Birnbaum (1962) On the Foundations of Statistical Inference. J. American Statistical Assoc.
    2. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model. J. Royal Statistical Society
    3. J.W. Tukey (1962) The future of data analysis. Annals of Mathematical Statistics
    4. L. Savage (1976) On Rereading R.A. Fisher. Annals of Statistics

    And then from other readers, including Andrew, I must also pick:

    1. H. Akaike (1973). Information theory and an extension of the maximum likelihood principle. Proc. Second Intern. Symp. Information Theory, Budapest
    2. D.B. Rubin (1976). Inference and missing data. Biometrika
    3. G. Wahba (1978). Improper priors, spline smoothing and the problem of guarding against model errors in regression. J. Royal Statistical Society
    4. G.W. Imbens and J.D. Angrist (1994). Identification and estimation of local average treatment effects. Econometrica.
    5. Box, G.E.P. and Lucas, H.L (1959) Design of experiments in nonlinear situations. Biometrika
    6. S. Fienberg (1972) The multiple recapture census for closed populations and incomplete 2^k contingency tables. Biometrika

    Of course, there are others that come close to the above, like Besag’s 1975 Series B paper. Or Fisher’s 1922 foundational paper. But the list is already quite long. (In case you wonder, I would not include Bayes’ 1763 paper in the list, as it is just too remote from statistics.)

    And this year some of his students are reading the following papers:

    1. W.K.Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
    2. G. Casella & W. Strawderman (1981) Estimation of a bounded mean Annals of Statistics
    3. A.P. Dawid, M. Stone & J. Zidek (1973) Marginalisation paradoxes in Bayesian and structural inference J. Royal Statistical Society
    4. C. Stein (1981) Estimation of the mean of a multivariate normal distribution Annals of Statistics
    5. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model. J. Royal Statistical Society
    6. A. Birnbaum (1962) On the Foundations of Statistical Inference J. American Statistical Assoc.

    I think it is also a good list for my own reading.
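Hastings (1970) appears on both lists above. A minimal random-walk Metropolis sketch (my own illustration of the symmetric-proposal special case of Hastings's algorithm, targeting a standard normal known only up to its normalizing constant):

```python
import math
import random

def metropolis(logpdf, x0, steps, scale=1.0, seed=0):
    # Random-walk Metropolis: the symmetric-proposal special case of
    # the more general Hastings (1970) algorithm.
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        # accept with probability min(1, pi(prop) / pi(x))
        if rng.random() < math.exp(min(0.0, logpdf(prop) - logpdf(x))):
            x = prop
        samples.append(x)
    return samples

# unnormalized log-density of N(0, 1); the constant is never needed
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

Because the acceptance ratio only involves a ratio of densities, the unknown normalizing constant cancels, which is the key property exploited throughout the MCMC papers on these lists (Gelfand & Smith 1990 included).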
