You are currently browsing the tag archive for the ‘Math’ tag.

http://chromotopy.org/?p=402 (a recent post about the talk given by Professor Slade)

http://terrytao.wordpress.com/2010/08/19/lindenstrauss-ngo-smirnov-villani/ (a post about the ICM 2010 prize winners, including this area)

Q6: Do “Imaginary Numbers” Really Exist?

An “imaginary number” is a multiple of a quantity called “i”, which is defined by the property that i squared equals -1. This puzzles most people, because it is hard to imagine any number having a negative square. As a result, it is tempting to believe that i doesn’t really exist and is just a convenient mathematical fiction.

This isn’t the case. Imaginary numbers do exist. Despite their name, they are not really imaginary at all. (The name dates back to when they were first introduced, before their existence was really understood. At that point in time, people were imagining what it would be like to have a number system that contained square roots of negative numbers, hence the name “imaginary”. Eventually it was realized that such a number system does in fact exist, but by then the name had stuck.)

Before discussing why imaginary numbers exist, it helps to ask why we are even asking the question: why is it so hard to accept that there could be numbers with negative squares? Only after coming to terms with what seems so puzzling and confusing about this concept, and seeing that it is not really so unreasonable after all, can one move on to accept the existence of imaginary numbers. Having done that, we can see why they exist and what relevance they have.

Therefore, we will address the following questions (you may select any of the items below to see the explanation):

• Imaginary Numbers: More Reasonable than they First Appear
• Imaginary Numbers: How To Show They Exist
• Imaginary Numbers: Relevance to the Real World

Another very insightful article is A Visual, Intuitive Guide to Imaginary Numbers.
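One concrete way to make the “How To Show They Exist” point tangible (a sketch of the standard construction, not taken from the linked articles) is to build complex numbers out of ordinary pairs of real numbers: let the pair (a, b) stand for a + bi, with multiplication defined by (a, b)(c, d) = (ac − bd, ad + bc). Then the pair (0, 1) plays the role of i, and its square really is −1:

```python
def cmul(z, w):
    """Multiply complex numbers represented as (real, imag) pairs:
    (a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0, 1)          # the pair that plays the role of i
print(cmul(i, i))   # (-1, 0), i.e. i squared equals -1
```

Nothing “imaginary” is assumed here: everything is built from pairs of real numbers, which is exactly the sense in which imaginary numbers exist.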

Why are some theorems important? At least several reasons come to mind:

${\bullet }$ Some theorems are important because of their intrinsic nature. They may not have applications, but they are simply beautiful, or they have an interesting proof.

${\bullet }$ Some theorems solve open problems. Of course any such theorem is automatically important, since the field has already decided that the question is interesting.

${\bullet }$ Some theorems create whole new directions for mathematics and theory. These are sometimes—not always—relatively easy theorems to prove. But they may be very hard theorems to realize they should be proved. Their importance is that they show us that something new is possible.

${\bullet }$ Some theorems are important because they introduce new proof techniques, or contain a new lemma that is more useful than the theorem proved.

${\bullet }$ Some theorems are important because of their “promise.” This is a subjective reason—a theorem may be important because people feel it could be even more important. Here, both the relation to group equations and the constraints-on-interval-graphs view make us feel the Klyachko Car Crash Theorem has some hidden possibilities.

And there is also a paper written by Terry Tao on what good mathematics is.

http://www.springer.com/librarians/e-content/ebooks?SGWID=0-40791-12-784104-0

### Bayesian Computation with R

I had come across this concept before. Since I am new to the field of probability, forgive me for having noticed this academic area only several months ago without realizing its importance. Today I attended my department's regular colloquium, where the speaker, Zbigniew J. Jurek, gave a lecture on The Random Integral Representation Conjecture. In this talk he mentioned free probability, and he also joked that “free statistics” will come into being.

I also found a useful link to a survey of free probability, which I hope will be useful to you. Terry Tao also has a post about this.

# Math Sex Jokes

Are you 2x? Because I want to integrate you from 10 to 13!

I derived your mom last night.
It was f prime.

How is sex like math?
1. Half the time I get an odd result.
2. If my hands aren’t enough, I end up using my head.
3. I always wonder how the person next to me is doing on his work.
4. My average at each is pretty dismal.

What is 69 and 69?
Dinner for four.

What is 6.9?
Good sex interrupted by a period.

Q: If you go to bed 8 hours before you have to wake up, and your wife wants to have 2 hours of sex, how much sleep will you get?
A: 7 hours, 57 minutes – who cares what she wants!

At this moment 5 million people are having sex, 2 million are in gun fights, 91 million are at a party, and one sad loser is reading this joke.

A graduate student of mathematics who used to come to the university on foot every day arrives one day on a fancy new bicycle. “Where did you get the bike from?” his friends want to know. “It’s a `thank you’ present”, he explains, “from that freshman girl I’ve been tutoring. But the story is kind of weird…” “Tell us!” “Well”, he starts, “yesterday she called me on the phone and told me that she had passed her math final and that she wanted to drop by to thank me in person. As usual, she arrived at my place riding her bicycle. But when I let her in, she suddenly took all her clothes off, lay down on my bed, smiled at me, and said: `You can get from me whatever you desire!'”

One of his friends remarks: “You made a really smart choice when you took the bicycle.”

“Yeah”, another friend adds, “just imagine how silly you would have looked in girls’ clothes – and they wouldn’t have fit you anyway!”

Q: How are math and sex the same?
A: I don’t get either one.

A mathematician and an engineer agreed to take part in a psychological test. They sat on one side of a room and waited not knowing what to expect. A door opened on the other side and a naked woman came in the room and stood on the far side. They were then instructed that every time they heard a beep they could move half the remaining distance to the woman. They heard a beep and the engineer jumped up and moved halfway across the room while the mathematician continued to sit, looking disgusted and bored. When the mathematician didn’t move after the second beep he was asked why. “Because I know I will never reach the woman.” The engineer was asked why he chose to move and replied, “Because I know that very soon I will be close enough for all practical purposes!”

A physicist, a mathematician and a computer scientist discuss what is better: a wife or a girlfriend. The physicist: “A girlfriend. You still have freedom to experiment.” The mathematician: “A wife. You have security.” The computer scientist: “Both. When I’m not with my wife, she thinks I’m with my girlfriend. With my girlfriend it’s vice versa. And I can be with my computer without anyone disturbing me…”

Why does 1+1=1?
1 male + 1 female = 1 baby

Q: If you have two friends and six women, how many women do each of your friends get?
A: None.

Q. How do you teach a blond math?
A. Subtract her clothes, divide her legs, and square root her.

Before I root you, are you over 18?

“What happened to your girlfriend, that really cute math student?”
“She no longer is my girlfriend. I caught her cheating on me.”
“I don’t believe that she cheated on you!”
“Well, a couple of nights ago I called her on the phone, and she told me that she was in bed wrestling with three unknowns…”

Sex is like math:
Subtract the clothes,
Divide the legs,
and pray to God you don’t Multiply!

General philosophy of probability theory
Probability is central to science, more than any other part of math. It enters statistics, physics, biology, and even medicine as we will see when and if we discuss tomography. This is the broad view.
There is also a narrow view: one needs to understand probability before one can effectively apply it, and it has many subtleties. Possibly this is due to the fact that probability, stochasticity, or randomness may not actually exist! I think it mostly exists in our uncertainty about the world. The real world seems to be deterministic (of course one can never test this hypothesis). It is chaotic, and one uses probabilistic models to study it mainly because we don’t know the initial conditions. Einstein said that “God does not play dice.” My own view is that the world may be deterministic, but I like to think I have free will. I believe that probability should be regarded only as a model of reality.

From the notes of Lawrence A. Shepp
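The claim above, that a deterministic but chaotic world forces probabilistic models on us because we never know the initial conditions exactly, can be illustrated with a toy example (my own sketch, not from Shepp's notes). The logistic map x ↦ 4x(1 − x) is completely deterministic, yet two starting points differing by 10⁻¹⁰ soon produce orbits that disagree entirely:

```python
def logistic_orbit(x0, n):
    """Iterate the chaotic logistic map x -> 4x(1-x), returning the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-10, 50)  # almost the same initial condition
# The tiny initial error roughly doubles at each step, so after a few
# dozen iterations the two orbits are effectively unrelated.
print(a[-1], b[-1])
```

Since an initial condition can never be measured to infinite precision, the long-run behaviour is, for all practical purposes, random; this is the sense in which probability models a deterministic world.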

Today I found a nice list on Xi’an’s blog of the Top 15 papers for his graduate students’ reading:

1. B. Efron (1979) Bootstrap methods: another look at the jackknife Annals of Statistics
2. R. Tibshirani (1996) Regression shrinkage and selection via the lasso J. Royal Statistical Society
3. A.P. Dempster, N.M. Laird and D.B. Rubin (1977) Maximum likelihood from incomplete data via the EM algorithm J. Royal Statistical Society
4. Y. Benjamini & Y. Hochberg (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Royal Statistical Society
5. W.K.Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
6. J. Neyman & E.S. Pearson (1933) On the problem of the most efficient tests of statistical hypotheses Philosophical Trans. Royal Society London
7. D.R. Cox (1972) Regression models and life-tables J. Royal Statistical Society
8. A. Gelfand & A.F.M. Smith (1990) Sampling-based approaches to calculating marginal densities J. American Statistical Assoc.
9. C. Stein (1981) Estimation of the mean of a multivariate normal distribution Annals of Statistics
10. J.O. Berger & T. Sellke (1987) Testing a point null hypothesis: the irreconcilability of p-values and evidence J. American Statistical Assoc

Which ones should I now add? First, Steve Fienberg pointed out to me the reading list he wrote in 2005 for the iSBA Bulletin. Out of which I must select a few ones:

1. A. Birnbaum (1962) On the Foundations of Statistical Inference J. American Statistical Assoc.
2. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model  J. Royal Statistical Society
3. J.W.Tukey (1962) The future of data analysis. Annals of Mathematical Statistics
4. L. Savage (1976) On Rereading R.A. Fisher Annals of Statistics

And then from other readers, including Andrew, I must also pick:

1. H. Akaike (1973). Information theory and an extension of the maximum likelihood principle. Proc. Second Intern. Symp. Information Theory, Budapest
2. D.B. Rubin (1976). Inference and missing data. Biometrika
3. G. Wahba (1978). Improper priors, spline smoothing and the problem of guarding against model errors in regression. J. Royal Statistical Society
4. G.W. Imbens and J.D. Angrist (1994). Identification and estimation of local average treatment effects. Econometrica.
5. Box, G.E.P. and Lucas, H.L (1959) Design of experiments in nonlinear situations. Biometrika
6. S. Fienberg (1972) The multiple recapture census for closed populations and incomplete 2^k contingency tables Biometrika

Of course, there are others that come close to the above, like Besag’s 1975 Series B paper. Or Fisher’s 1922 foundational paper. But the list is already quite long. (In case you wonder, I would not include Bayes’ 1763 paper in the list, as it is just too remote from statistics.)

And this year some of his students are reading the following papers:

1. W.K.Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
2. G. Casella & W. Strawderman (1981) Estimation of a bounded mean Annals of Statistics
3. A.P. Dawid, M. Stone & J. Zidek (1973) Marginalisation paradoxes in Bayesian and structural inference J. Royal Statistical Society
4. C. Stein (1981) Estimation of the mean of a multivariate normal distribution Annals of Statistics
5. D.V. Lindley & A.F.M. Smith (1972) Bayes Estimates for the Linear Model  J. Royal Statistical Society
6. A. Birnbaum (1962) On the Foundations of Statistical Inference J. American Statistical Assoc.

I think it is also a good list for my own reading.
