
  1. A nice blog on CS, including lessons learned: https://blog.acolyer.org/, called “the morning paper”: an interesting/influential/important paper from the world of CS every weekday morning, as selected by Adrian Colyer. I hope there will be a similar blog for Statistics, reviewing and recommending an interesting/influential/important paper from the world of Statistics every week.
  2. A wonderful summary of Mathematical Tricks Commonly Used in Machine Learning and Statistics with examples
  3. I just realized that when teaching ridge regression I should have used A Useful Matrix Inverse Equality for Ridge Regression (see the short sketch after this list).
  4. GANs should gain much more attention in the stats community: Understanding Generative Adversarial Networks. This is a nice post about GANs, based on “probably the highest-quality general overview available nowadays: Ian Goodfellow’s tutorial on arXiv, which he then presented in some form at NIPS 2016.”
  5. R or Python? Why not both? Using Anaconda Python within R with {reticulate}
  6. “A heatmap is basically a table that has colors in place of numbers. Colors correspond to the level of the measurement.”
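
Regarding item 3, here is a minimal numerical check in R of the matrix inverse equality (X'X + λI_p)^{-1}X'y = X'(XX' + λI_n)^{-1}y; the dimensions and the value of λ below are arbitrary choices for illustration:

    ## Ridge regression identity: solve a p x p system vs. an n x n system.
    ## Useful when p >> n, since only an n x n matrix needs to be inverted.
    set.seed(1)
    n <- 50; p <- 500; lambda <- 0.1
    X <- matrix(rnorm(n * p), n, p)
    y <- rnorm(n)

    beta_p <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))   # (X'X + lambda I_p)^{-1} X'y
    beta_n <- crossprod(X, solve(tcrossprod(X) + lambda * diag(n), y))  # X'(XX' + lambda I_n)^{-1} y

    max(abs(beta_p - beta_n))  # agrees up to numerical error

The second form is exactly what makes ridge regression cheap in the high-dimensional (p > n) setting.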

There has been a Machine Learning (ML) reading list of books on Hacker News for a while, in which Professor Michael I. Jordan recommends some books for starting out in ML, aimed at people who are going to devote many decades of their lives to the field and who want to get to the research frontier fairly quickly. Recently he articulated the relationship between CS and Stats amazingly well in his reddit AMA, in which he also added some books that dig still further into foundational topics. I just list them here for people’s convenience and my own reference.

  • Frequentist Statistics
    1. Casella, G. and Berger, R.L. (2001). “Statistical Inference” Duxbury Press.—Intermediate-level statistics book.
    2. Ferguson, T. (1996). “A Course in Large Sample Theory” Chapman & Hall/CRC.—For a slightly more advanced book that’s quite clear on mathematical techniques.
    3. Lehmann, E. (2004). “Elements of Large-Sample Theory” Springer.—A good starting place for asymptotics.
    4. Vaart, A.W. van der (1998). “Asymptotic Statistics” Cambridge.—A book that shows how many ideas in inference (M estimation, the bootstrap, semiparametrics, etc) repose on top of empirical process theory.
    5. Tsybakov, Alexandre B. (2008) “Introduction to Nonparametric Estimation” Springer.—Tools for obtaining lower bounds on estimators.
    6. Efron, B. (2010). “Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction” Cambridge.—A thought-provoking book.
  • Bayesian Statistics
    1. Gelman, A. et al. (2003). “Bayesian Data Analysis” Chapman & Hall/CRC.—About Bayesian data analysis.
    2. Robert, C. and Casella, G. (2005). “Monte Carlo Statistical Methods” Springer.—About Bayesian computation.
  • Probability Theory
    1. Grimmett, G. and Stirzaker, D. (2001). “Probability and Random Processes” Oxford.—Intermediate-level probability book.
    2. Pollard, D. (2001). “A User’s Guide to Measure Theoretic Probability” Cambridge.—More advanced level probability book.
    3. Durrett, R. (2005). “Probability: Theory and Examples” Duxbury.—Standard advanced probability book.
  • Optimization
    1. Bertsimas, D. and Tsitsiklis, J. (1997). “Introduction to Linear Optimization” Athena.—A good starting book on linear optimization that will prepare you for convex optimization.
    2. Boyd, S. and Vandenberghe, L. (2004). “Convex Optimization” Cambridge.
    3. Nesterov, Y. (2003). “Introductory Lectures on Convex Optimization” Springer.—A place to start understanding lower bounds in optimization.
  • Linear Algebra
    1. Golub, G., and Van Loan, C. (1996). “Matrix Computations” Johns Hopkins.—Getting a full understanding of algorithmic linear algebra is also important.
  • Information Theory
    1. Cover, T. and Thomas, J. “Elements of Information Theory” Wiley.—Classic information theory.
  • Functional Analysis
    1. Kreyszig, E. (1989). “Introductory Functional Analysis with Applications” Wiley.—Functional analysis is essentially linear algebra in infinite dimensions, and it’s necessary for kernel methods, for nonparametric Bayesian methods, and for various other topics.

Remarks from Professor Jordan: “not only do I think that you should eventually read all of these books (or some similar list that reflects your own view of foundations), but I think that you should read all of them three times—the first time you barely understand, the second time you start to get it, and the third time it all seems obvious.”

The p-value and Bayes are the two hottest words in Statistics. Actually, I still cannot understand why the debate between frequentist and Bayesian statistics has lasted so long. What are the essential arguments behind it? (Can anyone help me with this?) From my point of view, they are just two ways of solving practical problems. Frequentists use a randomized version of the proof-by-contradiction argument (i.e., a small p-value indicates that the null hypothesis is less likely to be true), while Bayesians use a learning argument to update their beliefs with data. Similarly, mathematicians use partial differential equations (PDEs) to model the real underlying processes in their analyses. These are just different methodologies for dealing with practical problems. What, then, is the point of the long-lasting debate between frequentist and Bayesian statistics?

Although my current research area is mostly in the frequentist statistics domain, I am becoming more and more of a Bayesian lover, since it is so natural. When I was teaching introductory statistics courses for undergraduate students at Michigan State University, I divided the whole course into three parts: Exploratory Data Analysis (EDA) using the R software, Bayesian Reasoning, and Frequentist Statistics. I found that, at the end of the semester, the most impressive example in my students’ minds was one from the second part (Bayesian Reasoning): the Monty Hall problem, which was mentioned in the article that just came out in the NYT. (Regarding the argument attributed to Professor Andrew Gelman there, please also check out the response from Professor Gelman.) “Mr. Hall, longtime host of the game show “Let’s Make a Deal,” hides a car behind one of three doors and a goat behind each of the other two. The contestant picks Door No. 1, but before opening it, Mr. Hall opens Door No. 2 to reveal a goat. Should the contestant stick with No. 1 or switch to No. 3, or does it matter?” And the Bayesian approach to this problem “would start with one-third odds that any given door hides the car, then update that knowledge with the new data: Door No. 2 had a goat. The odds that the contestant guessed right — that the car is behind No. 1 — remain one in three. Thus, the odds that she guessed wrong are two in three. And if she guessed wrong, the car must be behind Door No. 3. So she should indeed switch.” What a natural argument! Bayesian babies and Google’s untrained search for YouTube cats (via the methods of deep learning) are further excellent examples showing that Bayesian statistics IS a remarkable way of solving problems.
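
For the students, a tiny simulation makes the one-third versus two-thirds split concrete. This is just a sketch in R; the 10,000 plays and the seed are arbitrary:

    ## Monty Hall: compare "stay" vs. "switch" over many simulated games.
    set.seed(123)
    n_games <- 10000
    car  <- sample(1:3, n_games, replace = TRUE)   # door hiding the car
    pick <- sample(1:3, n_games, replace = TRUE)   # contestant's initial pick

    ## The host always reveals a goat, so switching wins exactly when the
    ## initial pick was wrong.
    c(stay = mean(pick == car), switch = mean(pick != car))  # roughly 1/3 vs 2/3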

What about p-values? This random version of the proof-by-contradiction argument is also a great way of solving problems, judging from the fact that it has helped solve so many problems in various scientific areas, especially in the bio-world. Check out today’s post from Simply Statistics: “You think P-values are bad? I say show me the data,” and also the earlier one: On the scalability of statistical procedures: why the p-value bashers just don’t get it.

The classical p-value does exactly what it says. But it is a statement about what would happen if there were no true effect. That cannot tell you about your long-term probability of making a fool of yourself, simply because sometimes there really is an effect. You make a fool of yourself if you declare that you have discovered something when all you are observing is random chance. From this point of view, what matters is the probability that, when you find a result “statistically significant”, there is actually a real effect. If you find a “significant” result when there is nothing but chance at play, your result is a false positive, and the chance of getting a false positive is often alarmingly high. This probability is sometimes called the “false discovery rate” (or error rate), which is different from the concept of the same name in multiple comparisons. One common misinterpretation is to treat the p-value as this false discovery rate, which may in fact be much higher than the p-value. Think about the Bayes formula and the tree diagram you learned in an introductory statistics course to figure out the relationship between the p-value and the “false discovery rate”.
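
Here is that tree-diagram calculation in R. The prior proportion of real effects, the power, and the significance threshold below are made-up numbers purely for illustration:

    ## P(no real effect | "significant") via Bayes' formula.
    prior_real <- 0.10   # assumed fraction of tested hypotheses with a real effect
    power      <- 0.80   # P(significant | real effect)
    alpha      <- 0.05   # P(significant | no effect), the p-value threshold

    p_signif <- prior_real * power + (1 - prior_real) * alpha
    fdr      <- (1 - prior_real) * alpha / p_signif
    fdr  # about 0.36, far above the 0.05 threshold

So even with a p-value cut-off of 0.05, more than a third of the “discoveries” in this scenario are false positives.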

  1. Interview with Nick Chamandy, statistician at Google
  2. You and Your Research (video)
  3. Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained
  4. A Survival Guide to Starting and Finishing a PhD
  5. Six Rules For Wearing Suits For Beginners
  6. Why I Created C++
  7. More advice to scientists on blogging
  8. Software engineering practices for graduate students
  9. Statistics Matter
  10. What statistics should do about big data: problem forward not solution backward
  11. How signals, geometry, and topology are influencing data science
  12. The Bounded Gaps Between Primes Theorem has been proved
  13. A non-comprehensive list of awesome things other people did this year.
  14. Jake VanderPlas writes about the Big Data Brain Drain from academia.
  15. Tomorrow’s Professor Postings
  16. Best Practices for Scientific Computing
  17. Some tips for new research-oriented grad students
  18. 3 Reasons Every Grad Student Should Learn WordPress
  19. How to Lie With Statistics (in the Age of Big Data)
  20. The Geometric View on Sparse Recovery
  21. The Mathematical Shape of Things to Come
  22. A Guide to Python Frameworks for Hadoop
  23. Statistics, geometry and computer science.
  24. How to Collaborate On GitHub
  25. Step by step to build my first R Hadoop System
  26. Open Sourcing a Python Project the Right Way
  27. Data Science MD July Recap: Python and R Meetup
  28. Recent reflections on git
  29. 10 Reasons Python Rocks for Research (And a Few Reasons it Doesn’t)
  30. Effective Presentations – Part 2 – Preparing Conference Presentations
  31. Doing Statistical Research
  32. How to Do Statistical Research
  33. Learning new skills
  34. How to Stand Out When Applying for An Academic Job
  35. Maturing from student to researcher
  36. False discovery rate regression (cc NSA’s PRISM)
  37. Job Hunting Advice, Pt. 3: Networking
  38. Getting Started with Git

In my office I have two NIPS posters on the wall, from 2011 and 2012, although I have never been to the conference and I am not a computer scientist either. Anyway, I like NIPS, for no particular reason. Now it’s time for me to organize posts from others:

  1. NIPS ruminations I
  2. NIPS II: Deep Learning and the evolution of data models
  3. NIPS stuff…
  4. NIPS 2012
  5. NIPS 2012 Conference in Lake Tahoe, NV
  6. Thoughts on NIPS 2012
  7. The Big NIPS Post
  8. NIPS 2012 : day one
  9. NIPS 2012 : day two
  10. Spectral Methods for Latent Models
  11. NIPS 2012 Trends

And among all of the posts, there are several things I have to digest later on:

  1. One tutorial on Random Matrices, by Joel Tropp. People concluded in their posts that

    Basically, break random matrices down into a sum of simpler, independent random matrices, then apply concentration bounds on the sum. The basic result is that if you love your Chernoff bounds and Bernstein inequalities for (sums of) scalars, you can get almost exactly the same results for (sums of) matrices.

  2. “This year was definitely all about Deep Learning,” one post said. The Geomblog mentioned that although it has been in the news recently because of Google’s untrained search for YouTube cats, the methods of deep learning (basically neural nets without lots of back propagation) have been growing in popularity over a long while. And we should spend some time reading Deep Learning and the evolution of data models, which is related to manifold learning.
  3. “Another trend that’s been around for a while, but was striking to me, was the detailed study of Optimization methods.”—The Geomblog.  There are at least two different workshops on optimization in machine learning (DISC and OPT), and numerous papers that very carefully examined the structure of optimizations to squeeze out empirical improvements.
  4. Kernel distances: an introduction to the kernel distance from The Geomblog (a small closed-form sketch follows this list). “Scott Aaronson (at his NIPS invited talk) made this joke about how nature loves ℓ2. The kernel distance is “essentially” the ℓ2 variant of EMD (which makes so many things easier). There’s been a series of papers by Sriperumbudur et al. on this topic, and in a series of works they have shown that (a) the kernel distance captures the notion of “distance covariance” that has become popular in statistics as a way of testing independence of distributions, (b) as an estimator of distance between distributions, the kernel distance has more efficient estimators than (say) the EMD because its estimator can be computed in closed form instead of needing an algorithm that solves a transportation problem, and (c) the kernel that optimizes the efficiency of the two-sample estimator can also be determined (the NIPS paper).”
  5. Spectral Methods for Latent Models: Spectral methods for latent variable models are based upon the method of moments rather than maximum likelihood.
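
To make the closed-form point in item 4 concrete, here is a small R sketch of the (biased) two-sample kernel distance (MMD) with a Gaussian kernel; the bandwidth, sample sizes, and distributions are arbitrary assumptions:

    ## Squared kernel distance (MMD^2) between two samples, in closed form.
    rbf <- function(A, B, sigma = 1) {
      d2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * A %*% t(B)
      exp(-d2 / (2 * sigma^2))
    }
    mmd2 <- function(X, Y, sigma = 1) {
      mean(rbf(X, X, sigma)) + mean(rbf(Y, Y, sigma)) - 2 * mean(rbf(X, Y, sigma))
    }

    set.seed(1)
    X <- matrix(rnorm(200 * 2), ncol = 2)              # sample from N(0, I)
    Y <- matrix(rnorm(200 * 2, mean = 0.5), ncol = 2)  # sample from a shifted normal
    mmd2(X, Y)  # positive, since the two distributions differ

Unlike the EMD, nothing here requires solving a transportation problem, which is the “closed form” advantage mentioned above.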

Besides the papers mentioned in the above hot topics, there are some other papers from Memming‘s post:

  1. Graphical models via generalized linear models: Eunho introduced a family of graphical models with GLM marginals and Ising model style pairwise interaction. He said the Poisson-Markov-Random-Fields version must have negative coupling, otherwise the log partition function blows up. He showed conditions for which the graph structure can be recovered with high probability in this family.
  2. TCA: High dimensional principal component analysis for non-gaussian data: Using an elliptical copula model (extending the nonparanormal), the eigenvectors of the covariance of the copula variables can be estimated from Kendall’s tau statistic which is invariant to the nonlinearity of the elliptical distribution and the transformation of the marginals. This estimator achieves close to the parametric convergence rate while being a semi-parametric model.
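
Here is a rough R sketch of the rank-based idea in item 2: estimate the copula correlation from Kendall’s tau via the elliptical-copula relation Σ_jk = sin(π τ_jk / 2), then take eigenvectors. The data-generating model below is an illustrative assumption:

    ## "Copula PCA" from Kendall's tau, invariant to monotone marginal transforms.
    set.seed(1)
    n <- 500; p <- 5
    R <- outer(1:p, 1:p, function(i, j) 0.6^abs(i - j))  # true correlation matrix
    Z <- matrix(rnorm(n * p), n, p) %*% chol(R)           # Gaussian copula source
    X <- exp(Z)                                           # nonlinear marginal transform

    tau   <- cor(X, method = "kendall")   # unaffected by the exp() transform
    R_hat <- sin(pi / 2 * tau)            # tau -> correlation for elliptical copulas
    eigen(R_hat)$vectors[, 1]             # estimated leading direction
    eigen(R)$vectors[, 1]                 # compare with the truth (up to sign)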

Update: Make sure to check the lectures from the prominent 26th Annual NIPS Conference filmed @ Lake Tahoe 2012. Also make sure to check the NIPS 2012 Workshops, Oral sessions and Spotlight sessions which were collected for the Video Journal of Machine Learning Abstracts – Volume 3.

The following four big issues related to big data really take the four big aspects into consideration:

From XRDS.

And how should we deal with the above four big issues? Here is a post about Five Trendy Open Source Technologies to help you deal with big data.

Today I saw a question linked from reddit: How important is Java/C++ vs just using R/Matlab for big data? I learned C++ and Matlab as an undergraduate, and I am now teaching myself R as a PhD student in a Stats department. But living in this big-data era, R alone is really not enough for scientific computing, so this question is exactly what I want answered. Here I want to organize interesting materials, including posts, about programming, especially R and C++.

First I want to mention the top project languages on GitHub: JavaScript 20%, Ruby 14%, Python 9%, Shell 8%, Java 8%, PHP 7%, C 7%, C++ 4%, Perl 4%, Objective-C 3%, among lots of other languages including R, Julia, and Matlab. Among these top 10 languages, I only know C and C++. For people like me who are still learning, I give the following description list:

  1. JavaScript
    JavaScript is an object-oriented scripting language that runs in your web browser. It runs on a simplified set of commands, is easier to code, and doesn’t require compiling. It’s an important language since it’s embedded into HTML, which happens to be used in millions of web pages to validate forms, create cookies, detect browsers, and improve page design and formatting. Big plus: it’s easy to learn and use.
  2. Ruby and Ruby on Rails
    Ruby is a dynamic, object-oriented, open-source programming language; Ruby on Rails is an open-source web application framework written in Ruby that closely follows the MVC (Model-View-Controller) architecture. With a focus on simplicity, productivity, and letting the computer do the work, its usage has spread quickly in a few years. Ruby is very similar to Python, but with different syntax and libraries. There’s little reason to learn both, so unless you have a specific reason to choose Ruby (i.e. if this is the language your colleagues all use), I’d go with Python.

    Ruby on Rails is one of the most popular web development frameworks out there, so if you’re looking to do primarily web development you should compare Django (Python framework) and RoR first.

  3. Python
    Python is an interpreted, dynamically-typed programming language. Python programs stress code readability, so even non-programmers should be able to decipher a Python program with relative ease. This also makes the language one of the easiest to learn and write code in quickly. Python is very popular and has a strong set of libraries for everything from numerical and symbolic computing to data visualization and graphical user interfaces.
  4. Java
    Java is an object-oriented programming language developed by James Gosling and colleagues at Sun Microsystems in the early 1990s. Why you should learn it: Hailed by many developers as a “beautiful” language, it is central to the non-.Net programming experience. Learning Java is critical if you are non-Microsoft.
  5. PHP
    PHP is an open-source, server-side HTML scripting language well suited for web developers, as it can easily be embedded into standard HTML pages. You can run 100% dynamic pages or hybrid pages, 50% HTML + 50% PHP.
  6. C
    C is a standardized, general-purpose programming language. It’s one of the most pervasive languages and the basis for several others (such as C++). It’s important to learn C. Once you do, making the jump to Java or C# is fairly easy, because a lot of the syntax is common. C is a low-level, statically typed, compiled language. The main benefit of C is its speed, so it’s useful for tasks that are very computationally intensive. Because it’s compiled into an executable, it’s also easier to distribute C programs than programs written in interpreted languages like Python. The trade-off of increased speed is decreased programmer efficiency. C++ is C with some additional object-oriented features built in. It can be slower than C, but the two are pretty comparable, so it’s up to you whether these additional features are worth it.
  7. Perl
    Perl is an open-source, cross-platform, server-side interpreted programming language used extensively to process text through CGI programs. Perl’s power in processing piles of text has made it very popular and widely used for writing web server programs for a range of tasks.

This ranking covers only GitHub users, so it may be biased for you. For me, I think C/C++, R, Julia, Matlab, Java, Python, and Perl will remain popular in the stats sphere.

  1. Advice on learning C++ from an R background
  2. Integrating C or C++ into R, where to start?
  3. R for testing, C++ for implementation?
  4. Some thoughts on Java—compared with C++
  5. A list of RSS C++ blogs
  6. Get started with C++ AMP
  7. C++11 Concurrency Series
  8. Google’s Python Class and Google’s C++ Class from Google Code University
  9. Integrating R and C++ (see the short Rcpp sketch after this list)
  10. Learn Python on Codecademy
  11. Learn How to Code Without Leaving Your Browser
  12. Minimal Advice to Undergrads on Programming
  13. Learning R Via Python (or the other way around).
  14. Bloom teaches Python for Scientific Computing at Berkeley (available as a podcast).
  15. I would focus on learning three classes of languages to really understand the nature of programming and to have a decent toolkit. Everything else is basically variants on that.

    Learn a low-level language so you understand what goes on at the bare metal and so you can make hardware dance
    The obvious choice here is C, but assembly language might also be good.

    Learn a language for architecting large systems
    If you want to build large code bases, you’re going to need one of the strongly typed languages. Personally, I think Java is the best choice here; but C++, Scala and even Ada are acceptable.

    Learn a language for scripting things together quickly
    There are a few choices here: shell, Python, Perl, Lua. Any of these will do, but Python is probably the foremost. These are great for gluing existing pieces together.

    Now, if you only get three, that’s it. But I’m going to suggest two more categories.

    Learn a language that forces you to think differently about programming
    These are majorly different world perspectives. Examples here would be functional programming, like Haskell, ML, etc, but also logic programming like Prolog.

    Learn a language that lets you build web-based applications quickly
    This could be web2py or Javascript — but the ability to quickly hack together a web demo is really useful today.
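
Related to the “Integrating R and C++” links above, here is a minimal sketch using the Rcpp package (my assumption about the route one would take; it requires Rcpp and a working C++ compiler, and the toy function is made up):

    ## Compile and call a C++ function from R with Rcpp.
    library(Rcpp)

    cppFunction('
    double sum_sq(NumericVector x) {
      double total = 0;
      for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
      return total;
    }')

    x <- rnorm(1e6)
    sum_sq(x)     # the compiled C++ version, called like any R function
    sum(x^2)      # the pure-R equivalent, for checking the result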

The following is from Revolutions:

John Myles White, self-described “statistics hacker” and co-author of “Machine Learning for Hackers”, was interviewed recently by The Setup. In the interview, he describes some of his go-to R packages for data science:

Most of my work involves programming, so programming languages and their libraries are the bulk of the software I use. I primarily program in R, but, if the situation calls for it, I’ll use Matlab, Ruby or Python. …

That said, for me the specific language I use is much less important than the libraries available for that language. In R, I do most of my graphics using ggplot2, and I clean my data using plyr, reshape, lubridate and stringr. I do most of my analysis using rjags, which interfaces with JAGS, and I’ll sometimes use glmnet for regression modeling. And, of course, I use ProjectTemplate to organize all of my statistical modeling work. To do text analysis, I’ll use the tm and lda packages.

Also in JMW’s toolbox: Julia, TextMate 2, MySQL, Dropbox and a beefy MacBook. Read the full interview linked below for an insightful look at how he uses these and other tools day to day.

The Setup / Interview: John Myles White

There is a workshop for this: 

Object, functional and structured data: towards next generation kernel-based methods – ICML 2012 Workshop

 
