I have gradually noticed that Python is gaining popularity, so I want to learn it. Here are some resources:
1, Python Programming Language – Official Website
2, Introduction to Computer Science and Programming
3, Python Programming Tutorial
6, SciPy 2011
Today I talked online with a friend from ANU who is interested in AIT (algorithmic information theory). The following is the reading list from his blog:
Here is the list of books I’m currently reading; most of them, if not all, were recommended by Prof. Marcus Hutter.
- S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach
Prentice-Hall, Englewood Cliffs, 3rd edition (2010) [A very high-level introductory book. I am taking COMP3620 @ ANU]
- M. Li and P. M. B. Vitanyi. An Introduction to Kolmogorov Complexity and Its Applications
Springer, 3rd edition (2008) [What I like most about this book is how it mathematically formalises the human intuition of complexity]
- D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming
Athena Scientific, Belmont, MA (1996) [If you want to know the formal proofs in RL, there is really no replacement for this book]
- R. Sutton and A. Barto. Reinforcement learning: An introduction
Cambridge, MA, MIT Press (1998) [An introductory book for RL; a small value-iteration sketch follows this list]
- G. Restall. Logic: An Introduction
Fundamentals of Philosophy, Routledge (2006) [Again an introductory book for logic.]
- M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability
Springer, Berlin, 300 pages (2005) [My supervisor’s book. A very compact, theoretical book that assumes a strong mathematical background]
- C. M. Bishop. Pattern Recognition and Machine Learning
Springer (2006) [This book is often referred to as the Bible of machine learning; it is a must-read.]
- P. D. Grunwald. The Minimum Description Length Principle
The MIT Press (2007) [The MDL principle, based on Occam’s Razor]
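Since the Bertsekas-Tsitsiklis and Sutton-Barto entries above are about dynamic programming for RL, here is a minimal value-iteration sketch in Python on a toy MDP; the 3-state transition model, rewards, and discount factor are all invented for illustration (this is the textbook Bellman backup, not code from either book):

import numpy as np

# Toy 3-state, 2-action MDP (all numbers invented for illustration).
# P[a][s, s'] = transition probability, R[a][s] = expected reward.
P = [np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]]),
     np.array([[0.2, 0.8, 0.0],
               [0.0, 0.2, 0.8],
               [0.0, 0.0, 1.0]])]
R = [np.array([0.0, 0.0, 1.0]),
     np.array([0.0, 0.0, 2.0])]
gamma = 0.9

V = np.zeros(3)
for _ in range(1000):
    # Bellman optimality backup:
    # V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = np.array([R[a] + gamma * P[a] @ V for a in range(2)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)   # greedy policy w.r.t. the converged values
print("V* ~", V, "greedy policy:", policy)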
1, Bay Area Vision Meeting: Unsupervised Feature Learning and Deep Learning – Andrew Ng’s Google talk about unsupervised feature learning and deep learning.
Andrew Ng got bored of improving one algorithm so he decided to improve all algorithms at once…
On his course website at Stanford, Ng provides some tutorials.
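As a toy illustration of unsupervised feature learning, here is a minimal one-hidden-layer autoencoder in plain numpy, trained by gradient descent to reconstruct random data; the architecture, sizes, and learning rate are made up for illustration, and this is not code from Ng’s tutorials:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # toy unlabeled data
n_hidden = 5                            # learn 5 features

W1 = rng.normal(scale=0.1, size=(20, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 20))
b2 = np.zeros(20)
lr = 0.01

for epoch in range(500):
    H = np.tanh(X @ W1 + b1)            # encoder: the learned features
    X_hat = H @ W2 + b2                 # linear decoder
    err = X_hat - X                     # reconstruction error
    # Backprop of the loss L = (1/2N) * sum(err^2)
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H**2)        # tanh derivative
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction MSE:", (err**2).mean())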
This summer semester, I am taking the graphical models course. This post collects useful online materials for it.
1, Machine Learning and Probabilistic Graphical Models Course
1, Research Directions for Machine Learning and Algorithms
3, The Birkhoff-Kakutani theorem
A topological space is said to be metrisable if one can find a metric on it whose open balls generate the topology.
Theorem 1 (Birkhoff-Kakutani theorem) Let $G$ be a topological group (i.e. a topological space that is also a group, such that the group operations $(x,y) \mapsto xy$ and $x \mapsto x^{-1}$ are continuous). Then $G$ is metrisable if and only if it is both Hausdorff and first countable.
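The “only if” direction is the routine half; a quick sketch, using only the definitions above: if $d$ is a metric generating the topology of $G$, then for each $x \in G$ the balls $B(x, 1/n)$, $n \in \mathbb{N}$, form a countable neighbourhood base at $x$, so $G$ is first countable; and if $x \neq y$, then with $r = d(x,y)/2 > 0$ the balls $B(x,r)$ and $B(y,r)$ are disjoint open neighbourhoods of $x$ and $y$, so $G$ is Hausdorff. The content of the theorem is the converse direction.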
1, Generalized complex geometry – Marco Gualtieri
2, When can a Connection Induce a Riemannian Metric for which it is the Levi-Civita Connection?
3, 2012 International Conference on Pattern Recognition Applications and Methods
5, Journal of Machine Learning Research (JMLR)
The Indian Buffet Process: An Introduction and Review; Thomas L. Griffiths, Zoubin Ghahramani; 12(Apr):1185–1224, 2011. – JMLR
6, There is a short course on Algorithmic Group Testing and Applications (09/05/2011 — 27/05/2011) by Ngô Quang Hưng at SUNY Buffalo
This is a short course on algorithmic combinatorial group testing and applications. The basic setting of the group testing problem is to identify a subset of “positive” items from a huge item population using as few “tests” as possible. The meaning of “positive”, “tests”, and “items” depends on the application. For example, dating back to World War II, when the area of group testing started, “items” were blood samples, “positive” meant syphilis-positive, and a “test” pooled several blood samples and returned a positive outcome if at least one sample in the pool was positive for syphilis. This basic problem paradigm has found numerous applications in biology, cryptography, networking, signal processing, coding theory, statistical learning theory, data streaming, etc. The short course aims to introduce group testing from a computational viewpoint, where not only the constructions of group testing strategies are of interest, but also the computational efficiency of both the construction and the decoding procedures. It will also briefly introduce the probabilistic method, algorithmic coding theory, and several direct applications of group testing.
“…Another main result is related to the design of query-optimal and minimal-adaptive strategies. We have shown that a 2-stage randomized strategy with prescribed success probability can asymptotically achieve the information-theoretic lower bound for d much less than n and growing much slower than n. Similarly, we can approach the entropy lower bound in 4 stages when d = o(n)…”
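To make the basic setting above concrete, here is a toy nonadaptive group-testing simulation in Python using the simple COMP decoder (any item appearing in a negative test is ruled out; everything else is declared positive); the random pooling design and all parameters are purely illustrative, not from the course:

import numpy as np

rng = np.random.default_rng(1)
n, d, T = 100, 3, 30                    # items, positives, tests
positives = set(rng.choice(n, size=d, replace=False).tolist())

# Random pooling design: item j joins each test independently w.p. ~1/d
A = rng.random((T, n)) < 1.0 / d

# A test is positive iff its pool contains at least one positive item
outcomes = np.array([any(j in positives for j in np.flatnonzero(A[t]))
                     for t in range(T)])

# COMP decoding: items appearing in any negative test cannot be positive
ruled_out = set()
for t in range(T):
    if not outcomes[t]:
        ruled_out.update(np.flatnonzero(A[t]).tolist())
decoded = set(range(n)) - ruled_out

print("true positives:", sorted(positives))
print("decoded       :", sorted(decoded))   # always a superset of the truth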
7, The Trieste look at Knot Theory
This paper is based on talks I gave in May 2010 at a workshop in Trieste (ICTP). In the first part we present an introduction to knots and knot theory from a historical perspective, starting from Sumerian knots and ending with Fox 3-colorings. We also show a relation between 3-colorings and the Jones polynomial. In the second part we develop the general theory of Fox colorings and show how to associate a symplectic structure to a tangle boundary so that tangles become Lagrangians (a proof of this result has not been published before).
Chapter VI of the book “KNOTS: From combinatorics of knot diagrams to combinatorial topology based on knots” will be based on this paper.
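To make Fox 3-colorings concrete: color each arc of a knot diagram with an element of {0,1,2} so that at every crossing twice the over-arc color equals the sum of the two under-arc colors mod 3; the coloring is nontrivial if more than one color is used. Below is a brute-force count for the trefoil, with the arc/crossing incidences written out by hand (a toy check, not code from the paper):

from itertools import product

# Trefoil: 3 arcs, 3 crossings; each crossing = (over, under_in, under_out)
crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def is_coloring(c):
    # Fox condition at each crossing: 2*over = under_in + under_out (mod 3)
    return all((2 * c[o] - c[u] - c[v]) % 3 == 0 for o, u, v in crossings)

colorings = [c for c in product(range(3), repeat=3) if is_coloring(c)]
nontrivial = [c for c in colorings if len(set(c)) > 1]
print(len(colorings), "colorings,", len(nontrivial), "nontrivial")
# prints: 9 colorings, 6 nontrivial  (the 6 certify the trefoil is knotted)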
2010-11 Analysis of Object Data Opening Workshop and Tutorials (with videolectures)
Stor 891, Object Oriented Data Analysis, Home Page taught by J. S. Marron, Fall Semester, 2007
Object oriented data analysis: Sets of trees
What is the difference between functional data analysis and high-dimensional data analysis?
Functional Data Analysis-A Short Course by Giles Hooker
Notes on Functional Data Analysis by James Ramsay
1, Peter Huber’s reflections on data analysis
Peter Huber’s most famous work derives from his paper on robust statistics, published nearly fifty years ago, in which he introduced the concept of M-estimation (a generalization of maximum likelihood) to unify ideas of Tukey and others for estimation procedures that are relatively insensitive to small departures from the assumed model.
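Here is a minimal sketch of what M-estimation with Huber’s loss looks like in practice, fit by iteratively reweighted least squares on synthetic data with a few gross outliers; the data are invented, and the tuning constant k = 1.345 and MAD-based scale estimate are standard choices, not details from Huber’s paper:

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 50)
y[:5] += 20                                    # a few gross outliers
X = np.column_stack([np.ones_like(x), x])

def huber_fit(X, y, k=1.345, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        # Huber weights: 1 for small residuals, k*s/|r| for large ones
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        W = np.sqrt(w)
        beta = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
    return beta

print("OLS  :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", huber_fit(X, y))     # much closer to the true (1, 2)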
2, Guide to Getting Started with R: 2011 Update
3, A Risk Comparison of Ordinary Least Squares vs Ridge Regression