
I have gradually noticed that Python is gaining popularity, so I want to learn it. Here are some resources:

1, Python Programming Language – Official Website

2, Introduction to Computer Science and Programming

3, Python Programming Tutorial

4, http://www.scipy.org/

5, Scientific Python

6, SciPy 2011


Today I talked online with a friend from ANU who is interested in AIT (algorithmic information theory). The following is the reading list from his blog:

Here is the list of books I’m currently reading; most of them, if not all, were recommended by Prof. Marcus Hutter.

  • D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming
    Athena Scientific, Belmont, MA (1996) [If you want to know the formal proofs in RL, there is really no replacement for this book]
  • G. Restall. Logic: An Introduction
    Fundamentals of Philosophy, Routledge (2006) [Again, an introductory book on logic.]
And I also found the following useful resources:

1, Bay Area Vision Meeting: Unsupervised Feature Learning and Deep Learning – Andrew Ng’s Google talk about unsupervised feature learning and deep learning.

Andrew Ng got bored of improving one algorithm, so he decided to improve all algorithms at once…

On his course website at Stanford, Ng provides some tutorials.

2, Neural Networks making a come-back?

3, The Next Generation of Neural Networks

This summer semester, I am taking the graphical models course. This post collects useful online materials for the course.

1, Machine Learning and Probabilistic Graphical Models Course

2, Graphical Models Toolbox

3, Advances in Probabilistic Graphical Models

1, Research Directions for Machine Learning and Algorithms

2, Resources on Knot theory

3, The Birkhoff-Kakutani theorem

A topological space {X} is said to be metrisable if one can find a metric {d: X \times X \rightarrow [0,+\infty)} on it whose open balls generate the topology.

Theorem 1 (Birkhoff-Kakutani theorem) Let {G} be a topological group (i.e. a topological space that is also a group, such that the group operations {\cdot: G \times G \rightarrow G} and {()^{-1}: G \rightarrow G} are continuous). Then {G} is metrisable if and only if it is both Hausdorff and first countable.
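
As a quick sanity check of the theorem, here are two illustrative cases (my own examples, added for intuition, not from the post being quoted):

```latex
\begin{itemize}
  \item $(\mathbb{R},+)$ with its usual topology is a Hausdorff, first
        countable topological group, and it is indeed metrisable:
        the metric $d(x,y) = |x - y|$ generates the topology.
  \item $G = \{0,1\}^{\mathbb{R}}$ with coordinatewise addition modulo $2$
        and the product topology is a compact Hausdorff topological group,
        but it has no countable neighbourhood base at the identity, so it
        is not first countable and hence, by the theorem, not metrisable.
\end{itemize}
```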

1, Generalized complex geometry, by Marco Gualtieri

2, When can a Connection Induce a Riemannian Metric for which it is the Levi-Civita Connection?

3, 2012 International Conference on Pattern Recognition Applications and Methods

4, Python Programming

5, Journal of Machine Learning Research (JMLR)

The Indian Buffet Process: An Introduction and Review; Thomas L. Griffiths, Zoubin Ghahramani; 12(Apr):1185–1224, 2011. – JMLR

6, There is a short course on Algorithmic Group Testing and Applications (09/05/2011 – 27/05/2011) by Ngô Quang Hưng at SUNY Buffalo.

This is a short course on algorithmic combinatorial group testing and applications. The basic setting of the group testing problem is to identify a subset of “positive” items from a huge item population using as few “tests” as possible. The meaning of “positive”, “tests” and “items” depends on the application. For example, dating back to World War II, when the area of group testing started, “items” were blood samples, “positive” meant syphilis-positive, and a “test” contained a pool of blood samples and gave a positive outcome if at least one sample in the pool was positive for syphilis. This basic problem paradigm has found numerous applications in biology, cryptography, networking, signal processing, coding theory, statistical learning theory, data streaming, etc. This short course aims to introduce group testing from a computational viewpoint, where not only the constructions of group testing strategies are of interest, but the computational efficiency of both the construction and the decoding procedures is also studied. We will also briefly introduce the probabilistic method, algorithmic coding theory, and several direct applications of group testing.
While searching Google for group testing, I found the abstract of Engineering Competitive and Query-Optimal Minimal-Adaptive Randomized Group Testing Strategies by Muhammad Azam Sheikh, which seems to offer some insight into why an adaptive strategy can help when the defects are not so sparse (a small simulation sketch follows the quote below). From the abstract:

“…Another main result is related to the design of query-optimal and minimal-adaptive strategies. We have shown that a 2-stage randomized strategy with prescribed success probability can asymptotically achieve the information-theoretic lower bound for d much less than n and growing much slower than n. Similarly, we can approach the entropy lower bound in 4 stages when d = o(n)…”
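To make the basic setting concrete, here is a minimal simulation sketch of noiseless nonadaptive group testing: random Bernoulli pools plus the simple “anything that appears in a negative test is negative” decoder (often called COMP). The function name and all parameter choices are my own illustration, not taken from the course.

```python
import random

def simulate_group_testing(n=1000, d=5, t=120, seed=0):
    """Noiseless nonadaptive group testing with random pools.

    n items, d of them positive, t tests. Each test includes each item
    independently with probability 1/d (a common heuristic), and comes
    back positive iff the pool contains at least one positive item.
    """
    rng = random.Random(seed)
    positives = set(rng.sample(range(n), d))
    p = 1.0 / d

    pools = [{i for i in range(n) if rng.random() < p} for _ in range(t)]
    outcomes = [bool(pool & positives) for pool in pools]

    # COMP decoding: every item that appears in at least one negative
    # test is definitely negative; declare everything else positive.
    cleared = set()
    for pool, outcome in zip(pools, outcomes):
        if not outcome:
            cleared |= pool
    declared = set(range(n)) - cleared
    return positives, declared

truth, guess = simulate_group_testing()
print("true positives:   ", sorted(truth))
print("declared positive:", sorted(guess))
```

In the noiseless model this decoder never misses a true positive; it can only over-declare, and the number of false positives shrinks as the number of tests t grows (roughly t = O(d log n) suffices, in line with the information-theoretic lower bound the quoted abstract refers to).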

7, The Trieste look at Knot Theory

This paper is based on talks which I gave in May 2010 at a workshop in Trieste (ICTP). In the first part we present an introduction to knots and knot theory from an historical perspective, starting from Sumerian knots and ending with Fox 3-colorings. We also show a relation between 3-colorings and the Jones polynomial. In the second part we develop the general theory of Fox colorings and show how to associate a symplectic structure to a tangle boundary so that tangles become Lagrangians (a proof of this result has not been published before).

Chapter VI of the book “KNOTS: From combinatorics of knot diagrams to combinatorial topology based on knots” will be based on this paper.
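
As a toy illustration of the Fox 3-colorings mentioned in the abstract, here is a brute-force counter. A diagram is encoded as a list of crossings (over-arc, under-arc, under-arc); the encoding and the trefoil data are my own illustration, not from the paper.

```python
from itertools import product

def count_3_colorings(crossings, num_arcs):
    """Count Fox 3-colorings of a knot diagram.

    A coloring assigns one of 3 colors to each arc. At every crossing
    (o, u, v), with o the over-arc and u, v the under-arcs, the Fox
    condition 2*c[o] = c[u] + c[v] (mod 3) must hold; for three colors
    this says the three arcs are all the same color or all different.
    """
    return sum(
        all((2 * c[o] - c[u] - c[v]) % 3 == 0 for o, u, v in crossings)
        for c in product(range(3), repeat=num_arcs)
    )

# Trefoil: 3 arcs and 3 crossings, each crossing meeting all three arcs.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(count_3_colorings(trefoil, 3))  # 9: 3 constant + 6 nontrivial colorings
```

A diagram is nontrivially 3-colorable when the count exceeds the 3 constant colorings, as it does for the trefoil; the unknot admits only the constant ones.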

8, Generalized Boosting Algorithms for Convex Optimization

9, The International Machine Learning Society

2010-11 Analysis of Object Data Opening Workshop and Tutorials (with videolectures)

STOR 891, Object Oriented Data Analysis, home page, taught by J. S. Marron, Fall Semester 2007

Object oriented data analysis: Sets of trees

What is the difference between functional data analysis and high-dimensional data analysis?

Functional Data Analysis – A Short Course, by Giles Hooker

Notes on Functional Data Analysis by James Ramsay

1, Peter Huber’s reflections on data analysis

Peter Huber’s most famous work derives from his paper on robust statistics, published nearly fifty years ago, in which he introduced the concept of M-estimation (a generalization of maximum likelihood) to unify some ideas of Tukey and others for estimation procedures that are relatively insensitive to small departures from the assumed model (see the sketch after this list).

2, Guide to Getting Started with R: 2011 Update

3, A Risk Comparison of Ordinary Least Squares vs Ridge Regression
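
As a small sketch of the M-estimation idea, here is a toy estimator of location under the Huber loss, fitted by iteratively reweighted least squares; the implementation details (IRLS, the 1.345 tuning constant, the median start) are my own illustrative choices, not a description of Huber’s paper.

```python
import numpy as np

def huber_location(x, delta=1.345, tol=1e-8, max_iter=100):
    """M-estimate of location under the Huber loss via IRLS.

    The Huber loss is quadratic for residuals |r| <= delta and linear
    beyond, so the IRLS weight is 1 for small residuals and delta/|r|
    for large ones, which discounts outliers instead of deleting them.
    """
    x = np.asarray(x, dtype=float)
    mu = np.median(x)  # robust starting point
    for _ in range(max_iter):
        r = x - mu
        w = np.ones_like(r)
        big = np.abs(r) > delta
        w[big] = delta / np.abs(r[big])
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(20.0, 1.0, 5)])
print("sample mean:     ", data.mean())           # dragged toward the outliers
print("Huber M-estimate:", huber_location(data))  # stays near 0
```

Unlike the sample mean, which is shifted by the 5% cluster of outliers at 20, the Huber estimate moves only slightly: exactly the insensitivity to small departures from the assumed model described above.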
