Reading: 2 papers from NIPS 2013

1. One-shot learning and big data with n=2.
NIPS 2013.

A very special case (the model seems highly designed); the paper then
develops the theory of a specific method --- PCR (principal component
regression) --- regressing on the data projected onto its first k
principal components.

Inspiration: study a specific case, then show through comparison how
good the work is.
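The PCR step described above can be sketched as follows. This is a minimal illustration with numpy, not the paper's exact estimator; the toy data, the choice of k, and the function name `pcr_fit` are my own.

```python
# Principal component regression (PCR): project the centered data onto
# its first k principal components, then run ordinary least squares on
# the projected scores. Illustrative sketch, not the paper's estimator.
import numpy as np

def pcr_fit(X, y, k):
    """Return regression coefficients mapped back to the feature space."""
    Xc = X - X.mean(axis=0)                 # center the features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                           # top-k principal directions
    Z = Xc @ Vk                             # scores: projection onto them
    w, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)  # OLS on scores
    return Vk @ w                           # coefficients in feature space

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10); beta[0] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(50)
coef = pcr_fit(X, y, k=3)
print(coef.shape)  # a length-10 coefficient vector
```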

2. On the Sample Complexity of Subspace Learning

Gives convergence rates for subspace learning algorithms, with special
cases such as PCA, kernel PCA, MDS, etc.

It's a good occasion to review some background on LLE, ISOMAP, Hessian ISOMAP, etc.

Subspace learning here means finding the minimal linear space that
contains the support of a given measure. The basic assumption is that
the data is embedded in a high-dimensional space but has low intrinsic
dimension.
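A minimal sketch of that idea: estimate the smallest subspace containing the data by taking the span of the leading singular vectors of the (centered) sample matrix. The toy dimensions, tolerance, and function name `estimate_subspace` are illustrative choices, not from the paper.

```python
# Subspace learning in the sense above: recover the minimal linear
# subspace containing the support of the data distribution, via the
# numerical rank of the centered sample matrix. Illustrative sketch.
import numpy as np

def estimate_subspace(X, tol=1e-10):
    """Return an orthonormal basis for the span of the centered samples."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rank = int((s > tol * s.max()).sum())   # numerical rank of the data
    return Vt[:rank].T                      # basis of the estimated subspace

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))             # data lives in a 2-d subspace of R^5
X = rng.standard_normal((100, 2)) @ A
basis = estimate_subspace(X)
print(basis.shape)  # 2 basis vectors in R^5
```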
