More recent comments on the book list from Michael I. Jordan

There has been an ML reading list of yours on Hacker News for a while, in which you recommend some books for getting started in ML. (https://news.ycombinator.com/item?id=1055042)

Do you still think this is the best set of books, and would you add any new ones?

[–]michaelijordan[S] 41 points 5 days ago* 

That list was aimed at entering PhD students at Berkeley,
who I assume are going to devote many decades of their lives to the field, and who want to get to the research frontier fairly quickly. I would have prepared a rather different list if the target population was (say) someone in industry who needs enough basics so that they can get something working in a few months.

That particular version of the list seems to be one from a few years ago; I now tend to add some books that dig still further into foundational topics. In particular, I recommend A. Tsybakov's book "Introduction to Nonparametric Estimation" as a very readable source for the tools for obtaining lower bounds on estimators, and Y. Nesterov's very readable "Introductory Lectures on Convex Optimization" as a way to start to understand lower bounds in optimization. I also recommend A. van der Vaart's "Asymptotic Statistics", a book that we often teach from at Berkeley, as a book that shows how many ideas in inference (M estimation---which includes maximum likelihood and empirical risk minimization---the bootstrap, semiparametrics, etc) repose on top of empirical process theory. I'd also include B. Efron's "Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction", as a thought-provoking book.

I don't expect anyone to come to Berkeley having read any of these books in entirety, but I do hope that they've done some sampling and spent some quality time with at least some parts of most of them. Moreover, not only do I think that you should eventually read all of these books (or some similar list that reflects your own view of foundations), but I think that you should read all of them three times---the first time you barely understand, the second time you start to get it, and the third time it all seems obvious.

I'm in it for the long run---three decades so far, and hopefully a few more. I think that that's true of my students as well. Hence the focus on foundational ideas.

[–]leonoel 11 points 5 days ago* 

Amazing, thanks for the answer. I've gone through many of the books at least once; I was a PhD student and am now a postdoc doing mostly applied ML.

That is the reason I've used that list as my reference list.

Just as a side comment:

Tsybakov's book is available online at Springer if your university has access to it: http://link.springer.com/book/10.1007%2Fb13794

Nesterov's book is also available: http://link.springer.com/book/10.1007%2F978-1-4419-8853-9

Again, thanks for the answer

[–]zdk 1 point 4 hours ago 

> Nesterov's book is also available: http://link.springer.com/book/10.1007%2F978-1-4419-8853-9

Nice! I've been looking for a book like this, thanks.

[–]nzhiltsov 2 points 4 days ago 

Thanks a lot! BTW, I gathered your recommendations on Goodreads: https://www.goodreads.com/review/list/6324945-nikita-zhiltsov?shelf=m-jordan-s-list

[–]dornstar18 4 points 5 days ago 

Will you prepare a list for someone in industry? Please.

[–]98ahsa9d 4 points 5 days ago 

However tempting, avoid anything with "for hackers" in the title, I'd say.

[–]zdk 1 point 4 hours ago 

'For hackers' -> 'I don't want to learn any math'

[–]dornstar18 0 points 5 days ago 

Good advice. Thanks

[–]nzhiltsov 1 point 4 days ago 

In this post, I would like to blend together recommendations from academic and industry researchers: http://nzhiltsov.blogspot.com/2014/09/highly-recommended-books-for-machine-learning-researchers.html
