Now that that's out of the way, I want to write a quick introduction about myself. I'm currently a PhD student in computing at the University of Utah. My advisor, Suresh Venkatasubramanian, is part of the Data Group at the School of Computing, where we meet weekly to talk about big data, databases, algorithms, high-dimensional geometry, machine learning, and other topics related to data. My interests were initially in geometry and algorithms, but they've slowly moved over to machine learning and optimization. Because of my training, though, I'm still interested in geometric interpretations of machine learning.
That's the dry version. Really, I love matrices and matrix math, so I like machine learning and optimization (arXiv version) a lot. When I read about a graph, I'm always a lot more interested in the adjacency matrix or the Laplacian or its spectrum than I am in the combinatorics. When I was working on the space of positive definite matrices, I was way more interested in the algebraic structure than I was in the points. I like computing Lagrangians for some weird reason and I like hearing about neat matrix tricks, especially if they're easy to understand.
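To make the Laplacian bit concrete (this is just the standard definition, nothing specific to any particular project): for a graph with adjacency matrix $A$ and diagonal degree matrix $D$, the Laplacian is
$$L = D - A,$$
and its spectrum already encodes combinatorial facts about the graph; for instance, the multiplicity of the eigenvalue $0$ equals the number of connected components. That's the kind of translation from combinatorics to matrices that I find appealing.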
Subscribe to the feed; I'll try to keep you interested in the new stuff I learn about.
Matrix-oriented ML is a good space to be in. My favorite recent work in this area is the Mackey et al. noisy matrix factorization work [1], which is at the heart of the second-place Netflix Prize entry (behind the AT&T team), and Venkat Chandrasekaran's sparse/low-rank matrix decomposition work [2].
Oh, and that other stuff by Moeller et al. I forget what it's called though. ;)
[1] http://arxiv.org/abs/1107.0789
[2] http://users.cms.caltech.edu/~venkatc/cspw_slr_sysid09.pdf
Thanks, I'll check those links out. Yeah, I'm not sure about that Moeller guy. What a hack. :-)