Monday, December 10, 2012

Blog Intro

Hi, everyone. This is a new blog, and as of right now I plan to write about topics in machine learning and algorithms having to do with kernels and related technologies (multiple kernel learning, for example, or other techniques that touch on kernels, like representation learning). I certainly don't know everything about these technologies, and as I learn about them I'll try to write a post here. This will mostly be a technical blog, but I may post every now and again on topics related to the job market, science policy, or other loosely related items.

Now that that's out of the way, I want to write a quick introduction about myself. I'm currently a PhD student in computing at the University of Utah. My advisor, Suresh Venkatasubramanian, is part of the Data Group at the School of Computing, where we meet weekly to talk about big data, databases, algorithms, high-dimensional geometry, machine learning, and other topics related to data. My interests were initially in geometry and algorithms, but they've slowly moved over to machine learning and optimization. Because of my training, though, I'm still interested in geometric interpretations of machine learning.

That's the dry version. Really, I love matrices and matrix math, so I like machine learning and optimization a lot. When I read about a graph, I'm always a lot more interested in the adjacency matrix or the Laplacian or its spectrum than I am in the combinatorics. When I was working on the space of positive definite matrices, I was way more interested in the algebraic structure than I was in the points. I like computing Lagrangians for some weird reason, and I like hearing about neat matrix tricks, especially if they're easy to understand.
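To give a flavor of that matrix view of graphs, here's a toy sketch of my own (not from any particular paper): build the adjacency matrix of a small path graph, form the unnormalized Laplacian L = D - A, and look at its spectrum.

```python
import numpy as np

# Adjacency matrix of the path graph on 4 vertices: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Degree matrix and unnormalized graph Laplacian L = D - A
D = np.diag(A.sum(axis=1))
L = D - A

# The spectrum encodes combinatorial structure: the smallest
# eigenvalue is always 0, and its multiplicity equals the number
# of connected components of the graph.
eigvals = np.linalg.eigvalsh(L)  # ascending order
print(np.round(eigvals, 4))
```

For a path on n vertices the eigenvalues are 2 - 2cos(kπ/n) for k = 0, ..., n-1, so the combinatorics really is sitting inside the matrix.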

Subscribe to the feed; I'll try to keep you interested in the new stuff I learn about.

2 comments:

  1. Matrix-oriented ML is a good space to be in. My favorite recent work in this area includes Mackey et al.'s noisy matrix factorization work [1], which is at the heart of the second-place Netflix prize entry (behind the AT&T team), and Venkat Chandrasekaran's sparse/low-rank matrix decomposition work [2].

    Oh and that other stuff by Moeller et al. I forget what it's called though. ;)

    [1] http://arxiv.org/abs/1107.0789
    [2] http://users.cms.caltech.edu/~venkatc/cspw_slr_sysid09.pdf

  2. Thanks, I'll check those links out. Yeah, I'm not sure about that Moeller guy. What a hack. :-)
