Friday 7 October 2011

Machine Learning - Intro and Gradient Descent

Week 1 - In which we meet Prof. Ng and experience the delights of cost functions and Linear Regression with One Variable.

This is my first week on the Stanford University machine learning course and so far, so good - as the man said when he jumped off Nelson's Column. Prof. Ng seems to be a pretty decent lecturer who doesn't overestimate the ability of his students, and there are plenty of examples and explanations to drive the points home. The lectures come as videos with a few embedded questions to keep you awake, and they are, thankfully, split into bite-sized chunks of 10-15 minutes.

In the intro we get introduced to the terminology - the difference between supervised and unsupervised learning. Supervised learning requires a training set; the main example given is house price vs. house size. Unsupervised learning is just trying to make inferences from a bunch of data - say, clustering news stories. We are also given the difference between regression problems (line fitting, trending) and classification ones (sorting into buckets, true/false &c.).
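
For concreteness, here's a toy sketch in Python of the difference (my own invented numbers, nothing from the course): a supervised training set pairs each input with a known answer, whereas unsupervised data arrives with no answers attached.

    # Supervised learning: each training example pairs an input with a known answer.
    # Toy figures, invented for illustration: house size (sq ft) -> price ($1000s).
    training_set = [
        (1000, 200),   # a 1000 sq ft house that sold for $200k
        (1500, 280),
        (2000, 370),
    ]

    # Unsupervised learning gets inputs with no answers attached and has to
    # find structure on its own - e.g. grouping similar news stories.
    unlabelled_data = [1000, 1500, 2000, 1200, 2800]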

Thence we arrive at Linear Regression with One Variable - which is basically fitting a straight line to points on a 2D graph. The maths arrives, but not too brutally: calculus shows its head, but fortunately you don't have to understand the whole of calculus to use the little bit we want, so all good.
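
Since the week's title promises gradient descent, here's a minimal Python sketch of the idea as I understand it (the course itself uses Octave; the toy data, learning rate and iteration count below are all my own inventions): we fit the hypothesis h(x) = theta0 + theta1*x by repeatedly stepping both parameters downhill on the squared-error cost J(theta0, theta1).

    # A toy sketch of gradient descent on the squared-error cost
    # J(theta0, theta1) = (1/2m) * sum((h(x) - y)^2), where h(x) = theta0 + theta1*x.
    # Data, learning rate and iteration count are all invented for illustration.

    xs = [1.0, 1.5, 2.0, 2.5, 3.0]            # house sizes (1000s of sq ft)
    ys = [200.0, 280.0, 370.0, 450.0, 540.0]  # prices ($1000s)
    m = len(xs)

    theta0, theta1 = 0.0, 0.0  # start the line flat at zero
    alpha = 0.1                # learning rate, chosen by hand

    for _ in range(5000):
        # Partial derivatives of J with respect to theta0 and theta1
        grad0 = sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys)) / m
        grad1 = sum(((theta0 + theta1 * x) - y) * x for x, y in zip(xs, ys)) / m
        # Update both parameters simultaneously, as the lectures stress
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

    print(f"fitted line: h(x) = {theta0:.1f} + {theta1:.1f}x")

On this data the line settles at roughly h(x) = 28 + 170x; make alpha too big and the updates overshoot and diverge instead, which is just the sort of behaviour the lectures illustrate.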

As well as the videos there are also online review questions, which I believe contribute to the final score. The good news is that you not only can, but are actively encouraged to, retake them until you get a perfect score. The real benefit of this is that they become a learning tool rather than just a test: answers are given as well as scores, and I think they have helped me understand what's going on rather better than I otherwise would.

I'm now most of the way through the optional linear algebra review, but I won't post on that.
