Support Vector Machine

Note: this post is meant to help clarify tutorial question 2 for COMP 9417 – Week 9, School of Computer Science and Engineering, UNSW (s1 2017).


Support Vector Machine (SVM) is essentially an approach to learning linear classifiers that maximises the margin between the classes. The picture below, inspired by Flach (Figs. 7.6–7.7), shows the difference between the decision boundary produced by an SVM and those produced by other linear classifiers (such as linear regression or the perceptron).

To achieve that, SVM maximises the following (dual) objective function over the multipliers \alpha_1, \ldots, \alpha_n:

\Lambda(\alpha_1, \ldots, \alpha_n) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j \mathbf{x}_i \cdot \mathbf{x}_j

subject to \alpha_i \geq 0 for all i and \sum_{i=1}^{n} \alpha_i y_i = 0.
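To make the objective concrete, here is a small sketch that evaluates the dual Lagrangian \Lambda(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \mathbf{x}_i \cdot \mathbf{x}_j in code. The two data points are a hypothetical toy example (not from the tutorial sheet), chosen so the arithmetic stays simple:

```python
import numpy as np

def dual_objective(alpha, X, y):
    """SVM dual Lagrangian: sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i y_j x_i.x_j."""
    H = np.outer(y, y) * (X @ X.T)   # Gram matrix weighted by the labels
    return alpha.sum() - 0.5 * alpha @ H @ alpha

# Hypothetical toy data: one positive and one negative example.
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

# For this data the maximiser (subject to alpha_i >= 0 and
# sum_i alpha_i y_i = 0) is alpha = (1/4, 1/4), giving Lambda = 1/4:
print(dual_objective(np.array([0.25, 0.25]), X, y))  # 0.25
print(dual_objective(np.array([0.30, 0.30]), X, y))  # 0.24 (smaller, as expected)
```

Evaluating nearby feasible points (such as alpha = (0.3, 0.3)) gives a smaller value, which is a quick sanity check that (1/4, 1/4) is indeed the maximiser here.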

To solve this optimisation problem, a quadratic programming solver is typically used. For a simple toy example, however, we can compute the solution by hand. Here are the steps to find the weight vector \mathbf{w}, the threshold t, and the margin m (from slides 23–28):

  1. Set up the Gram matrix for the labelled data
  2. Set up the expression to be minimised
  3. Take the partial derivatives with respect to each multiplier
  4. Set them to zero and solve for each multiplier
  5. Solve for \mathbf{w}
  6. Solve for t
  7. Solve for m

Here are the detailed solutions:


References:

    • Lecture slides, Supervised Learning – Kernel Methods, Mike Bain, CSE, UNSW
    • Tutorial questions and solutions, Kernel Methods, Mike Bain, CSE, UNSW
    • Flach, P. (2012). Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge University Press.

