
Given a prediction *p* and a label *y*, a loss function *ℓ(p, y)* measures the discrepancy between the algorithm's prediction and the desired output. VW currently supports the following loss functions, with squared loss being the default:

| Loss | Function | Minimizer | Example usage |
|---|---|---|---|
| Squared | squared loss function | Expectation (mean) | Regression: expected return on a stock |
| Quantile | quantile loss function | Median (or other quantile) | Regression: what is a typical price for a house? |
| Logistic | logistic loss function | Probability | Classification: probability of a click on an ad |
| Hinge | hinge loss function | 0-1 approximation | Classification: is the digit a 7? |
| Classic | squared loss without importance-weight-aware updates | Expectation (mean) | Regression: often plain squared loss performs better |
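For reference, a sketch of the standard forms of these losses is given below. VW's actual updates are importance-weight aware, but the losses themselves are the usual textbook definitions; here *τ* is the quantile parameter, and *y* ∈ {−1, 1} for the two classification losses:

```latex
% Standard loss definitions (reference sketch).
\ell_{\text{squared}}(p, y)  = (p - y)^2
\ell_{\text{quantile}}(p, y) =
  \begin{cases}
    \tau \,(y - p)      & \text{if } y \ge p \\
    (1 - \tau)\,(p - y) & \text{otherwise}
  \end{cases}
\ell_{\text{logistic}}(p, y) = \log\left(1 + e^{-yp}\right)
\ell_{\text{hinge}}(p, y)    = \max(0,\ 1 - yp)
```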

To select a loss function in VW, use the `--loss_function` option (see the Command line arguments guide). Logistic and hinge loss are for binary classification only, so every example must be labeled "-1" or "1". More information on loss-function semantics can be found in this video course on Online Linear Learning.
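For example (a minimal sketch; `train.vw` and `model.vw` are placeholder file names, and the options shown are VW's `--loss_function` and `--quantile_tau`):

```sh
# Train with the default squared loss:
vw -d train.vw -f model.vw

# Train a binary classifier with logistic loss
# (labels in train.vw must be -1 or 1):
vw -d train.vw -f model.vw --loss_function logistic

# Train a quantile regressor for the 90th percentile;
# --quantile_tau defaults to 0.5 (the median):
vw -d train.vw -f model.vw --loss_function quantile --quantile_tau 0.9
```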

## Which loss function should I use?

* If the problem is a binary classification problem, your choices are logistic or hinge loss. Examples: spam vs. non-spam, click vs. no-click. (Q: when should hinge loss be used vs. logistic? Roughly, logistic loss yields probability estimates, while hinge loss only targets the 0-1 decision.)
* If the problem is a regression problem, meaning the target label you're trying to predict is a real value, you should use squared or quantile loss. Examples: revenue, height, weight. If you're trying to minimize the mean error, use squared loss (see http://en.wikipedia.org/wiki/Least_squares). If, on the other hand, you're trying to predict rank/order and you don't mind the mean error increasing as long as the relative order comes out right, you want to minimize the error versus the median (or another quantile); in that case use quantile loss (see http://en.wikipedia.org/wiki/Quantile_regression). A small worked example follows this list.
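To make the mean-versus-median distinction concrete, here is a toy example (made-up numbers): three labels {1, 2, 12} and a single constant prediction *p*.

```latex
% Squared loss is minimized by the mean, p = (1 + 2 + 12)/3 = 5:
\textstyle\sum_i (p - y_i)^2 = 74 \text{ at } p = 5,
  \quad 101 \text{ at } p = 2.
% Quantile loss with \tau = 0.5 (i.e., half the absolute error)
% is minimized by the median, p = 2:
\textstyle\tfrac{1}{2}\sum_i |p - y_i| = 5.5 \text{ at } p = 2,
  \quad 7 \text{ at } p = 5.
```

The median prediction has a worse squared error but a better absolute error, which is exactly the trade-off the quantile loss makes.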