In relation to this post, the accepted answer explains the penalty and loss terms in the regularisation problem of the SVM. However, at the end, the terms 'l1-loss' and 'l2-loss' are used.
As I understand it, the objective function in the regularisation problem is the sum of a loss term, e.g. the hinge loss
$$\sum_i [1 - y_i f_i]_+$$
and a penalty term
$$\frac{\lambda}{2} \lVert \beta \rVert^2$$
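For concreteness, here is my own toy sketch (plain Python, made-up data and coefficients) of that objective, with a `loss_power` switch reflecting how I currently understand the l1/l2 hinge-loss distinction, i.e. whether the per-observation hinge term is raised to the first or second power:

```python
def hinge_objective(beta, X, y, lam, loss_power=1):
    """Sum of hinge losses plus (lam/2) * ||beta||_2^2.

    loss_power=1 would be the 'l1' hinge loss and loss_power=2 the
    'l2' (squared) hinge loss, under my reading of the terminology.
    """
    loss = 0.0
    for x_i, y_i in zip(X, y):
        # Linear classifier: f_i = f(x_i) = beta . x_i
        f_i = sum(b * v for b, v in zip(beta, x_i))
        # Hinge term [1 - y_i * f_i]_+ raised to loss_power
        loss += max(0.0, 1.0 - y_i * f_i) ** loss_power
    # Ridge penalty (lam/2) * ||beta||^2
    penalty = 0.5 * lam * sum(b * b for b in beta)
    return loss + penalty

# Made-up example values, purely for illustration
X = [[1.0, 2.0], [2.0, -1.0], [-1.0, -1.0]]
y = [1, -1, -1]
beta = [0.5, -0.5]
print(hinge_objective(beta, X, y, lam=1.0))  # → 5.25
```

(The intercept is omitted here only to keep the sketch short.)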
When people say 'l1 hinge loss', can I interpret it as the l1-norm specified in the 'penalty' argument being applied to both the loss term and the penalty term?
In the regularisation problem below, from The Elements of Statistical Learning (Hastie et al.), is it the l1-loss that is being used?