I am currently learning ML on Coursera through Andrew Ng's Machine Learning course. I am doing the assignments in Python because I am more used to it than Matlab. I have recently run into a problem with my understanding of regularization. My understanding is that regularization penalizes the parameter values so that less important features do not dominate the prediction and the model does not overfit. But while implementing it, I don't understand why the first element of theta (the parameter vector), i.e. theta[0], is skipped while calculating the cost. I have referred to other solutions, but they also skip it without explanation.
Here is the code:
```python
term1 = np.dot(-np.array(y).T, np.log(h(theta, X)))
term2 = np.dot((1 - np.array(y)).T, np.log(1 - h(theta, X)))
regterm = (lambda_ / 2) * np.sum(np.dot(theta[1:].T, theta[1:]))  # Skip theta0. Explain this line
J = float((1 / m) * (np.sum(term1 - term2) + regterm))
grad = np.dot((sigmoid(np.dot(X, theta)) - y), X) / m
grad_reg = grad + (lambda_ / m) * theta
grad_reg[0] = grad[0]  # the gradient for theta[0] is also left unregularized
```
And here is the formula:

J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\left(h_\theta(x^{(i)})\right) - (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2

Here J(theta) is the cost function, h(x) is the sigmoid function (the hypothesis), and lambda is the regularization parameter.
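To make the question concrete, here is a runnable, self-contained version of what I have. The toy data and the `cost_and_grad` wrapper are mine for illustration only, not from the assignment:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grad(theta, X, y, lambda_):
    m = len(y)
    h = sigmoid(X @ theta)  # hypothesis h_theta(x) for all samples
    # Unregularized cross-entropy cost
    J = (1.0 / m) * (-y @ np.log(h) - (1 - y) @ np.log(1 - h))
    # Regularization term summed over theta[1:] only --
    # this skipping of theta[0] is the part I don't understand
    J += (lambda_ / (2 * m)) * np.sum(theta[1:] ** 2)
    grad = (X.T @ (h - y)) / m
    # The gradient adjustment likewise skips theta[0]
    grad[1:] += (lambda_ / m) * theta[1:]
    return J, grad

# Toy example: 4 samples, a bias column of ones plus 2 features
X = np.array([[1.0, 0.5, 1.2],
              [1.0, -1.0, 0.3],
              [1.0, 2.0, -0.7],
              [1.0, 0.1, 0.9]])
y = np.array([1.0, 0.0, 1.0, 0.0])
theta = np.zeros(3)
J, grad = cost_and_grad(theta, X, y, lambda_=1.0)
```

With `theta` all zeros, the hypothesis is 0.5 everywhere and the cost comes out to log(2), regardless of lambda, since the penalty term is zero.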