While updating theta (the weights) in linear regression with gradient descent, the cost appears to increase on every iteration rather than decrease.
Using NumPy, I try to update theta simultaneously on each iteration by writing the new values into a predefined array, `temp`, and then assigning theta to that array at the end of the pass. The snippet below is the body of my training function (`X`, `Y`, `alpha`, and `iteration` are its inputs), which is why it ends with a `return`:
```python
theta = [[0], [0]]
theta = np.array(theta)
temp = np.array([[0], [0]])
m = len(Y)
for i in range(iteration):
    def hypothesis(theta, X, iteration):
        '''Calculates the hypothesis by multiplying the transpose of theta
        with the features of X in iteration'''
        output = np.transpose(theta).dot(np.transpose(X[iteration]))
        return int(output)

    def cost_function():
        '''Calculates cost function to plot (this is to check if the cost
        function is converging)'''
        total = 0
        for i in range(m):
            total = pow((hypothesis(theta, X, i) - Y[i]), 2) + total
        return total / (2 * m)

    def cost_function_derivative(thetapos):
        '''Calculates the derivative of the cost function to determine
        which direction to go'''
        cost = 0
        for a in range(m):
            cost += (hypothesis(theta, X, a) - int(Y[a])) * int(X[a, thetapos])
        return cost

    alpher = alpha * (1 / m)
    for j in range(len(theta)):
        temp[j, 0] = theta[j, 0] - float(alpher) * float(cost_function_derivative(j))
    print(cost_function())
    theta = temp
return hypothesis(theta, X, 5), theta
```
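While poking at this, I did notice that `temp = np.array([[0], [0]])` creates an integer array, and NumPy silently truncates any float written into it, which would keep theta pinned at zero whenever a step is smaller than 1. A standalone snippet of the behavior I mean (the numbers are made up):

```python
import numpy as np

temp = np.array([[0], [0]])      # dtype is inferred as int from the Python ints
temp[0, 0] = 0 - 0.01 * 35.0     # the float result -0.35 is truncated on assignment
print(temp[0, 0])                # prints 0, not -0.35

temp = np.array([[0.0], [0.0]])  # a float array keeps the update intact
temp[0, 0] = 0 - 0.01 * 35.0
print(temp[0, 0])                # prints -0.35
```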
I was expecting it to output 13, with a theta of [1, 2], but alas, my poor code gave me 0 and [0, 0].
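For reference, here is a minimal self-contained sketch of what I understand the update is supposed to do, with the things I suspect are wrong changed: theta and temp are float arrays, temp is recreated on each pass instead of aliasing theta through `theta = temp`, and there are no `int()` casts. The `gradient_descent` wrapper and the sample data are my own inventions for testing; the data is built so that y = 1 + 2x and the sixth row of X is [1, 6], which is where my expected 13 comes from:

```python
import numpy as np

def gradient_descent(X, Y, alpha, iterations):
    '''Batch gradient descent for linear regression; returns the learned theta.'''
    m = len(Y)
    theta = np.zeros(2)                  # float dtype, so updates are not truncated
    for _ in range(iterations):
        temp = np.zeros(2)               # fresh float array; never aliases theta
        for j in range(len(theta)):
            # partial derivative of the cost with respect to theta[j]
            grad = sum((X[a].dot(theta) - Y[a]) * X[a, j] for a in range(m))
            temp[j] = theta[j] - (alpha / m) * grad
        theta = temp                     # simultaneous update of both components
    return theta

# made-up data with a bias column, satisfying y = 1 + 2x
X = np.array([[1.0, x] for x in range(1, 11)])
Y = np.array([1.0 + 2.0 * x for x in range(1, 11)])
theta = gradient_descent(X, Y, alpha=0.01, iterations=5000)
print(theta)            # approximately [1. 2.]
print(X[5].dot(theta))  # X[5] is [1., 6.], so approximately 13.0
```

Under these assumptions the cost should decrease monotonically, which makes me suspect the integer truncation and the `theta = temp` aliasing are at least part of my problem.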