I thought that reassigning a function parameter inside a function does not change the original variable. For example:
x = 10

def mycal(someval):
    someval = someval * 2
    return someval
It will return 20 if I call the function:
>>> mycal(x)
20
But the value of x is still 10:
>>> x
10
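For contrast, here is a minimal sketch I wrote with a throwaway list (the names are made up), showing that in-place changes to a mutable argument are visible to the caller:

nums = [10]

def double_in_place(values):
    # values refers to the same list object as nums,
    # so mutating it here is visible outside the function
    values[0] = values[0] * 2
    return values

double_in_place(nums)
print(nums)   # [20] -- the caller's list changed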
Now why has the theta value changed to 3.7 and 3.2 when I run this code?
import numpy as np

X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)

def predict_func(slope, intercept, x_test):
    pred = np.dot(x_test, slope) + intercept
    return pred

def mse_calc(prediction, y_test):
    error = np.mean((prediction - y_test) ** 2)
    return error

def grad_descent(s_theta, l_rate, tolerance, iter_val, x_train, y_train):
    n_data = len(x_train)
    bias = np.ones((n_data, 1))
    s_intercept = s_theta[0]
    s_slope = s_theta[1:]
    prediction = predict_func(s_slope, s_intercept, x_train)
    error = mse_calc(prediction, y_train)
    x_train_b = np.append(bias, x_train, axis=1)
    for i in range(iter_val):
        n_pt = float(len(x_train))
        prediction = predict_func(s_slope, s_intercept, x_train)
        int_theta = 2 / n_pt * np.dot(x_train_b.T, (prediction - y_train))
        s_theta -= l_rate * int_theta
        s_intercept = s_theta[0]
        s_slope = s_theta[1:]
    prediction = predict_func(s_slope, s_intercept, x_train)
    final_error = mse_calc(prediction, y_train)
    return s_theta, final_error

theta = np.zeros((len(X[0]) + 1, 1))
tolerance = 0.0001
l_rate = 0.01
iterations = 5000

print(theta)
grad_theta, grad_error = grad_descent(theta, l_rate, tolerance, iterations, X, y)
print(theta)
I expected theta to still be 0 after the call, while grad_theta should be about 3.7.
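One small check I tried (nothing specific to this code, just the built-in is operator and np.shares_memory) suggests the two names end up pointing at the same array:

# assumes theta and grad_theta from the code above already exist
print(grad_theta is theta)                   # True -- same object
print(np.shares_memory(grad_theta, theta))   # True -- same underlying buffer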
In the following example, the first statement redefines the variable (it builds a new list and rebinds the name), while the next two append to the existing mutable object through the same reference.
some_list = some_list + [2]   # option 1
some_list.append(2)           # option 2
some_list += [2]              # option 3
+= is close to option 2, not option 1 as I expected.
What do += and -= actually do, anyway?
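To poke at this I tried tracking object identity with id(); a minimal sketch with a throwaway list:

some_list = [1]
before = id(some_list)
some_list = some_list + [2]     # option 1: builds a new list, rebinds the name
print(id(some_list) == before)  # False -- a different object now

some_list = [1]
before = id(some_list)
some_list += [2]                # option 3: extends the list in place
print(id(some_list) == before)  # True -- still the same object

From the id() check, += on a list seems to act like option 2 rather than option 1, which would match what I saw with -= on the numpy array.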