I'm trying to implement the AdaBoost algorithm in Python.
The code below runs and works as expected on Python 3, but fails on Python 2.
On Python 2 the line hyp_w_arr[itr] = 0.5 * log((1-err)/err)
produces the warning "divide by zero encountered in long_scalars", while on Python 3 everything works.
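
I think the same warning can be reproduced in isolation on Python 2 like this (assuming err ends up as a NumPy integer scalar equal to 0, which I haven't actually confirmed in my run):

import numpy as np

err = np.int64(0)        # hypothetical value, chosen only to trigger the warning
ratio = (1 - err) / err  # RuntimeWarning: divide by zero encountered in long_scalars
print(ratio)

Here is the relevant part of my code: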
from numpy import *
import numpy.random
from sklearn.datasets import fetch_mldata
import sklearn.preprocessing

def ADAboost_learn(T, training_data, training_labels, WL):
    # hyp_arr, hyp_w_arr and DLW are set up in code not shown here;
    # each DLW[index] holds (example features, label, distribution weight).
    for itr in range(T):
        print(itr)
        # ask the weak learner WL for a (feature index, threshold) pair
        hyp_arr[itr][0], hyp_arr[itr][1] = WL(DLW)
        hyp_result = [1 if (DLW[index][0][hyp_arr[itr][0]] <= hyp_arr[itr][1])
                      else -1
                      for index in range(len(DLW))]
        # weighted error of the current hypothesis
        err = sum([DLW[index][2]
                   if (hyp_result[index] != DLW[index][1]) else 0
                   for index in range(len(DLW))])
        # hypothesis weight; this is the line that warns on Python 2
        hyp_w_arr[itr] = 0.5 * log((1-err)/err)
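
If the culprit really is the / operator, the workaround I'm considering is to give Python 2 the Python 3 true-division behaviour for the whole module (just a sketch, I haven't verified it fixes my case):

from __future__ import division   # must come before the other imports
from numpy import *
import numpy.random

# With the future import, expressions like (1 - err) / err (and any weight
# initialisation such as 1 / n, if that is where the integer values come from)
# are true divisions on Python 2 as well, exactly as on Python 3.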
Are there Python/NumPy division rules in Python 2 that were changed or dropped in Python 3 and could explain this difference?
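
To illustrate what I mean: as far as I understand, for plain ints the / operator already behaves differently between the two versions (example below), but I'm not sure whether that, or something NumPy-specific, is what is biting me here:

# Python 2: 1 / 4 evaluates to 0      (/ between ints is floor division)
# Python 3: 1 / 4 evaluates to 0.25   (/ is always true division, PEP 238)
w = 1 / 4
print(w)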