
I have the following code, which I need to run more than once. Currently it takes too long. Is there a more efficient way to write these two for loops?

ErrorEst = []
for i in range(len(embedingFea)):  # 17000
    temp = []
    for j in range(len(emedingEnt)):  # 15000
        if cooccurrenceCount[i][j] > 0:
            weighting_factor = np.min(
                [1.0,
                 math.pow(np.float32(cooccurrenceCount[i][j] / count_max), scaling_factor)])

            embedding_product = (np.multiply(emedingEnt[j], embedingFea[i]), 1)
            log_cooccurrences = np.log(np.float32(cooccurrenceCount[i][j]))

            distance_expr = np.square(
                [embedding_product +
                 focal_bias[i],
                 context_bias[j],
                 -log_cooccurrences])

            single_losses = weighting_factor * distance_expr
            temp.append(single_losses)
    ErrorEst.append(np.sum(temp))
Abrar

3 Answers


If you need to increase the performance of your code, you should write it in a low-level language like C and try to avoid the use of floating-point numbers where you can.

Possible solution: Can we use C code in Python?
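One common way to mix C and Python is the standard-library ctypes module. As a minimal sketch (not the questioner's code), the snippet below calls pow from the system C math library; for your own hot loop you would compile it to a shared library and load it the same way. It assumes a Unix-like system where find_library("m") resolves.

```python
import ctypes
import ctypes.util

# Load the system C math library (Unix-like systems; on Windows the
# equivalent functions live in the C runtime DLL instead).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double pow(double, double).
# Without this, ctypes would truncate the double return value to int.
libm.pow.argtypes = (ctypes.c_double, ctypes.c_double)
libm.pow.restype = ctypes.c_double

print(libm.pow(2.0, 10.0))  # 1024.0
```

The same CDLL/argtypes/restype pattern works for a shared library you build yourself with, e.g., `gcc -shared -fPIC -o mylib.so mylib.c`.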

0x6261627564

You could try using Numba and wrapping your code with the @jit decorator. The first execution needs to compile the function and thus won't see much speedup, but subsequent calls will be much faster.

You may need to put your loop in a function for this to work.

from numba import jit

@jit(nopython=True)
def my_double_loop(embedingFea, emedingEnt, cooccurrenceCount):
    for i in range(len(embedingFea)):  # 17000
        temp = []
        for j in range(len(emedingEnt)):  # 15000
            # ... body of your inner loop ...
Engineero

You can use Numba or Cython

First, make sure to avoid lists wherever possible and write simple, readable code with explicit loops, as you would for example in C. All inputs and outputs should be only NumPy arrays or scalars.

Your Code

import numpy as np
import numba as nb
import math

def your_func(embedingFea, emedingEnt, cooccurrenceCount, count_max, scaling_factor, focal_bias, context_bias):
    ErrorEst = []
    for i in range(len(embedingFea)):  # 17000
        temp = []
        for j in range(len(emedingEnt)):  # 15000
            if cooccurrenceCount[i][j] > 0:
                weighting_factor = np.min([1.0, math.pow(np.float32(cooccurrenceCount[i][j] / count_max), scaling_factor)])
                embedding_product = (np.multiply(emedingEnt[j], embedingFea[i]), 1)
                log_cooccurrences = np.log(np.float32(cooccurrenceCount[i][j]))

                distance_expr = np.square([embedding_product + focal_bias[i], context_bias[j], -log_cooccurrences])

                single_losses = weighting_factor * distance_expr
                temp.append(single_losses)
        ErrorEst.append(np.sum(temp))
    return ErrorEst

Numba Code

@nb.njit(fastmath=True, error_model="numpy", parallel=True)
def your_func_2(embedingFea, emedingEnt, cooccurrenceCount, count_max, scaling_factor, focal_bias, context_bias):
    ErrorEst = np.empty((embedingFea.shape[0], 2))
    for i in nb.prange(embedingFea.shape[0]):
        temp_1 = 0.
        temp_2 = 0.
        for j in range(emedingEnt.shape[0]):
            if cooccurrenceCount[i, j] > 0:
                weighting_factor = (cooccurrenceCount[i, j] / count_max)**scaling_factor
                if weighting_factor > 1.:
                    weighting_factor = 1.

                embedding_product = emedingEnt[j] * embedingFea[i]
                log_cooccurrences = np.log(cooccurrenceCount[i, j])

                temp_1 += weighting_factor * (embedding_product + focal_bias[i])**2
                temp_1 += weighting_factor * context_bias[j]**2
                temp_1 += weighting_factor * log_cooccurrences**2

                temp_2 += weighting_factor * (1. + focal_bias[i])**2
                temp_2 += weighting_factor * context_bias[j]**2
                temp_2 += weighting_factor * log_cooccurrences**2

        ErrorEst[i, 0] = temp_1
        ErrorEst[i, 1] = temp_2
    return ErrorEst

Timings

embedingFea = np.random.rand(1700) + 1
emedingEnt = np.random.rand(1500) + 1
cooccurrenceCount = np.random.rand(1700, 1500) + 1
focal_bias = np.random.rand(1700)
context_bias = np.random.rand(1500)
count_max = 100
scaling_factor = 2.5

%timeit res_1=your_func(embedingFea,emedingEnt,cooccurrenceCount,count_max,scaling_factor,focal_bias,context_bias)
1min 1s ± 346 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit res_2=your_func_2(embedingFea,emedingEnt,cooccurrenceCount,count_max,scaling_factor,focal_bias,context_bias)
17.6 ms ± 2.81 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
max9111