Suppose I've got a set of background data measurements for different frequencies:
import numpy as np

# simulated background noise for each measurement frequency
C_bg_100kHz = 100*np.random.random(1000)
C_bg_200kHz = 200*np.random.random(1000)
Here, C_bg_100kHz is background noise for 100 kHz measurements, and C_bg_200kHz is background noise for 200 kHz measurements. I would like a function that subtracts the mean of one of these background data sets from an array of measurement data, with a function parameter specifying which background data set to subtract. I have managed to write this function using eval():
def subtract(array_y, freq):
    bg = eval('C_bg_' + freq)
    return array_y - np.ones(len(array_y))*np.mean(bg)
>>> subtract([50,50,50],'100kHz')
array([-0.36224706, -0.36224706, -0.36224706])
>>> subtract([50,50,50],'200kHz')
array([-47.95860607, -47.95860607, -47.95860607])
Here, I can pass my data as array_y and subtract, for instance, the C_bg_100kHz dataset by passing '100kHz' as the freq argument. Essentially, I want Python to translate the string 'C_bg_100kHz' into the array C_bg_100kHz. However, this function relies on eval(), which I've often seen described as something to avoid whenever possible. So my question is: can I avoid using eval() in this specific situation?
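
For reference, here is a minimal sketch of the kind of eval()-free lookup I imagine might work, using a plain dictionary keyed by frequency (the backgrounds name is just a placeholder of mine); I'm not sure whether this is the idiomatic approach:

import numpy as np

# Hypothetical alternative: store the background arrays in a dict
# keyed by frequency label, instead of encoding the label in a
# variable name and reconstructing it with eval()
backgrounds = {
    '100kHz': 100*np.random.random(1000),
    '200kHz': 200*np.random.random(1000),
}

def subtract(array_y, freq):
    # plain dictionary lookup replaces eval('C_bg_' + freq)
    bg = backgrounds[freq]
    # subtracting the scalar mean broadcasts over the whole array
    return np.asarray(array_y) - np.mean(bg)

Calling subtract([50, 50, 50], '100kHz') would then behave as before, and an unknown freq would raise a KeyError rather than a NameError.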