
I am trying to calculate the spectral entropy of an array as follows:

array([0.00000000e+00, 1.00743814e-04, 2.01487627e-04, 2.20497985e+01,
       2.20498993e+01, 2.20500000e+01])

From this, I derived my code. However, I found my data was float, so I also adapted code from here to formulate the following:

>>> import numpy as np
>>> from scipy.stats import entropy
>>> def ent(data):
...     uniqw, inverse = np.unique(data, return_inverse=True)
...     p_data = np.bincount(inverse) / len(data)  # probability of each unique value
...     return entropy(p_data)  # Shannon entropy of those probabilities

The result is

>>> ent(freqArray)
1.791759469228055
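
For reference, all six samples in the array are distinct, so each unique value gets probability 1/6 and the entropy comes out as exactly ln 6:

```python
import numpy as np

# Six distinct samples -> p = 1/6 each, so
# entropy = -6 * (1/6) * ln(1/6) = ln 6
print(np.log(6))  # 1.791759469228055
```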

How do I make my entropy lie between 0 and 1?

virupaksha
    Why do you think the entropy should be no greater than 1? Entropy has units. – Davis Herring Apr 22 '18 at 18:05
  • I am not exactly sure, but the data I am trying to classify has entropies between 0 and 1. I am working on audio frequencies – virupaksha Apr 22 '18 at 18:20
  • You could normalize to the maximum entropy (uniform input). But are your inputs _samples_ or _probabilities_? – Davis Herring Apr 22 '18 at 20:42
  • They are samples not probabilities – virupaksha Apr 23 '18 at 04:15
  • Should I divide them by their sum? – virupaksha Apr 23 '18 at 04:20
  • Just defining the entropy of “real number” (`float`) samples is hard, because you can’t easily approximate the probability distribution like you can with discrete quantities. Are you sure that’s what you want to do? – Davis Herring Apr 23 '18 at 13:09
  • If they are samples, rescaling won’t do anything (disregarding rounding). If they’re probabilities, of course they should be normalized. But the normalization I meant was on the entropy itself, not the data at all. – Davis Herring Apr 23 '18 at 13:11
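
Following the suggestion in the comments, one sketch is to normalize the entropy itself by its maximum possible value, ln K, which a uniform distribution over the K distinct values would attain. This is an assumption about what "between 0 and 1" should mean here, not a definitive fix:

```python
import numpy as np
from scipy.stats import entropy

def normalized_ent(data):
    """Entropy of the sample's empirical distribution, scaled to [0, 1].

    Dividing by ln(K), the entropy of a uniform distribution over the
    K distinct values, bounds the result to [0, 1]. A single distinct
    value is treated as zero entropy.
    """
    _, inverse = np.unique(data, return_inverse=True)
    p_data = np.bincount(inverse) / len(data)  # empirical probabilities
    k = len(p_data)                            # number of distinct values
    if k <= 1:
        return 0.0
    return entropy(p_data) / np.log(k)

freqArray = np.array([0.0, 1.00743814e-04, 2.01487627e-04,
                      2.20497985e+01, 2.20498993e+01, 2.20500000e+01])
print(normalized_ent(freqArray))  # 1.0 -- all six samples are distinct
```

Note that `scipy.stats.entropy` also accepts a `base` argument, so the same normalization can be written as `entropy(p_data, base=k)`. As the comments point out, for truly continuous float samples this is still only the entropy of the discretized empirical distribution, not of the underlying density.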

0 Answers