I'm confused about why two different bits of code that ought to be doing the same thing (at least when I run them through my mind) give different results. I've got three lists of values, called snr, m1_inj, and m2_inj, and I'm trying to remove all non-unique values from each list.
I can't use numpy.unique, because that sorts the values and so changes their order, which I can't do, since the values in each list are associated with the values at the same position in the other lists. Note that the values are numerical estimates to about 7 significant digits, so there's a negligible probability that two distinct quantities would produce identical numbers. Here are the two ways I've tried this:
method #1:
snr = [snr[i] for i in xrange(len(snr)) if snr[i] not in snr[:i]]
m1_inj = [m1_inj[i] for i in xrange(len(m1_inj)) if m1_inj[i] not in m1_inj[:i]]
m2_inj = [m2_inj[i] for i in xrange(len(m2_inj)) if m2_inj[i] not in m2_inj[:i]]
method #2:
new_m1 = [m1_inj[0]]  # seed with the first value, or it's never kept
for i in xrange(len(m1_inj) - 1):
    if m1_inj[i+1] != m1_inj[i]:
        new_m1.append(m1_inj[i+1])
new_m2 = [m2_inj[0]]
for i in xrange(len(m2_inj) - 1):
    if m2_inj[i+1] != m2_inj[i]:
        new_m2.append(m2_inj[i+1])
new_snr = [snr[0]]
for i in xrange(len(snr) - 1):
    if snr[i+1] != snr[i]:
        new_snr.append(snr[i+1])
Almost every time, the first method worked properly, but once in a blue moon one of the lists came out one item too short, even though all the lists have the same number of unique values. When I switched to the second method, the problem went away. Can anyone think of a reason why this might be? Let me know if I need to provide any more information.
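If it helps, here's a minimal made-up case where I'd expect the two methods to disagree, since method #1 drops a value that matches anything earlier in the list, while method #2 only drops a value that matches its immediate predecessor (I've used Python 3's range here):

```python
# Hypothetical data with a NON-adjacent duplicate: the 1.0 repeats at the end.
snr = [1.0, 2.0, 3.0, 1.0]

# Method #1: drop a value if it appeared anywhere earlier in the list.
method1 = [snr[i] for i in range(len(snr)) if snr[i] not in snr[:i]]

# Method #2: seed with the first value, then drop only values equal to
# their immediate predecessor, so the trailing 1.0 survives.
method2 = [snr[0]]
for i in range(len(snr) - 1):
    if snr[i + 1] != snr[i]:
        method2.append(snr[i + 1])
```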