
In just intonation theory, a type of music theory, intervals between notes are expressed with rational numbers. Doubling a frequency with a 2:1 ratio brings it up an octave, and a 1:1 ratio is no change: a unison. So if I have an interval n that is larger than an octave or smaller than a unison (it goes down), it is useful to 'justify' it. That is, to make 1 ≤ n ≤ 2. I've been doing this in Python with the following function:

from math import floor, log

def justify(n):
    return n / 2 ** floor(log(n, 2))

The actual function involves the fractions library, but this gets the job done with floats and ints. The log finds what power of 2 n is, and floor rounds it down so that the resulting divisor is the nearest power of 2 below n. I've also tried this:

def justify(n):
    return n / int( '1'.ljust( len( bin(n) ) - 2, '0' ), 2 )

This one just takes the length of the binary representation and pads zeroes based on that. Of course, that only works with ints. I was wondering if there is any way to perform this operation with bitwise operations. It seems like the binary would lend itself well to the power of 2 operation. At minimum, I would like to see a way to replace 2 ** floor( log(n,2) ) with something bitwise. Extra points if it can handle floats, but I understand that's more complicated.
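A quick sanity check of the floor/log approach; the ratios here are just illustrative examples from just intonation, not part of the question:

```python
from fractions import Fraction
from math import floor, log

def justify(n):
    # Divide by the largest power of 2 not exceeding n
    return n / 2 ** floor(log(n, 2))

# Two stacked perfect fifths, 3:2 * 3:2 = 9:4, exceed an octave;
# justified, they become the major second 9:8.
print(justify(Fraction(9, 4)))  # -> 9/8
print(justify(12))              # -> 1.5
```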

Dan D
  • I guess you want to have something like that: http://stackoverflow.com/questions/1322510/given-an-integer-how-do-i-find-the-next-largest-power-of-two-using-bit-twiddlin – agomcas Mar 04 '15 at 15:37
  • You could do `while n > 2: n = n/2` followed by `while n < 1: n = n*2`. It's not bitwise or O(1), but it does work for integers and floats and fractions. Just don't try to justify anything less than or equal to zero. – Kevin Mar 04 '15 at 15:37
  • For integers, look into the `bit_length` method: https://docs.python.org/2.7/library/stdtypes.html#int.bit_length. Specifically, you can replace `2**floor(log(n, 2))` with `1 << (n.bit_length() - 1)`. – Mark Dickinson Mar 04 '15 at 15:45
  • The base 2 logarithm of an integer is equivalent to the position of its highest bit set. It's possible to find this in O(log_2(n)) operations for an n-bit integer, but I suspect that the complicated bit twiddling involved will be slower than the built-in functions–lots of work done at the Python level rather than native code. – Slade Mar 04 '15 at 15:50
  • ... and BTW, for floats you want `math.frexp`: `2 * math.frexp(x)[0]` will give the 'justified' value of a float `x`. – Mark Dickinson Mar 04 '15 at 15:55
  • It looks like `2*frexp(n)[0]` works best for floats and `n / (1<<(n.bit_length()-1))` works best for ints. This will really optimize some of my work. – Dan D Mar 04 '15 at 17:44
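The identities suggested in these comments can be checked directly; a minimal sketch, assuming Python 2.7+ for `int.bit_length`:

```python
from math import floor, frexp, log

# For positive ints, bit_length() - 1 equals floor(log2(n))
for n in (1, 2, 3, 44, 1000):
    assert 1 << (n.bit_length() - 1) == 2 ** floor(log(n, 2))

# For floats, frexp(x) returns (m, e) with m in [0.5, 1) and x == m * 2**e,
# so doubling the mantissa gives the justified value in [1, 2).
print(2 * frexp(5.5)[0])  # -> 1.375
```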

2 Answers


math.frexp(x), as Mark Dickinson pointed out in the question comments, is the way to go:

from math import frexp

def justify(n):
    return 2 * frexp(n)[0]

It works with floats as well as integers.
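For example (the inputs here are just illustrative):

```python
from math import frexp

def justify(n):
    # frexp(n) returns (m, e) with n == m * 2**e and 0.5 <= m < 1,
    # so 2*m always lands in [1, 2)
    return 2 * frexp(n)[0]

print(justify(12))   # -> 1.5  (int input)
print(justify(0.7))  # -> 1.4  (float input)
```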

davidedb

For integers only, you can get there in a slightly devious way with the following:

def justify(n):
    return n / (1 << (n.bit_length() - 1))

I can't say definitively without significant testing, but a quick test with timeit shows it to be about twice as fast as your first snippet.

However, converting n to a float in the numerator (to get a float return) slows it to the same speed as your original.

def justify(n):
    return float(n) / (1 << (n.bit_length() - 1))

bit_length gives the minimum number of bits required to represent abs(n), which is one more than the exponent you want, hence the - 1 in the shift.
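A quick illustration of that off-by-one:

```python
n = 44  # binary 101100
print(n.bit_length())             # -> 6, bits needed to represent 44
print(1 << (n.bit_length() - 1))  # -> 32, the largest power of 2 below 44
```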

I would expect log(n,2) to be heavily optimized for powers of two in the base - and it's implemented in C. So you will have trouble beating it.

Possibly changing the denominator to 1<<int(log(n,2)) may give you better performance than the 2** approach, and it seems to be about 30% faster, giving:

def justify(n):
    return float(n) / (1<<int(log(n,2)))

It is possible to do it completely with bitwise operators:

def justify_bitwise(n):
    # Shift right until only the top bit remains, counting the shifts
    int_n = int(abs(n))
    p = 0
    while int_n != 1:
        p += 1
        int_n >>= 1

    return float(n) / (1 << p)

But timeit clocks this at 2.16 microseconds, an order of magnitude slower than using bit_length.

kdopen
  • No need for this deviousness. Use the `bit_length` method! (Assuming Python 2.7 or Python 3, that is.) – Mark Dickinson Mar 04 '15 at 15:43
  • You want `n.bit_length()`, not `int.bit_length(n)`. And I'd be quite surprised if `int(log(n, 2))` were faster than `n.bit_length() - 1`. On my machine, it's about a factor of 4 slower, but YMMV. – Mark Dickinson Mar 04 '15 at 16:00
  • `int.bit_length(n)` works too. And yes, it's the `1<<` vs the `2**` approach which seems to give an advantage. `timeit` gives 301ns for `2**int.bit_length(1234)` and 207ns for `1<<int.bit_length(1234)`. – kdopen Mar 04 '15 at 16:19
  • Yes, of course `int.bit_length(n)` works; it's just a peculiar way to call a method on an instance. :-) Do you also write `list.append(my_list, value)` rather than `my_list.append(value)` to append a value to a list? – Mark Dickinson Mar 04 '15 at 16:36
  • I was mucking around in ipython, and `1234.bit_length()` didn't work .. so, cut'n' paste :) – kdopen Mar 04 '15 at 16:38
  • Note also that in Python 2.7, `int.bit_length(n)` fails when `n` has type `long`. – Mark Dickinson Mar 04 '15 at 16:39