
I am attempting to replicate a DSP algorithm in Python that was originally written in C. The catch is that I also need to reproduce the behavior of the 32-bit fixed-point variables from the C version, including any numerical errors that the limited precision introduces.

These are the options I think are currently available:

I know the Python Decimal type can be used for fixed-point arithmetic, but as far as I can tell there is no way to constrain the size (bit-width) of a Decimal variable. To my knowledge numpy does not support fixed-point operations either.

I did a quick experiment to see how fiddling with the Decimal precision affected things:

>>> import decimal as dc
>>> import sys
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
>>> dc.getcontext().prec = 16
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> sys.getsizeof(a)
104

The output differs before and after the precision change, but the value still carries a large number of decimal places, and the object is still the same size.

How can I best go about achieving the original objective? I know that Python's ctypes module exposes C types such as c_float, but I do not know whether that is useful here. I am not sure there is even a way to accurately mimic C-style fixed-point math in Python.
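For reference, here is a quick, generic check of what ctypes gives you (nothing here is specific to my algorithm): a c_int32 does wrap to 32 bits like C, but it carries no binary point, so any Q-format scaling would still have to be handled by hand.

import ctypes

# ctypes.c_int32 stores a real 32-bit signed integer, so an out-of-range
# value is masked to 32 bits and wraps, like unchecked C arithmetic
v = ctypes.c_int32(2**31)      # one past INT32_MAX
print(v.value)                 # -2147483648

# but there is no notion of a fractional point; a Q16.16 value would
# still need explicit scaling and shifting around every operation
q = ctypes.c_int32(int(1.1 * 2**16))
print(q.value / 2**16)         # ~1.0999908..., the quantised 1.1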

Thanks!

alphasierra
  • C fixed point math: Isn't this simply calculating with integral types under the assumption that one increment represents a possible fractional value of a certain physical quantity? (E.g. +1 => +0.012 V) With this in mind, I don't see why it couldn't be done in Python as well. – Scheff's Cat Jun 16 '19 at 10:30
  • I believe my impression was not that bad: [SO: Fixed Point Arithmetic in C Programming](https://stackoverflow.com/a/11816993/7478597). ;-) – Scheff's Cat Jun 16 '19 at 10:34
  • @Scheff Yes, fixed-point arithmetic is essentially calculating with integer types, but it is required that the bit-width of the variables be known and unchanging. As I understand it, the OP is looking for a way to explicitly control the bit-width of the variables. – Elliot Alderson Jun 16 '19 at 17:10
  • Pretty much. If it came down to it I could write my own type that does the masking manually (see the sketch after these comments), but I was hoping there was something pre-existing that I could use. – alphasierra Jun 16 '19 at 17:26
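To make the comment thread concrete, here is a minimal sketch of that integer-based approach: a signed Q16.16 value held in a plain Python int and masked back to 32 bits after every operation. The names and the Q16.16 format are illustrative, not tied to the original algorithm.

WORD = 32          # total bits
FRAC = 16          # fractional bits (Q16.16)
MASK = (1 << WORD) - 1

def wrap(v):
    # reduce to 32 bits and reinterpret as signed, like C int32_t wrap-around
    v &= MASK
    return v - (1 << WORD) if v & (1 << (WORD - 1)) else v

def to_fix(x):
    # convert a float to Q16.16, truncating toward zero like a C cast
    return wrap(int(x * (1 << FRAC)))

def to_float(f):
    return f / (1 << FRAC)

def fix_add(a, b):
    return wrap(a + b)

def fix_mul(a, b):
    # full-width product, then shift back down to Q16.16
    return wrap((a * b) >> FRAC)

a = to_fix(1.1)
b = to_fix(2.5)
print(to_float(fix_mul(a, b)))   # ~2.75, with Q16.16 quantisation error

Wrapping on overflow mirrors what an unchecked C int32_t multiply-and-shift would do; a real implementation would need to match the C code's rounding and overflow handling exactly.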

1 Answer


I recommend the fxpmath module for fixed-point operations in Python. With it you can emulate fixed-point arithmetic by defining the precision in bits (word and fractional sizes). It supports arrays and a number of arithmetic operations.

Repo at: https://github.com/francof2a/fxpmath

Here is an example:

from fxpmath import Fxp

x = Fxp(1.1, True, 32, 16)   # (val, signed, n_word, n_frac)

print(x)
print(x.precision)

results in:

1.0999908447265625
1.52587890625e-05
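If the goal is to match the C code bit for bit, it may also help to pin down overflow and rounding behaviour and to re-quantise intermediate results back into the 32/16 format. A sketch of how that might look; the overflow='wrap' and rounding='trunc' keywords are my reading of the fxpmath options, so please verify the exact names against the repo:

from fxpmath import Fxp

# signed, 32-bit word, 16 fractional bits; wrap-around overflow and
# truncation chosen to mimic unchecked C integer behaviour (keyword
# names taken from the fxpmath docs -- double check against the repo)
x = Fxp(1.1, True, 32, 16, overflow='wrap', rounding='trunc')
y = Fxp(2.5, True, 32, 16, overflow='wrap', rounding='trunc')

# operations between Fxp objects may grow the result's word size, so
# re-quantise the product back into the 32/16 format explicitly
z = Fxp(x * y, True, 32, 16, overflow='wrap', rounding='trunc')

print(z)   # roughly 2.75, with Q16.16 quantisation error

Whether this lines up bit for bit with the original C depends on how that code shifts and rounds intermediate results, so it is worth checking a few known inputs against the C implementation.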
francof2a
  • How would you compare your library to https://github.com/rwpenney/spfpm and https://github.com/Schweitzer-Engineering-Laboratories/fixedpoint ? – Daniel Mar 08 '22 at 18:02
  • I am not deeply familiar with the capabilities of each of these libraries. I imagine there are many ways to compare them, from functionality to performance. I don't know whether your question is asking how to compare them or asking how they differ. The first is not in my plans, but it would be nice if someone did it (that's always good). As for the latter, from what I know, fxpmath supports working with Numpy and with numbers larger than 64 bits, and from what I can see the others do not. Maybe it is not as good in speed, I don't know. – francof2a Mar 09 '22 at 15:34
  • That's good enough for me. I need some library to run some experiments, and so far your `fxpmath` seems to be a good fit for the work. – Daniel Mar 10 '22 at 03:30
  • I had the same problem and went through the different libraries available. The jupyter notebook results are hosted as a github gist: https://gist.github.com/snhobbs/6441850a39c7c01b34468f9fee6b82a4 – Simon Hobbs Apr 11 '22 at 21:30
  • @SimonHobbs, this comparison is welcome! Have you reached any conclusions about which one is more convenient? – francof2a Apr 15 '22 at 21:10
  • For simulating a fixed type my preference is spfpm. It's got a complete set of defined operators, and when an operation causes an overflow it defaults to raising an exception. I'm sure the others are good too, but spfpm met all my requirements exactly with default settings. – Simon Hobbs Apr 16 '22 at 22:39