8

I have a shamefully naive question: What is the best way to convert a uint32_t to a double between 0 and 1?

My naive way is

double myconvert(uint32_t a)
{
    double n = static_cast<double>(std::numeric_limits<uint32_t>::max() - std::numeric_limits<uint32_t>::min());
    return static_cast<double>(a) / n;
}

But I was wondering if there is a better way?

Tom de Geus
  • Perhaps I'm old-fashioned, but I find mathematical code with `static_cast` hard to read. I prefer to promote implicitly all the coefficients in a term using `1.0` at the start. – Bathsheba Apr 20 '21 at 15:12
  • Hmm. Naïve *and* old-fashioned ... that's pushing it a bit. :-) – Adrian Mole Apr 20 '21 at 15:31
  • I don't understand the purpose: you are casting the uint32_t input to a double, then dividing it by the maximum uint32_t value converted to a double (bit pattern 0x41EFFFFFFFE00000). – user_number153 Apr 20 '21 at 16:02
  • Note that neither of the casts in the code in the question is needed. – Pete Becker Apr 20 '21 at 16:04
  • Does the range include 1.0 or not? If it doesn't, a minor modification is needed: think of it as mapping points on a circle to [0, 1), i.e. mapping the half-open range [0, 2π). – Red.Wave Apr 25 '21 at 16:43
  • @Red.Wave Good point, thanks. For me it would include 1.0. But indeed one should take care – Tom de Geus Apr 26 '21 at 07:58
  • If it is naive or could depend on the purpose. If the uint32 is a random number and the goal is to generate a corresponding random double between 0 and 1 I believe there are less naive algorithms for that which accounts for the rounding of the double to keep the randomness uniform. – Forss Apr 26 '21 at 11:16
  • @Forss Indeed, I'm using it to get a random number [0, 1]. I think this post could benefit for your considerations as an answer. It would be great if you would be willing to post one! – Tom de Geus Apr 26 '21 at 11:50
  • There are a lot of answers for that more specific question here https://stackoverflow.com/q/1340729/3918852 (see e.g. the answer by DarthGizka). – Forss Apr 27 '21 at 06:14
  • What do you consider better, in objective terms? – TylerH Apr 29 '21 at 18:09
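The half-open mapping discussed in the comments above can be sketched like this (the function name `to_unit_interval` is mine, not from the linked answer): `0x1.0p-32` is the hexadecimal floating-point literal for exactly 2⁻³², so every `uint32_t` maps to a distinct double in [0, 1) with uniform spacing.

```cpp
#include <cstdint>

// Sketch: 0x1.0p-32 is exactly 2^-32, so each uint32_t value maps to a
// distinct double in [0, 1); the maximum input yields 1.0 - 2^-32, never 1.0.
double to_unit_interval(std::uint32_t a)
{
    return a * 0x1.0p-32;
}
```
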

3 Answers

8

std::numeric_limits<uint32_t>::min() is 0. Removing the subtraction doesn't improve the generated assembly, since the value is known at compile time, but it does simplify the function.

Another potential improvement is to calculate the reciprocal of the divisor and use multiplication. You might think that the optimiser would do that conversion automatically, but it often can't with floating point, due to the strict rules of IEEE-754.

Example:

return a * (1.0 / std::numeric_limits<uint32_t>::max());

Note that both operands of the division that calculates the reciprocal are known at compile time, so the division is evaluated at compile time.

Inspecting the generated assembly shows that GCC does not do this optimisation automatically. It does if you use -ffast-math, at the cost of IEEE-754 conformance.

I checked Agner Fog's instruction tables for the Zen 3 architecture (chosen as an example): double division has roughly 3 times the latency of multiplication.

eerorika
6

std::numeric_limits<uint32_t>::min() is guaranteed to be zero (no unsigned integer is less than that), so you can simplify your formula considerably:

double myconvert(uint32_t a)
{
    return static_cast<double>(a) / std::numeric_limits<uint32_t>::max();
}
Adrian Mole
5

1.0 * a / std::numeric_limits<uint32_t>::max()

is one way.
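Wrapped in a full function, with the leading `1.0` doing the promotion to double (a sketch of the same expression):

```cpp
#include <cstdint>
#include <limits>

// The leading 1.0 promotes the whole expression to double,
// so no explicit casts are needed.
double myconvert(std::uint32_t a)
{
    return 1.0 * a / std::numeric_limits<std::uint32_t>::max();
}
```
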

Bathsheba