
I am porting an algorithm from C to Go, and I got a bit confused. This is the C function:

void gauss_gen_cdf(uint64_t cdf[], long double sigma, int n)
{
    int i;
    long double s, d, e;

    //Calculations ...

    for (i = 1; i < n - 1; i++) {
        cdf[i] = s;
    }
}

In the for loop, the value s is assigned to element i of the array cdf. How is this possible? As far as I know, a long double corresponds to a float64 (in the Go context). So I shouldn't be able to compile the C code, because I am assigning a long double to an array that only contains uint64_t elements. But the C code works fine.

So can someone please explain why this is working?

Thank you very much.

UPDATE:

The original C code of the function can be found here: https://github.com/mjosaarinen/hilabliss/blob/master/distribution.c#L22

  • Even in C your code is not meaningful and probably has some [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior). So improve the C code to make it standard compliant. – Basile Starynkevitch Jul 18 '17 at 05:18
  • Why not just use an existing implementation of a gaussian distribution in Go (such as [go-gaussian](https://github.com/chobie/go-gaussian/)) instead of porting C code again (and introducing your own bugs)? – dolmen Jul 19 '17 at 13:21

2 Answers


The assignment cdf[i] = s performs an implicit conversion to uint64_t. It's hard to tell if this is intended without the calculations you omitted.

In practice, long double as a type varies considerably across architectures. Whether Go's float64 is an appropriate replacement depends on the architecture you are porting from. For example, on x86, long double is an 80-bit extended-precision type, but Windows systems are usually configured to compute results with only a 53-bit mantissa, which means that float64 could still be equivalent for your purposes.

EDIT In this particular case, the values computed by the sources appear to be static and independent of the input. I would just use float64 on the Go side and check whether the computed values are identical to those of the C version when run on an x86 machine under real GNU/Linux (virtualization should be okay); this works around the Windows FPU issue described above. The choice of x86 is just a guess; it is likely what the original author used. I do not understand the underlying cryptography, so I can't say whether a difference in the computed values impacts the security. (Also note that the C code does not seem to properly seed its PRNG.)
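
As a rough sketch of that suggestion (the function and variable names below are placeholders mirroring the C code, and the actual calculations are still omitted), the ported loop could look like this:

```go
package main

import "fmt"

// gaussGenCDF mirrors the C signature, with float64 standing in for
// long double. The calculations from the C source are omitted here,
// just as they are in the question.
func gaussGenCDF(cdf []uint64, sigma float64) {
	var s float64

	// ... compute s from sigma as in the C code ...

	for i := 1; i < len(cdf)-1; i++ {
		// Go has no implicit conversions, so the truncation that C
		// performs silently must be written out explicitly.
		cdf[i] = uint64(s)
	}
}

func main() {
	cdf := make([]uint64, 8)
	gaussGenCDF(cdf, 3.33)
	fmt.Println(cdf) // compare against the table printed by the C version
}
```

Filling the table in both versions and diffing the printed values is the comparison described above.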

Florian Weimer
  • I updated the question and added a link to the original source code. Thank you for the explanation regarding the different architectures. –  Jul 18 '17 at 11:57
  • Thanks, I had a brief look at the C sources and updated my answer. – Florian Weimer Jul 18 '17 at 12:11

C long double in golang

The title suggests an interest in whether or not Go has an extended-precision floating-point type similar to long double in C.

The answer is: no, Go has no built-in extended-precision floating-point type; the closest thing in the standard library is the arbitrary-precision big.Float type in math/big.


Why is this working?

long double s = some_calculation();
uint64_t a = s;

It compiles because, unlike Go, C allows certain implicit type conversions. Only the integer portion of the floating-point value of s is copied. Presumably the value of s has been scaled so that it can be interpreted as a fixed-point value where, based on the linked library source, 0xFFFFFFFFFFFFFFFF (2^64-1) represents 1.0. To make the most of such assignments, it may be worthwhile to use an extended floating-point type with 64 bits of precision.
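
For comparison, here is a small stand-alone Go snippet (not taken from the library) illustrating both points: the conversion keeps only the integer part, and a probability in [0, 1) can be stored as a fixed-point value by scaling it so that 2^64 corresponds to 1.0:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	s := 3.75
	fmt.Println(uint64(s)) // prints 3: the fractional part is simply dropped

	// Fixed-point encoding of a probability p, with the full uint64 range
	// representing 1.0 (the scheme inferred above from the linked source).
	// Fine for example values like this one; p very close to 1.0 needs
	// more care, since p*2^64 may not fit in a uint64.
	p := 0.25
	fixed := uint64(math.Ldexp(p, 64)) // p * 2^64
	fmt.Printf("%#x\n", fixed)         // 0x4000000000000000
}
```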

If I had to guess, I would say that the (crypto-related) library uses fixed-point here because it wants to ensure deterministic results; see: How can floating point calculations be made deterministic?. And since the extended-precision floating point is only used for initializing a lookup table, the (presumably slow) math/big package would likely perform perfectly well in this context.
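
A minimal sketch of that idea, assuming the table entry is some cumulative probability in [0, 1) (the value below is made up; the real code would compute it from the Gaussian CDF using big.Float arithmetic):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	const prec = 128 // more mantissa bits than x86's 64-bit long double

	// Stand-in for one cumulative probability computed with big.Float.
	p := new(big.Float).SetPrec(prec).SetFloat64(0.6914624612740131)

	// Scale by 2^64 so that 1.0 maps to the top of the uint64 range,
	// then truncate toward zero, like the C assignment does.
	p.SetMantExp(p, 64)
	fixed, _ := p.Uint64()
	fmt.Printf("%#x\n", fixed)
}
```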

Brent Bradburn