I am trying to implement 64-bit arithmetic in WebGL/WebGL2 shaders, built on 32-bit floats. One of the basic building blocks is a function that splits a float into two "non-overlapping" floats: the first part holds the high half of the original mantissa bits, and the second part holds the remaining low bits. Here is my implementation of this function:
precision highp float;
...
...
vec2 split(const float a)
{
    const float split = 4097.0; // 2^12 + 1
    vec2 result;
    float t = a * split;        // almost 4097 * a
    float diff = t - a;         // almost 4096 * a
    result.x = t - diff;        // almost a
    result.y = a - result.x;    // very small remainder
    return result;
}
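For context, this is the classic Veltkamp split used in double-float arithmetic: with the constant 2^12 + 1, `result.x` keeps roughly the upper half of the 24-bit mantissa and `result.y` the remaining lower bits, so the two parts recombine exactly. A quick sanity check (my own test sketch, assuming the compiler does not optimize the check itself away) would be:

```glsl
// Hypothetical sanity check: the two parts must recombine exactly,
// and the low part must be much smaller than the high part.
vec2 parts = split(0.1);
bool exact = (parts.x + parts.y == 0.1);    // non-overlapping split loses nothing
bool small = (abs(parts.y) < abs(parts.x)); // y holds only the trailing bits
```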
The function works as expected when I pass it a value defined directly in the shader:
precision highp float;
...
...
float number = 0.1;
vec2 splittedNumber = split(number);
if (splittedNumber.y != 0.0)
{
    // color with white
    // we step here and see the white screen
}
else
{
    // color with black
}
But as soon as the number depends in any way on a uniform, the behavior changes:
precision highp float;
uniform float uniformNumber;
...
...
float number = 0.2;
if (uniformNumber > 0.0)
{
    // uniformNumber is positive,
    // so we step here
    number = 0.1;
}
vec2 splittedNumber = split(number);
if (splittedNumber.y != 0.0)
{
    // color with white
}
else
{
    // color with black
    // we step here and see the black screen
}
So in the second situation, where number depends on a uniform, the split function apparently gets optimized and returns a vec2 whose y component is zero.
There is a similar question on Stack Overflow about the same problem in desktop OpenGL: "Differing floating point behaviour between uniform and constants in GLSL". The suggestion there was to use the precise qualifier inside the split function. Unfortunately, WebGL/WebGL2 shaders have no such qualifier.
Do you have any suggestions on how to get rid of these optimizations in my case and implement the split function reliably?
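To clarify what kind of workaround I am looking for: one direction I could imagine (an untested sketch; splitConst would be a new uniform that I set to 4097.0 from JavaScript) is routing the magic constant through a uniform, so that the compiler cannot constant-fold the whole expression at compile time:

```glsl
// Hypothetical variant: feed the constant through a uniform so the
// compiler cannot fold (a * 4097.0 - a) into a single higher-precision
// expression during compilation.
uniform float splitConst; // set to 4097.0 (2^12 + 1) from JavaScript

vec2 splitU(float a)
{
    vec2 result;
    float t = a * splitConst;
    float diff = t - a;
    result.x = t - diff;
    result.y = a - result.x;
    return result;
}
```

I am not sure, though, whether drivers are still allowed to contract these operations (e.g. into fused multiply-adds) at runtime, which is why I am asking.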