I am wondering why OpenGL makes use of float rather than double in its functions. Double should be much more accurate than float.
-
A noticeable share of existing hardware still does not support double precision at all. Now consider that OpenGL has been around for 20 years... Also, until 2-3 years ago, at least one major card manufacturer did not even support 32-bit float precision. Fact is, nobody ever noticed the difference. – Damon Apr 04 '12 at 14:21
-
That's true, how can you notice the difference? If you cannot even notice it in your bank account when it's a float, how could you notice picture differences with the naked eye? – Eric Yin Apr 04 '12 at 14:24
-
Besides, most of the time when people want to use `double` instead of `float`, they are doing something fundamentally wrong (such as using double precision because one wants to render at planetary scale). – Damon Apr 04 '12 at 14:24
-
@EricYin On a bank account, one can notice whether it's a binary floating point number (i.e. any of float, double, or the like). Even with 80-bit floats, some rounding errors occur that wouldn't occur with decimal floating point numbers. (Of course, decimal floats still have inaccuracies, probably including some that binary floats don't have, but they're well-known to and expected by everyone.) – Apr 04 '12 at 14:27
-
@EricYin: Money in your bank account is a very good example of "doing fundamentally wrong". This is __not__ something you would use either `float` or `double` for (though it's a frequent beginner mistake to assume that). Only fixed-point, never anything different. Floating point math is by principle unable to represent some numbers exactly, or do exact math with them. And while you won't notice your account being wrong by 1/1000 cent, the bank _will_ notice if that's the case for 50 million customers, or after a billion transactions in a month. – Damon Apr 04 '12 at 14:28
-
@EricYin: If your bank account uses floats, the programmers should be fired and the bank should be closed for the greater good. Floats are not suitable for financial calculations. You HAVE to use bignums there. – SigTerm Apr 04 '12 at 15:41
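To make the comments' point concrete, here is a minimal C sketch (illustrative values only, not from the discussion above): accumulating cents as binary floats drifts away from the exact total, while counting integer cents stays exact.

```c
#include <stdio.h>

int main(void)
{
    float  balance_f     = 0.0f; /* binary float: cannot represent 0.01 exactly */
    double balance_d     = 0.0;  /* more bits, same fundamental problem         */
    long   balance_cents = 0;    /* fixed-point: count whole cents, always exact */

    for (int i = 0; i < 1000; ++i) {
        balance_f     += 0.01f;
        balance_d     += 0.01;
        balance_cents += 1;
    }

    /* The float/double totals are typically slightly off from 10; the cent
     * counter is exactly 1000, i.e. 10.00. */
    printf("float : %.10f\n", balance_f);
    printf("double: %.15f\n", balance_d);
    printf("cents : %ld\n", balance_cents);
    return 0;
}
```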
2 Answers
In the past, many OpenGL functions did have a `double` variant. `glMultMatrix`, for example, has `f` and `d` variations. Most of these don't exist anymore, but that has nothing to do with `float` vs. `double`: `glMultMatrixd` and `glMultMatrixf` are both gone in core GL 3.1 and above.
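For illustration, a minimal sketch of how the two legacy variants were called (compatibility-profile code; the identity matrices are just placeholder data):

```c
#include <GL/gl.h>

/* Column-major 4x4 identity matrices, one per precision. */
static const GLfloat  identity_f[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
static const GLdouble identity_d[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

void multiply_current_matrix(void)
{
    glMultMatrixf(identity_f); /* float variant                                  */
    glMultMatrixd(identity_d); /* double variant; both are gone from core GL 3.1+ */
}
```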
In core OpenGL, there are still functions that have `double` variants. `glDepthRange` takes `double`, though there is also a `float` version (introduced mainly for GL ES compatibility). And there are functions that have no `double` variant at all, like `glBlendColor`.
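To make the contrast concrete, a small sketch of those calls (assuming a context and loader, e.g. GLAD or GLEW, that expose `glDepthRangef`, which arrived with GL 4.1 / ARB_ES2_compatibility):

```c
#include <GL/gl.h>

void set_some_fixed_state(void)
{
    glDepthRange (0.0,  1.0);               /* takes GLdouble arguments            */
    glDepthRangef(0.0f, 1.0f);              /* float variant, added for ES parity  */
    glBlendColor (0.2f, 0.4f, 0.6f, 1.0f);  /* GLfloat only; no double variant     */
}
```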
Sometimes, OpenGL is just being inconsistent. Other times, it is simply following a reasonable principle: not lying to the user.
Take `glBlendColor`. If you could pass it double-precision values, that would imply that floating-point blending took place with double-precision accuracy. Since it most certainly does not (on any hardware that exists), providing an API that offers that accuracy is a tacit lie to the user: you're feeding high-precision values into a low-precision operation. The same logic applies to `glDepthRange` (double-precision depth buffers are not available), yet it takes `double`s. So again, inconsistency.
The `glUniform*` suite of functions is a much better example. They set state into the current program object. Until GL 4.0, the `double` versions did not exist. Why? Because that would have been a lie. GLSL pre-4.0 did not allow you to declare a `double`, for the simple and obvious reason that no pre-4.0 hardware could implement it. There's no point in allowing the user to create a `double` if the hardware couldn't handle it.
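A small sketch of what that looks like from the C side; the program object and uniform names here are hypothetical, and `glUniform1d` requires a GL 4.0 context (ARB_gpu_shader_fp64) with a shader that declares `uniform double` under `#version 400`:

```c
#include <GL/gl.h>

void set_uniforms(GLuint program)
{
    /* GLSL: uniform float  u_scale;    (any GLSL version)    */
    /* GLSL: uniform double u_precise;  (needs #version 400+) */
    GLint loc_scale   = glGetUniformLocation(program, "u_scale");
    GLint loc_precise = glGetUniformLocation(program, "u_precise");

    glUseProgram(program);
    glUniform1f(loc_scale,   0.5f); /* float setter, available since GL 2.0 */
    glUniform1d(loc_precise, 0.5);  /* double setter, only since GL 4.0     */
}
```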

-
1"`glViewport` takes `double`" - I guess you meant `glDepthRange` since `glViewport(GLint, GLint, GLsizei, GLsizei)`. – plasmacel Dec 17 '18 at 15:30
Because most of the time you don't need the precision and doubles are twice the size.
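A minimal sketch of the size half of that argument (vertex count chosen arbitrarily): the same attribute data stored as `double` costs twice the memory and bandwidth.

```c
#include <stdio.h>
#include <GL/gl.h>

int main(void)
{
    enum { VERTEX_COUNT = 100000, COMPONENTS = 3 }; /* e.g. 100k positions (x, y, z) */

    printf("as GLfloat : %zu bytes\n",
           (size_t)VERTEX_COUNT * COMPONENTS * sizeof(GLfloat));   /* typically 1,200,000 */
    printf("as GLdouble: %zu bytes\n",
           (size_t)VERTEX_COUNT * COMPONENTS * sizeof(GLdouble));  /* typically 2,400,000 */
    return 0;
}
```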
