I hear that GL_QUADS are going to be removed in OpenGL versions > 3.0. Why is that? Will my old programs not work in the future? I have benchmarked, and GL_TRIANGLES and GL_QUADS show no difference in render speed (GL_QUADS might even be faster). So what's the point?
-
You have benchmarked the two in *your* test program, with *your* hardware and *your* GPU. Don't assume that your conclusion holds for *all* GPUs now and forever. – jalf Jul 10 '11 at 22:24
-
@jalf: I didn't downvote it, but it's not really an _answerable_ question. It's purely speculative; unless any of us are actually sitting members of the Khronos OpenGL ARB, any answer to the question of why quads were removed would be guesswork. – Nicol Bolas Jul 10 '11 at 22:27
-
@jalf Kinda harsh comment for a very particular statement. I really didn't find that assumption in OP's question. – Captain Giraffe Jul 10 '11 at 22:29
-
@Captain Giraffe: Why did he mention benchmarks? He used it to support his point that removing GL_QUADS would be a mistake. Hence the "So what's the point?" line. – Nicol Bolas Jul 10 '11 at 22:30
-
@Captain: How is it harsh? I'm pointing out something he may or may not have taken into consideration. But he says that he has benchmarked the two and found no difference in render speed. Don't you think it's relevant to point out that his results might not apply as generally as he thought? – jalf Jul 10 '11 at 22:30
-
Well, that wasn't my intention at least. :) – jalf Jul 10 '11 at 22:35
3 Answers
The point is that your GPU renders triangles, not quads, and it is pretty much trivial to construct a rectangle from two triangles, so the API doesn't really need to be burdened with the ability to render quads natively. OpenGL is going through a major trimming process, cutting a lot of functionality that made sense 15 years ago but no longer matches how the GPU works, or how the GPU is ever going to work. The fixed-function pipeline is gone from the latest versions too, I believe, because, once again, it's no longer necessary and it no longer matches how the GPU works (programmable shaders).
The point is that the smaller and tighter the OpenGL API can be made, the easier it is for vendors to write robust, high-performance drivers, and the easier it is to learn to use the API correctly and efficiently.
A few years ago, practically anything in OpenGL could be done in 3-5 different ways, which put a lot of burden on the developer to figure out which implementation was the right one for optimal performance.
So they're trying to streamline the API.
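To make the "two triangles" point concrete, here is a minimal sketch (the array names and layout are purely illustrative, not from the answer) of a rectangle expressed as 4 unique vertices shared by 2 indexed triangles:

```c
/* Illustrative sketch: one rectangle as 4 unique vertices reused by
   two triangles through an index list. */
static const float quadVertices[] = {
    0.0f, 0.0f,   /* 0: bottom-left  */
    1.0f, 0.0f,   /* 1: bottom-right */
    1.0f, 1.0f,   /* 2: top-right    */
    0.0f, 1.0f,   /* 3: top-left     */
};

static const unsigned short quadIndices[] = {
    0, 1, 2,      /* first triangle  */
    0, 2, 3,      /* second triangle */
};
```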

-
Nice answer, but what about the different versions of OpenGL? E.g. OpenGL 1 vs 2 vs 3 vs 4? – pst Jul 10 '11 at 22:28
-
@pst: what about them? I'm not sure what you mean. Older versions of the API obviously aren't affected by what happens in later versions. – jalf Jul 10 '11 at 22:31
-
@Nicol Bolas In terms of evolution and "simplification", mainly. It would just be interesting to also have a resource or a small summary of trends -- some already in the post, but not tied to a version for reference. – pst Jul 10 '11 at 22:32
-
@pst: I still don't see what you're getting at. What do you want to know? – jalf Jul 10 '11 at 22:32
-
By v3.0, control of OpenGL was transferred to the Khronos group, following nearly a decade of stagnation and very minor updates. Since then, Khronos has been trying to 1) catch up with DX in terms of features and functionality, and 2) streamline the API, getting rid of old cruft that doesn't make sense on modern GPUs. – jalf Jul 10 '11 at 22:37
-
@jalf: Technically, there wasn't a decade of stagnation and minor updates; GLSL is hardly minor. The only reason Khronos got the ARB (and most Khronos ARB members were members of the original) was because SGI, the former owners of the ARB, was dying. No, the stagnation happened when the ARB decided to try to rewrite the whole API. Which took 2 years, and didn't actually _happen_. – Nicol Bolas Jul 10 '11 at 22:45
-
@Nicol: Please, how long did it take them to get 2.0 out the door? 2.1? OpenGL was *hugely* behind Direct3D before Khronos took over. It was a fossil. Yes, they managed to get GLSL out the door, but still lightyears behind the state of HLSL and Cg at the time. – jalf Jul 11 '11 at 05:07
-
@jalf: There is a difference between "behind the state of" and "getting it out there." The problems with GLSL were not because it took a long time. It was because a number of terrible decisions were made in its formation and standardization. If the right decisions had been made, it would have taken just as long to produce, but the outcome would have been better. Time wasn't the main problem with GLSL; _GLSL_ was the main problem with GLSL. Also, HLSL and Cg were all released within a year or so of GLSL, so it wasn't terribly far behind. – Nicol Bolas Jul 11 '11 at 05:16
-
@Nicol GLSL, HLSL and Cg aren't just singular entities. All three came in many different versions, with different features and limitations. And I'm not sure why you're choosing to focus on GLSL specifically. Like I said, we're looking at something close to a decade of very slow progress. The fact that they managed to get a shading language out the door doesn't mean that OpenGL *as a whole* suddenly no longer looked like a fossil. I'm sorry if you're emotionally invested in the API, but I'm not making it up when I'm saying they were completely out of tune with the 3d world for a number of years. – jalf Jul 11 '11 at 13:09
-
@jalf: I agree that OpenGL was "out of tune with the 3d world" for a time. I even wrote a [semi-popular answer](http://programmers.stackexchange.com/questions/60544/why-do-game-developers-prefer-windows/88055#88055) about their failings. The problem is that you first said that there was "a decade of stagnation and very minor updates," which is a very different thing. The ARB's problem wasn't failing to update the API; the ARB's problem was updating it _wrongly_. It isn't _stagnation_ if you're going downhill... – Nicol Bolas Jul 11 '11 at 20:23
-
@jalf, I've been testing between triangles and quads a lot now, and every time I notice quads are faster. Does this make sense in terms of how GPUs work? Could quads actually be optimized inside the GPU better than triangles? Or could it be something to do with the memory manager: triangles take more memory. But I have set my VBO buffers to the same size though... maybe it's about how many buffers I render then? – Rookie Jul 12 '11 at 13:10
-
@jalf, tested more now; no matter what settings I have, GL_QUADS is always faster. I have tried setting all buffer counts to the same, all buffer sizes to the same, and so on, but GL_QUADS always wins. The lowest difference between the two I got when I set them to use the exact same number of buffers: 33 fps with triangles, 36 fps with quads. So, has anyone else done such tests? This makes me wonder why they want to get rid of this if it's even faster. – Rookie Jul 12 '11 at 13:34
-
@Rookie: but once again, you haven't tested it with different GPUs under different OSes. How can you be sure it's faster in *all* those cases? Another important point is that OpenGL translates your quads to triangles *anyway*, because the GPU can only render triangles. So in this case, OpenGL doesn't do anything you couldn't have done yourself just as efficiently. So if your code shows quads being faster, then it sounds like your triangle rendering code is just not as well written as your quad rendering ditto. :) – jalf Jul 12 '11 at 14:47
-
And that is why it's getting removed. 15 years ago, quads were just as good as triangles, because you either rendered on the CPU, which has no dedicated hardware for either case, or on various GPU-like hardware, some of which supported quads natively. But today, *no* GPU supports quads natively. So OpenGL's support for quads in a modern implementation boils down to it internally cutting every quad in half, and rendering the resulting triangles. There's just no need for the 3d API to do that. If you really want quads, you could easily define an external helper library. – jalf Jul 12 '11 at 14:49
-
@jalf, my quad/triangle rendering codes are identical, except that I push 2 extra vertices in the triangle rendering code, which then results in larger buffer(s). Also, wouldn't it make sense that splitting the quad in half would actually be faster because it uses 2 fewer vertices of memory access? If you have Vertex, Color, TexCoord, Normal, VertexAttr, then it is wasting _a lot_ of time reading those when it could use the already-read values... not to mention how much you would save in memory size! – Rookie Jul 12 '11 at 17:20
-
@Rookie: no, because the splitting happens *no matter what*. Once again, the GPU doesn't understand quads. So the choice is: either you split the quad, or OpenGL does it for you (on the CPU side). And if your code is "exactly the same", then that is your problem. You don't need to send more vertices. Upload a buffer which contains (probably among a number of other vertices) the 4 vertices that define your quad. Then upload an index buffer which contains the 6 vertex indices needed to render the two triangles. If you're calling the API inefficiently, it'll skew your results. – jalf Jul 12 '11 at 19:22
-
@jalf, but I don't want to send anything to the GPU... I thought that's the whole point of using a VBO: the data is already there, no need to send anything, unlike with vertex arrays. – Rookie Jul 13 '11 at 10:53
-
@Rookie: the data is only "already there" if it was previously sent to the GPU. But let me rephrase then. In order to render a rectangle, the GPU needs four vertices in a single VBO, and an index buffer containing (at least) 6 indices. The GPU only sees each unique vertex once, but for some of them (the ones that are part of both triangles), it encounters two indices pointing to it. So the GPU never sees 6 separate vertices, which would have been costlier as you said. It sees four vertices, exactly as it would in the quad case. – jalf Jul 13 '11 at 11:04
-
So you give the GPU all the information describing your 4 vertices, and then you tell it "draw a triangle from vertices 0, 1 and 2", and then "draw a triangle from vertices 1, 3 and 2". That's what your triangle rendering code *should* be doing, and it is exactly what your quad rendering code makes OpenGL do for you. – jalf Jul 13 '11 at 11:06
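As a rough sketch of the draw call described in these comments (assuming a VBO with the 4 vertices and an index buffer holding the 6 indices {0, 1, 2, 1, 3, 2} are already created and bound; none of this is code from the thread):

```c
/* Sketch only: a vertex buffer with 4 vertices and an element array
   buffer with the indices {0, 1, 2, 1, 3, 2} are assumed to be bound. */
glDrawElements(GL_TRIANGLES,        /* the GPU renders plain triangles   */
               6,                   /* 6 indices -> 2 triangles          */
               GL_UNSIGNED_SHORT,   /* type of the indices in the buffer */
               (const void *)0);    /* byte offset into the index buffer */
```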
-
@jalf, but is it possible to upload the indices to the GPU just once? I couldn't find a function for an index pointer (glIndexPointer was for colors); is the only way to do that to send them again every frame with glDrawElements()? Even just uploading indices would take 150MB/s in my app, and I really want to take advantage of the VBO in the sense of "upload once", and use them kind of like display lists, except that I can easily modify them. – Rookie Jul 13 '11 at 12:53
-
@Rookie: Yes it is possible, and it is standard practice, even. It's been a while since I used OGL, but I think you're simply looking for Index Buffer Objects (IBOs). Nothing prevents you from making your VBOs "upload once", and in fact they *should be* as far as possible. But you're mixing two different discussions here. The way OpenGL renders a quad is, on the CPU side, to split it into 4 vertices and 6 indices, upload those, and render two triangles from them. And that is exactly what you should do as well if you don't use GL_QUADS, so there is zero extra overhead. – jalf Jul 13 '11 at 13:01
-
The other issue is performance, and here, of course, you should try to "upload once" as far as possible. Both your vertices and indices can likely be uploaded once and then rendered many, many times, and that is exactly what you should do. But that's true for triangles as well as quads. – jalf Jul 13 '11 at 13:02
-
I just googled it; the index buffer is created with `glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ...)`. That should get you started. – jalf Jul 13 '11 at 13:03
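A sketch of that "upload once" index buffer, using the core (non-ARB) entry points, which behave the same as the ARB-suffixed ones mentioned above; the handle and array names are illustrative:

```c
/* Upload the indices once at startup... */
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             sizeof(quadIndices),    /* e.g. the 6 indices per quad      */
             quadIndices,
             GL_STATIC_DRAW);        /* uploaded once, drawn many times  */

/* ...then each frame: bind the VBO + IBO and call glDrawElements();
   no per-frame re-upload of the index data is needed. */
```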
-
Thanks for your help and patience with my questions; I'll take your advice. – Rookie Jul 13 '11 at 14:04
-
@jalf: "an index buffer containing (at least) 6 indices" — an old comment, but would it not be possible to send 5 indices by sending a TRIANGLE_STRIP using 4 indices and then a resetting index? – Neil G Mar 05 '14 at 22:15
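A sketch of what this comment suggests, assuming OpenGL 3.1+ primitive restart (the index values are illustrative):

```c
/* One quad as a 4-index triangle strip followed by a restart marker,
   so the next quad begins a fresh strip: 5 indices per quad in total. */
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(0xFFFF);    /* reserved "reset" value            */

/* The bound index buffer would then contain, per quad:
   { 0, 1, 3, 2, 0xFFFF }  -- strip order, then the restart index. */
glDrawElements(GL_TRIANGLE_STRIP, 5, GL_UNSIGNED_SHORT, (const void *)0);
```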
-
Quads aren't exactly interchangeable. The interpolation is different: [Low polygon cone - smooth shading at the tip](http://stackoverflow.com/questions/15283508/low-polygon-cone-smooth-shading-at-the-tip) – jozxyqk Apr 23 '14 at 07:21
-
What about triangle strips and triangle fans? They are made of triangles too, so applying the same logic, they should be deprecated too :q – SasQ Oct 29 '20 at 13:42
People have already answered your question quite well. On top of their answers, one of the reasons GL_QUADS is being deprecated is the undefined nature of quads.
For example, try to model a 2D square with the points (0,0,0), (1,0,0), (1,1,1), (0,1,0). This is a flat quad with one corner dragged up. It is impossible to draw a NORMAL flat square that way. Depending on the driver, it will be split into 2 triangles either one way or the other, which we can't control. Such a model MUST be modeled with two triangles. All three points of a triangle always lie on the same plane.
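To illustrate the ambiguity, here is that quad and its two possible triangulations (a sketch; the array names are illustrative):

```c
/* The non-planar "quad" from the example above. */
static const float v[4][3] = {
    { 0.0f, 0.0f, 0.0f },   /* 0 */
    { 1.0f, 0.0f, 0.0f },   /* 1 */
    { 1.0f, 1.0f, 1.0f },   /* 2: the corner dragged up */
    { 0.0f, 1.0f, 0.0f },   /* 3 */
};

/* Split A: the diagonal runs from vertex 0 to vertex 2. */
static const unsigned short splitA[] = { 0, 1, 2,   0, 2, 3 };

/* Split B: the diagonal runs from vertex 1 to vertex 3.
   This produces a visibly different surface, and GL_QUADS does not
   specify which split the driver picks. */
static const unsigned short splitB[] = { 0, 1, 3,   1, 2, 3 };
```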
-
Quads don't have an "undefined nature" just because you can use them to define something that isn't geometrically FLAT. Who said a poly has to be flat? If the programmer wants it flat, he can make it flat. It'd simply be the programmer's naivety to think points not all on the same plane could make a flat face. And you still can't achieve that even with triangles, so bringing this up was irrelevant. They were obviously only ever gonna use tris for all their polys. They HAVE TO. Interpolation is FLAT. Quads were just a way of putting two triangles together with 4 verts instead of 6. – Puddle Sep 27 '19 at 13:28
-
This answer is mostly correct, except that some hardware, e.g. the SEGA Saturn, could render such a quad without any problems. However, newer hardware can't render quads, so it emulates them with two triangles, which will give problems in some cases like the one mentioned. – Lee Jun 02 '20 at 19:35
It isn't "going" to be anything. As with a lot of other functionality, GL_QUADS was deprecated in version 3.0 and removed in version 3.1. Obviously this is all irrelevant if you create a compatibility context.
Any answer that anyone might give for the reason for deprecating them would be sheer speculation.
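For reference, a minimal sketch of requesting such a compatibility context, assuming GLFW 3 (GLFW is an assumption here, not something the answer mentions; any context-creation API with profile selection works similarly):

```c
#include <GLFW/glfw3.h>

int main(void) {
    /* Sketch: request a 3.2 compatibility profile, in which removed
       functionality such as GL_QUADS remains available. */
    if (!glfwInit())
        return -1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    GLFWwindow *window = glfwCreateWindow(640, 480, "quads demo", NULL, NULL);
    /* ... render with the legacy functionality here ... */
    (void)window;
    glfwTerminate();
    return 0;
}
```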


-
I'm not sure answers for deprecating them are entirely speculation ;-) In any case, it is much more aggressive to actually remove a feature in a "minor". – pst Jul 10 '11 at 22:30
-
@pst: Major versions exist now to correspond with actual new hardware. GL 3.2 can be implemented on the same hardware as 3.1 and as 3.0. But 4.0 cannot be implemented on the same hardware as 3.3. Sort of like Direct3D major versions nowadays. – Nicol Bolas Jul 10 '11 at 22:33