
So recently, with all the commotion about the new APIs for 3D graphics that supposedly accelerate graphics by a huge amount, I am wondering why this hasn't been done before. What is so different between the old way and the new way?

I would like to know how it is achieved, and maybe even get an overview of how it works, from the GPU to the driver to the API (DirectX or OpenGL).

rowan.G
  • I think the question you're asking is way too broad. I suggest concentrating on only one thing (for example, _why is Mantle supposed to be faster than DirectX or OpenGL_) to get a meaningful answer. – DarkDust Feb 22 '15 at 16:15
  • Mantle does not do anything that DirectX and OpenGL will not eventually do through extensions. It's more or less a stop-gap to lower API overhead and, ironically, it only seems effective in CPU-bound applications running on AMD CPUs. – Andon M. Coleman Feb 22 '15 at 16:26
  • I assumed Mantle, DX12 and OGL Next all do basically the same thing. Is this wrong? – rowan.G Feb 22 '15 at 16:49
  • You're correct, but Mantle has a much thinner layer between the application and the driver. The upcoming DirectX 12 will provide something similar to Mantle, as will glNext, which (apparently) Valve is going to unveil at GDC on March 5th (2015). – Robinson Feb 23 '15 at 13:40
  • Do you know some 3D programming in any of those APIs? If so, you could check out how a low-level GPU API works here: https://research.ncl.ac.uk/game/mastersdegree/workshops/ps3introductiontogcm/ I doubt glNext will be like that (especially the idea of having to align textures by hand), but it gives you a good idea of how far the rabbit hole goes :D – TheStack Feb 24 '15 at 11:40

2 Answers


The main thing that has happened is that GPUs themselves have become pretty standardized, even across the various vendors. DirectX and OpenGL have historically put a lot of effort into hiding the differences between the various vendors' GPU designs, many of which in the past had quite distinct hardware architectures; OpenGL has always been a much 'leakier' abstraction, with all those zillions of vendor-specific extensions.

DirectX 12 and Mantle both expose a lot more of the underlying behavior of the GPU than past APIs did, so the runtime libraries are a much thinner abstraction over the underlying driver. This approach won't work on every video card, particularly many of the older ones that traditional DirectX and OpenGL could support, with OpenGL carrying a huge legacy tail of support going back decades. The push to normalize GPU features for DirectX 10 and then DirectX 11 has made consumer-grade PC graphics hardware from different vendors a lot less quirky than it was in the early days of DirectX.

The other thing to keep in mind is that this trade-off also demands a lot more of the application/game itself: it has to properly handle CPU/GPU synchronization, efficiently drive the hardware, track and exhaustively describe the state combinations it uses, and generally operate with far less safety & support from the runtime.
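To make that concrete, here is a minimal sketch of the kind of CPU/GPU synchronization a Direct3D 12 application now writes for itself, using a fence to wait for the GPU to finish its submitted work. It assumes a `device` and `queue` created elsewhere and omits error handling; the helper name `WaitForGpu` is illustrative, not part of the API.

```cpp
// Sketch: explicit CPU/GPU synchronization with a D3D12 fence.
// Assumes `device` (ID3D12Device*) and `queue` (ID3D12CommandQueue*)
// were created earlier; error handling omitted for brevity.
#include <windows.h>
#include <d3d12.h>

void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ID3D12Fence* fence = nullptr;
    const UINT64 fenceValue = 1;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Ask the GPU to set the fence to `fenceValue` once all work
    // previously submitted to this queue has finished.
    queue->Signal(fence, fenceValue);

    // Block the CPU until the GPU reaches that point. In D3D11 the
    // runtime and driver did this kind of tracking behind the scenes.
    if (fence->GetCompletedValue() < fenceValue)
    {
        HANDLE event = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(fenceValue, event);
        WaitForSingleObject(event, INFINITE);
        CloseHandle(event);
    }
    fence->Release();
}
```

Forget this wait, or issue it at the wrong time, and the application gets corruption or a device removal; there is no runtime safety net to catch it.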

Writing a DirectX 12 application takes a lot more graphics programming skill than DirectX 11 or OpenGL does today. Setting up a device/swap chain and a simple render scene could take a few dozen lines of code in Direct3D 9 and a few hundred in Direct3D 11, but in DirectX 12 it takes a whole lot more, because the application itself has to do all the individual steps that used to happen 'by magic' in the runtime. This lets applications/games tweak the exact sequence and maybe skip steps they don't actually need, but it's on them, not the runtime, to make that happen.
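For a feel for those individual steps, below is a sketch of just the first pieces of Direct3D 12 initialization (device, command queue, command allocator, command list). The swap chain, descriptor heaps, root signature, and pipeline state objects would follow, and they account for most of the extra lines. Error handling is omitted; link against d3d12.lib.

```cpp
// Sketch: the first few explicit steps of Direct3D 12 initialization.
// In D3D11 a single D3D11CreateDeviceAndSwapChain call covered this and
// much more; in D3D12 the application builds each piece itself.
#include <windows.h>
#include <d3d12.h>

int main()
{
    // 1. Create the device on the default adapter.
    ID3D12Device* device = nullptr;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&device));

    // 2. Create a command queue: even submission is something the
    //    application describes explicitly in D3D12.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // 3. Allocator + command list: the memory backing recorded commands
    //    is also managed by the application, not the runtime.
    ID3D12CommandAllocator* allocator = nullptr;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    ID3D12GraphicsCommandList* list = nullptr;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, nullptr, IID_PPV_ARGS(&list));

    // From here: swap chain, descriptor heaps, root signature,
    // pipeline state, fences... each an explicit step.
    list->Release(); allocator->Release();
    queue->Release(); device->Release();
    return 0;
}
```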

To quote Uncle Ben: "With great power comes great responsibility."

Developers have insisted for years that they want a 'console-like' API on PC, so now they get to find out if they really want it after all. For game engine writers, AAA titles with custom engines, and the technology demo scene, it's pretty darn cool to have such nearly direct control over the hardware. For mere mortals, it will be a lot easier to use DirectX 11 for a while longer, or to use an engine that has already implemented DirectX 12.

Chuck Walbourn
  • I'm not sure I agree it's going to be easier to use DirectX 11. It seems to me that DirectX 12 will be a lot simpler with a lot less "fu" involved once you get the hang of the basic principles. – Robinson Feb 24 '15 at 11:51
  • Essentially nothing happens 'by magic' in DX12, but conversely nothing happens unless you explicitly do it. For some people, DX12 will be the logical option and that's great for them. For others, DX11 makes more sense in terms of productivity and ease-of-use. And for many others, some higher-level tool like Unity, Monogame, UE, etc. makes more sense. – Chuck Walbourn Feb 24 '15 at 16:39
  • I expect the tutorials and templates will cover the basics comprehensively. I've struggled with boilerplate since OpenGL 1.1 and D3D 6.0. I imagine people will write basic libraries on top of the new APIs to simplify things in any case. – Robinson Feb 24 '15 at 16:42
  • Sure, over time DirectX 12 will be better supported--I'll eventually have support for it with DirectX Tool Kit--but you will still have a lot more restrictions and due-diligence than with DirectX 11. In the limit, a library that implements all the usability support for DX12 that you have with DirectX 11 is... DirectX 11. – Chuck Walbourn Feb 24 '15 at 20:37
  • It almost certainly wouldn't be that, no. – Robinson Feb 25 '15 at 10:30

For an overview of how it works from the GPU to the driver to the API, you may start with this Graphics Pipeline.

proton