
EDIT: In my opinion this is not an exact duplicate, because this question asks for a solution specific to Ubuntu, while the other asks for a cross-platform solution.

In order to save power, it is common in recent graphics architectures to dynamically switch between a discrete high-performance GPU and an integrated lower-performance GPU, where the high-performance GPU is only enabled when extra performance is actually needed.

This technology is branded as Optimus for NVIDIA GPUs.

However, due to the non-standardized way in which these technologies work, managing them from a developer's perspective can be a nightmare. For example, in this PDF from NVIDIA on the subject, they explain the many intricacies, limitations and pitfalls you have to worry about as a developer just to manage NVIDIA Optimus on a single platform.

As an example, in the linked PDF above, the following is a tip for selecting GPU on Windows:

extern "C" {
 _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

However, that will only work on the Windows platform. What would be the equivalent under Ubuntu?
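
For reference, it is at least possible to check at runtime which GPU a given OpenGL context actually ended up on, by querying GL_VENDOR / GL_RENDERER. Below is a minimal Qt-based sketch of such a check (the exact strings reported are driver-dependent, so the NVIDIA-vs-Intel distinction is an assumption about a typical Optimus setup):

#include <QGuiApplication>
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Create a throwaway context and offscreen surface just to query the driver.
    QOpenGLContext context;
    if (!context.create())
        return 1;

    QOffscreenSurface surface;
    surface.setFormat(context.format());
    surface.create();

    if (!context.makeCurrent(&surface))
        return 1;

    QOpenGLFunctions *f = context.functions();
    // GL_VENDOR / GL_RENDERER reveal which GPU the context landed on,
    // e.g. "NVIDIA Corporation" vs. an Intel vendor string.
    qDebug() << "GL_VENDOR:  " << reinterpret_cast<const char *>(f->glGetString(GL_VENDOR));
    qDebug() << "GL_RENDERER:" << reinterpret_cast<const char *>(f->glGetString(GL_RENDERER));

    context.doneCurrent();
    return 0;
}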

I am especially interested in how I can make this work reliably with OpenCL and OpenGL (interop), as that is the intended goal of my project. I am working with a C++14/Qt 5.7/OpenCL/OpenGL codebase under Ubuntu 16.04 amd64 on NVIDIA hardware (closed-source driver v367).
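
On the OpenCL side at least, selecting the NVIDIA implementation explicitly seems straightforward, since platforms can be enumerated and matched by vendor string. A simplified sketch of that selection (most error handling omitted, and the "NVIDIA" substring match is an assumption about what the driver reports):

#include <CL/cl.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Try to find an OpenCL platform whose vendor string mentions "NVIDIA".
// Returns nullptr if none is found.
cl_platform_id findNvidiaPlatform()
{
    cl_uint numPlatforms = 0;
    if (clGetPlatformIDs(0, nullptr, &numPlatforms) != CL_SUCCESS || numPlatforms == 0)
        return nullptr;

    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        char vendor[256] = {0};
        clGetPlatformInfo(platform, CL_PLATFORM_VENDOR, sizeof(vendor), vendor, nullptr);
        if (std::strstr(vendor, "NVIDIA") != nullptr)
            return platform;
    }
    return nullptr;
}

int main()
{
    cl_platform_id platform = findNvidiaPlatform();
    if (!platform) {
        std::fprintf(stderr, "No NVIDIA OpenCL platform found\n");
        return 1;
    }

    // Pick the first GPU device on that platform.
    cl_device_id device = nullptr;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        std::fprintf(stderr, "No GPU device on the NVIDIA platform\n");
        return 1;
    }

    char name[256] = {0};
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("Using OpenCL device: %s\n", name);
    return 0;
}

What I have not found is the OpenGL-side equivalent of the Windows export above, i.e. a way to ensure the GL context (and hence the CL/GL interop context) lands on the discrete GPU in the first place.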

