I want to write cross-platform code for a computer vision problem. The code should run on a PC with an NVIDIA GPU, on an iPhone with a GPU, and on Android devices that may or may not have a GPU. I want to get the maximum possible utilization out of whatever hardware is present. My programming language is C++11 and my computer vision library is OpenCV. What is the best framework, layer, technique, etc. to use in order to write isolated, high-level code that can take advantage of the GPU if it is available?
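To make the goal concrete, here is a rough sketch of the style of code I would like to end up with. It uses OpenCV's Transparent API (`cv::UMat`, OpenCV 3.x), where the same calls dispatch to an OpenCL device when one is present and fall back to the CPU otherwise. I am not claiming this is the right answer (for example, I don't know whether this path is viable on iOS, which is part of my question); it is only meant to illustrate the kind of abstraction I am after:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    // Report whether an OpenCL-capable device is visible; the Transparent API
    // silently falls back to the CPU path when it is not.
    std::cout << "OpenCL available: "
              << (cv::ocl::haveOpenCL() ? "yes" : "no") << std::endl;

    cv::UMat src, gray, blurred;
    // Upload the image into a UMat so subsequent operations may run on the GPU.
    cv::imread("input.jpg", cv::IMREAD_COLOR).copyTo(src);

    // The exact same high-level calls, regardless of whether a GPU exists.
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);

    // Download the result for any further CPU-side processing.
    cv::Mat result = blurred.getMat(cv::ACCESS_READ);
    return 0;
}
```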
P.S. This could be seen as off-topic for being a recommendation request, but I am not really asking to choose between many available options. I am just asking how this is usually done, or what the state of the art is in this field.