
In my scenario I fetch the screen from a device (it only produces TIFF images), convert it to JPEG, and send it over the network to the client (the client only supports JPEG encoding).

Java code:

    // requires: import java.io.ByteArrayOutputStream; import javax.imageio.ImageIO;
    public byte[] getScreen() throws IOException {
        /*
         * logic for fetching the TIFF image from the device
         */
        if (tiffImage == null) {
            return null;
        }
        ByteArrayOutputStream byteOutput = new ByteArrayOutputStream();
        ImageIO.write(tiffImage, "jpeg", byteOutput);
        return byteOutput.toByteArray();
    }
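
Before reaching for the GPU, it may be worth tuning the CPU encoder first: JPEG encode time and output size both drop as the compression quality is lowered. A minimal sketch using the standard `ImageWriter` API (the helper name `toJpeg` and the `0.7f` quality value are illustrative choices, not from the original code):

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import javax.imageio.IIOImage;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageWriteParam;
    import javax.imageio.ImageWriter;
    import javax.imageio.stream.MemoryCacheImageOutputStream;

    public static byte[] toJpeg(BufferedImage image, float quality) throws IOException {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality); // e.g. 0.7f trades quality for speed and size

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writer.setOutput(new MemoryCacheImageOutputStream(out));
        writer.write(null, new IIOImage(image, null, null), param);
        writer.dispose();
        return out.toByteArray();
    }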

It takes the device 10 ms - 1 s to produce the image, depending on its resolution (please note that nothing can be changed on that side; it produces only TIFF images), and the size ranges from 3 MB to 12 MB depending on the resolution.

Converting the image to JPEG also takes some time. My question is: can we use the GPU for converting the image from TIFF to JPEG, so that I get an improved FPS on the client side?
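
To find out whether the GPU is even worth pursuing, it helps to measure the conversion step in isolation first (one of the commenters below asks for exactly these numbers). A minimal, hypothetical micro-benchmark, assuming a sample capture saved as `screen.tif` (note that reading TIFF with stock `ImageIO` requires Java 9+ or a plugin such as TwelveMonkeys or JAI on older JREs):

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class EncodeTimer {
        public static void main(String[] args) throws Exception {
            // "screen.tif" is a hypothetical sample capture from the device.
            BufferedImage img = ImageIO.read(new File("screen.tif"));

            long start = System.nanoTime();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(img, "jpeg", out);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println("JPEG encode: " + elapsedMs + " ms, "
                    + out.size() + " bytes");
        }
    }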

P.S.: The application runs on various machines with different graphics cards (NVIDIA, AMD, Intel HD Graphics). I want to know whether this can be done, and if so, how to approach the solution.

  • Not sure whether GPUs work better than CPUs for converting image codecs. Here is a link to a similar question: https://stackoverflow.com/questions/7662773/qt-convert-raw-image-to-jpg-using-hardware-acceleration-gpu – WhileTrueSleep Jan 05 '16 at 09:55
  • Same here: it's not better to use the GPU over the CPU: http://stackoverflow.com/questions/22866901/using-java-with-nvidia-gpus-cuda – Martin Frank Jan 05 '16 at 10:42
  • See also http://stackoverflow.com/questions/3384970/java-gpu-programming on using the GPU in Java. – Martin Frank Jan 05 '16 at 10:44
  • @MartinFrank It is not *necessarily* better, but it *may* be better. And depending on the task (as described in the answer that you linked to), the GPU may bring a *significant* speedup (but I don't know whether this applies to JPG compression). In any case, the [NVIDIA NPP](https://developer.nvidia.com/npp) contains some subroutines for JPG compression, so this might be a realistic application case. – Marco13 Jan 05 '16 at 10:44
  • So - **if** you can do it and optimize performance on NVIDIA... how well would this optimization work on **other** GPUs (as described in your question)? – Martin Frank Jan 05 '16 at 10:47
  • Can you add some numbers? You say it takes 10 ms - 1 s to create an image... how long does it take to convert it? – Martin Frank Jan 05 '16 at 10:48
  • Does your application work with parallel threads (one for taking images and one for converting them, as in the sketch after these comments)? I'm hoping you're solving the right problem... – Martin Frank Jan 05 '16 at 10:51
  • Take a look at [Java bindings for OpenCL](http://www.jocl.org/) – erikvimz Jan 05 '16 at 11:17
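
Picking up the pipelining suggestion from the comments above: even without a GPU, overlapping the capture and the encode can raise the delivered FPS, because the encoder works on frame N while the device produces frame N+1. A minimal sketch with a bounded queue; `grabTiffFrame()` and `send()` are hypothetical placeholders for the device fetch and the network write, not part of the original code:

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import javax.imageio.ImageIO;

    public class CapturePipeline {
        // Small bound so a slow encoder applies back-pressure to the grabber.
        private final BlockingQueue<BufferedImage> frames = new ArrayBlockingQueue<>(4);

        public void start() {
            Thread grabber = new Thread(() -> {
                try {
                    while (true) {
                        frames.put(grabTiffFrame()); // blocks while the queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread encoder = new Thread(() -> {
                try {
                    while (true) {
                        BufferedImage img = frames.take();
                        ByteArrayOutputStream out = new ByteArrayOutputStream();
                        ImageIO.write(img, "jpeg", out);
                        send(out.toByteArray());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });

            grabber.start();
            encoder.start();
        }

        // Hypothetical placeholders -- substitute the real device and socket code.
        private BufferedImage grabTiffFrame() { throw new UnsupportedOperationException(); }
        private void send(byte[] jpeg) { throw new UnsupportedOperationException(); }
    }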

1 Answer


MPEG encoding is roughly just that: a long sequence of JPEG encoding operations, one per frame, plus some logic involving differences for P-frames, etc. I once wrote a simple MPEG encoder using a GPU, and it gave some speedup (I don't remember exactly by how much, though). That said, to properly answer your question: yes, there might be some time difference, but for a single picture that difference is probably negligible once you include the overhead of transferring the picture data to the GPU device and back.
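
Since the target machines mix NVIDIA, AMD, and Intel GPUs, OpenCL (e.g. via the JOCL bindings linked in the comments) is the usual vendor-neutral route. A full OpenCL JPEG encoder is far beyond a snippet, but a first step is checking whether a usable GPU is present at all. A minimal sketch, assuming the `org.jocl` dependency is on the classpath:

    import org.jocl.Pointer;
    import org.jocl.cl_platform_id;
    import static org.jocl.CL.*;

    public class GpuProbe {
        public static void main(String[] args) {
            // Ask how many OpenCL platforms (vendor drivers) are installed.
            int[] numPlatforms = new int[1];
            clGetPlatformIDs(0, null, numPlatforms);
            cl_platform_id[] platforms = new cl_platform_id[numPlatforms[0]];
            clGetPlatformIDs(platforms.length, platforms, null);

            for (cl_platform_id platform : platforms) {
                // Count the GPU devices this platform exposes (0 if none).
                int[] numDevices = new int[1];
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, null, numDevices);
                System.out.println(platformName(platform) + ": "
                        + numDevices[0] + " GPU device(s)");
            }
        }

        // Reads the human-readable platform name via clGetPlatformInfo.
        private static String platformName(cl_platform_id platform) {
            long[] size = new long[1];
            clGetPlatformInfo(platform, CL_PLATFORM_NAME, 0, null, size);
            byte[] buffer = new byte[(int) size[0]];
            clGetPlatformInfo(platform, CL_PLATFORM_NAME, buffer.length,
                    Pointer.to(buffer), null);
            return new String(buffer, 0, buffer.length - 1); // drop trailing '\0'
        }
    }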

– gustafbstrom
  • The image fetching is a continuous task (the client asks for images once it establishes the connection with the server), and images are sent to the client as they are fetched from the device. There can be 1-4 devices connected to a server, and a client can ask for the screens of 1-4 devices. I want to improve the FPS by reducing the conversion time (which sometimes takes 10 ms - 1 s depending on the size of the image). – pavan Jan 05 '16 at 13:30