I am trying to achieve the following:
1) I have a byte array on the Java side that represents an image.
2) I need to give my native code access to it.
3) The native code decodes this image using GraphicsMagick and creates a bunch of thumbnails by calling resize. It also calculates a perceptual hash of the image, which is either a vector or a uint8_t array.
4) Once I return this data to the Java side, different threads will read it. The thumbnails will be uploaded to an external storage service via HTTP.
My questions are:
1) What would be the most efficient way to pass the bytes from Java to my native code? I have access to them as a byte array. I don't see any particular advantage to passing a byte buffer (wrapping this byte array) vs. the byte array itself here.
2) What would be the best way to return these thumbnails and perceptual hash back to the java code? I thought of a few options:
(i) I could allocate a byte buffer in Java and pass it to my native method. The native method could write to it, set the limit when done, and return the number of bytes written (or a boolean indicating success). I could then slice up the byte buffer to extract the distinct thumbnails and the perceptual hash and pass them along to the threads that will upload the thumbnails. The problem with this approach is that I don't know what size to allocate. The required size depends on the size of the generated thumbnails, which I don't know in advance, and on the number of thumbnails (which I do know in advance).
(ii) I could also allocate the byte buffer in native code once I know the size needed, memcpy my blobs into the right regions based on a custom packing protocol, and return this byte buffer. Both (i) and (ii) seem complicated because of the custom packing protocol, which would have to encode the length of each thumbnail and of the perceptual hash.
(iii) Define a Java class with two fields: thumbnails (an array of byte buffers) and perceptual hash (a byte array). I could allocate the byte buffers in native code once I know the exact sizes needed, then memcpy the bytes from each GraphicsMagick blob to the direct address of the corresponding byte buffer. I am assuming there is also some way to set the number of bytes written on each byte buffer so the Java code knows how big it is. After the byte buffers are set, I could fill in my Java object and return it. Compared to (i) and (ii) I create more byte buffers and an extra Java object, but I avoid the complexity of a custom protocol. Rationale behind (i), (ii) and (iii): given that the only thing I do with these thumbnails is upload them, I was hoping that byte buffers (vs. byte arrays) would save an extra copy when uploading them via NIO.
(iv) Define a Java class that has an array of byte arrays (instead of byte buffers) for the thumbnails and a byte array for the perceptual hash. I would create these Java arrays in my native code and copy over the bytes from my GraphicsMagick blobs using SetByteArrayRegion. The disadvantage vs. the previous options is that there will be yet another copy in Java land, from the heap array to some direct buffer, when uploading. I'm not sure I would save anything in terms of complexity vs. (iii) here either.
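To make the "custom packing protocol" concern in (i)/(ii) concrete, here is a minimal sketch of one possible layout: a thumbnail count, then a length-prefixed payload per thumbnail, then the length-prefixed hash. All names are illustrative, and the native side would fill a direct buffer instead of packing in Java; the point is that unpacking can slice views out of the one buffer without copying the payload bytes.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class PackingDemo {
    // Pack: [int thumbCount] then per thumbnail [int len][bytes], then [int hashLen][hash bytes].
    static ByteBuffer pack(List<byte[]> thumbs, byte[] hash) {
        int size = Integer.BYTES;                          // thumbnail count
        for (byte[] t : thumbs) size += Integer.BYTES + t.length;
        size += Integer.BYTES + hash.length;               // hash length + payload
        ByteBuffer buf = ByteBuffer.allocate(size);        // native code would use a direct buffer
        buf.putInt(thumbs.size());
        for (byte[] t : thumbs) { buf.putInt(t.length); buf.put(t); }
        buf.putInt(hash.length); buf.put(hash);
        buf.flip();
        return buf;
    }

    // Unpack on the Java side: slice out each thumbnail as a view over the same storage.
    static List<ByteBuffer> unpackThumbs(ByteBuffer buf) {
        List<ByteBuffer> out = new ArrayList<>();
        int count = buf.getInt();
        for (int i = 0; i < count; i++) {
            int len = buf.getInt();
            ByteBuffer slice = buf.slice();                // shares the backing storage, no copy
            slice.limit(len);
            out.add(slice);
            buf.position(buf.position() + len);            // skip past this thumbnail's payload
        }
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer packed = pack(List.of(new byte[]{1, 2, 3}, new byte[]{4, 5}), new byte[]{9, 9});
        List<ByteBuffer> thumbs = unpackThumbs(packed);
        System.out.println(thumbs.size());        // 2
        System.out.println(thumbs.get(1).get(0)); // 4
        System.out.println(packed.getInt());      // hash length: 2
    }
}
```

The protocol itself is only a few ints of framing, so the real cost comparison with (iii) is one big allocation plus slicing vs. several small direct buffers plus an extra Java object.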
Any advice would be awesome.
EDIT: @main suggested an interesting solution. I am editing my question to follow up on that option: if I wanted to wrap native memory in a DirectBuffer as @main suggests, how would I know when I can safely free the native memory?
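One common answer to the lifetime question is: free explicitly at a known point (after all upload threads finish), and optionally register a java.lang.ref.Cleaner (Java 9+) as a safety net that frees the memory if the wrapper is garbage-collected first. The sketch below assumes this pattern; `ManagedBuffer` and `NativeBlock` are hypothetical names, and the native malloc/free pair is simulated with a flag so the example runs without JNI. In real code the buffer would come from `NewDirectByteBuffer(ptr, size)` and the cleanup action would call a native free through JNI.

```java
import java.lang.ref.Cleaner;
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicBoolean;

public class NativeBufferDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    // Stands in for the native allocation; simulated with a flag so this runs without JNI.
    static final class NativeBlock implements Runnable {
        final AtomicBoolean freed = new AtomicBoolean(false);
        @Override public void run() {
            // In real code: call a native free(ptr) through JNI.
            freed.set(true);
        }
    }

    // Ties the lifetime of the (simulated) native memory to this wrapper object.
    static final class ManagedBuffer {
        final ByteBuffer buffer;            // in real code: env->NewDirectByteBuffer(ptr, size)
        final Cleaner.Cleanable cleanable;
        final NativeBlock block;

        ManagedBuffer(int size) {
            this.block = new NativeBlock();
            this.buffer = ByteBuffer.allocateDirect(size); // placeholder for the JNI wrapper
            // The cleanup action must not capture `this`, or the wrapper never becomes unreachable.
            this.cleanable = CLEANER.register(this, block);
        }

        // Deterministic release once all uploads are done; clean() runs the action at most once.
        void close() { cleanable.clean(); }
    }

    public static void main(String[] args) {
        ManagedBuffer mb = new ManagedBuffer(16);
        mb.buffer.put((byte) 42);
        // ... hand mb.buffer to the upload threads, wait for them to finish ...
        mb.close();
        System.out.println(mb.block.freed.get()); // true
    }
}
```

Since your upload threads finish at a well-defined point, an explicit close after joining them is the simple and deterministic choice; relying on GC alone is risky because the collector has no idea how much native memory hangs off a small wrapper object and may not run soon enough.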