I'm running into an issue converting the Android camera's proprietary image format to a JPEG and then encoding it to a string. I've tried the consensus methods for doing this, but my results are incomplete: the images are 95% grey boxes with only a sliver of what the camera actually sees.
The method I'm using for the JPEG conversion is based on what's found in this link (other answers seem to give very similar solutions).
Once I have the JPEG bytes, I pass them to the encoder from Java's Base64.getEncoder(), which has an encodeToString method. I also tried to work around the problem by simply taking a screenshot of the app's screen, but in the resulting image the AR session is greyed out.
The image I'm using comes from ARCore's acquireCameraImage method, which returns an image in the YUV_420_888 format.
[Screen grab showing partial image loading]
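For reference, I acquire the image once per frame, roughly like this (a sketch; the callback name is mine, hook it wherever frames are processed):

import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException

// Per-frame callback; acquireCameraImage returns a YUV_420_888 Image
fun onFrame(frame: Frame) {
    try {
        val image = frame.acquireCameraImage()
        encode(image)
    } catch (e: NotYetAvailableException) {
        // No camera image available for this frame yet; try again next frame
    }
}

The encoding path itself: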
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import android.media.Image
import java.io.ByteArrayOutputStream
import java.util.Base64

fun encode(image: Image) {
    val bytes = convertToJPEG(image)
    image.close()
    val encoder = Base64.getEncoder()
    val str = encoder.encodeToString(bytes)
    println("ENCODED!: " + str)
}
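To rule out the Base64 step as the source of the corruption, I can round-trip the string and compare (verifyRoundTrip is just a helper name I made up for this check):

// Sanity check: decode the Base64 string and confirm it matches the JPEG bytes
fun verifyRoundTrip(jpegBytes: ByteArray, encoded: String): Boolean {
    return Base64.getDecoder().decode(encoded).contentEquals(jpegBytes)
}

The JPEG conversion is below.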
fun convertToJPEG(image: Image): ByteArray {
    val yBuffer = image.planes[0].buffer // Y
    val uBuffer = image.planes[1].buffer // U
    val vBuffer = image.planes[2].buffer // V

    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()

    val bytes = ByteArray(ySize + uSize + vSize)

    // NV21 is the full Y plane followed by interleaved VU, hence V is copied before U
    yBuffer.get(bytes, 0, ySize)
    vBuffer.get(bytes, ySize, vSize)
    uBuffer.get(bytes, ySize + vSize, uSize)

    // Wrap the raw NV21 bytes and compress the full frame to JPEG at quality 80
    val stream = ByteArrayOutputStream()
    val yuv = YuvImage(bytes, ImageFormat.NV21, image.width, image.height, null)
    yuv.compressToJpeg(Rect(0, 0, image.width, image.height), 80, stream)
    return stream.toByteArray()
}
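For what it's worth, I understand the bulk plane copy above assumes the buffers are tightly packed (rowStride == width, and the chroma planes interleaved with pixelStride == 2), which the YUV_420_888 format doesn't guarantee. A stride-aware version I've sketched (untested on my device, so treat it as an approximation) walks the planes explicitly:

// Stride-aware YUV_420_888 -> NV21 conversion (sketch): copies row by row,
// honouring each plane's rowStride and pixelStride instead of assuming
// tightly packed buffers.
fun yuv420888ToNv21(image: Image): ByteArray {
    val width = image.width
    val height = image.height
    val out = ByteArray(width * height * 3 / 2)
    var offset = 0

    // Luma plane: pixelStride is 1, but rows may be padded out to rowStride
    val yPlane = image.planes[0]
    val yBuf = yPlane.buffer
    for (row in 0 until height) {
        yBuf.position(row * yPlane.rowStride)
        yBuf.get(out, offset, width)
        offset += width
    }

    // Chroma planes: subsampled 2x2; NV21 wants V and U interleaved, V first
    val uPlane = image.planes[1]
    val vPlane = image.planes[2]
    val uBuf = uPlane.buffer
    val vBuf = vPlane.buffer
    for (row in 0 until height / 2) {
        for (col in 0 until width / 2) {
            out[offset++] = vBuf.get(row * vPlane.rowStride + col * vPlane.pixelStride)
            out[offset++] = uBuf.get(row * uPlane.rowStride + col * uPlane.pixelStride)
        }
    }
    return out
}

If the naive copy is the culprit, swapping this in for the three bulk get calls in convertToJPEG should produce a full frame.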