We obtain our image from the takePicture() function, using the JPEG callback (the third parameter), since we were not able to obtain the raw image even after setting the callback buffer to the maximum size. The image therefore arrives compressed in JPEG format, but we need it in the same format as the preview frames: YCbCr_420_SP (NV21). This format is expected by a third-party library we use, and we don't have the resources to reimplement it.
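For reference, the capture is triggered roughly like this (simplified sketch; the shutter and raw callbacks are passed as null since the raw callback never delivered data for us, and processJpeg is a hypothetical placeholder for our handler):

camera.takePicture(null /* shutter */, null /* raw */, new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // data contains the JPEG-compressed image, not NV21
        processJpeg(data); // hypothetical handler
    }
});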
We tried setting the picture format via setPictureFormat() in the parameters when initializing the camera, which sadly didn't help. Our guess is that this function only applies to the raw callback.
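In case it matters, the attempt looked roughly like this (sketch; note that NV21 is not guaranteed to be a supported picture format on every device):

Camera.Parameters params = camera.getParameters();
params.setPictureFormat(ImageFormat.NV21); // had no effect on the JPEG callback
camera.setParameters(params);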
We have access to the OpenCV C library on the JNI side, but don't know how to implement the conversion with IplImage.
So currently we are using the following Java implementation for the conversion, which has really poor performance (roughly 2 seconds for a 3840x2160 picture):
byte[] getNV21(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff);
            // well-known fixed-point RGB to YUV conversion
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y followed by interleaved VU, each chroma
            // channel subsampled by a factor of 2: for every 4 Y pixels there
            // is 1 V and 1 U. Note the sampling is every other pixel AND
            // every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
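For context, the Bitmap handed to getNV21() comes from decoding the JPEG callback data, along these lines (simplified sketch; in practice the decoded Bitmap may additionally be scaled first, hence the parameter name):

Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
byte[] nv21 = getNV21(bitmap.getWidth(), bitmap.getHeight(), bitmap);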
Does anyone know what the conversion would look like with the OpenCV C API, or alternatively, can anyone offer a more efficient Java implementation?
Update: After reimplementing the camera class to use the camera2 API, we receive an Image object in the YUV_420_888 format. We then use the following function to convert it to NV21:
private static byte[] YUV_420_888toNV21(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    byte[] nv21 = new byte[ySize + uSize + vSize];

    // U and V are swapped: copying V before U yields the interleaved VU
    // order that NV21 expects (this assumes the chroma planes are already
    // interleaved, i.e. pixel stride 2 with no row padding)
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
While this function works fine with cameraCaptureSessions.setRepeatingRequest, we get a segmentation fault when calling cameraCaptureSessions.capture. In both cases we request the YUV_420_888 format via ImageReader.
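For completeness, the ImageReader setup is roughly the following in both cases (simplified sketch; width, height, and backgroundHandler are defined elsewhere in our code):

ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image != null) {
            byte[] nv21 = YUV_420_888toNV21(image);
            image.close();
        }
    }
}, backgroundHandler);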