I'm using the Android camera in my app.
MY SETUP
1) I find out the dimensions of the App Window.
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
int width = size.x;
int height = size.y;
(Steps 2 and 3 are done in surfaceCreated() and surfaceChanged().)
2) I set the camera preview to the app size (in my case it is 480x800).
param.setPreviewSize(height, width);
3) I set the camera resolution (picture size) equal to that of the preview size.
param.setPictureSize(height, width);
Here are the rest of the parameters.
param.setPreviewFormat(ImageFormat.NV21);
camera.setPreviewDisplay(surfaceHolder);
camera.setDisplayOrientation(90);
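For completeness: after those calls the parameters are applied and the preview started. A simplified sketch (the supported-size logging is only here to show how one can verify that 800x480 is actually offered by the device; the "CameraDemo" log tag is made up):
// Sanity check: list the sizes the device actually supports.
for (Camera.Size s : param.getSupportedPreviewSizes()) {
    Log.d("CameraDemo", "supported preview size: " + s.width + "x" + s.height);
}
for (Camera.Size s : param.getSupportedPictureSizes()) {
    Log.d("CameraDemo", "supported picture size: " + s.width + "x" + s.height);
}
camera.setParameters(param);  // apply the sizes/format set above
camera.startPreview();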
4) I have the callback method which deals with the captured image:
public void onPictureTaken(byte[] data, Camera camera) {
    ...
}
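The capture itself is triggered with Camera#takePicture(); a simplified sketch of the wiring, assuming the callback is passed in the usual third ("jpeg") slot (the log tag is illustrative):
camera.takePicture(null /* shutter */, null /* raw */, new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // 'data' is the byte array whose length is discussed below
        Log.d("CameraDemo", "onPictureTaken: data.length = " + data.length);
        camera.startPreview(); // the preview stops after a capture
    }
});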
MY PROBLEM
The length of data (the stream of bytes) varies depending on the picture taken (depending on the quality, I'm presuming). However, my phone's window resolution is 480*800 (as I said earlier), yet the stream of bytes (data) is generally only around 28000 in size.
If the resolution is 480*800 then we are talking about 384000 pixels. And if each pixel requires a Y, U and V value, assuming a byte is dedicated to each, that would mean the array of bytes (data) should be 384000*3 = 1152000 in length. And 1152000 > 28000. By a lot!!
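Just to spell the arithmetic out in code (this isn't app code, only a sanity check of the numbers; note that NV21 actually packs 12 bits per pixel, which would still be far more than 28000 bytes):
int w = 480, h = 800;
int pixels = w * h;                       // 384000
int threeBytesPerPixel = pixels * 3;      // 1152000 (my assumption above)
int nv21Bytes = pixels * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8; // 576000 (12 bpp)
// Either way, both are much larger than the ~28000 bytes I actually receive.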
The reason this is bothering me is that I am using the methods given here on SO for converting these YUV values into RGB values.
I call it like this:
int[] rgbs = new int[width*height];
decodeYUV(rgbs, data, width, height);
Where width and height are the dimensions of the App Window (800*480).
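For reference, my decodeYUV is essentially the widely copied decodeYUV420SP routine from those SO answers; my copy may differ slightly, but it looks roughly like this and expects a full NV21 frame of width*height*3/2 bytes:
static void decodeYUV(int[] rgb, byte[] yuv420sp, int width, int height) {
    // Standard NV21 (YUV420SP) -> ARGB conversion as commonly posted on SO.
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}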
And I keep getting ArrayIndexOutOfBounds exceptions, and exceptions like buffer size < minimum.
Anyone know what the problem might be? Any help would be hugely appreciated.