I'm writing a small Android app where the user can place an image inside the live camera preview and then take a picture of the scene. The app then combines the two images appropriately -- all of this is working fine.
I understand you can get/set the preview size through Camera.getParameters() (i.e. Camera.Parameters.getPreviewSize() / setPreviewSize()); I assume this is related to the size of the realtime "camera feed".
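For reference, this is roughly how I'm reading those values (a minimal sketch with error handling omitted; the "CameraDemo" log tag is just illustrative):

```java
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();

// The preview size currently in effect:
Camera.Size previewSize = params.getPreviewSize();
Log.d("CameraDemo", "Preview size: " + previewSize.width + "x" + previewSize.height);

// The camera only supports a fixed set of preview sizes:
for (Camera.Size size : params.getSupportedPreviewSizes()) {
    Log.d("CameraDemo", "Supported preview size: " + size.width + "x" + size.height);
}
```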
However, the size of my SurfaceView where the camera preview is shown is different from the reported (and used) PreviewSizes. For example, in the emulator my available SurfaceView happens to be 360x215, while the PreviewSize is 320x240. Still, the entire SurfaceView is filled with the preview.
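This is where I observe the mismatch (a sketch; mCamera is a hypothetical field holding the opened camera):

```java
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    // width/height are the SurfaceView's actual dimensions (360x215 in my emulator).
    Camera.Size preview = mCamera.getParameters().getPreviewSize();
    // preview.width/preview.height report 320x240 -- a different size AND aspect
    // ratio, yet the preview visibly fills the whole surface.
    Log.d("CameraDemo", "surface=" + width + "x" + height
            + " preview=" + preview.width + "x" + preview.height);
}
```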
But the picture that's eventually generated is (also?) 320x240. How does Android compensate for these differences in size and aspect ratio? Is the image truncated?
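In case it matters: as far as I can tell the picture size is a parameter separate from the preview size, and this is how I'd query it (again a sketch, reusing the params object from above):

```java
Camera.Size pictureSize = params.getPictureSize();
Log.d("CameraDemo", "Picture size: " + pictureSize.width + "x" + pictureSize.height);

for (Camera.Size size : params.getSupportedPictureSizes()) {
    Log.d("CameraDemo", "Supported picture size: " + size.width + "x" + size.height);
}
```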
Or am I simply misunderstanding what the PreviewSize is about -- is it related to the size of the generated pictures, or to the "realtime preview" that's projected onto the SurfaceView? Are there any non-trivial Camera examples that deal with this?
I need to know what transformation takes place so that, eventually, I can copy/scale the placed image correctly into the photo -- hence these questions.
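Concretely, if the preview is simply stretched to fill the SurfaceView (my current guess, not something I've confirmed), the mapping from overlay coordinates to photo coordinates would look like this hypothetical sketch (overlayX/overlayY, surfaceWidth/surfaceHeight are illustrative names):

```java
// ASSUMPTION: the preview is stretched (non-uniformly) to fill the SurfaceView,
// and the captured picture covers the same field of view as the preview.
// overlayX/overlayY: position of the placed image within the SurfaceView.
float scaleX = (float) pictureSize.width  / surfaceWidth;   // e.g. 320f / 360
float scaleY = (float) pictureSize.height / surfaceHeight;  // e.g. 240f / 215
float photoX = overlayX * scaleX;
float photoY = overlayY * scaleY;
// If Android instead crops or letterboxes the preview, this mapping is wrong --
// which is exactly what I'm trying to find out.
```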