
I want to do image processing on the raw camera image without showing it on the screen, since displaying the preview obviously reduces performance.

According to the answers to the thread Taking picture from camera without preview, this was not possible in Android 1.5, but does anybody know whether it is possible in Android 4 (API level 15)?

Dawamaha
  • Just tried the dummy SurfaceTexture solution (method 2) on a Samsung Galaxy S5. It does fail after a few frames. My workaround is to actually call updateTexImage, but use an invalid texture name with a valid GL context. After swallowing the resulting exception, everything works smoothly. – user3693576 Jun 18 '14 at 10:40

3 Answers


In Android 4, the simplest way to receive raw image data without displaying it on the screen is to use the Camera.setPreviewTexture() call to route the preview frames to the GPU.

You can use this in two ways:

  1. Do your actual processing on the GPU: Set up an OpenGL context (OpenGL ES 2 tutorial) and create a SurfaceTexture object in that context. Then pass that object to setPreviewTexture and start preview. In your OpenGL code, you can then call SurfaceTexture.updateTexImage, and the texture ID associated with the SurfaceTexture will be updated to the latest preview frame from the camera. You can also read back the RGB texture data to the CPU for further processing using glReadPixels, if desired.
  2. Do your processing on the CPU: You can simply create a dummy SurfaceTexture object without any OpenGL context set up. Pass any integer you want as the texture ID, and connect the SurfaceTexture to the camera using setPreviewTexture. As long as you don't call updateTexImage, the SurfaceTexture will simply discard all data passed into it by the camera. Then set up preview callbacks using setPreviewCallback, and use that data (typically in a YUV format) for CPU processing. This is probably less efficient than #1, but does not require knowing OpenGL; a minimal sketch of this approach follows the list.
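
A minimal sketch of approach #2, assuming `camera` is an already-open `android.hardware.Camera` instance; `HeadlessPreview` and `processFrame` are placeholder names for your own code:

```java
import java.io.IOException;

import android.graphics.SurfaceTexture;
import android.hardware.Camera;

public class HeadlessPreview {
    // Route preview frames to a dummy SurfaceTexture so nothing is drawn;
    // the preview callback delivers each frame to the CPU instead.
    public static void start(final Camera camera) throws IOException {
        SurfaceTexture dummy = new SurfaceTexture(10); // any int texture ID;
                                                       // no GL context needed
        camera.setPreviewTexture(dummy);

        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // data is in the preview format (NV21 by default; check
                // camera.getParameters().getPreviewFormat() to be sure)
                processFrame(data);
            }
        });
        camera.startPreview();
    }

    private static void processFrame(byte[] yuv) {
        // placeholder for your CPU-side image processing
    }
}
```

In a real app you would likely prefer `setPreviewCallbackWithBuffer()` with buffers pre-allocated via `addCallbackBuffer()`, to avoid a per-frame allocation.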
Eddy Talvala
  • Has anyone come across a simple example of using glReadPixels after SurfaceTexture.updateTexImage returns? – wobbals Jun 13 '13 at 23:13
  • @wobbals: You can't directly `glReadPixels()` the Camera output (same issue as http://stackoverflow.com/questions/19366660/how-to-save-surfacetexture-as-bitmap/19370209#19370209). Render it to a pbuffer first. Some pointers to examples here: http://stackoverflow.com/questions/20710204/android-camera-preview-on-surfacetexture/21104769#21104769 – fadden Jan 14 '14 at 02:02
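
Building on fadden's comment above, a hedged sketch of the read-back step for approach #1. It assumes an EGL context with a pbuffer surface is already current on the calling thread, and `drawFrame()` stands in for your own rendering code (a full-screen quad drawn with a `samplerExternalOES` shader); none of that setup is shown:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

public class FrameReader {
    // Must run on the thread that owns the EGL context in which the
    // pbuffer surface and the external texture were created.
    public static ByteBuffer readFrame(SurfaceTexture st, int width, int height) {
        st.updateTexImage(); // latch the newest camera frame into the texture

        // glReadPixels() cannot sample the camera's external (OES) texture
        // directly, so render it into the pbuffer first.
        drawFrame(); // hypothetical: full-screen quad with an OES shader

        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        return pixels; // RGBA frame data (rows bottom-up, per GL convention)
    }

    private static void drawFrame() {
        // rendering code not shown
    }
}
```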

Since I am not allowed to comment, I'll add this as an answer. Regarding Eddy's answer: you need to work with this in the NDK, as using the Java interface will negate any performance benefit. Having to work with a PixelBuffer is absolutely insane from a performance standpoint, and your conversion from RGBA8888 to YUV also needs to be done in C.

Do not try using a TextureView, as it will be even worse. You would have to copy the pixels into a Bitmap, then from the Bitmap into an array, all before the conversion to YUV. This alone takes almost 30% of the CPU utilization on a brand-new Nexus 7 (2013).

The most efficient way is to talk to Camera.h directly and bypass all of the Android APIs. You can create your own buffer and intercept the YUV data before it goes anywhere else.
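
For what it's worth, the Java side of handing a frame to native code is small. A sketch, where `frameproc` and `nativeProcessFrame` are hypothetical names for your own NDK library and function:

```java
public class NativeFrameProcessor {
    static {
        System.loadLibrary("frameproc"); // hypothetical NDK library name
    }

    // Implemented in C/C++ inside the NDK library. On the native side,
    // GetByteArrayElements() or GetPrimitiveArrayCritical() exposes the
    // YUV bytes, often without a copy (this is VM-dependent).
    public static native void nativeProcessFrame(byte[] yuv, int width, int height);
}
```

You would call `nativeProcessFrame()` from `onPreviewFrame()` and keep the heavy per-pixel work in C.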

bond
  • While the preview callbacks are not as efficient as I'd like, they're not awful. You get a byte[] of YUV data per frame, which can be sent through JNI to native processing code with zero copies. No conversion to Bitmap or anything else required. Using Camera.h directly is a bad idea, because that interface is private and subject to change at any time. – Eddy Talvala Nov 15 '13 at 23:26
  • The preview callbacks only work with a Surface. The question was about how to do it without displaying it at all. The only way to do it without a Surface is using PixelBuffer. The standard callback w/Surface is better from a performance standpoint. Each callback has two cycles. The Camera copies into the internal buffer; then into the RGBA surface buffer. The second cycle copies the internal buffer into the callback buffer. – bond Nov 24 '13 at 16:47
  • I'm really not sure what you mean by PixelBuffer - there's no API by that name I'm aware of in Android (do you mean Bitmap?). The preview callbacks work fine with setPreviewTexture() in addition to setPreviewDisplay(), and the former does not require drawing preview to any UI element. The copies in the native and JNI layers are a performance drag, I agree, which is why I also suggested the GPU processing approach, which has no overhead. – Eddy Talvala Nov 25 '13 at 22:22
  • PixelBuffer is the low-level buffer used when you provide the Camera with a Texture instead of a Surface. The PixelBuffer backs the GL buffer. You can actually create a "fake" texture and force the Camera API to dump the raw data into the PixelBuffer. This is all hidden. android_platform_cts has an example or two of how to do this. I very much dislike how I cannot create paragraphs here. http://www.opengl.org/wiki/Pixel_Buffer_Object – bond Nov 26 '13 at 23:16
  • FWIW, an example of capturing the Camera preview and saving it as an MPEG file without displaying anything: http://bigflake.com/mediacodec/#CameraToMpegTest . A Surface is a place where pixels go; it isn't necessarily tied to the screen compositor. Also, for some limited definitions of "image processing" (e.g. converting color to B&W; a shader sketch follows this comment thread), you can do it all on the GPU, which is about as efficient as you can possibly get. – fadden Jan 14 '14 at 02:09
  • @fadden With Surface you get GPU compositing; nothing else. Color conversion happens on the GPU only if you do it as a GL shader; otherwise it is still all CPU. The problem I have with this approach is the poor design of the buffer mechanism, which copies and re-copies the pixels over and over. – bond May 04 '15 at 13:49
  • Camera pixel data sent to a Surface is in YUV format, and treated as an "external texture" by the GLES driver. If you render that texture, the YUV to RGB conversion is performed by the GLES driver, which usually means the GPU or a dedicated hardware block does the conversion. It's possible the driver could choose to do it in software, but that's not expected. A Surface is a queue of buffers where data is passed by handle, not copied. If you want to manipulate the data as a Java-language `byte[]`, then copying is necessary, but otherwise not. – fadden May 04 '15 at 15:42
  • The new camera API is different; the old one only gave reliable NV16, which had to be converted to standard YUV interlaced before going anywhere else. My problem when doing camera work in 4.0 was preventing double conversion of the NV16 (once for the Texture, once for the encoder). – bond May 04 '15 at 15:57
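
To illustrate fadden's point about doing simple processing such as black-and-white conversion entirely on the GPU, here is a sketch of what such a fragment shader could look like (the shader source is an assumption, not code from this thread). It samples the camera's external texture and outputs BT.601 luma; it would live as a constant in your renderer class:

```java
// GLES2 fragment shader source, stored as a Java string constant.
static final String BW_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "varying vec2 vTexCoord;\n"              // set by the vertex shader
        + "uniform samplerExternalOES sTexture;\n" // the camera frame
        + "void main() {\n"
        + "  vec3 rgb = texture2D(sTexture, vTexCoord).rgb;\n"
        + "  float y = dot(rgb, vec3(0.299, 0.587, 0.114));\n" // BT.601 luma
        + "  gl_FragColor = vec4(vec3(y), 1.0);\n"
        + "}\n";
```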

Showing the preview on the screen does not have performance consequences. On all devices I have encountered, the camera output is "wired" to a surface or texture with no CPU involved, with all color conversion and scaling taken care of by dedicated hardware.

There may be other reasons to "hide" the preview, but keep in mind that the original purpose of the API was to make sure that the end user sees whatever the camera delivers to the application, for privacy and security reasons.

Alex Cohn