
According to the Android camera docs on the Java SDK side, the camera preview frames have to be supplied with a (visible and active) surface to draw to before the frame data can be accessed. I have linked a few of the things I came across here (I'm new, so capped at 2 hyperlinks), but I went over tons of documentation before winding up posting my own question here on SO.

My Problems:

a) I explicitly don't want to draw the camera preview to the screen; I just want the byte data (which I can get from here) straight from the camera buffer, if possible.

b) Yes, I saw this: Taking picture from camera without preview.

However, that approach dictates that apps using the library insert a (seemingly) arbitrary view into their layout, and that the view stay visible for the whole app life-cycle (they can't switch layouts, change the visibility of parent containers, or use a sub-activity).

In fact, my needs are similar to that poster's, except that I want a continuous, real-time stream of the camera preview data, not a capture saved to an image on disk. PictureCallback together with the myCamera.takePicture call works for him, but for obvious reasons writing continuous captures to disk is not a solution for me. myCamera.takePicture is also much slower than grabbing the preview frames.

c) I have started dabbling with the NDK and have gotten a pretty good feel for it. However, according to this, accessing camera data from native code is simply not supported or recommended, and is a huge hassle for device compatibility even when it works.

If that is outdated and there are solid NDK routes to acquiring camera data on Android devices, I couldn't find them, so if you could point them out to me, that would be great.

d) I want to make this library accessible from tools like Unity (in the form of a Unity plugin). That means being able to just compile it into a JAR (or a .so for native) and expect it to work in Android apps that import the library that way, so that they can use the camera without the app developer needing any specific UI/layout configuration.

Context:

I want to be able to create a vision-processing library for use in Android apps, and I don't want to force apps using the library into specific layouts, or into drawing specific views to the screen, just to use the vision-processing results. For a very simple example, an app might want to use my library to get the average color of what the camera sees and tint an image on the screen with that color.
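For concreteness, here is the kind of per-frame computation I mean, sketched against raw preview bytes. It assumes the default NV21 preview format, where the first width * height bytes are the luma plane; a true average color would also have to read the interleaved VU plane that follows it.

// Average brightness of one NV21 preview frame (luma plane only).
static int averageLuma(byte[] nv21, int width, int height) {
    long sum = 0;
    int pixels = width * height;
    for (int i = 0; i < pixels; i++) {
        sum += nv21[i] & 0xFF; // luma bytes are unsigned
    }
    return (int) (sum / pixels);
}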

Whatever suggestions I can get towards any of the points I have will be super-helpful. Thanks a lot for your time!

kOrc

2 Answers


I completely forgot I had this question up. 2 years and a couple of Android SDK versions later, we have a working system.

We're using an extended SurfaceTexture, cameraSurface, which holds a reference to the required camera. Using cameraSurface's SurfaceTexture, we call:

mCamera.setPreviewTexture(mSurfaceTexture);
mCamera.startPreview();
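
For completeness, a minimal sketch of that wiring (assuming API 11+; the texture id would normally come from glGenTextures on a thread with a current GL context, and setPreviewTexture can throw IOException):

// Off-screen preview via SurfaceTexture, no visible view needed.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0); // needs a current GL context
SurfaceTexture mSurfaceTexture = new SurfaceTexture(tex[0]);

Camera mCamera = Camera.open(); // default back-facing camera
try {
    mCamera.setPreviewTexture(mSurfaceTexture);
} catch (IOException e) {
    // the camera could not be connected to the texture
}
mCamera.startPreview();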

Then set the active camera's preview callback from your activity, or wherever you need it:

mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(final byte[] data, final Camera camera) {
        // Process the contents of data (NV21 by default) for whatever you need
    }
});

This allows you to continuously process what the camera sees, rather than just having to go through stored images. Of course, this will be limited to your camera's preview resolution (which may be different from the still capture or video resolutions).
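
If the default preview size doesn't suit you, here is a sketch of picking a supported one before startPreview() (Camera API 1 only; this simply takes the largest):

Camera.Parameters params = mCamera.getParameters();
Camera.Size best = null;
for (Camera.Size s : params.getSupportedPreviewSizes()) {
    if (best == null || s.width * s.height > best.width * best.height) {
        best = s; // keep the largest supported preview size
    }
}
params.setPreviewSize(best.width, best.height);
mCamera.setParameters(params); // must be applied before startPreview()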

If I get the time, I'll try to throw up a barebones working demo.

Edit: The SDK I had linked to is no longer freely available. You can still request one via the BitGym contact page.

kOrc
    It is great that you found a solution to your problem; thanks for sharing it. Let me add a couple of hints that may not be essential for your use case, but can make a difference for someone else. First, if you use [Camera.setPreviewCallbackWithBuffer()](http://developer.android.com/reference/android/hardware/Camera.html#setPreviewCallbackWithBuffer(android.hardware.Camera.PreviewCallback)), you can significantly reduce garbage collection during video capture. Second, to push the callbacks off the UI thread, you must use a [Handler thread](http://stackoverflow.com/a/20693740/192373). – Alex Cohn May 24 '14 at 14:19
  • Doesn't this need OpenGL and Android 5.0 (Camera API 2) to work on the data? What if I need to do some RenderScript operations on it using only Camera API 1? – huseyin tugrul buyukisik Aug 30 '15 at 14:01
  • Works from about Android API level 11 (3.0) onwards, though different flavors have slightly different behaviour. It does require OpenGL ES 2, which is available even on pre-Ice Cream Sandwich devices. – kOrc Sep 03 '15 at 22:38
  • I'm trying to write an app very similar to yours and I've run into the same problem. Is your code using SurfaceTexture still available? I tried the link you gave, but it's no longer valid. Would really appreciate seeing a working demo, thanks! – CSharp May 16 '16 at 12:04
  • is there any answer available for Camera2 API? – Bhavesh Hirpara Jun 26 '19 at 13:40
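
Following up on Alex Cohn's comment above, a sketch of the buffered variant, with the camera opened on a HandlerThread so that its callbacks arrive off the UI thread (this assumes the default NV21 preview format, 12 bits per pixel, plus the same SurfaceTexture wiring as in the answer):

HandlerThread cameraThread = new HandlerThread("CameraThread");
cameraThread.start();
new Handler(cameraThread.getLooper()).post(new Runnable() {
    @Override
    public void run() {
        // Open the camera on this thread so its callbacks arrive here.
        final Camera camera = Camera.open();
        Camera.Size size = camera.getParameters().getPreviewSize();
        int bufferSize = size.width * size.height
                * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
        camera.addCallbackBuffer(new byte[bufferSize]);
        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // ... process data ...
                cam.addCallbackBuffer(data); // recycle instead of reallocating
            }
        });
        // setPreviewTexture(...) as in the answer, then:
        camera.startPreview();
    }
});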

There is no getting around having the SurfaceView for the image preview. What you could do is delegate the responsibility of capturing the byte[] to the apps implementing your library, which would allow them to use your library not only for images from the camera but also for images that have already been taken.
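
A sketch of what that delegation boundary could look like (all names here are hypothetical): the host app captures byte[] frames however it likes, from a preview callback or from decoded files, and pushes them into the library.

// Hypothetical library-facing interface; the host app supplies frames.
public interface FrameSink {
    void onFrame(byte[] data, int width, int height, int imageFormat);
}

// The app's preview callback (or file loader) just forwards frames;
// in real code you would cache the preview size instead of querying it per frame.
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size s = camera.getParameters().getPreviewSize();
        visionSink.onFrame(data, s.width, s.height, ImageFormat.NV21);
    }
});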

Justin Slade
  • The issue was trying to do it continuously, not just at isolated instants in time. Delegating that responsibility to apps is really messy; it is much preferable that the developer can tell our library "start processing"/"stop processing", and we give them the results of our processing, which they can access via a "lastProcessingOutput" or equivalent. That keeps it as drag-and-drop as possible while still giving the user as much control. If we wanted them to be able to process stored images, we could just include that as a flag+url, but we're a motion detection library, so stills aren't very meaningful. – kOrc May 24 '14 at 11:09
  • @Justin-slade: Instead of SurfaceView, you can use SurfaceTexture via TextureView or directly connect to OpenGL. – Alex Cohn May 24 '14 at 14:23