
I have ported remote frame buffer receive C code to Android 4.2.2, which receives the frame buffer from a host in RGB565 format. I am able to render the received framebuffer by following the standard Android example frameworks/native/services/surfaceflinger/tests/resize/resize.cpp. The following is the code snippet used:

sp<Surface> surface = client->createSurface(String8("resize"),
        800, 480, PIXEL_FORMAT_RGB_565, 0);


SurfaceComposerClient::openGlobalTransaction();
surface->setLayer(100000);
SurfaceComposerClient::closeGlobalTransaction();

Surface::SurfaceInfo info;
surface->lock(&info);
ssize_t bpr = info.s * bytesPerPixel(info.format);
/* rfb is the remote framebuffer filled by the C stack */
memcpy((uint16_t*)info.bits, rfb, 800*480*2);
surface->unlockAndPost();

But I am not able to upscale the received buffer to render full screen on Android. For example, the host sends 800*480 but the Android device screen is 1024*768. I also have the following doubts:
1. Is creating the surface in native code the right way to handle this kind of problem?
2. How do I upscale the raw image and render it in Android native code?
3. While writing an app, can the app control this surface, given that it was created in native code?

I am new to Android, and it would be great if someone could guide me on the right path to handle this problem.

t3rmin4tor
  • Do you need to interact with SurfaceFlinger directly? As opposed to creating an app having it display the received frames. As far as scaling goes, you manage that by changing the window size so that the surface is rescaled when it's composited. – fadden Jun 20 '15 at 21:17
  • @fadden No need, if I can render those buffers using an application. Right now I don't know how to do that. Can you give links to an example application which does something similar? BTW, how can these buffers be passed to the application to render? – t3rmin4tor Jun 21 '15 at 13:36

1 Answer


You're currently using private SurfaceFlinger APIs, which require privileged access. If you need to do that, I think you want to use the setSize() call to change the size of the window (which is independent of the size of the underlying Surface). This section in the arch doc shows how to read part of the adb shell dumpsys SurfaceFlinger output to see what the actual sizes are -- that'll tell you if the call is working. (Normally you'd go through the Window Manager for this, but you're bypassing most of the Android framework.)

If you can do what you need in an unprivileged app, your code will be more portable and far less likely to break with changes to the operating system. The best way to go about it would be to create an OpenGL ES texture and "upload" the pixels with glTexImage2D() to a GL_UNSIGNED_SHORT_5_6_5 texture. (I'm reasonably confident that the GLES 565 format matches the Android gralloc format, but I haven't tried it.) Once you have the image in a GLES texture you can render it however you like -- you're no longer limited to rectangles.

Some examples can be found in Grafika. In particular, the "texture upload benchmark" activity demonstrates uploading and rendering textures. (It's a benchmark, so it's using an off-screen texture, but other activities such as "texture from camera" show how to do stuff on-screen.)

The GLES-based approach is significantly more work, but you can pull most of the pieces out of Grafika. The Java-language GLES code is generally just a thin wrapper around the native equivalents, so if you're determined to use the NDK for the GLES work it's a fairly straight conversion. Since all of the heavy lifting is done by the graphics driver, though, there's not much point in using the NDK for this. (If the pixels are arriving through native-only code, wrap the buffer with a "direct" ByteBuffer to get access from Java-language code.)

fadden
  • Thanks for the detailed explanation. I have a basic app with some controls, and the software is layered in the following manner: Native C stack->JNI->Service->AIDL->JAVA API JAR->App. Will there be any performance hit if I pass a ByteBuffer across these layers? – t3rmin4tor Jun 21 '15 at 18:31
  • There shouldn't be a noticeable hit from passing the ByteBuffer so long as you are using a direct ByteBuffer -- which is just a pointer to memory and a length -- and not copying the data itself. There may be slightly more overhead because you are uploading to a texture and rendering the texture on a Surface, rather than copying data directly to the Surface itself, but those operations are performed by the GPU. If your service and your app are in separate processes then you'd need to use shared memory to transfer it without adding a copy. – fadden Jun 21 '15 at 23:50
  • Yes. The service and app are in separate processes. AFAIK, AIDL is implemented over Binder, which is the IPC mechanism. Can you please explain if there is any other shared memory mechanism for sharing a buffer across multiple processes? – t3rmin4tor Jun 22 '15 at 03:33
  • Binder can do it, but it's geared toward small pieces, i.e. a structure with the pointers to shared memory rather than multi-megabyte chunks. You can try MemoryFile from Java or use ashmem directly, e.g. http://stackoverflow.com/questions/16099904/how-to-use-shared-memory-ipc-in-android – fadden Jun 22 '15 at 15:26
  • I am trying to implement scaling on native to start with(setSize as you mentioned) and then proceed towards having a more generic and platform independent implementation using GLES by passing buffer to app space. Added following code after memcpy to do upscale `SurfaceComposerClient::openGlobalTransaction(); surface->setSize(1024,600); SurfaceComposerClient::closeGlobalTransaction();` But display rendering got corrupted after adding above code. Some junk started displaying on LCD. Am I doing any obvious mistake here? – t3rmin4tor Jun 22 '15 at 18:52
  • Hmm. What does the HWC summary in the "dumpsys SurfaceFlinger" output show after the `setSize()` call? (https://source.android.com/devices/graphics/architecture.html#composition has a truncated example -- check before and after to see if the size is changing as expected). I haven't really thought about this stuff for a year (wow... it was exactly one year on Saturday), so it's possible I'm giving you bad advice. :-( – fadden Jun 23 '15 at 05:17
  • Finally, the `setSize` function didn't work. Based on this post [http://stackoverflow.com/questions/24675618/android-ffmpeg-bad-video-output] I found there are some ANativeWindow operations that support scaling. Did the following, and upscaling works now (800x480 to 1024x600): `createSurface(..,1024,600); ANativeWindow_setBuffersGeometry(..,800,480); memcpy();` I don't know what sequence of operations happens beneath, or whether it uses software or hardware scaling. – t3rmin4tor Jun 24 '15 at 11:43
  • That seems reasonable -- create the surface at full size, which sets the window size and surface size, then change the surface size. Make sure to check the "stride" size in the ANativeWindow_Buffer struct (https://android.googlesource.com/platform/frameworks/native/+/master/include/android/native_window.h#55). Also, if you want to use the current display size, see https://android.googlesource.com/platform/frameworks/native/+/lollipop-release/opengl/tests/lib/WindowSurface.cpp for an example. – fadden Jun 24 '15 at 16:33