
I am working on a platform using the native Android framework, where I use GraphicBuffer to allocate memory and then create an EGLImage from it. This is then used as a texture in OpenGL, which I render to with a simple fullscreen quad.

The problem is that when I read the rendered pixel data back from the GraphicBuffer, I expect it to be in linear RGBA format in memory, but the result contains three parallel, smaller clones of the image with overlapping pixels. That description may not say much, but the point is that the actual pixel data makes sense, while the memory layout seems to be something other than linear RGBA. I assume this is because the graphics driver stores the pixels in an internal format other than linear RGBA.

If I render to a standard OpenGL texture and read it back with glReadPixels, everything works fine, so I assume the problem lies with my custom memory allocation via GraphicBuffer.

If the reason is the driver's internal memory layout, is there any way to force the layout to be linear RGBA? I have tried most of the usage flags accepted by the GraphicBuffer constructor with no success. If not, is there a way to output the data differently in the shader to "cancel out" the memory layout?

I am building Android 4.4.3 for Nexus 5.

//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
        GraphicBuffer::USAGE_SW_READ_OFTEN |
        GraphicBuffer::USAGE_HW_RENDER |
        GraphicBuffer::USAGE_HW_TEXTURE);

/* ... */

//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width, EGL_HEIGHT, outputFormat.height, EGL_MATCH_FORMAT_KHR,
        outputFormat.eglFormat, EGL_IMAGE_PRESERVED_KHR, EGL_FALSE, EGL_NONE};

EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();

eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);

/* ... */

//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);

/* ... */

//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... */
//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);

//Lock the buffer for CPU reads and get a pointer to the pixel data
void* result = nullptr;
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, lockBoundsOutput, &result);
1 Answer


I managed to find the answer myself, and I was wrong all along. The simple reason was that although I was rendering a 480x1080 texture, the buffer that was allocated was padded to a stride of 640 pixels per row, so I just needed to strip the padding after each row and the output texture made sense.
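For anyone else who hits this, here is a minimal sketch of a stride-aware readback, assuming an RGBA8888 buffer (4 bytes per pixel) and that rendering has already completed; GraphicBuffer::getStride() reports the padded row width in pixels, and the variable names (plus the <vector>/<cstring> headers for std::vector and memcpy) are just illustrative:

//Copy the locked buffer into a tightly packed RGBA8888 array,
//skipping the padding bytes at the end of each row
void* mapped = nullptr;
outputBuffer->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &mapped);

const uint32_t width  = outputBuffer->getWidth();   //e.g. 480
const uint32_t height = outputBuffer->getHeight();  //e.g. 1080
const uint32_t stride = outputBuffer->getStride();  //e.g. 640 (padded row width in pixels)

std::vector<uint8_t> packed(width * height * 4);
const uint8_t* src = static_cast<const uint8_t*>(mapped);

for (uint32_t row = 0; row < height; ++row) {
    memcpy(packed.data() + row * width * 4,  //dst: tightly packed rows
           src + row * stride * 4,           //src: padded rows
           width * 4);                       //copy only the visible pixels
}

outputBuffer->unlock();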

  • Hey, I'm trying to implement exactly the same thing but according to this [article](http://stackoverflow.com/a/25535693/4116251) it should not be possible to read a texture from the `GraphicBuffer` object. Could you share the code for it? – focs Jul 03 '15 at 12:48
  • What do you mean? If you render to a texture that is based on a GraphicBuffer, you just need to synchronize the rendering and get the pointer to the GraphicBuffer by using the lock() function and you can read the texture. – generalus Aug 27 '15 at 07:54
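A rough sketch of the sequence described in that comment, where glFinish() is used as the simplest way to synchronize (a fence sync would also work) and drawFullscreenQuad() is a hypothetical helper standing in for whatever issues the draw call:

//Render into the FBO backed by the GraphicBuffer, wait for the GPU,
//then map the buffer on the CPU
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
drawFullscreenQuad();   //hypothetical helper that draws the fullscreen quad
glFinish();             //make sure the GPU has finished writing the buffer

void* pixels = nullptr;
outputBuffer->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &pixels);
//...read pixels here, honoring the row stride as shown in the answer...
outputBuffer->unlock();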