
I am working on an embedded OpenGL graphics application running on an Intel Atom Z530 with the GMA500 graphics hardware. (It's my understanding that the GMA500 is a PowerVR core under the hood, but I'm not sure.) I'm running the Tungsten Graphics "Gallium" driver on Ubuntu 9.10 Karmic Koala. Oh, and you should also know that I have 1 GB of available system memory.

Here's the problem: I have code that allocates a bunch of 512x512 textures at 32 bits per pixel (about 1 MB apiece). When I get to about 118-120 of these, I get an "out of memory" error from OpenGL, along with this message on the console: "error: INTEL_ESCAPE_ALLOC_REGION failed".
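In outline, the allocation looks something like this (a simplified sketch, not the exact code; the real application draws into the textures later via an FBO):

    /* Simplified sketch of the allocation loop: each texture is
     * 512x512 RGBA at 32 bits per pixel, i.e. 1 MiB of pixel data. */
    GLuint alloc_texture(void)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        /* Passing NULL just reserves storage; the contents are drawn
         * in later. This is the call that eventually reports
         * GL_OUT_OF_MEMORY, at around texture 118-120. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        return (glGetError() == GL_OUT_OF_MEMORY) ? 0 : tex;
    }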

This, along with simple measurements in "top", indicates that I'm hitting an ~128 MB limit for textures. The odd thing is that this architecture doesn't have dedicated video RAM; it uses shared system RAM. And I can tell that OpenGL really is using system RAM for the textures, because I can watch the "free" RAM go down in 'top' as they're created. So why would I get an "out of memory" error? I would expect OpenGL to simply use more of my available system RAM. Why would there be such a hard limit, and is there some way to change what this apparent hard limit is set to?

Thanks! Chris


Here's my output from glxinfo:

$ glxinfo

name of display: :0.0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
    GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
    GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method, 
    GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_hyperpipe, 
    GLX_SGIX_swap_barrier, GLX_SGIX_fbconfig, GLX_MESA_copy_sub_buffer
client glx vendor string: SGI
client glx version string: 1.4
client glx extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, 
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_allocate_memory, 
    GLX_MESA_copy_sub_buffer, GLX_MESA_swap_control, 
    GLX_MESA_swap_frame_usage, GLX_OML_swap_method, GLX_OML_sync_control, 
    GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, 
    GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, 
    GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
GLX version: 1.2
GLX extensions:
    GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, 
    GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_swap_control, 
    GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGIS_multisample, 
    GLX_SGIX_fbconfig, GLX_EXT_texture_from_pixmap
OpenGL vendor string: Tungsten Graphics, Inc.
OpenGL renderer string: Gallium 0.1, pipe/psb/Poulsbo on IEGD
OpenGL version string: 2.0 Mesa 7.1
OpenGL shading language version string: 1.10
OpenGL extensions:
    GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, 
    GL_ARB_fragment_shader, GL_ARB_multisample, GL_ARB_multitexture, 
    GL_ARB_occlusion_query, GL_ARB_pixel_buffer_object, 
    GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shader_objects, 
    GL_ARB_shading_language_100, GL_ARB_shading_language_120, GL_ARB_shadow, 
    GL_ARB_texture_border_clamp, GL_ARB_texture_compression, 
    GL_ARB_texture_cube_map, GL_ARB_texture_env_add, 
    GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, 
    GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, 
    GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle, 
    GL_ARB_transpose_matrix, GL_ARB_vertex_buffer_object, 
    GL_ARB_vertex_program, GL_ARB_vertex_shader, GL_ARB_window_pos, 
    GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, 
    GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, 
    GL_EXT_blend_logic_op, GL_EXT_blend_minmax, GL_EXT_blend_subtract, 
    GL_EXT_clip_volume_hint, GL_EXT_compiled_vertex_array, 
    GL_EXT_copy_texture, GL_EXT_draw_range_elements, 
    GL_EXT_framebuffer_object, GL_EXT_framebuffer_blit, GL_EXT_fog_coord, 
    GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, 
    GL_EXT_pixel_buffer_object, GL_EXT_point_parameters, 
    GL_EXT_polygon_offset, GL_EXT_rescale_normal, GL_EXT_secondary_color, 
    GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, 
    GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_subtexture, 
    GL_EXT_texture, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, 
    GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, 
    GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, 
    GL_EXT_texture_filter_anisotropic, GL_EXT_texture_lod_bias, 
    GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, 
    GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_APPLE_packed_pixels, 
    GL_ATI_blend_equation_separate, GL_ATI_separate_stencil, 
    GL_IBM_rasterpos_clip, GL_IBM_texture_mirrored_repeat, 
    GL_INGR_blend_func_separate, GL_MESA_ycbcr_texture, GL_MESA_window_pos, 
    GL_NV_blend_square, GL_NV_light_max_exponent, GL_NV_point_sprite, 
    GL_NV_texture_rectangle, GL_NV_texgen_reflection, GL_OES_read_format, 
    GL_SGI_color_matrix, GL_SGIS_generate_mipmap, 
    GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, 
    GL_SGIS_texture_lod, GL_SUN_multi_draw_arrays

    ...truncated visuals part...
sidewinderguy
  • Could you copy-paste the result of the `glxinfo` command? (the lines before the visual array) – tibur Jan 12 '11 at 21:33
  • Just a sidenote: OpenGL often keeps textures around in system RAM so it can swap textures in GPU RAM quickly. So you can't use increased RAM usage as an indicator of no dedicated texture RAM. (This can be worked around using buffers, I believe. There's been talk of adding extensions for GPU-only textures.) – Macke Jan 12 '11 at 21:45
  • Isn't MESA a software-only renderer? (my opengl-linux-fu is a bit weak..) – Macke Jan 12 '11 at 21:48
  • I think in this case I can. For one thing, there is no GPU-RAM with this hardware, it's shared RAM, so the GPU is always using system RAM. Also, I carefully measured the system RAM usage, and it matches very closely with the theoretical size of these textures. – sidewinderguy Jan 12 '11 at 21:59
  • Yes, MESA is a software rasterizer. However, I'm not using MESA - take a look at the vendor string and renderer string. ;-) – sidewinderguy Jan 12 '11 at 22:01
  • @Marcus: Mesa has a software rasterizer, yes, but it also has numerous hardware drivers. – genpfault Jan 12 '11 at 22:03

3 Answers


Shared video memory does not mean that all the available RAM can be used for textures. Usually the graphics unit gets only a slice of the system memory, which is then not available to the rest of the system at all. In your case that slice may be 128 MiB. This is much the same as the AGP aperture used by onboard chipset graphics, or the framebuffer size of Intel Core integrated graphics.

Since OpenGL specifies a purely virtual object model, the driver must keep a copy of each object in "persistent" memory (the contents of the GPU's memory may be invalidated at any time, for example by VT switches, GPU resets, and the like); that's what's being consumed from regular system memory.

datenwolf
  • In my case, the GPU is definitely using "regular" system memory for textures. I can see the available ram go down in 'top' as I create the textures. If the GPU was using memory that was not available to the rest of the system, then I wouldn't see the "free" ram go down in 'top'. (And BTW, it goes down by exactly the amount I would expect, so this isn't due to other memory use in my app). – sidewinderguy Jan 13 '11 at 16:32
  • However, I think you are right about the GPU getting a fixed "slice" of system memory. I just wish I could tell the driver to use more somehow... – sidewinderguy Jan 14 '11 at 19:05
  • @sidewinderguy: The memory you see consumed in addition to the GPU slice is the backing store kept by the OpenGL driver. The contents of the GPU memory (textures, vertex buffer objects, FBOs, etc.) may be trashed, for example if the system is switched to another X11 session, or if you put the system into hibernation – on wakeup the GPU is reset and the contents of the GPU memory slice are lost. However, OpenGL must ensure that applications can use their objects at any time, so a copy of each object is kept in system memory. – datenwolf Jan 14 '11 at 19:13

Use smaller or compressed textures, or palettized ones. Also be wary of geometry/display lists, which also consume GPU resources.

(You can do the palette lookup yourself in a shader if your GL implementation doesn't support such textures.)
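A minimal sketch of what that shader-side lookup could look like, written here as a GLSL 1.10 fragment shader in a C string (this driver reports GLSL 1.10; the uniform names and the 256x1 palette layout are illustrative assumptions, not part of any existing implementation):

    /* Hypothetical palette-lookup fragment shader. The index texture
     * is an 8-bit GL_LUMINANCE image; the palette is a 256x1 RGBA
     * texture holding the actual colors. */
    static const char *palette_lookup_fs =
        "uniform sampler2D u_indices;  /* 8-bit index texture */\n"
        "uniform sampler2D u_palette;  /* 256x1 palette texture */\n"
        "void main()\n"
        "{\n"
        "    /* .r is the palette index normalized to [0,1]. */\n"
        "    float i = texture2D(u_indices, gl_TexCoord[0].st).r;\n"
        "    /* Remap so each of the 256 indices hits a texel center. */\n"
        "    i = i * (255.0/256.0) + 0.5/256.0;\n"
        "    gl_FragColor = texture2D(u_palette, vec2(i, 0.5));\n"
        "}\n";

Note that the index texture needs GL_NEAREST filtering; with linear filtering, neighboring indices get blended into meaningless palette positions.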

Macke
  • Smaller would be nice; however, I have to maintain a certain level of detail. I had considered compressed textures and they sound good, but are they slower to draw to and/or render? I have very limited CPU/GPU performance. Good point about the display lists... I'll have to try running without those and see if I get some benefit. – sidewinderguy Jan 12 '11 at 22:03
  • I've never used palettized textures, are they just as fast to draw to/render as normal textures? Do they take significantly less memory? – sidewinderguy Jan 12 '11 at 22:04
  • That leads me to a question: how could you have 120 textures of 512x512 and show them at full resolution? – tibur Jan 12 '11 at 22:29
  • @sidewinderguy: They're slower due to the palette lookup for each pixel, but if you stick with 256 colors, they're a third of the size of 24-bit RGB textures (8 bits per channel). – Macke Jan 12 '11 at 22:34
  • @Marcus: The slowdown is undesirable. However, one interesting thing about my application is that I really only need about 5 or 6 colors, so I could possibly get some huge savings in memory usage. – sidewinderguy Jan 12 '11 at 22:50
  • @tibur: These textures are all "tiles" containing some "imagery" that will be displayed as part of a 3D map. So they are not all displayed at the same time, it depends on your pan/zoom which tiles you see and at what resolution. – sidewinderguy Jan 12 '11 at 22:52
  • @sidewinderguy: It's not really possible to go below 8-bit textures (well, with a lot of math you could pack two 4-bit textures into one and select each using a shader... but that's just stupid). But if 8-bit won't get you there, perhaps you could think about managing your mipmaps dynamically (since you don't use all the textures all the time). Think Google Earth. That's more complicated, but scales better and doesn't interfere as much with the rest of your app. – Macke Jan 12 '11 at 23:03
  • @sidewinderguy: Compressed DXT textures are even faster to draw, because of better cache usage. The decoding is done entirely in the TMUs. Modern chips don't support palettized textures anymore - the driver will decompress them to 32 bits before loading. – Axel Gneiting Jan 13 '11 at 00:05
  • Compression sounds great! However, you should know that for my particular app, I create all the textures at runtime (drawing into them using an FBO). So my question is, can I draw into a compressed texture? If so, would that be slow/fast? – sidewinderguy Jan 14 '11 at 19:08
  • BTW, thanks for the discussions regarding bit depth/etc. It made me realize that it was ridiculous to be using 32-bit textures (I'm drawing solid colors into them, and I don't need many colors). I switched to 16-bit and that is helping a lot. – sidewinderguy Jan 14 '11 at 19:11
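For reference, the 16-bit switch mentioned in the last comment comes down to requesting a smaller internal format when creating the texture. A sketch (the internal format is only a request; the driver may substitute a different actual format):

    /* Ask for 16-bit storage (5 bits per color channel plus 1 alpha
     * bit) instead of 4-bytes-per-pixel GL_RGBA8, roughly halving
     * the per-texture footprint. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5_A1, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);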

Have you taken into account the lower-resolution copies of each texture that get created for mipmapping? Wikipedia describes mipmaps as

pre-calculated, optimized collections of images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts.

These reduce in steps of powers of 2, so you'll have a 256x256, a 128x128, a 64x64, ... image accompanying the main 512x512 texture. This eats into your texture memory much faster than the single image alone would.

In the example they use on Wikipedia, the original texture is 256x256 and they take the mipmaps all the way down to 1x1. By their calculation:

The increase in storage space required for all of these mipmaps is a third of the original texture

This assumes that you haven't turned mipmapping off, of course.
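Concretely, "not using mipmaps" means uploading only level 0, leaving automatic generation off, and picking a minification filter that doesn't sample mipmap levels. A sketch:

    /* Sketch: create a texture without a mipmap chain. The default
     * minification filter (GL_NEAREST_MIPMAP_LINEAR) expects a full
     * chain, so switch to GL_LINEAR. GL_GENERATE_MIPMAP defaults to
     * off; it's set explicitly here for clarity. Don't call
     * gluBuild2DMipmaps. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* level 0 only */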

As to how you increase the amount of memory you have access to - sorry, no idea.

ChrisF
  • But that only results in 1/4 + 1/16 + 1/64 + ... < 1/2 more memory used, so it still should be under 192MB – brian_d Jan 12 '11 at 21:36
  • @brian_d - it's been a while since I did this stuff so my maths is a bit rusty. – ChrisF Jan 12 '11 at 21:39
  • Good point about the mipmapping. I definitely should turn it off if it isn't already (any idea how to do that in opengl? ;-) -- The problem is, even with these possible savings, I'm gonna need more ram (I'm hoping for 512MB). – sidewinderguy Jan 12 '11 at 21:46
  • @sidewinderguy - I'm sorry, I can't remember (it's been a while) & I think that may be the issue. With mipmaps, each of your textures actually occupies approximately 1.33MB. Dividing 128 by 1.33 gives approximately 96, which is lower than your count of 118 though. – ChrisF Jan 12 '11 at 21:49
  • Don't use `gluBuild2DMipmaps`, and don't enable http://www.opengl.org/registry/specs/SGIS/generate_mipmap.txt. That should mean you don't get mipmaps... – tibur Jan 12 '11 at 21:50
  • @ChrisF: My real problem isn't to figure out why I could only get 118 textures allocated. The real issue is, how do I make more RAM available to me. Regardless, thanks for the input on this mipmapping stuff, I will keep this in mind and I'm sure it will help. – sidewinderguy Jan 12 '11 at 22:13
  • @sidewinderguy - Ah. I was really addressing this bit "So why would I get an 'out of memory' error?" - as you seemed confused as to why you'd run out of memory before you thought you should have. – ChrisF Jan 12 '11 at 22:14
  • @ChrisF - Right, sorry for the ambiguity :-) – sidewinderguy Jan 12 '11 at 22:49