I'll preface this by saying I'm rather new to these topics, and have been giving myself a crash course the last few days for a school project. I apologize that this is quite long, but as I'm not sure where the problem is I figured I'd try to be thorough.
I'm attempting to use a framebuffer to draw to some RGBA textures, which are then used for later drawing (the values in the textures act as per-particle data for drawing particles -- the textures store values such as each particle's position and velocity).
However, as best I can tell, the textures are blank after I draw to them. Sampling values from the texture gives back (0, 0, 0, 1). My later drawing pass also seems to confirm this: all the particles appear to be drawn overlapping at the origin (an observation based on my blend function and some hard-coded test colour values).
I'm using OpenGL 4 with SDL2 and GLEW.
I'll attempt to post everything relevant in an orderly fashion:
The framebuffer class creates the framebuffer and attaches (3) textures. It has:

- GLuint buffer_id_, which is the framebuffer's id, initialized to 0
- unsigned int width, height, which are the width and height of the textures in pixels
- numTextures, which is how many textures to create for the framebuffer (again, in this case 3)
- std::vector<GLuint> textures_, for all the textures associated with the framebuffer
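Putting that together, the class declaration amounts to roughly the following (a sketch reconstructed from the description above, not a verbatim paste; the constructor and accessors match how the class is used further down):

class Framebuffer {
public:
    Framebuffer( unsigned int width, unsigned int height, unsigned int numTextures );
    bool init();
    GLuint getBufferID() const { return buffer_id_; }
    GLuint getTexture( unsigned int i ) const { return textures_[i]; }
private:
    GLuint buffer_id_;              // Framebuffer id, initialized to 0.
    unsigned int width, height;     // Texture dimensions in pixels.
    unsigned int numTextures;       // How many textures/colour attachments to create (3 here).
    std::vector<GLuint> textures_;  // Textures attached to the framebuffer.
};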
bool Framebuffer::init() {
    assert( numTextures <= GL_MAX_COLOR_ATTACHMENTS );
    // Get the buffer id.
    glGenFramebuffers( 1, &buffer_id_ );
    glBindFramebuffer( GL_FRAMEBUFFER, buffer_id_ );
    // Generate the textures.
    for (unsigned int i = 0; i < numTextures; ++i) {
        GLuint tex;
        glGenTextures( 1, &tex );
        glBindTexture(GL_TEXTURE_2D, tex);
        // Give empty image to OpenGL.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, 0);
        // Use GL_NEAREST, since we don't want any kind of averaging across values:
        // we just want one pixel to represent a particle's data.
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        // This probably isn't necessary, but we don't want to have UV coords past the image edges.
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        textures_.push_back(tex);
    }
    glBindTexture(GL_TEXTURE_2D, 0);
    // Bind our textures to the framebuffer.
    GLenum drawBuffers[GL_MAX_COLOR_ATTACHMENTS];
    for (unsigned int i = 0; i < numTextures; ++i) {
        GLenum attach = GL_COLOR_ATTACHMENT0 + i;
        glFramebufferTexture(GL_FRAMEBUFFER, attach, textures_[i], 0);
        drawBuffers[i] = attach;
    }
    glDrawBuffers( numTextures, drawBuffers );
    if ( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE ) {
        std::cerr << "Error: Failed to create framebuffer." << std::endl;
        return false;
    }
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    return true;
}
Two matching framebuffers are made by the particle system, so that one can be used as input to the shader programs and the other as output.
bool AmbientParticleSystem::initFramebuffers() {
    framebuffers_[0] = new Framebuffer(particle_texture_width_, particle_texture_height_, NUM_TEXTURES_PER_FRAMEBUFFER);
    framebuffers_[1] = new Framebuffer(particle_texture_width_, particle_texture_height_, NUM_TEXTURES_PER_FRAMEBUFFER);
    return framebuffers_[0]->init() && framebuffers_[1]->init();
}
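The plan is the usual ping-pong scheme: each update reads from one framebuffer's textures and writes into the other, then the roles swap. Roughly like this (a simplified sketch, not my exact update code; current_ is just an illustrative index):

// Simplified sketch of the intended ping-pong update (illustrative only).
Framebuffer* input  = framebuffers_[current_];
Framebuffer* output = framebuffers_[1 - current_];
glBindFramebuffer( GL_FRAMEBUFFER, output->getBufferID() );
for (unsigned int i = 0; i < NUM_TEXTURES_PER_FRAMEBUFFER; ++i) {
    // Bind the previous state's textures as inputs to the update shader.
    glActiveTexture( GL_TEXTURE0 + i );
    glBindTexture( GL_TEXTURE_2D, input->getTexture(i) );
}
// ... run the update shader over the full-size quad, writing the new state into 'output' ...
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
current_ = 1 - current_;    // Swap which framebuffer is read vs. written next time.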
Then I attempt to draw an initial state to the first framebuffer:
void AmbientParticleSystem::initParticleDrawing() {
    glBindFramebuffer( GL_FRAMEBUFFER, framebuffers_[0]->getBufferID() );
    // Store the previous viewport.
    GLint prevViewport[4];
    glGetIntegerv( GL_VIEWPORT, prevViewport );
    glViewport( 0, 0, particle_texture_width_, particle_texture_height_ );
    glClear( GL_COLOR_BUFFER_BIT );
    glDisable( GL_DEPTH_TEST );
    glBlendFunc( GL_ONE, GL_ZERO );
    init_shader_->use(); // This sets glUseProgram to the init shader.
    GLint attrib = init_variable_ids_[constants::init::Variables::IN_VERTEX_POS]->id;
    glEnableVertexAttribArray( attrib );
    glBindBuffer( GL_ARRAY_BUFFER, full_size_quad_->getBufferID() );
    glVertexAttribPointer( attrib, full_size_quad_->vertexSize,
                           GL_FLOAT,    // type
                           GL_FALSE,    // normalized?
                           0,           // stride
                           (void*)0 );  // Array buffer offset
    glDrawArrays( GL_TRIANGLES, 0, full_size_quad_->numVertices );
    // Here I'm just printing some sample values to confirm it is blank/black.
    glBindTexture( GL_TEXTURE_2D, framebuffers_[0]->getTexture(0) );
    GLfloat *pixels = new GLfloat[particle_texture_width_ * particle_texture_height_ * 4];
    glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels );
    std::cout << "some pixel entries:\n";
    std::cout << "pixel0: " << pixels[0] << " " << pixels[1] << " " << pixels[2] << " " << pixels[3] << std::endl;
    std::cout << "pixel10: " << pixels[40] << " " << pixels[41] << " " << pixels[42] << " " << pixels[43] << std::endl;
    std::cout << "pixel100: " << pixels[400] << " " << pixels[401] << " " << pixels[402] << " " << pixels[403] << std::endl;
    std::cout << "pixel10000: " << pixels[40000] << " " << pixels[40001] << " " << pixels[40002] << " " << pixels[40003] << std::endl;
    // They all print 0, 0, 0, 1.
    delete[] pixels;
    glBindBuffer( GL_ARRAY_BUFFER, 0 );
    glDisableVertexAttribArray( attrib );
    init_shader_->stopUsing();
    glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    // Return the viewport to its previous state.
    glViewport( prevViewport[0], prevViewport[1], prevViewport[2], prevViewport[3] );
}
You can see this is also where I try reading back some pixel values, which all come back as (0, 0, 0, 1).
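As a sanity check, reading back through the framebuffer itself should be an equivalent way to inspect attachment 0; roughly like this (a sketch, not code I'm actually running):

// Alternative readback sketch: read colour attachment 0 directly from the FBO.
glBindFramebuffer( GL_FRAMEBUFFER, framebuffers_[0]->getBufferID() );
glReadBuffer( GL_COLOR_ATTACHMENT0 );
std::vector<GLfloat> rgba( particle_texture_width_ * particle_texture_height_ * 4 );
glReadPixels( 0, 0, particle_texture_width_, particle_texture_height_, GL_RGBA, GL_FLOAT, rgba.data() );
glBindFramebuffer( GL_FRAMEBUFFER, 0 );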
The full_size_quad_ used here is defined by:
// Create quad for textures to draw onto.
static const GLfloat quad_array[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     1.0f,  1.0f, 0.0f,
};
std::vector<GLfloat> quad(quad_array, quad_array + 18);
full_size_quad_ = new VertexBuffer(3, 6, quad);
VertexBuffer is my own class, which I don't think I'll need to show in full here. It just glGens and glBinds the vertex array and buffer objects.
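For completeness, its constructor amounts to roughly this (a paraphrased sketch, not my actual code; only vertexSize, numVertices, and getBufferID() are names actually used above, and vao_/buffer_id_ are illustrative member names):

VertexBuffer::VertexBuffer( int vertSize, int numVerts, const std::vector<GLfloat> &data )
    : vertexSize(vertSize), numVertices(numVerts) {
    // Generate and bind the vertex array object and vertex buffer, then upload the data.
    glGenVertexArrays( 1, &vao_ );
    glBindVertexArray( vao_ );
    glGenBuffers( 1, &buffer_id_ );
    glBindBuffer( GL_ARRAY_BUFFER, buffer_id_ );
    glBufferData( GL_ARRAY_BUFFER, data.size() * sizeof(GLfloat), data.data(), GL_STATIC_DRAW );
}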
The shader used here has this for the vertex shader:
#version 400 core
in vec3 inVertexPos;
void main() {
    gl_Position = vec4(inVertexPos, 1.0);
}
and this for the fragment shader:
#version 400 core
layout(location = 0) out vec4 position;
layout(location = 1) out vec4 velocity;
layout(location = 2) out vec4 other;
uniform vec2 uResolution;
// http://stackoverflow.com/questions/4200224/random-noise-functions-for-glsl
float rand(vec2 seed) {
    return fract(sin(dot(seed.xy,vec2(12.9898,78.233))) * 43758.5453);
}
void main() {
    vec2 uv = gl_FragCoord.xy/uResolution.xy;
    vec3 pos = vec3(uv.x, uv.y, rand(uv));
    vec3 vel = vec3(-2.0, 0.0, 0.0);
    vec4 col = vec4(1.0, 0.3, 0.1, 0.5);
    position = vec4(pos, 1.0);
    velocity = vec4(vel, 1.0);
    other = col;
}
uResolution is the width and height of the textures in pixels, set by:
init_shader_->use();
glUniform2f( init_uniform_ids_[constants::init::Uniforms::U_RESOLUTION]->id, particle_texture_width_, particle_texture_height_ );
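(The id there is the uniform location for uResolution; a sketch of the corresponding lookup at shader setup, where getProgramID() is an illustrative accessor name rather than my exact code:)

// Illustrative lookup of the uResolution uniform location (accessor name assumed).
GLint resolution_loc = glGetUniformLocation( init_shader_->getProgramID(), "uResolution" );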
Out of curiosity I tried changing position = vec4(pos, 1.0); to different values, but my pixel printout still gave (0, 0, 0, 1).
I've been debugging this for about 20-30 hours now and have looked through a dozen or so tutorials and related questions here, but for the last several hours I don't feel I've gained any ground.
Does anything here stand out as to why the textures appear to be blank/black, or is there anything else that needs addressing? I'm using this project to learn about shaders, so I'm quite new to all of this. Any help would be immensely appreciated.