This is something of a follow-up to my question Draw textures to canvas async / in sequence deletes old textures, but with a different approach recommended to me by a friend. I am just learning WebGL, so bear with me.
My goal
- Load images asynchronously
- Render those images to a single WebGL canvas in a side-by-side "tiled" fashion. Each image has coordinates that dictate where on the canvas it should be rendered (see the Tile sketch just below this list)
- On each async image load, treat the whole canvas as a single texture, then apply some image processing in the shader to the texture as a whole
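For reference, the tile objects used in the snippets below look roughly like this (a simplified sketch; position is in canvas pixels):
interface Tile {
  path: string; // URL of the tile image
  position: { x: number; y: number }; // top-left corner of the tile on the canvas, in pixels
}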
Using a framebuffer
My understanding is that you can create a framebuffer with a target texture attached, render textures into that framebuffer (which writes them into the target texture), and then render the target texture to the screen.
// First I create the framebuffer and a target texture to render into,
// allocate the texture at canvas size, and attach it to the framebuffer
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

const targetTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, targetTexture);
// Allocate storage for the target texture (no data yet, sized to the canvas)
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.canvas.width, gl.canvas.height,
  0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);

gl.framebufferTexture2D(
  gl.FRAMEBUFFER,
  gl.COLOR_ATTACHMENT0,
  gl.TEXTURE_2D,
  targetTexture,
  0
);
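As a sanity check on the attachment (my addition, based on the checkFramebufferStatus docs, not something from the tutorials I'm following), I believe the framebuffer should report complete at this point:
const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
  console.warn('Framebuffer incomplete:', status);
}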
My idea is that on every image load, you can create a texture from the image. After setting up the vertex attributes for that draw, you can call drawArrays, which draws into the framebuffer. After doing that, you should be able to unbind the framebuffer and call drawArrays again, which should...draw the framebuffer's contents to the screen? This is where I am getting confused:
// Let's pretend we have a few tile urls in an array for now:
tiles.forEach((tile) => {
  const image = new Image();
  image.onload = () => render(image, tile);
  image.src = tile.path;
});
function render(tileImage: HTMLImageElement, tile: Tile) {
  // look up where the vertex data needs to go.
  var positionLocation = gl.getAttribLocation(program, 'a_position');
  var texcoordLocation = gl.getAttribLocation(program, 'a_texCoord');

  // Create a buffer for the rectangle's vertex positions (two triangles, in pixel coordinates)
  var positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);

  // Set a rectangle the same size as the image, at the tile's position.
  // See the Appendix of the question for details
  setRectangle(
    gl,
    tile.position.x,
    tile.position.y,
    tileImage.width,
    tileImage.height
  );

  // provide texture coordinates for the rectangle.
  var texcoordBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
  gl.bufferData(
    gl.ARRAY_BUFFER,
    new Float32Array([
      0.0, 0.0,
      1.0, 0.0,
      0.0, 1.0,
      0.0, 1.0,
      1.0, 0.0,
      1.0, 1.0,
    ]),
    gl.STATIC_DRAW
  );
  // Create a texture and bind it to the gl context
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);

  // Set the parameters so we can render any size image.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  // Upload the tile image to the texture
  gl.texImage2D(
    gl.TEXTURE_2D,
    0,
    gl.RGBA,
    gl.RGBA,
    gl.UNSIGNED_BYTE,
    tileImage
  );
  // lookup uniforms
  var resolutionLocation = gl.getUniformLocation(program, 'u_resolution');
  var textureSizeLocation = gl.getUniformLocation(program, 'u_textureSize');

  // Tell WebGL how to convert from clip space to pixels
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Tell it to use our program (pair of shaders)
  gl.useProgram(program);

  // Turn on the position attribute
  gl.enableVertexAttribArray(positionLocation);
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

  // Turn on the texcoord attribute
  gl.enableVertexAttribArray(texcoordLocation);
  gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
  gl.vertexAttribPointer(texcoordLocation, 2, gl.FLOAT, false, 0, 0);

  // set the resolution and size of the image
  gl.uniform2f(resolutionLocation, gl.canvas.width, gl.canvas.height);
  gl.uniform2f(textureSizeLocation, 256, 256);

  // bind the framebuffer and draw arrays - draw TO the framebuffer?
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawArrays(gl.TRIANGLES, 0, 6);

  // Unbind the framebuffer and draw...to the canvas?
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}
It is in those last few lines that I get confused. I know this is not working because, if I put an artificial delay on each image load, I can see each image get drawn to the canvas, but when the next one is drawn, the previous one disappears.
Codesandbox demonstrating the issue
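My best guess is that the second draw call needs its own setup - binding targetTexture instead of the tile texture, and drawing a rectangle that covers the whole canvas - roughly like the untested sketch below, but I haven't been able to make that work:
// After drawing the tile into the framebuffer, unbind it...
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
// ...sample from the framebuffer's target texture instead of the tile texture...
gl.bindTexture(gl.TEXTURE_2D, targetTexture);
// ...and draw a rectangle covering the whole canvas
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
setRectangle(gl, 0, 0, gl.canvas.width, gl.canvas.height);
gl.drawArrays(gl.TRIANGLES, 0, 6);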
I have read many discussions on this. In WebGL display framebuffer?, gman shows how to render to a framebuffer and then to the screen for a single image. The question How to work with framebuffers in webgl? is very similar as well. Most of the questions I've found have been either like this - rendering a single simple image to a framebuffer, then to the screen - or far beyond my level at this point, e.g. using a framebuffer to render to the faces of a spinning cube. I can't seem to find any information on how to take simple 2d images and render them to a WebGL canvas asynchronously.
I have also seen several recommendations to draw the images to a 2d canvas and use that canvas as the source of a single 2d texture. For example, in the question Can I create big texture from other small textures in webgl?, gman recommends:
If you have to do it at runtime for some reason then the easiest way to combine images into a single texture is to first load all your images, then use the canvas 2D api to draw them into a 2D canvas, then use that canvas as a source for texImage2D in WebGL
I don't understand why this is preferable.
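For concreteness, my reading of that suggestion is something like the rough sketch below (waiting for every tile up front, then compositing with the 2D API):
// Load all tile images first
Promise.all(
  tiles.map(
    (tile) =>
      new Promise<{ image: HTMLImageElement; tile: Tile }>((resolve) => {
        const image = new Image();
        image.onload = () => resolve({ image, tile });
        image.src = tile.path;
      })
  )
).then((loaded) => {
  // Composite the tiles into a single 2D canvas
  const canvas2d = document.createElement('canvas');
  canvas2d.width = gl.canvas.width;
  canvas2d.height = gl.canvas.height;
  const ctx = canvas2d.getContext('2d')!;
  loaded.forEach(({ image, tile }) => {
    ctx.drawImage(image, tile.position.x, tile.position.y);
  });
  // The composited canvas becomes the source of one WebGL texture
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas2d);
});
But waiting for every image before drawing anything seems to defeat the goal of rendering each tile as soon as it loads.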
How can I load these images asynchronously and stitch them together within a single WebGL canvas?
Appendix:
export function setRectangle(
  gl: WebGLRenderingContext,
  x: number,
  y: number,
  width: number,
  height: number
) {
  const x1 = x,
    x2 = x + width,
    y1 = y,
    y2 = y + height;
  gl.bufferData(
    gl.ARRAY_BUFFER,
    // prettier-ignore
    new Float32Array([
      x1, y1,
      x2, y1,
      x1, y2,
      x1, y2,
      x2, y1,
      x2, y2,
    ]),
    gl.STATIC_DRAW
  );
}