
Focusing just on the uniforms/attributes/varyings for a single vertex/fragment shader pair, I'm wondering how you might model the following system using textures. I'm sticking to 2D.

  • position: The current object's position.
  • translation: The object's proposed next position, based on some CPU calculations done up front.
  • velocity: The object's velocity.
  • rotation: The object's next rotation.
  • forces (like gravity or collision): The sum of the forces acting on the object in each direction.
  • temperature: The object's temperature.
  • mass/density: The object's mass/density.
  • curvature: Movement along a predefined curve (like easing).

At first I wanted to do this:

attribute vec3 a_position;
attribute vec3 a_translation;
attribute vec3 a_velocity;
attribute vec3 a_rotation;
attribute vec3 a_force;
attribute vec3 a_temperature;
attribute vec3 a_material; // mass and density
attribute vec4 a_color;
attribute vec4 a_curvature;

But that might run into the problem of too many attributes.

So then I remembered about using textures for this. Without going into too much detail, I'm just wondering how you might structure the uniforms/attributes/varyings to accomplish this:

attribute vec2 a_position_uv;
attribute vec2 a_translation_uv;
attribute vec2 a_velocity_uv;
attribute vec2 a_rotation_uv;
attribute vec2 a_force_uv;
attribute vec2 a_temperature_uv;
attribute vec2 a_material_uv;
attribute vec2 a_color_uv;
attribute vec2 a_curvature_uv;

If we did that, with the attributes all referencing texture coordinates, then the textures could store vec4 data, and so we might avoid the too-many-attributes problem.

But now I'm not sure how to define the textures for both shaders. I'm wondering if it's just like this:

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

Then in main in the vertex shader, we can use the textures however we like to calculate the position.

void main() {
  vec4 position = texture2D(u_position_texture, a_position_uv);
  vec4 translation = texture2D(u_translation_texture, a_translation_uv);
  // ...
  gl_Position = position * ...
}

In this way we don't necessarily need any varyings in the vertex shader for passing through the color, unless we want to use the result of our calculations in the fragment shader, but I can figure that part out. For now I just would like to know if it's possible to structure the shaders like this, so the final vertex shader would be:

attribute vec2 a_position_uv;
attribute vec2 a_translation_uv;
attribute vec2 a_velocity_uv;
attribute vec2 a_rotation_uv;
attribute vec2 a_force_uv;
attribute vec2 a_temperature_uv;
attribute vec2 a_material_uv;
attribute vec2 a_color_uv;
attribute vec2 a_curvature_uv;

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

void main() {
  vec4 position = texture2D(u_position_texture, a_position_uv);
  vec4 translation = texture2D(u_translation_texture, a_translation_uv);
  // ...
  gl_Position = position * ...
}

And the final fragment shader might be along the lines of:

uniform sampler2D u_position_texture;
uniform sampler2D u_translation_texture;
uniform sampler2D u_velocity_texture;
uniform sampler2D u_rotation_texture;
uniform sampler2D u_force_texture;
uniform sampler2D u_temperature_texture;
uniform sampler2D u_material_texture;
uniform sampler2D u_color_texture;
uniform sampler2D u_curvature_texture;

varying vec2 v_foo;
varying vec2 v_bar;

void main() {
  // ...
  gl_FragColor = position * ... * v_foo * v_bar;
}
– user10869858

2 Answers

The question you linked is not about too many attributes but about too many varyings. 99.9% of WebGL implementations support up to 16 attributes, which is not only on par with the maximum number of texture units supported on most platforms, but should also be fine assuming you don't need to transfer all that data from the vertex shader to the fragment shader. If you're not doing any larger batching, you might just use uniforms to begin with. That said, if you decide for whatever reason to go with textures, you'd probably use only one UV coordinate and align all your data textures; otherwise you'd almost double your bandwidth requirements for no reason.
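
For example, a minimal sketch of that idea (the attribute and uniform names here are mine, not from the question): if every data texture uses the same layout, a single UV attribute can index all of them.

attribute vec2 a_data_uv;  // one shared lookup coordinate per object

uniform sampler2D u_position_texture;
uniform sampler2D u_velocity_texture;
// ... the remaining data textures, all laid out identically

void main() {
  // because the textures are aligned, the same UV finds
  // this object's data in every texture
  vec4 position = texture2D(u_position_texture, a_data_uv);
  vec4 velocity = texture2D(u_velocity_texture, a_data_uv);
  // ...
  gl_Position = vec4(position.xy, 0, 1);
}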

With that out of the way, your dataset itself can be compacted quite a bit. You could store the rotation as a quaternion (in 2D you could even just use a vec3 with x, y, α for position and rotation combined). The velocity and torque (which is missing from your original dataset) are really just the delta between the current position/rotation and the next one, so you only need to store one of those sets (either velocity/torque or next position/rotation). Force seems irrelevant, as you'd apply forces on the CPU. Mass and temperature are scalar values, so they'd totally fit into one vec2 along with some other jazz. But the more I try to make sense of it, the more premature this seems: you can't really do the simulation on the GPU yet, half of your attributes are simulation attributes that are not required for rendering, and it feels like you're optimizing something that isn't even close to existing yet. So, word of advice: just build it and see.
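
To illustrate the compaction in 2D (a sketch with made-up names): one vec3 holds position plus rotation angle, the proposed next position/rotation stands in for velocity/torque, and the scalars share a vec2.

attribute vec3 a_pos_rot;       // x, y, rotation angle
attribute vec3 a_next_pos_rot;  // proposed next x, y, rotation angle
attribute vec2 a_scalars;       // x = mass, y = temperature

void main() {
  // velocity and torque are just deltas, so they don't need their own storage
  vec3 delta = a_next_pos_rot - a_pos_rot;
  vec2 velocity = delta.xy;
  float torque = delta.z;
  // ...
  gl_Position = vec4(a_pos_rot.xy, 0, 1);
}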

– LJᛃ

LJ's answer is arguably the right thing to do, but if you want to store data in textures, all you need is an index per vertex

attribute float index;

You then compute UV coords from that

uniform vec2 textureSize;  // dimensions of the data texture in pixels

float numVec4sPerElement = 8.;  // vec4s of data stored per element (object)
float elementsPerRow = floor(textureSize.x / numVec4sPerElement);
float tx = mod(index, elementsPerRow) * numVec4sPerElement;  // pixel column of this element's first vec4
float ty = floor(index / elementsPerRow);                    // pixel row of this element
vec2 baseTexel = vec2(tx, ty) + 0.5;                         // + 0.5 to sample texel centers
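
To make the arithmetic concrete (these numbers are mine, not part of the answer): suppose the texture is 2048 pixels wide and each element takes 8 vec4s.

float elementsPerRow = floor(2048. / 8.);  // 256 elements fit in each row
// for the vertex with index 300:
float tx = mod(300., 256.) * 8.;           // 44 * 8 = 352
float ty = floor(300. / 256.);             // 1, i.e. the second row
vec2 baseTexel = vec2(352., 1.) + 0.5;     // (352.5, 1.5)

The + 0.5 moves the coordinate from a texel's corner to its center so the lookup can't land on the boundary between texels, and the later division by textureSize converts these pixel coordinates into the 0 to 1 range that texture2D expects.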

Now you can pull out the data. (note: assuming it's a float texture)

vec4 position    = texture2D(dataTexture, baseTexel / textureSize);
vec4 translation = texture2D(dataTexture, (baseTexel + vec2(1,0)) / textureSize);
vec4 velocity    = texture2D(dataTexture, (baseTexel + vec2(2,0)) / textureSize);
vec4 rotation    = texture2D(dataTexture, (baseTexel + vec2(3,0)) / textureSize);
vec4 forces      = texture2D(dataTexture, (baseTexel + vec2(4,0)) / textureSize);

etc...

Of course you might interleave the data more. Say position above is a vec4: maybe position.w holds gravity, translation.w holds mass, etc...
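
A sketch of reading such packed scalars back out, continuing the lookup code above:

vec4 position    = texture2D(dataTexture, baseTexel / textureSize);
vec4 translation = texture2D(dataTexture, (baseTexel + vec2(1,0)) / textureSize);
float gravity = position.w;     // scalar tucked into the otherwise unused .w
float mass    = translation.w;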

You then put the data in a texture

position0, translation0, velocity0, rotation0, forces0, .... 
position1, translation1, velocity1, rotation1, forces1, .... 
position2, translation2, velocity2, rotation2, forces2, .... 
position3, translation3, velocity3, rotation3, forces3, .... 

const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.querySelector('canvas').getContext('webgl');
const ext = gl.getExtension('OES_texture_float');
if (!ext) {
  alert('need OES_texture_float');
}


const vs = `
attribute float index;

uniform vec2 textureSize;
uniform sampler2D dataTexture;

uniform mat4 modelView;
uniform mat4 projection;

varying vec3 v_normal;
varying vec4 v_color;

void main() {
  float numVec4sPerElement = 3.;  // position, normal, color
  float elementsPerRow = floor(textureSize.x / numVec4sPerElement);
  float tx = mod(index, elementsPerRow) * numVec4sPerElement;
  float ty = floor(index / elementsPerRow);
  vec2 baseTexel = vec2(tx, ty) + 0.5;

  // Now you can pull out the data.

  vec3 position = texture2D(dataTexture, baseTexel / textureSize).xyz;
  vec3 normal   = texture2D(dataTexture, (baseTexel + vec2(1,0)) / textureSize).xyz;
  vec4 color    = texture2D(dataTexture, (baseTexel + vec2(2,0)) / textureSize);

  gl_Position = projection * modelView * vec4(position, 1);

  v_color = color;
  v_normal = normal;
}
`;

const fs = `
precision highp float;

varying vec3 v_normal;
varying vec4 v_color;

uniform vec3 lightDirection;

void main() {
  float light = dot(lightDirection, normalize(v_normal)) * .5 + .5;
  gl_FragColor = vec4(v_color.rgb * light, v_color.a);
}
`;

// compile shader, link, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);

// make some vertex data
const radius = 1;
const thickness = .3;
const radialSubdivisions = 20;
const bodySubdivisions = 12;
const verts = twgl.primitives.createTorusVertices(
    radius, thickness, radialSubdivisions, bodySubdivisions);
/*
  verts is now an object like this
  
  {
    position: float32ArrayOfPositions,
    normal: float32ArrayOfNormals,
    indices: uint16ArrayOfIndices,
  }
*/

// convert the vertex data to a texture
const numElements = verts.position.length / 3;
const vec4sPerElement = 3;  // position, normal, color
const maxTextureWidth = 2048;  // you could query this
const elementsPerRow = maxTextureWidth / vec4sPerElement | 0;
const textureWidth = elementsPerRow * vec4sPerElement;
const textureHeight = (numElements + elementsPerRow - 1) /
                      elementsPerRow | 0;

const data = new Float32Array(textureWidth * textureHeight * 4);
for (let i = 0; i < numElements; ++i) {
  const dstOffset = i * vec4sPerElement * 4;
  const posOffset = i * 3;
  const nrmOffset = i * 3;
  data[dstOffset + 0] = verts.position[posOffset + 0];
  data[dstOffset + 1] = verts.position[posOffset + 1];
  data[dstOffset + 2] = verts.position[posOffset + 2];
  
  data[dstOffset + 4] = verts.normal[nrmOffset + 0];
  data[dstOffset + 5] = verts.normal[nrmOffset + 1];
  data[dstOffset + 6] = verts.normal[nrmOffset + 2];  
  
  // color, just make it up
  data[dstOffset +  8] = 1;
  data[dstOffset +  9] = (i / numElements * 2) % 1;
  data[dstOffset + 10] = (i / numElements * 4) % 1;
  data[dstOffset + 11] = 1;
}

// use indices as `index`
const arrays = {
  index: { numComponents: 1, data: new Float32Array(verts.indices), },
};

// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, textureWidth, textureHeight, 0, gl.RGBA, gl.FLOAT, data);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

function render(time) {
  time *= 0.001;  // seconds
  
  twgl.resizeCanvasToDisplaySize(gl.canvas);
  
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);

  const fov = Math.PI * 0.25;
  const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  const near = 0.1;
  const far = 20;
  const projection = m4.perspective(fov, aspect, near, far);
  
  const eye = [0, 0, 3];
  const target = [0, 0, 0];
  const up = [0, 1, 0];
  const camera = m4.lookAt(eye, target, up);
  const view = m4.inverse(camera);

  // set the matrix for each model in the texture data
  const modelView = m4.rotateY(view, time);
  m4.rotateX(modelView, time * .2, modelView);
  
  gl.useProgram(programInfo.program);
  
  // calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  
  // calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
  twgl.setUniforms(programInfo, {
    lightDirection: v3.normalize([1, 2, 3]),
    textureSize: [textureWidth, textureHeight],
    projection: projection,
    modelView: modelView,
  });  
  
  // calls gl.drawArrays or gl.drawElements
  twgl.drawBufferInfo(gl, bufferInfo);

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>

Be aware that pulling data out of textures is slower than getting it from attributes. How much slower probably depends on the GPU. Still, it may be faster than whatever alternative you're considering.

You might also be interested in using textures for batching draw calls, effectively storing things that are traditionally uniforms in a texture.

https://stackoverflow.com/a/54720138/128511

– gman
  • Beautiful! I did not realize you could do it like that just a float for index attribute and a single texture, that's great! – user10869858 Mar 15 '19 at 04:44
  • "Be aware that pulling data out of textures is slower than getting them from attributes." But by having them in textures you can access the neighbor points data and such, which is a huge win I assume. – user10869858 Mar 15 '19 at 05:39
  • "You then compute UV coords from that" Maybe you could explain that part, the equations, I'm not quite following. Why `numVec4sPerElement` why `elementsPerRow`, and the meaning of `tx` and `ty`, and the `+ 0.5` on `baseTexel`. – user10869858 Mar 15 '19 at 05:50
  • For the `+ 0.5` see https://stackoverflow.com/a/27439675/128511. For the rest, how about you explain it to me and I'll tell you if you are correct. – gman Mar 15 '19 at 06:28
  • The `8.` is an integer with weird formatting. `mod` is modulus, but I'm not sure why you chose that. Will read more about tx ty on your link first and try again. – user10869858 Mar 15 '19 at 08:21
  • Maybe [this search will help](https://www.google.com/search?q=indexing+a+1d+array+as+a+2d+array). Also why are you concentrating on the 8 instead of the name of the variable? Maybe compare to the live sample where it's 3 instead of 8 and see if you can figure out why. – gman Mar 15 '19 at 08:24