As part of a sustained effort to learn the basics of graphics programming with WebGPU, I need to parse glTF files in JavaScript. Part of the parsing process involves reading a data URL containing geometry data encoded in Base64 and extracting typed arrays from it (with information such as the element type supplied elsewhere in the file).
I seem to be having a problem correctly reading the raw data from the string into JavaScript: despite checking the semantics of the functions I call against the MDN Web Docs, I can find no flaw in my implementation. Specifics follow.
I have been using this minimal glTF file as an example to check my code against, and in this case my problem reduces to correctly parsing the string below into typed arrays.
```
data:application/octet-stream;base64,AAABAAIAAAAAAAAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAAAAAACAPwAAAAA=
```
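Isolating the Base64 payload from a data URL like this amounts to taking everything after the first comma; a sketch of that step (not my exact code):

```javascript
// Split the Base64 payload out of the data URL. My real code used a
// regex, but the effect is the same as taking everything after the comma.
const dataUrl = "data:application/octet-stream;base64,AAABAAIAAAAAAAAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAAAAAACAPwAAAAA=";
const payload = dataUrl.slice(dataUrl.indexOf(",") + 1);
console.log(payload.length); // 60 Base64 characters, including the '=' padding
```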
Although I originally used a quick regex to get the data portion of the string, in the interest of troubleshooting I replaced it with the literal string `AAABAAIAAAAAAAAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAAAAAACAPwAAAAA=` so as to avoid unnecessary complication. The function I use to ingest that string is given below.
```javascript
async function ingest_string(str)
{
    // Read the input data; atob decodes the Base64 encoding to retrieve the raw data.
    let buffer_blob = new Blob([atob(str)]);
    let buffer_array = await buffer_blob.arrayBuffer();
    // ... use the ingested data here
}
```
By calling `console.log` on `new Uint8Array(buffer_array)`, I can see the outcome of my parsing process byte by byte; the result is the bottom row of the table below. The top row is what should be present at each byte index, in hexadecimal. The tutorial states that the length of the resulting buffer should be 44 bytes; however, my attempt at parsing consistently ends up with a buffer of 46 bytes. Note that the tutorial does not give methods for parsing, so the function above is my own.
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0x00 | 0x00 | 0x01 | 0x00 | 0x02 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x80 | 0x3f | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x00 | 0x80 | 0x3f | 0x00 | 0x00 | 0x00 | 0x00 | - | - |
0 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 194 | 128 | 63 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 194 | 128 | 63 | 0 | 0 | 0 | 0 |
Comparing my data to the exemplar data, I see that the first eight bytes agree. These correspond to 6 bytes of vertex-index data plus 2 bytes of padding. It is the rest of the buffer, which is supposed to represent the float array `[[0,0,0],[1,0,0],[0,1,0]]` of position data encoding a triangle (once flattened), that does not agree. Consistent with the findings from the tabled data, it is these values that come out wrong when I use `Float32Array` to read the typed data.
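For reference, this is how I go from the decoded buffer to typed views; the offsets follow the layout described above (3 `Uint16` indices at offset 0, 2 bytes of padding, then 9 `Float32` position components at offset 8):

```javascript
// Build typed views over the decoded buffer. Offsets follow the layout
// described above: 3 uint16 indices at byte offset 0, then 2 bytes of
// padding, then 9 float32 position components at byte offset 8.
function read_views(buffer_array) {
  const indices = new Uint16Array(buffer_array, 0, 3);
  const positions = new Float32Array(buffer_array, 8, 9);
  return { indices, positions };
}
```

On a correct 44-byte buffer this should yield the indices `[0, 1, 2]` and the flattened positions `[0,0,0, 1,0,0, 0,1,0]`; on my 46-byte buffer the positions come out wrong.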
**What else I have tried**
- Alternating between `atob`, `btoa`, and a no-op when decoding the string, in case I had the direction wrong.
- Using a `DataView` to look at the floats with both endiannesses and at every byte offset.
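The `DataView` scan in the last bullet is essentially this (a sketch of what I ran, not the exact code):

```javascript
// Scan the buffer at every byte offset, in both endiannesses, recording
// wherever a float32 reads back as exactly 1.0 (the value I expect to
// find in the position data).
function scan_for_one(buffer_array) {
  const view = new DataView(buffer_array);
  const hits = [];
  for (let offset = 0; offset + 4 <= view.byteLength; offset++) {
    for (const littleEndian of [true, false]) {
      if (view.getFloat32(offset, littleEndian) === 1.0) {
        hits.push({ offset, littleEndian });
      }
    }
  }
  return hits;
}
```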
Looking at the results, it seems that I am parsing the data incorrectly, but everything looks right to me. I don't think the given string itself is wrong, because I have had the same problem with a similar minimal example file. Can anyone comment on what I need to do to ensure I parse the data correctly?
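Finally, here is a complete self-contained snippet that reproduces the 46-byte result (the same logic as my ingest function above, with the Base64 string inlined; it runs as-is in Node 18+, where `atob` and `Blob` are globals):

```javascript
// Self-contained reproduction: decode the Base64 payload the same way as
// my ingest function, then report the resulting buffer length.
const b64 = "AAABAAIAAAAAAAAAAAAAAAAAAAAAAIA/AAAAAAAAAAAAAAAAAACAPwAAAAA=";

async function ingest_string(str) {
  // atob yields a "binary string"; it is then wrapped in a Blob to
  // obtain an ArrayBuffer.
  let buffer_blob = new Blob([atob(str)]);
  return await buffer_blob.arrayBuffer();
}

ingest_string(b64).then((buffer_array) => {
  console.log(buffer_array.byteLength); // prints 46 for me; the tutorial says 44
});
```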