I managed to read a JSON file stored in my S3 bucket, but I'm having to do a lot of transformation that I don't fully understand.
If I log the data as it comes back, I get Buffer output:
s3.getObject(objParam, (err, data) => {
  if (err) throw err;
  console.log(" ~ file: reader.js ~ line 31 ~ rs ~ data", data.Body);
});
// outputs data <Buffer 7b 0a 20 20 22 74 79 70 65 22 3a 20 22 42 75 66 66 ...
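For reference, a Buffer that holds raw JSON bytes decodes to that JSON in a single step; a minimal sketch (plain Node, no S3, with a made-up payload):

```javascript
// A Buffer wrapping raw JSON text decodes directly with toString().
const body = Buffer.from('{"myProperty":"myData"}', 'utf8');
console.log(body.toString('utf8')); // {"myProperty":"myData"}
```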
If I convert that buffer to a string with toString('utf-8'), I get this:
data.Body.toString('utf8')
// outputs:
{
  "type": "Buffer",
  "data": [
    123,
    34,
    99,
    115,
    112,
    45,
    114,
    101,
    112,
    111,
    114,
    ....
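That intermediate output has the exact shape that JSON.stringify() produces for a Buffer (via Buffer's toJSON method). A minimal sketch reproducing it locally, with a made-up payload:

```javascript
// JSON.stringify() on a Buffer yields the { "type": "Buffer", "data": [...] }
// shape seen above, where "data" is the array of raw byte values.
const original = Buffer.from('{"myProperty":"myData"}', 'utf8');
const serialized = JSON.stringify(original);
console.log(serialized);
// A file written this way contains this serialized form, not the raw JSON text.
```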
The only way I managed to recover my original JSON was to parse that output as JSON, access its data property, create a new Buffer from it, and then convert that back to a string with toString('utf-8').
s3.getObject(objParam, (err, data) => {
  if (err) throw err;
  const jsonBytes = JSON.parse(data.Body.toString('utf-8'));
  const buffer = Buffer.from(jsonBytes.data);
  const bufferBackToString = buffer.toString('utf-8');
  console.log(" ~ file: reader.js ~ line 21 ~ s3.getObject ~ bufferBackToString", bufferBackToString);
});
// output: Logs my original JSON!
{
"myProperty": "myData"
} // or whatever...
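The round trip above can be reproduced without S3 at all. Assuming the object was uploaded as JSON.stringify(someBuffer) rather than as raw JSON text (my guess at the cause, not confirmed here), the recovery steps look like:

```javascript
// Simulate what data.Body would contain if the uploader ran
// JSON.stringify() on a Buffer before writing to S3 (an assumption).
const originalJson = '{"myProperty":"myData"}';
const storedBody = Buffer.from(JSON.stringify(Buffer.from(originalJson, 'utf8')), 'utf8');

// The same recovery steps as in the handler above:
const jsonBytes = JSON.parse(storedBody.toString('utf-8')); // { type: 'Buffer', data: [...] }
const buffer = Buffer.from(jsonBytes.data);                 // rebuild the real bytes
console.log(buffer.toString('utf-8'));                      // the original JSON text
```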
If it was already a Buffer, why did I have to parse it into a new one and convert back to a string again? Could the Buffer have a different encoding?