
I'm trying to read more than one large txt file from S3, line by line, with this code:

    const AWS = require('aws-sdk');
    const readline = require('readline');

    const s3 = new AWS.S3();

    const params = {
        Bucket: bucket,
        Key: key,
    };
    const s3ReadStream = s3.getObject(params).createReadStream();

    const rl = readline.createInterface({
        input: s3ReadStream,
        terminal: false
    });

    let myReadPromise = new Promise((resolve, reject) => {
        let line_number = 1;

        rl.on('line', async (line) => {
            // some code
        });

        rl.on('error', reject);
        rl.on('close', resolve);
    });

but I get this error:

FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory

I increased the memory to 8 GB using this command:

 set NODE_OPTIONS="--max-old-space-size=8192"
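
It may be worth confirming the larger limit actually took effect; a quick check using Node's built-in v8 module (heap_size_limit is reported in bytes, so with the flag above this should print roughly 8192):

    // Print V8's heap limit in MB to verify --max-old-space-size was applied.
    const v8 = require('v8');
    console.log(v8.getHeapStatistics().heap_size_limit / (1024 * 1024));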

I'm still getting the same issue. Any idea how to read these files in chunks?
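
One likely contributor, independent of the heap size: rl.on('line', async (line) => ...) does not pause the stream while the async handler runs, so readline keeps emitting lines and they can accumulate in memory faster than they are processed. Since Node 11.4 the readline interface is async-iterable, and iterating it with for await should pull lines one at a time with backpressure instead. A minimal sketch, assuming aws-sdk v2 and a hypothetical processLine standing in for the per-line work:

    const AWS = require('aws-sdk');
    const readline = require('readline');

    const s3 = new AWS.S3();

    async function readLines(bucket, key) {
        const s3ReadStream = s3.getObject({ Bucket: bucket, Key: key }).createReadStream();
        const rl = readline.createInterface({ input: s3ReadStream, terminal: false });

        let line_number = 1;
        // for await pauses the underlying stream between iterations,
        // so each line is awaited before the next one is read.
        for await (const line of rl) {
            await processLine(line_number, line); // hypothetical per-line handler
            line_number++;
        }
    }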

  • Does this answer your question? [Split S3 file into smaller files of 1000 lines](https://stackoverflow.com/questions/56139995/split-s3-file-into-smaller-files-of-1000-lines) – Yayotrón May 28 '21 at 10:47
  • @Yayotrón, I don't want to split them in the bucket; unfortunately this does not answer my question – rashed omar May 28 '21 at 12:27
  • Unfortunately you cannot download from S3 in chunks; the only alternative is to split before downloading, or download the full file and then split it on your side. I'd recommend you split using the method described in the question I showed you before: aws s3 cp s3://my-bucket/big-file.txt - | split -l 1000 - output and then pass the smaller chunks to your application. – Yayotrón May 28 '21 at 12:30
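
For what it's worth, S3's GetObject does accept a standard HTTP Range header, so a large object can be fetched in byte ranges without splitting it in the bucket. A minimal sketch with aws-sdk v2, assuming bucket and key are defined elsewhere; note that a byte range can end mid-line, so line-oriented processing still has to stitch chunk boundaries together:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    // Fetch a single byte range of the object; S3 honours the Range header.
    async function getChunk(bucket, key, start, end) {
        const res = await s3.getObject({
            Bucket: bucket,
            Key: key,
            Range: `bytes=${start}-${end}`, // inclusive byte range
        }).promise();
        return res.Body; // Buffer holding just this slice
    }

    // Walk the whole object in 8 MB chunks.
    async function readInChunks(bucket, key) {
        const { ContentLength: size } = await s3.headObject({ Bucket: bucket, Key: key }).promise();
        const chunkSize = 8 * 1024 * 1024;
        for (let start = 0; start < size; start += chunkSize) {
            const end = Math.min(start + chunkSize, size) - 1;
            const chunk = await getChunk(bucket, key, start, end);
            // process chunk here; a range can end mid-line, so carry any
            // trailing partial line over into the next iteration
        }
    }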
