
I have a 6 TB file on an AWS EC2 instance, and I want to split it into multiple 1 TB files so the pieces can be uploaded to an AWS S3 bucket. I used this command:

`split -b1T -d myfile myfil.`

but it runs so slowly that after an hour, only 60 GB had been split out.

How can I make it faster? Or is there a way to split binary files more quickly?
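One way to avoid the slow sequential `split` pass entirely is to carve out each 1 TiB slice with `dd` and pipe it straight to S3, so no intermediate part files are written to disk at all. A minimal sketch, assuming GNU coreutils and the AWS CLI are installed and the file is roughly 6 TiB; the bucket name `my-bucket` and the key names are hypothetical placeholders:

```bash
# Stream each 1 TiB slice of the file directly to S3 without writing
# intermediate part files. Bucket/key names are placeholders.
# 1 TiB = 1048576 MiB, so each slice is 1048576 blocks of 1 MiB.
for i in 0 1 2 3 4 5; do
  dd if=myfile bs=1M skip=$((i * 1048576)) count=1048576 \
     iflag=fullblock status=none \
    | aws s3 cp - "s3://my-bucket/myfile.part$i" \
        --expected-size 1099511627776  # size hint so multipart chunking fits a 1 TiB stream
done
```

Streaming this way roughly halves the disk traffic compared with running `split` and then uploading, since each byte is read once instead of read, written back to disk, and read again.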

  • Where are you running this command? Is it on your own computer, or is it on an Amazon EC2 instance? This might be helpful: [bash - Using GNU Parallel With Split - Stack Overflow](https://stackoverflow.com/questions/15144655/using-gnu-parallel-with-split) and [bash - How to split files up and process them in parallel and then stitch them back? - Stack Overflow](https://stackoverflow.com/questions/29033016) and [multithreading - How to split a file to multiple files with multiple threads? - Unix & Linux Stack Exchange](https://unix.stackexchange.com/questions/415972/) – John Rotenstein Aug 30 '22 at 07:29
  • @JohnRotenstein Thank you for your reply. I am running this on an AWS EC2 instance. – ZhengwenGuo Aug 30 '22 at 10:31
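Following up on the parallel-splitting links in the comments above: the same idea can be expressed with plain background jobs, extracting every 1 TiB slice concurrently with `dd` instead of one sequential pass. A minimal sketch, assuming GNU coreutils and that a single sequential reader is not already saturating the volume:

```bash
# Extract all six 1 TiB slices in parallel instead of one sequential pass.
# 1 TiB = 16384 blocks of 64 MiB.
for i in 0 1 2 3 4 5; do
  dd if=myfile of="myfile.0$i" bs=64M \
     skip=$((i * 16384)) count=16384 iflag=fullblock status=none &
done
wait  # block until every background dd has finished
```

If the original `split` was already limited by the EBS volume's throughput cap, running several readers in parallel will not help; in that case raising the volume's provisioned throughput, or using instance-store NVMe, is what changes the ceiling.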

0 Answers