
I have hundreds of large text files, named test1, test2, test3, ..., test100, in a folder.

Each of these test files contains text entries. My job is to read each text file, split each test$i file on every blank line, and create a new text file from each piece.

For example: if test1 has 3 blank lines, then 4 new text files will be generated, named test1.1, test1.2, test1.3, test1.4 { Reference = Splitting large text file on every blank line}

I did this for a single file and it works perfectly; I get the files test1.1, test1.2, test1.3, test1.4:

awk -v RS= '{print > ("test1." NR ".txt")}' test1

But when I tried doing this for multiple files in a loop:

for i in {1..100}; do awk -v RS= '{print > ("test" $i "." NR ".txt")}' test$i; done

it does not work. I am wondering why the value of $i does not pass into the awk command, so it does not produce the blank-line-separated files test1.1, test1.2, ..., test2.1, test2.2, and so on.

One issue I am seeing is: "File name too long". Reference link: Limit on file name length in bash

Kindly help me understand and fix this, or suggest a better approach for this task.

Linguist

1 Answer


Using awk only:

$ awk -v RS= '{f=(FILENAME "." FNR ".txt"); print > f; close(f)}' test*
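(An aside, not part of the original answer; the file names and /tmp directory below are illustrative.) The loop in the question fails because single quotes stop the shell from expanding $i, so awk sees its own uninitialized variable i, and `$i` evaluates to `$0`, the whole record. That is also why the "File name too long" error appears: the entire paragraph becomes part of the file name. If you want to keep the loop, one fix is to pass the shell variable into awk with `-v`; a minimal sketch:

```shell
#!/bin/sh
# Sketch: create two small sample files, then split them in a loop,
# passing the shell variable i into awk with -v instead of relying
# on shell expansion inside single quotes.
mkdir -p /tmp/split_demo && cd /tmp/split_demo || exit 1
printf 'a\n\nb\n'    > test1
printf 'c\n\nd\ne\n' > test2
for i in 1 2; do
  # RS= enables paragraph mode; close(f) avoids "too many open files"
  awk -v RS= -v i="$i" '{f = "test" i "." NR ".txt"; print > f; close(f)}' "test$i"
done
ls test1.*.txt test2.*.txt
```

The `FILENAME`-based answer above is simpler because it needs no loop at all; the `-v` form is mainly useful when the loop has to exist for other reasons.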
James Brown
  • Yes, thanks. This works absolutely fine, but I am wondering: if I need to write the output f=(FILENAME "." FNR ".txt") to a different location, say the folder Desktop/tmp, how do I go ahead? – Linguist Apr 23 '17 at 09:51
  • 1
    `f=("/path/to/" FILENAME "." FNR ".txt")` – James Brown Apr 23 '17 at 09:57
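Expanding on that comment (the /tmp paths and the testA file name below are assumptions for illustration): awk will not create the target directory for you, so it has to exist before the script runs. A minimal sketch:

```shell
#!/bin/sh
# Sketch: write the split pieces into a separate output directory.
# The directory must be created first; awk's print > f cannot mkdir.
mkdir -p /tmp/split_out
cd /tmp || exit 1
printf 'x\n\ny\n' > testA
# Prefix the output path onto FILENAME, as in the comment above.
# Note: this assumes FILENAME has no directory component of its own.
awk -v RS= '{f = ("/tmp/split_out/" FILENAME "." FNR ".txt"); print > f; close(f)}' testA
ls /tmp/split_out
```

If the input files are given with paths (e.g. dir/testA), FILENAME would include the directory part, and you would want something like gawk's `gensub()` or a `sub()` call to strip it first.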