in my head I have:
read "original_file",
change "ENTRY1" on line 3
to the FIRST word in data_file,
write out newfile1;
then read "original_file" again,
change "ENTRY1" on line 3
to the SECOND word in data_file,
write out newfile2;
and repeat through the entire data_file.
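What's in my head above could be sketched as a bash loop around sed (just a sketch: it assumes ENTRY1 sits on line 3 of original_file, that data_file holds one plain word per line, and the sample inputs below are made-up and abbreviated):

```shell
#!/bin/bash
# Sketch only: assumes the placeholder ENTRY1 is on line 3 of original_file
# and that data_file holds one plain alphanumeric word per line.

# made-up, abbreviated sample inputs for illustration
printf '{\n"id": "b5902627-0ba0-40b6-8127-834a3ddd6c2c",\n"name": "ENTRY1",\n"auto": true\n}\n' > original_file
printf 'AAA11\nBBB12\nCCC13\n' > data_file

i=1
while IFS= read -r word; do
    # substitute on line 3 only; the words must not contain
    # sed metacharacters such as / or &
    sed "3s/ENTRY1/$word/" original_file > "newfile$i"
    i=$((i + 1))
done < data_file
```

Each pass re-reads original_file, so 31,000 words means 31,000 sed invocations; slow, but fine for a one-off.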
excerpt/example:
original_file:
line1 {
line2 "id": "b5902627-0ba0-40b6-8127-834a3ddd6c2c",
line3 "name": "ENTRY1",
line4 "auto": true,
line5 "contexts": [],
line6 "responses": [
line7 {
------------
data_file (simply a word/number list):
line1 AAA11
line2 BBB12
line3 CCC13
..100 lines/words..
-------------
The first output/finished file would look like:
newfile1:
line1 {
line2 "id": "b5902627-0ba0-40b6-8127-834a3ddd6c2c",
line3 "name": "AAA11",
line4 "auto": true,
line5 "contexts": [],
line6 "responses": [
line7 {
------------
and the second:
newfile2:
line1 {
line2 "id": "b5902627-0ba0-40b6-8127-834a3ddd6c2c",
line3 "name": "BBB12",
line4 "auto": true,
line5 "contexts": [],
line6 "responses": [
line7 {
------------
..and so on.
I have been trying with awk, something like
awk 'FNR==3 { if ((getline line < "data_file") > 0) sub(/ENTRY1/, line) } 1' original_file > newfile
and.. as a start of a shell script (double quotes, so the variables actually expand inside the sed command):
#!/bin/bash
n1=3    # line to change in original_file
n2=2    # which word to take from data_file
word=$(sed -n "${n2}p" data_file)
sed "${n1}s/ENTRY1/$word/" original_file > newfile
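If spawning one process per word is too slow for 31,000 lines, a single awk pass could do it (again only a sketch, with made-up sample inputs): buffer original_file in an array, then write one newfileN per word of data_file:

```shell
#!/bin/bash
# made-up, abbreviated sample inputs for illustration
printf '{\n"id": "b5902627-0ba0-40b6-8127-834a3ddd6c2c",\n"name": "ENTRY1",\n"auto": true\n}\n' > original_file
printf 'AAA11\nBBB12\nCCC13\n' > data_file

awk '
    NR == FNR { tmpl[NR] = $0; nlines = NR; next }   # first file: buffer the template
    {
        out = "newfile" FNR                          # one output file per data word
        for (i = 1; i <= nlines; i++) {
            line = tmpl[i]
            if (i == 3) sub(/ENTRY1/, $0, line)      # swap the placeholder on line 3
            print line > out
        }
        close(out)                                   # avoid running out of file descriptors
    }
' original_file data_file
```

The close(out) matters here: without it, awk keeps every output file open, which fails long before 31,000 files.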
Any help would be appreciated. I've been trying to glue together various techniques found on SO, one thing at a time: first learning how to replace, then how to replace from a second file, but it's above my knowledge. Thanks again.
I have approximately 31,000 lines in my data_file, so this needs to be automated. It's a one-time thing, but it may be very useful for others.