If that's what you feel you need, then
$: while IFS=$'",\n' read -a line; do set -- "${line[@]}"; shift; echo $1 and $2; done <tmp
14757 and file_one
14756 and file_two
14755 and file_three
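(That assumes tmp holds quoted lines like these - my reconstruction from the output shown, not your actual file:)
$: cat tmp
"14757,file_one"
"14756,file_two"
"14755,file_three"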
I used the quotes as delimiters as well as the comma, which creates a leading empty field in cell 0, so I shift
it off.
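If you want to see that empty field for yourself, dump the array for the first record - the exact declare -p formatting varies a bit between bash versions:
$: while IFS=$'",\n' read -a line; do declare -p line; break; done <tmp
declare -a line=([0]="" [1]="14757" [2]="file_one")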
...but unless there is a compelling reason, just use the array.
$: while IFS=$'",\n' read -a fields; do echo "${fields[1]} and ${fields[2]}"; done <tmp
14757 and file_one
14756 and file_two
14755 and file_three
awk would be a lot more efficient, and notably faster if the result set is very big -
$: awk -F'[",]' '{print $2" and "$3}' tmp
14757 and file_one
14756 and file_two
14755 and file_three
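The same empty leading field shows up here: -F'[",]' treats the opening quote as a separator, so $1 is empty and the data sits in $2 and $3. A quick way to see that (my own sanity check, not part of your data):
$: awk -F'[",]' '{print "[" $1 "][" $2 "][" $3 "]"}' tmp
[][14757][file_one]
[][14756][file_two]
[][14755][file_three]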
or even sed -
$: sed 's/^"//; s/"$//; s/,/ and /;' tmp
14757 and file_one
14756 and file_two
14755 and file_three
This one is a little more direct and mechanical, but if you read regexes it's pretty easy to understand: trim the leading quote, trim the trailing quote, convert the comma. I could have used s/"//g, but I suspect the two anchored substitutions are faster than scanning the whole string, since I know where the quotes are. It likely doesn't matter here, but it's worth mentioning for when you're processing a multi-GB file and you want to shave a little time.
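For comparison, the global version I mentioned would look like this, and gives the same result:
$: sed 's/"//g; s/,/ and /' tmp
14757 and file_one
14756 and file_two
14755 and file_three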
If you actually did pipe your data through tr and remove the quotes first, then all of these get a little simpler: they no longer have to deal with the quotes, and there's no leading empty field to skip.
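That tr step would be something like the line below - tmp.orig is just a placeholder name for the still-quoted original:
$: tr -d '"' <tmp.orig >tmp
$: cat tmp
14757,file_one
14756,file_two
14755,file_three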
$: while IFS=, read -a line; do set -- "${line[@]}"; echo $1 and $2; done <tmp
14757 and file_one
14756 and file_two
14755 and file_three
$: while IFS=, read -a fields; do echo "${fields[0]} and ${fields[1]}"; done <tmp
14757 and file_one
14756 and file_two
14755 and file_three
$: awk -F, '{print $1 " and " $2}' tmp
14757 and file_one
14756 and file_two
14755 and file_three
$: sed 's/,/ and /;' tmp
14757 and file_one
14756 and file_two
14755 and file_three