
I wrote the following command:

var=$(printf '%s' "$line" | awk -F':' '{print $2}' | sed 's/,//g')
sed -i "s/$var/exampletext/g" "file.txt"

The command is part of a larger script that finds a specific value and then replaces it with another value.

The basic end result I want is something like this:

sed -i "s/123/exampletext/g" "file.txt"

That should replace every occurrence of `123` in `file.txt` with `exampletext`. When I run `echo $var`, it prints `123`. Yet when that command is run, it does not replace `123` with `exampletext`.

Here is the full script:

INPUT="$1"

while IFS= read -r line; do
    if [[ $line == *"length"* ]]; then
        # take the field after the colon and strip commas
        var=$(printf '%s' "$line" | awk -F':' '{print $2}' | sed 's/,//g')
        sed -i "s|$var|cookiecrisp|g" "vomit.txt"
    fi
done < "$INPUT"
  • Is this on Mac or Linux? I remember that on Mac the default sed doesn't handle `-i` the way it works on Linux. Have you already verified that the sed command run by itself does what you want? – Christian Fritz May 20 '21 at 17:30
  • Linux - Ubuntu 20.04. Yes it does do what I want when run by itself. – Sidereal May 20 '21 at 17:30
  • Why not `awk -v var="$var" ' {sub(var,"exampletext",$2)}1'`? The point being, based on what you have in your question, there doesn't seem to be a need for `sed` and separate subshells; a single call to `awk` should do. If you provide sample data, then we can confirm. – David C. Rankin May 20 '21 at 17:32
  • I suspect that `$var` does not contain "123" but something like " 123", which `echo` will filter out. But it's not possible to say without some example data. – Christian Fritz May 20 '21 at 17:33
  • I discovered that echo is filtering out characters. I ran `wc -m $var` and it gave me this output: `wc: '225637'$'\r': No such file or directory`. How can I strip out the unnecessary characters? – Sidereal May 20 '21 at 17:36
  • `'\r'` -- you have DOS line endings. Run `dos2unix input_file` to convert to Unix line endings. If you only want to process lines containing `"length"`, then `awk -v var="$var" '/length/ {sub(var,"exampletext",$2)}1'`. – David C. Rankin May 20 '21 at 17:37
  • FYI, I'm still new to sed/awk. There is a secondary calculation that needs to run against $var, changing it from milliseconds to mm:ss in the final product, before running the sed command. I have yet to add it to the script; that's why I separated it. The final replace value will not be "cookiecrisp" but rather the converted value. – Sidereal May 20 '21 at 17:40
  • You could also use `IFS=$'\r' read ...` to trim the carriage return in the `read` command (provided you're actually running under bash, not dash). See my answer to [this question](https://stackoverflow.com/questions/39527571/are-shell-scripts-sensitive-to-encoding-and-line-endings). – Gordon Davisson May 20 '21 at 17:43
  • Sure, you can use several commands if needed. The key is to avoid spawning unnecessary subshells within a loop -- that kills shell performance on large files. If you have 1,000 or fewer lines it doesn't really matter, but for 1,000,000 it matters a lot. `awk` can handle all the processing -- it is the Swiss Army knife of text processing (basically a small programming language all its own, similar to C in many ways). – David C. Rankin May 20 '21 at 17:43
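
A quick way to confirm what the comments suspect is to dump the captured value byte by byte. This is only one possible check, and it assumes `$var` has already been set by the loop above:

printf '%s' "$var" | od -c    # a trailing \r here means the input has DOS line endings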
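
Putting the comments together, a minimal sketch of the fix: strip the carriage return from each line (or run `dos2unix` on the input first) so `$var` holds a clean value before it is handed to `sed`. The file name and the `cookiecrisp` replacement come from the script in the question; the `${line%$'\r'}` trim is one option among several.

INPUT="$1"

while IFS= read -r line; do
    line=${line%$'\r'}                    # drop a trailing carriage return, if any
    if [[ $line == *"length"* ]]; then
        var=$(printf '%s' "$line" | awk -F':' '{print $2}' | sed 's/,//g')
        sed -i "s|$var|cookiecrisp|g" "vomit.txt"
    fi
done < "$INPUT"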
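
Sidereal also mentions a pending milliseconds-to-mm:ss conversion before the replacement. A rough sketch of that step, assuming `$var` holds a clean integer millisecond count such as `225637` (the variable names here are made up for illustration):

ms=$var
total_seconds=$(( ms / 1000 ))
mmss=$(printf '%02d:%02d' $(( total_seconds / 60 )) $(( total_seconds % 60 )))   # 225637 -> 03:45
sed -i "s|$ms|$mmss|g" "vomit.txt"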
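
To follow the comments' advice about avoiding a per-line subshell pipeline, one possible restructuring (an assumption, not taken verbatim from the comments) is to let a single `awk` pass emit all the substitution commands and then run `sed` once over `vomit.txt`; `replace.sed` is a hypothetical temporary file name.

# build one sed script from every "length" line, stripping whitespace (including \r) and commas
awk -F':' '/length/ { gsub(/[[:space:],]/, "", $2); print "s|" $2 "|cookiecrisp|g" }' "$INPUT" > replace.sed
sed -i -f replace.sed "vomit.txt"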
