
I'm trying to read 2 floating numbers from a very simple 1-line file in Bash. I want to store these two numbers into variables. All the examples I see from Googling look like:

while read VAR1 VAR2
do
   <command>
done < file.txt

But this keeps VAR1 and VAR2 inside the while loop only. How can I store the two variables so that I can use them anywhere in my script? Thanks so much!

Joe
user3654549

4 Answers


The while loop is superfluous when reading a file with a single line (as described in your question).

Why not simply do:

read VAR1 VAR2 < file.txt
echo $VAR1
echo $VAR2

This reads the first line of your file.
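For instance, with a hypothetical file.txt containing one line of two numbers, the whole approach can be sketched as:

```shell
# Create a sample one-line file (the name and contents are just for illustration)
printf '3.14 2.71\n' > file.txt

# Read both fields directly into variables -- no loop needed
read VAR1 VAR2 < file.txt

echo "$VAR1"   # first number
echo "$VAR2"   # second number
```

Because `read` runs in the current shell here (no pipe, no loop), both variables remain available for the rest of the script.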

dvhh

Try this:

#!/bin/bash

var1="unknown"
var2="unknown"

while read VAR1 VAR2
do
        var1=$VAR1
        var2=$VAR2
done < file.txt
echo "v1=$var1 v2=$var2"

Update: I think dvhh's answer is the correct one.

Red Cricket
  • This is handy for retaining the values from the _last iteration_ of the loop, but note that the OP mentions reading from a _1_-line file, so there's no reason to use a loop in the first place. – mklement0 Mar 26 '15 at 12:15

dvhh's answer contains the crucial pointer:

If you're reading only a single line, there's no reason to use a while loop at all.

Your while loop leaves $VAR1 and $VAR2 empty, because the last attempt to read from the input file invariably fails due to having reached EOF, causing the variables to be set to the empty string.
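A quick sketch of this EOF behavior (using a hypothetical newline-terminated sample file):

```shell
printf '1.5 2.5\n' > file.txt

while read VAR1 VAR2
do
    : # loop body; VAR1 and VAR2 hold the line's values here
done < file.txt

# The final, failing read at EOF has reset both variables to empty strings:
echo "VAR1='$VAR1' VAR2='$VAR2'"   # prints VAR1='' VAR2=''
```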

If, by contrast, you truly needed to read values from multiple lines and needed to access them after the loop, you could use a bash array:

aVar1=() aVar2=() # initialize arrays
while read -r var1 var2
do
   aVar1+=( "$var1" )
   aVar2+=( "$var2" )
done < file.txt

# ${aVar1[@]} now contains all $var1 values,
# ${aVar2[@]} now contains all $var2 values.
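A hypothetical usage sketch, assuming a three-line input file:

```shell
printf '1.0 a\n2.0 b\n3.0 c\n' > file.txt

aVar1=() aVar2=()
while read -r var1 var2
do
   aVar1+=( "$var1" )
   aVar2+=( "$var2" )
done < file.txt

echo "${#aVar1[@]} lines read"     # -> 3 lines read
echo "second value: ${aVar2[1]}"   # bash arrays are 0-based -> b
```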

Note that it's good practice not to use all-uppercase names for shell variables so as to avoid conflicts with environment variables.

mklement0


First, you need to account for your separator. In my example, the uid and the other fields in /etc/passwd are separated by ":".

 uid=$( IFS=":"; while read uid REST; do echo $uid; done </etc/passwd )
 passwd=$( IFS=":"; while read uid passwd REST; do echo $passwd; done </etc/passwd )

IFS is the internal field separator; its default is whitespace, so here you need to change it. You can also capture a script's output in a variable with variable=$( script ), which is a very useful technique for scripting.
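Note that you can also scope the IFS change to a single read command, which combines well with the loop-free approach from the other answers (a sketch, assuming a hypothetical ":"-separated one-line file):

```shell
# Sample ':'-separated line (file name and contents are just for illustration)
printf 'root:x:0:0\n' > sample.txt

# The IFS=":" prefix applies only to this one read command; the global IFS is untouched
IFS=":" read -r field1 field2 field3 rest < sample.txt

echo "$field1"   # -> root
echo "$field3"   # -> 0
```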

You can see the results with:

 echo $uid
 echo $passwd
Soheil
  • There's a lot of extraneous information in your answer that has no connection to the OP's question and makes it hard to understand the gist of your answer. Looks like you're demonstrating how to collect multiple values that a loop prints iteratively via a command substitution as a multi-line string. Given that the OP has stated that they read from a _1_-line file, there is no need for such a fundamentally different approach (reading the same file _twice_, in a _subshell_, capturing via _stdout_), which can have side effects - simply not using a loop is the much simpler solution. – mklement0 Mar 26 '15 at 12:40
  • I recommend this method because scripting is not just about simple syntax. Just test the script's speed with `time ./script.sh >/dev/null`; this method is faster and useful in scripting. Also, on Linux a variable has no practical size limit, so we can put 10 or even 15 MB into a value like uid; I tested it before. – Soheil Mar 26 '15 at 13:01
  • I have trouble understanding what you're saying, but in the case at hand the performance comparison is simple: `read VAR1 VAR2 < file.txt` is not only simpler, but will perform better than your command substitution-based approach (even if you only used a _single_ command substitution). – mklement0 Mar 26 '15 at 13:18
  • @mklement0 I mean that when you put your script's output in a variable, it runs faster than having bash process the script line by line; just check that with [link](http://linux.about.com/library/cmd/blcmdl1_time.htm) – Soheil Mar 26 '15 at 13:24
  • Yes, it's a good idea to avoid bash loops for performance reasons when possible (if I understand you correctly), but (a) generally, depending on your use case, that may not be an option, and (b), specifically, here: the solution to the OP's problem (as currently stated) is to _not use a loop at all_. As an aside: Do note that performance improvements would come from running a more efficient _external_ utility that does the looping in a command substitution, whereas your example runs a slow _bash_ loop in the command substitution. – mklement0 Mar 26 '15 at 13:31