1071

I am looking for a command that will accept (as input) multiple lines of text, each line containing a single integer, and output the sum of these integers.

As a bit of background, I have a log file which includes timing measurements. Through grepping for the relevant lines and a bit of sed reformatting I can list all of the timings in that file. I would like to work out the total. I can pipe this intermediate output to any command in order to do the final sum. I have always used expr in the past, but unless it runs in RPN mode I do not think it is going to cope with this (and even then it would be tricky).

How can I get the summation of integers?
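To make the setup concrete, the intermediate output is just one integer per line. A minimal sketch, with an entirely made-up log format (the file name and the 'took N ms' pattern are illustrative only):

```shell
# Hypothetical log format; the file name and 'took N ms' pattern are made up.
printf 'job A took 120 ms\njob B took 340 ms\njob C took 95 ms\n' > timings.log
grep 'took' timings.log | sed 's/.*took \([0-9]*\) ms/\1/'
# prints:
# 120
# 340
# 95
```

The open question is what to pipe this list into so that it prints 555.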

sazzad
Andrzej Doyle
  • 2
    This is very similar to a question I asked a while ago: http://stackoverflow.com/questions/295781/shortest-command-to-calculate-the-sum-of-a-column-of-output-on-unix – An̲̳̳drew Jan 16 '09 at 17:35
  • This question feels like a problem for code golf. https://codegolf.stackexchange.com/ :) – Gordon Bean Sep 26 '17 at 21:48

47 Answers

1607

Bit of awk should do it?

awk '{s+=$1} END {print s}' mydatafile

Note: some versions of awk have some odd behaviours if you are going to be adding anything exceeding 2^31 (2147483647). See comments for more background. One suggestion is to use printf rather than print:

awk '{s+=$1} END {printf "%.0f", s}' mydatafile
Paul Dixon
  • 13
    There's a lot of awk love in this room! I like how a simple script like this could be modified to add up a second column of data just by changing the $1 to $2 – Paul Dixon Jan 16 '09 at 16:02
  • What are the limits on awk? i.e. how many data elements can it process before dying? Or becoming a less preferable approach than using a small C snippet? – Usman Ismail Feb 24 '12 at 18:16
  • 2
    There's not a practical limit, since it will process the input as a stream. So, if it can handle a file of X lines, you can be pretty sure it can handle X+1. – Paul Dixon Feb 25 '12 at 08:36
  • 5
    I once wrote a rudimentary mailing list processor with an awk script run via the vacation utility. Good times. :) – Mr. Lance E Sloan Mar 07 '12 at 16:05
  • 2
    just used this for a: count all documents’ pages script: `ls $@ | xargs -i pdftk {} dump_data | grep NumberOfPages | awk '{s+=$2} END {print s}'` – flying sheep Jul 10 '13 at 14:42
  • Yeah, I was going to mention that you can also pipe the data to awk ... `cat file.csv | cut -d, -f3 | awk '{s+=$1} END {print s}'` – bbbco Jul 24 '13 at 14:03
  • How to modify this to work with floats, e.g. lines containing numbers like 123456.789? This snippet is really helpful when grepping through a huge list of files in accounting. – Attila O. Jul 25 '13 at 21:08
  • awk uses double-precision floats, so it should just work. Whether or not floats are appropriate for accounting I'll leave you to judge :) – Paul Dixon Jul 26 '13 at 07:20
  • 2
    To those like me who don't know awk.. check this [link](http://doc.infosnel.nl/quickawk.html) for super-quick introduction – nedR Dec 14 '13 at 17:29
  • 1
    `awk` breaks for large numbers (note: paste+bc continues to work). – jfs Oct 16 '14 at 06:16
  • 11
    Be careful, it will not work with numbers greater than 2147483647 (i.e., 2^31), that's because awk uses a 32 bit signed integer representation. Use `awk '{s+=$1} END {printf "%.0f", s}' mydatafile` instead. – Giancarlo Sportelli Feb 05 '15 at 23:34
  • As @GiancarloSportelli says, his solution below is better - no integer overflow in print, see http://stackoverflow.com/a/25245025/992887 – RichVel May 16 '15 at 05:17
  • 1
    I think if you have any question that goes 'I've been looking for a shell command that does X, but I can't find one'. The first response you or anyone should have is 'have you tried awk?' – Totoro Apr 06 '16 at 22:12
  • Lol I was using AWK to just extract the interesting number. Now I see that Chrome actually uses almost 2GB total :) – Paul Stelian Jul 11 '16 at 11:27
  • 2
    @GiancarloSportelli I just wanted to add 2 cents. You can use "%ld" on a 64 bit system to keep the number as a 64 bit int, no conversion at all. `echo "2147483647\n2147483647\n2147483647\n2147483647" | awk '{s+=$1} END {printf("%ld\n", s)}'` – Erroneous Sep 14 '16 at 18:03
  • 1
    Unlike other answers, this one works even when there are empty lines in the data. – mkj Nov 17 '16 at 02:13
  • Looking at this - how does one 'print' something before the final s variable is printed, i.e. total $i s – Tony Feb 06 '18 at 19:35
  • 1
    What happens if there are lines that don't contain numbers among them? How to filter them out? – Victor Nov 23 '18 at 20:10
  • 2
    Be aware that `awk` has no knowledge of what an integer is. It does all its math in double precision. Hence only all numbers upto `2^53` are representable. From that point onwards it goes wrong: `awk 'BEGIN{print 2^53-1, 2^53, 2^53+1}' => 9007199254740991 9007199254740992 9007199254740992` – kvantour Dec 09 '18 at 12:33
  • paste/bc from the comment below seems like the correct answer – rjurney Feb 08 '20 at 21:23
  • Can validate this fairly straightforwardly `seq $((1 << 32)) $(((1 << 32)+10)) | awk '{x+=$1} END {printf "%ld\n", x}' ` output: 47244640311 – Brian Chrisman May 11 '21 at 15:33
  • Actually Python is my preferred way, because it sums the numbers without rounding the result even for very large numbers, while awk rounds the result. – Uri Oct 01 '21 at 10:56
  • Much easier to remember and type than the paste solution. – Moberg Dec 07 '21 at 00:09
  • @BC : i ran that exact command, even using GMP, and got something waaaaay off : ::::::::::::::: seq $((1 << 32)) $(((1 << 32)+10)) | gawk -Me '{x+=$1} END {printf "%ld\n", x}'::::::::::::::::: 5153964000. seq only worked after explicitly adding formatting flag :::: seq -f '%.f' flag – RARE Kpop Manifesto Jun 15 '22 at 12:46
  • @BrianChrisman : but you don't need the %ld to get it work - any non-mawk.1.3.4 works as is, and mawk-1 just set CONVFMT='%.f' before you begin. but if you try to use "%ld" (as in L ) in mawk-1, I got 4766497255625457664 instead – RARE Kpop Manifesto Jun 15 '22 at 12:53
  • yeah.. there are some details on macos/linux as well. ```seq -f '%.0f' $((1 << 32)) $(((1 << 32)+10)) | awk '{x+=$1} END {print x}' ``` where 'seq' can give exponential notation as default on some systems. – Brian Chrisman Jun 15 '22 at 16:31
  • just change the printing mode and it prints fine ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: `jot 574998 - - 54321 | mawk '$++NF=_+=$!_' CONVFMT='%.f' | gtail -n 2` :::::: `31234357717 8979830992388423` ::: `31234412038 8979862226800461` – RARE Kpop Manifesto Aug 13 '22 at 11:25
771

Paste typically merges lines of multiple files, but it can also be used to convert the individual lines of a file into a single line. The delimiter flag allows you to pass an x+x style expression to bc.

paste -s -d+ infile | bc

Alternatively, when piping from stdin,

<commands> | paste -s -d+ - | bc
Asclepius
radoulov
  • 1
    Very nice! I would have put a space before the "+", just to help me parse it better, but that was very handy for piping some memory numbers through paste & then bc. – Michael H. Sep 08 '10 at 05:34
  • A solution using bc or dc is what I was looking for as a solution to my own similar problem. I don't know much about it and it seems to be underutilized, so it's very good to see others know how to use it. I suspect it's much faster than the other solutions, especially if there are many lines in the input. – Mr. Lance E Sloan Mar 07 '12 at 16:12
  • 87
    Much easier to remember and type than the awk solution. Also, note that `paste` can use a dash `-` as the filename - which will allow you to pipe the numbers from the output of a command into paste's standard input without the need to create a file first: ` | paste -sd+ - | bc` – George Mar 20 '12 at 19:15
  • I find this solution less of an overkill than using a fully-fledged programming language, like awk/perl/python. Kudos! – Little Bobby Tables Jun 12 '12 at 07:57
  • 1
    Awesome! Just used this to sum bogomips across all cores: cat /proc/cpuinfo | grep bogo | cut -d: -f2 | paste -sd+ | bc –  Jul 11 '12 at 00:22
  • 25
    I have a file with 100 million numbers. The awk command takes 21s; the paste command takes 41s. But good to meet 'paste' nevertheless! – Abhi Jan 25 '13 at 06:07
  • 8
    @Abhi: Interesting :D I guess it would take me 20s to figure out the awk command so it evens out though until I try 100 million and one numbers :D – Mark K Cowan Jul 02 '14 at 09:21
  • 1
    @Abhi: check the answer. `awk` fails for large numbers on my machine but `paste` + `bc` work. – jfs Oct 16 '14 at 06:17
  • 6
    @George You can leave out the `-`, though. (It is useful if you wanted to combine a file *with* stdin). – Alois Mahdal Jan 16 '16 at 21:51
  • This didn't like the '^m' chars in the file, but once that was fixed, happy happy. – Guerry Feb 01 '16 at 20:42
  • @AloisMahdal You can't leave out the `-` when used in pipe. `echo -e "1\n2\n3" | paste -sd+` doesn't work, but having the extra `-` does work. – Causality Jun 22 '16 at 21:19
  • 1
    @MarkKCowan `bc` uses arbitrary precision, I guess `awk` is likely to use 32 or 64-bit arithmetics, possibly leading to overflows. – hdl Jun 23 '16 at 14:17
  • @Causality then it's probably version-dependent; it does work here with GNU coreutils 8.24 (Fedora 23); your code in last comment yields `1+2+3` – Alois Mahdal Jun 23 '16 at 22:21
  • 1
    Note that `bc` isn't included in some shells, like Git Bash for Windows. – thdoan Mar 30 '17 at 08:23
  • @10basetom that's because `bc` is not a shell-builtin (so it's not included in any shell) but a separate program (which is pretty standard on any un*x machine) – umläute Jul 13 '17 at 10:22
  • @umläute I know why it's not included in some shells -- I was pointing that out so that people can consider alternative solutions if their aim is portability. – thdoan Jul 16 '17 at 07:43
  • 1
  • had the problem that bc was not installed and ended up with `echo $(( $(<commands> | paste -s -d+ -) ))` to evaluate the answer – rob Sep 28 '17 at 13:54
  • @George and @Abhi ... so the `awk` solution is both easier to remember... and faster. double win. – Trevor Boyd Smith Apr 10 '19 at 13:59
  • paste & bc also, like Python, sums the numbers without rounding the result even for very large numbers, while awk rounds the result. – Uri Oct 01 '21 at 11:04
156

The one-liner version in Python:

$ python -c "import sys; print(sum(int(l) for l in sys.stdin))"
Collin Anderson
dF.
  • Above one-liner doesn't work for files in sys.argv[], but that one does http://stackoverflow.com/questions/450799/linux-command-to-sum-integers-one-per-line#450825 – jfs Jan 16 '09 at 16:21
  • True- the author said he was going to pipe output from another script into the command and I was trying to make it as short as possible :) – dF. Jan 16 '09 at 18:18
  • 51
    Shorter version would be `python -c"import sys; print(sum(map(int, sys.stdin)))"` – jfs Jan 17 '09 at 12:39
  • 4
    I love this answer for its ease of reading and flexibility. I needed the average size of files smaller than 10Mb in a collection of directories and modified it to this: `find . -name '*.epub' -exec stat -c %s '{}' \; | python -c "import sys; nums = [int(n) for n in sys.stdin if int(n) < 10000000]; print(sum(nums)/len(nums))"` – Paul Whipp Oct 23 '12 at 00:33
  • Very flexible solution. Also usable for float numbers, just replace `int` with `float` – geekQ Oct 02 '15 at 08:43
  • 2
    You can also filter out non numbers if you have some text mixed in: `import sys; print(sum(int(''.join(c for c in l if c.isdigit())) for l in sys.stdin))` – Granitosaurus Feb 12 '18 at 12:02
  • Actually Python is my preferred way, because it sums the numbers without rounding the result even for very large numbers, while awk rounds the result. – Uri Oct 01 '21 at 10:50
  • @jfs : use gawk -M bignum mode with GMP, and nothing fails – RARE Kpop Manifesto Jun 15 '22 at 12:55
122

I would put a big WARNING on the commonly approved solution:

awk '{s+=$1} END {print s}' mydatafile # DO NOT USE THIS!!

that is because in this form awk uses a 32 bit signed integer representation: it will overflow for sums that exceed 2147483647 (i.e., 2^31).

A more general answer (for summing integers) would be:

awk '{s+=$1} END {printf "%.0f\n", s}' mydatafile # USE THIS INSTEAD
Jean-François Fabre
Giancarlo Sportelli
  • Why does printf() help here? The overflow of the int will have happened before that because the summing code is the same. – Robert Klemme Mar 10 '15 at 17:30
  • 14
    Because the problem is actually in the "print" function. Awk uses 64 bit integers, but for some reason print downscales them to 32 bit. – Giancarlo Sportelli Mar 12 '15 at 17:17
  • 4
    The print bug appears to be fixed, at least for awk 4.0.1 & bash 4.3.11, unless I'm mistaken: `echo -e "2147483647 \n 100" |awk '{s+=$1}END{print s}'` shows `2147483747` – Xen2050 Feb 23 '17 at 09:38
  • 8
    Using floats just introduces a new problem: `echo 999999999999999999 | awk '{s+=$1} END {printf "%.0f\n", s}'` produces `1000000000000000000` – phemmer Oct 24 '17 at 18:53
  • 1
    Shouldn't just using "%ld" on 64bit systems work to not have printf truncate to 32bit? As @Patrick points out, floats aren't a great idea here. – yerforkferchips Jun 19 '19 at 11:27
  • 1
    @yerforkferchips, where should `%ld` be placed in the code? I tried `echo -e "999999999999999999" | awk '{s+=$1} END {printf "%ld\n", s}'` but it still produced `1000000000000000000`. – Josh Jun 13 '20 at 18:44
  • See the latest comment by @kvantour on this answer: https://stackoverflow.com/a/450821/3090225 -- a sad fact in awk ;) – yerforkferchips Jun 14 '20 at 16:52
  • Both methods print the same number, even with large numbers (40 decimal digits), but they are both not accurate - only about the first 15 digits are accurate and the rest are random. – Uri Oct 01 '21 at 10:43
  • As others said awk is incompatible with itself across different versions and builds. You need to be aware of what to really expect with such operations, each time... – fgeorgatos Jan 24 '22 at 12:56
  • @Uri : then just track the addition using a hi/lo combo of 2 integers. Between 2 of them, you can easily track 101-102 bits to full precision. Track with 5 integers total, and you'll be jus shy of 256-bit full precision. Use 21 of them, and you can even track unsigned 1024-bit without needing arrays – RARE Kpop Manifesto Jun 15 '22 at 13:01
96

Plain bash:

$ cat numbers.txt 
1
2
3
4
5
6
7
8
9
10
$ sum=0; while read num; do ((sum += num)); done < numbers.txt; echo $sum
55
karolba
Giacomo
  • 2
    A smaller one liner: http://stackoverflow.com/questions/450799/linux-command-to-sum-integers-one-per-line/7720597#7720597 – Khaja Minhajuddin Oct 11 '11 at 01:51
  • @rjack, where is `num` defined? I believe somehow it is connected to the `< numbers.txt` expression, but it is not clear how. – Atcold Oct 27 '15 at 16:36
  • 2
    @Atcold `num` is defined in the while expression. `while read XX` means "use `while` to read a value, then store that value in `XX`" – aggregate1166877 Jun 29 '22 at 08:52
94

With jq:

seq 10 | jq -s 'add' # 'add' is equivalent to 'reduce .[] as $item (0; . + $item)'
banyan
  • Is there a way to do this with `rq`? – theonlygusti Feb 15 '21 at 16:43
  • I think I know what could be the next question, so I will add the answer here :) **calculate average:** `seq 10 | jq -s 'add / length'` [ref here](https://stedolan.github.io/jq/manual/#Variable/SymbolicBindingOperator:...as$identifier|...) – Marinos An Jun 16 '21 at 12:11
72
dc -f infile -e '[+z1<r]srz1<rp'

Note that negative numbers prefixed with minus sign should be translated for dc, since it uses _ prefix rather than - prefix for that. For example, via tr '-' '_' | dc -f- -e '...'.
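A quick illustration of that translation, with made-up input (assuming GNU dc and the expression below):

```shell
# dc reads negative literals with a '_' prefix, so translate '-' before summing
printf '5\n-3\n10\n' | tr '-' '_' | dc -f- -e '[+z1<r]srz1<rp'
# prints 12
```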

Edit: Since this answer got so many votes "for obscurity", here is a detailed explanation:

The expression [+z1<r]srz1<rp does the following:

[   interpret everything to the next ] as a string
  +   pop two values off the stack, add them and push the result
  z   push the current stack depth
  1   push one
  <r  pop two values and execute register r if the original top-of-stack (1)
      is smaller
]   end of the string, will push the whole thing to the stack
sr  pop a value (the string above) and store it in register r
z   push the current stack depth again
1   push 1
<r  pop two values and execute register r if the original top-of-stack (1)
    is smaller
p   print the current top-of-stack

As pseudo-code:

  1. Define "add_top_of_stack" as:
    1. Pop the two top values off the stack, add them, and push the result back
    2. If the stack has two or more values, run "add_top_of_stack" recursively
  2. If the stack has two or more values, run "add_top_of_stack"
  3. Print the result, now the only item left in the stack

To really understand the simplicity and power of dc, here is a working Python script that implements some of the commands from dc and executes a Python version of the above command:

### Implement some commands from dc
registers = {'r': None}
stack = []
def add():
    stack.append(stack.pop() + stack.pop())
def z():
    stack.append(len(stack))
def less(reg):
    if stack.pop() < stack.pop():
        registers[reg]()
def store(reg):
    registers[reg] = stack.pop()
def p():
    print(stack[-1])

### Python version of the dc command above

# The equivalent to -f: read a file and push every line to the stack
import fileinput
for line in fileinput.input():
    stack.append(int(line.strip()))

def cmd():
    add()
    z()
    stack.append(1)
    less('r')

stack.append(cmd)
store('r')
z()
stack.append(1)
less('r')
p()
ruvim
CB Bailey
  • 2
    dc is just the tool of choice to use. But I would do it with a little less stack ops. Assumed that all lines really contain a number: `(echo "0"; sed 's/$/ +/' inp; echo 'pq')|dc`. – ikrabbe Jul 06 '15 at 10:02
  • 5
    The online algorithm: `dc -e '0 0 [+?z1 – ruvim Oct 02 '15 at 11:42
  • @ikrabbe that's great. It can actually be shortened by one more character: the space in the `sed` substitution can be removed, as `dc` doesn't care about spaces between arguments and operators. `(echo "0"; sed 's/$/+/' inputFile; echo 'pq')|dc` – WhiteHotLoveTiger Jun 13 '16 at 02:05
54

Pure and short bash.

f=$(cat numbers.txt)
echo $(( ${f//$'\n'/+} ))
Daniel
  • 11
    This is the best solution because it does not create any subprocess if you replace first line with `f=$( – loentar Jun 19 '13 at 06:12
  • 1
    any way of having the input from stdin ? like from a pipe ? – njzk2 Jan 31 '14 at 18:35
  • 1
    @njzk2 If you put `f=$(cat); echo $(( ${f//$'\n'/+} ))` in a script, then you can pipe anything to that script or invoke it without arguments for interactive stdin input (terminate with Control-D). – mklement0 Apr 26 '14 at 04:02
  • 6
    @loentar The ` – mklement0 Apr 26 '14 at 04:08
  • My usecease: f=$(find -iname '\*-2014-\*' -exec du {} \; | cut -f1); echo $(( ${f//$'\n'/+} )). Might help someone. – Omer Akhter Jan 19 '15 at 03:21
  • An advantage of the bash solution over the awk solution is that it gives an integer result, even for large numbers. For large numbers, awk returns a result in scientific notation. – user100464 Apr 09 '15 at 21:28
  • This answer is not pure bash because `cat` is an external call. See the accepted answer's `awk '{s+=$1}END{printf"%f",s}' mydatafile` for a faster solution that won't fail given too large an input. Load time differences (`cat`'s 43k vs `mawk`'s 128k or even `gawk`'s 659k on my system) won't ever overcome the performance difference ... unless you're running this too often, in which case use more `awk` or else a "real" language. – Adam Katz Apr 09 '19 at 17:11
  • i didn't downvote, but wanna note this is a horrific solution - summing up from 1 to `99999` took 26.7 seconds on a machine with M1 Max and `bash 5.2.15`, versus `0.053 secs` on `awk` using `jot`, and `0.22 secs` generating via another `awk`. Even summing every integer to 100 mil was only 11.5 seconds, and just 1 min 55secs summing all the way to 1 billion. `perl` came in *just* slower than `awk` – RARE Kpop Manifesto Apr 22 '23 at 10:37
41
perl -lne '$x += $_; END { print $x; }' < infile.txt
j_random_hacker
  • 4
    And I added them back: "-l" ensures that output is LF-terminated as shell backticks and most programs expect, and "<" indicates this command can be used in a pipeline. – j_random_hacker Jan 16 '09 at 16:08
  • You are right. As an excuse: Each character in Perl one-liners requires a mental work for me, therefore I prefer to strip as many characters as possible. The habit was harmful in this case. – jfs Jan 16 '09 at 16:17
  • 4
    One of the few solutions that doesn't load everything into RAM. – Erik Aronesty Oct 04 '16 at 19:14
  • 1
    I find it curious just how undervalued this answer is in comparison with the top-rated ones (that use non-shell tools) -- while it's faster and simpler than those. It's almost the same syntax as awk but faster (as benchmarked in another well-voted answer here) and without any caveats, and it's much shorter and simpler than python, and faster (flexibility can be added just as easily). One needs to know the basics of the language used for it, but that goes for any tool. I get the notion of a popularity of a tool but this question is tool agnostic. All these were published the same day. – zdim Jan 16 '22 at 02:12
  • (disclaimer for my comment above: I know and use and like Perl and Python, as good tools.) – zdim Jan 16 '22 at 02:37
  • @zdim : no idea what you're benchmarking with or against, but i just summed `1` to `99,999,999`, and `perl 5.36` came in *just* behind `mawk 1.9.9.6` – RARE Kpop Manifesto Apr 22 '23 at 11:04
  • @RAREKpopManifesto That was written a year and a half ago, but according to the comment itself I was quoting a ("well-quoted") benchmark posted on this page (I didn't benchmark anything) ...? (Did you actually read my comment?). I don't know how `mawk` does it or how you timed things, but yes awk can be speedy for many things, agreed. – zdim Apr 22 '23 at 20:02
38

My fifteen cents:

$ cat file.txt | xargs  | sed -e 's/\ /+/g' | bc

Example:

$ cat text
1
2
3
3
4
5
6
78
9
0
1
2
3
4
576
7
4444
$ cat text | xargs  | sed -e 's/\ /+/g' | bc 
5148
innocent-world
35

I've done a quick benchmark on the existing answers which

  • use only standard tools (sorry for stuff like Lua or Racket),
  • are real one-liners,
  • are capable of adding huge amounts of numbers (100 million), and
  • are fast (I ignored the ones which took longer than a minute).

I always added the numbers from 1 to 100 million, which was doable on my machine in less than a minute for several solutions.

Here are the results:

Python

:; seq 100000000 | python -c 'import sys; print sum(map(int, sys.stdin))'
5000000050000000
# 30s
:; seq 100000000 | python -c 'import sys; print sum(int(s) for s in sys.stdin)'
5000000050000000
# 38s
:; seq 100000000 | python3 -c 'import sys; print(sum(int(s) for s in sys.stdin))'
5000000050000000
# 27s
:; seq 100000000 | python3 -c 'import sys; print(sum(map(int, sys.stdin)))'
5000000050000000
# 22s
:; seq 100000000 | pypy -c 'import sys; print(sum(map(int, sys.stdin)))'
5000000050000000
# 11s
:; seq 100000000 | pypy -c 'import sys; print(sum(int(s) for s in sys.stdin))'
5000000050000000
# 11s

Awk

:; seq 100000000 | awk '{s+=$1} END {print s}'
5000000050000000
# 22s

Paste & Bc

This ran out of memory on my machine. It worked for half the size of the input (50 million numbers):

:; seq 50000000 | paste -s -d+ - | bc
1250000025000000
# 17s
:; seq 50000001 100000000 | paste -s -d+ - | bc
3750000025000000
# 18s

So I guess it would have taken ~35s for the 100 million numbers.

Perl

:; seq 100000000 | perl -lne '$x += $_; END { print $x; }'
5000000050000000
# 15s
:; seq 100000000 | perl -e 'map {$x += $_} <> and print $x'
5000000050000000
# 48s

Ruby

:; seq 100000000 | ruby -e "puts ARGF.map(&:to_i).inject(&:+)"
5000000050000000
# 30s

C

Just for comparison's sake I compiled the C version and tested this also, just to have an idea how much slower the tool-based solutions are.

#include <stdio.h>
int main(int argc, char** argv) {
    long sum = 0;
    long i = 0;
    while(scanf("%ld", &i) == 1) {
        sum = sum + i;
    }
    printf("%ld\n", sum);
    return 0;
}

 

:; seq 100000000 | ./a.out 
5000000050000000
# 8s

Conclusion

C is of course fastest with 8s, but the Pypy solution only adds a very little overhead of about 30% to 11s. But, to be fair, Pypy isn't exactly standard. Most people only have CPython installed which is significantly slower (22s), exactly as fast as the popular Awk solution.

The fastest solution based on standard tools is Perl (15s).

Alfe
  • 2
    The `paste` + `bc` approach was just what I was looking for to sum hex values, thanks! – Tomislav Nakic-Alfirevic Nov 14 '17 at 11:09
  • 1
    Just for fun, in Rust: `use std::io::{self, BufRead}; fn main() { let stdin = io::stdin(); let mut sum: i64 = 0; for line in stdin.lock().lines() { sum += line.unwrap().parse::().unwrap(); } println!("{}", sum); }` – Jocelyn Aug 26 '18 at 09:40
  • awesome answer. not to nitpick but it is the case that if you decided to include those longer-running results, the answer would be *even more awesome!* – Steven Lu Jul 24 '19 at 21:21
  • @StevenLu I felt the answer would just be _longer_ and thus _less awesome_ (to use your words). But I can understand that this feeling needs not be shared by everybody :) – Alfe Aug 08 '19 at 15:18
  • Next: numba + parallelisation – gerrit Feb 19 '20 at 15:22
  • awk would have matched perl if you had used `$0` instead of `$1` – Amit Naidu May 07 '20 at 10:30
26

Using the GNU datamash util:

seq 10 | datamash sum 1

Output:

55

If the input data is irregular, with spaces and tabs at odd places, this may confuse datamash; in that case, either use the -W switch:

<commands...> | datamash -W sum 1

...or use tr to clean up the whitespace:

<commands...> | tr -d '[[:blank:]]' | datamash sum 1

If the input is large enough, the output will be in scientific notation.

seq 100000000 | datamash sum 1

Output:

5.00000005e+15

To convert that to decimal, use the --format option:

seq 100000000 | datamash  --format '%.0f' sum 1

Output:

5000000050000000
Cliff
agc
20

BASH solution, if you want to make this a command (e.g. if you need to do this frequently):

addnums () {
  local total=0
  while read val; do
    (( total += val ))
  done
  echo $total
}

Then usage:

addnums < /tmp/nums
Benjamin W.
Jay
20

Plain bash one liner

$ cat > /tmp/test
1 
2 
3 
4 
5
^D

$ echo $(( $(cat /tmp/test | tr "\n" "+" ) 0 ))
Khaja Minhajuddin
17

You can use num-utils, although it may be overkill for what you need. This is a set of programs for manipulating numbers in the shell, and they can do several nifty things, including, of course, adding them up. It's a bit out of date, but the tools still work and can be useful if you need to do something more.

https://suso.suso.org/programs/num-utils/index.phtml

It's really simple to use:

$ seq 10 | numsum
55

But runs out of memory for large inputs.

$ seq 100000000 | numsum
Terminado (killed)
Iain Samuel McLean Elder
sykora
13

I cannot avoid submitting this; it is the most generic approach to this question, please check:

jot 1000000 | sed '2,$s/$/+/;$s/$/p/' | dc

It is to be found over here; I was the OP, and the answer came from the audience:

And here are its special advantages over awk, bc, perl, GNU's datamash and friends:

  • it uses standards utilities common in any unix environment
  • it does not depend on buffering and thus it does not choke with really long inputs.
  • it implies no particular precision limits (or integer size, for that matter), hello AWK friends!
  • no need for different code, if floating point numbers need to be added, instead.
  • it theoretically runs unhindered in the minimal of environments
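As a small illustration of the floating-point claim (made-up input; dc carries the operands' decimal scale through addition, so no code changes are needed):

```shell
# Same pipeline, decimal input, unchanged code
printf '1.5\n2.25\n3\n' | sed '2,$s/$/+/;$s/$/p/' | dc
# prints 6.75
```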
bfontaine
fgeorgatos
  • Please include the code related to the question in the answer and not refer to a link – Ibo Sep 27 '17 at 01:24
  • It also happens to be much slower than all the other solutions, more than 10 times slower than the datamash solution – Gabriel Ravier Jul 26 '21 at 08:42
  • @GabrielRavier OP doesn't define speed as a first requirement, so in absence of that a generic working solution would be preferred. FYI. datamash is not standard across all Unix platforms, fi. MacOSX appears to be lacking that. – fgeorgatos Aug 16 '21 at 22:47
  • @fgeorgatos this is true, but I just wanted to point out to everyone else looking at this question that this answer is, in fact, very slow compared to what you can get on most Linux systems. – Gabriel Ravier Aug 17 '21 at 23:31
  • @GabrielRavier could you provide some measured numbers for comparison? btw. I have run a couple of `jot` tests and speed is very reasonable even for quite large lists. btw. if datamash is taken as the solution to the OP's question, then any compiled assembly program should be acceptable, too... that would speed it up! – fgeorgatos Aug 17 '21 at 23:36
  • @fgeorgatos After some calculations, I can confirm it is even more than 10 times faster. `time seq 10000000 | sed '2,$s/$/+/;$s/$/p/' | dc` gives me the correct result in 43 seconds whereas `time seq 10000000 | datamash sum 1` does it in 1 second, making it more than 40 times faster. Also, a "compiled assembly program" as you call it, would be much more convoluted a solution, likely not much faster and would be much more likely to give incorrect solutions – Gabriel Ravier Aug 19 '21 at 06:52
11

I realize this is an old question, but I like this solution enough to share it.

% cat > numbers.txt
1 
2 
3 
4 
5
^D
% cat numbers.txt | perl -lpe '$c+=$_}{$_=$c'
15

If there is interest, I'll explain how it works.

Nym
  • 11
    Please don't. We like to pretend that -n and -p are nice semantic things, not just some clever string pasting ;) – hobbs Oct 15 '09 at 00:37
  • 2
    Yes please, do explain :) (I'm not a Perl typea guy.) – Jens Apr 24 '13 at 03:37
  • 3
    Try running "perl -MO=Deparse -lpe '$c+=$_}{$_=$c'" and looking at the output: basically -l uses newlines as both input and output separators, and -p prints each line. But in order to do '-p', perl first adds some boilerplate (which -MO=Deparse will show you), then it just substitutes and compiles. You can thus cause an extra block to be inserted with the '}{' part and trick it into not printing on each line, but printing at the very end. – Nym Jul 08 '13 at 18:52
11

The following works in bash:

I=0

for N in `cat numbers.txt`
do
    I=`expr $I + $N`
done

echo $I
Francisco Canedo
  • 1
    Command expansion should be used with caution when files can be arbitrarily large. With numbers.txt of 10MB, the `cat numbers.txt` step would be problematic. – Giacomo Jan 16 '09 at 15:59
  • 1
    Indeed, however (if not for the better solutions found here) I would use this one until I actually encountered that problem. – Francisco Canedo Jan 16 '09 at 22:05
11
sed 's/^/.+/' infile | bc | tail -1
Daniel Serodio
  • 4,229
  • 5
  • 37
  • 33
8

Pure bash and in a one-liner :-)

$ cat numbers.txt
1
2
3
4
5
6
7
8
9
10


$ I=0; for N in $(cat numbers.txt); do I=$(($I + $N)); done; echo $I
55
Oliver Ertl
  • 41
  • 1
  • 1
6

Alternative pure Perl, fairly readable, no packages or options required:

perl -e 'map {$x += $_} <> and print $x' < infile.txt

(note the single quotes: with double quotes the shell would expand $x and $_ before Perl ever saw them)
clint
  • 1
  • 1
  • 1
6

For Ruby Lovers

ruby -e "puts ARGF.map(&:to_i).inject(&:+)" numbers.txt
johnlinvc
  • 304
  • 3
  • 6
6

Here's a nice and clean Raku (formerly known as Perl 6) one-liner:

say [+] slurp.lines

We can use it like so:

% seq 10 | raku -e "say [+] slurp.lines"
55

It works like this:

slurp without any arguments reads from standard input by default; it returns a string. Calling the lines method on a string returns a list of lines of the string.

The brackets around + turn + into a reduction meta operator which reduces the list to a single value: the sum of the values in the list. say then prints it to standard output with a newline.

One thing to note is that we never explicitly convert the lines to numbers—Raku is smart enough to do that for us. However, this means our code breaks on input that definitely isn't a number:

% echo "1\n2\nnot a number" | raku -e "say [+] slurp.lines"
Cannot convert string to number: base-10 number must begin with valid digits or '.' in '⏏not a number' (indicated by ⏏)
  in block <unit> at -e line 1
Julia
  • 1,950
  • 1
  • 9
  • 22
4

You can do it in python, if you feel comfortable:

Not tested, just typed:

out = open("filename").read()
lines = out.split()          # split() also drops the trailing newline
ints = map(int, lines)
s = sum(ints)
print s

Sebastian pointed out a one liner script:

cat filename | python -c"from fileinput import input; print sum(map(int, input()))"
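Under Python 3, where print is a function, an equivalent one-liner (reading stdin directly, so no cat needed):

```shell
seq 1 10 | python3 -c "import sys; print(sum(map(int, sys.stdin)))"   # 55
```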
Tiago
  • 9,457
  • 5
  • 39
  • 35
  • python -c"from fileinput import input; print sum(map(int, input()))" numbers.txt – jfs Jan 16 '09 at 15:50
  • 3
    cat is overused, redirect stdin from file: python -c "..." < numbers.txt – Giacomo Jan 16 '09 at 16:02
  • 2
    @rjack: `cat` is used to demonstrate that script works both for stdin and for files in argv[] (like `while(<>)` in Perl). If your input is in a file then '<' is unnecessary. – jfs Jan 16 '09 at 16:06
  • 2
    But `< numbers.txt` demonstrates that it works on stdin just as well as `cat numbers.txt |` does. And it doesn't teach bad habits. – Xiong Chiamiov Jun 18 '13 at 22:39
  • @XiongChiamiov If you care so much about habits, using the notation `command < file` is a bad habit itself. Use `< file command` instead. Bash one-liners should be easy to read left-to-right from input to output. – Marcel Besixdouze Sep 02 '22 at 03:20
4

The following should work (assuming your number is the second field on each line).

awk 'BEGIN {sum=0} \
 {sum=sum + $2} \
END {print "tot:", sum}' Yourinputfile.txt
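For instance, with a hypothetical log extract where the timing is the second field:

```shell
printf 'req 12\nreq 7\nreq 30\n' |
awk 'BEGIN {sum=0} {sum=sum + $2} END {print "tot:", sum}'   # tot: 49
```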
Sinan Ünür
  • 116,958
  • 15
  • 196
  • 339
James Anderson
  • 27,109
  • 7
  • 50
  • 78
3

One-liner in Racket:

racket -e '(define (g) (define i (read)) (if (eof-object? i) empty (cons i (g)))) (foldr + 0 (g))' < numlist.txt
b2coutts
  • 131
  • 1
3

C (not simplified)

seq 1 10 | tcc -run <(cat << EOF
#include <stdio.h>
int main(int argc, char** argv) {
    int sum = 0;
    int i = 0;
    while(scanf("%d", &i) == 1) {
        sum = sum + i;
    }
    printf("%d\n", sum);
    return 0;
}
EOF)
Greg Bowyer
  • 1,684
  • 13
  • 14
  • I had to upvote the comment. There's nothing wrong with the answer - it's quite good. However, to show that the comment makes the answer awesome, I'm just upvoting the comment. – bballdave025 May 22 '19 at 21:52
3
$ cat n
2
4
2
7
8
9
$ perl -MList::Util -le 'print List::Util::sum(<>)' < n
32

Or, you can type in the numbers on the command line:

$ perl -MList::Util -le 'print List::Util::sum(<>)'
1
3
5
^D
9

However, this one slurps the file so it is not a good idea to use on large files. See j_random_hacker's answer which avoids slurping.

Community
  • 1
  • 1
Sinan Ünür
  • 116,958
  • 15
  • 196
  • 339
3

My version:

seq -5 10 | xargs printf "- - %s" | xargs | bc

Each number n is rendered as the text "- - n"; since "a - - n" is "a + n" in bc (the two minus signs cancel), concatenating these fragments builds one long sum, which the second xargs collapses onto a single line for bc to evaluate. For seq -5 10 the result is 40.
Vytenis Bivainis
  • 2,308
  • 21
  • 28
2

Real-time summing to let you monitor progress of some number-crunching task.

$ cat numbers.txt 
1
2
3
4
5
6
7
8
9
10

$ cat numbers.txt | while read new; do total=$(($total + $new)); echo $total; done
1
3
6
10
15
21
28
36
45
55

(There is no need to initialize $total to zero here. Note, though, that because the pipeline runs the loop in a subshell, $total is not accessible after the loop finishes.)
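If awk is available, the same running total fits in one short pipe:

```shell
seq 1 10 | awk '{total += $1; print total}'   # prints 1 3 6 ... 55, one per line
```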

sanmai
  • 29,083
  • 12
  • 64
  • 76
2

C++ (simplified):

echo {1..10} | scc 'WRL n+=$0; n'

SCC project - http://volnitsky.com/project/scc/

SCC is C++ snippets evaluator at shell prompt

rogerdpack
  • 62,887
  • 36
  • 269
  • 388
Leonid Volnitsky
  • 8,854
  • 5
  • 38
  • 53
2

Apologies in advance for the readability of the backticks ("`"), but they work in shells other than bash and are thus more pasteable. If you use a shell which accepts it, the $(command ...) format is much more readable (and thus more debuggable) than `command ...`, so feel free to modify for your sanity.

I have a simple function in my bashrc that will use awk to calculate a number of simple math items

calc(){
  awk 'BEGIN{print '"$@"' }'
}

This will do +,-,*,/,^,%,sqrt,sin,cos, parenthesis ....(and more depending on your version of awk) ... you could even get fancy with printf and format floating point output, but this is all I normally need
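For example (the calc helper from above, repeated so the example is self-contained):

```shell
calc(){
  awk 'BEGIN{print '"$@"' }'
}

calc '2*3 + sqrt(16)'   # 10
```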

for this particular question, I would simply do this for each line:

calc `echo "$@"|tr " " "+"`

so the code block to sum each line would look something like this:

while read LINE || [ "$LINE" ]; do
  calc `echo "$LINE"|tr " " "+"` #you may want to filter out some lines with a case statement here
done

That's if you wanted to only sum them line by line. However for a total of every number in the datafile

VARS=`<datafile`
calc `echo "${VARS//$'\n'/+}"`

(the expansion replaces the newlines between the numbers with + before handing the whole expression to awk)

btw if I need to do something quick on the desktop, I use this:

xcalc() { 
  A=`calc "$@"`
  A=`Xdialog --stdout --inputbox "Simple calculator" 0 0 $A`
  [ $A ] && xcalc $A
}
tripleee
  • 175,061
  • 34
  • 275
  • 318
technosaurus
  • 7,676
  • 1
  • 30
  • 52
1

A Lua interpreter is present on all Fedora-based systems (Fedora, RHEL, CentOS, Korora etc.), because it is embedded in rpm (the rpm-lua component of the package manager), so if you want to learn Lua this kind of problem is ideal (and you'll get your job done as well).

cat filename | lua -e "sum = 0; for i in io.lines() do sum = sum + i end print(sum)"

and it works. Lua is verbose, though; you might have to endure some repeated keyboard-stroke injury :)

fedvasu
  • 1,232
  • 3
  • 18
  • 38
1

You can do it with Alacon, a command-line utility for the Alasql database.

It works with Node.js, so you need to install Node.js and then the Alasql package.

To calculate the sum from stdin you can use the following command:

> cat data.txt | node alacon "SELECT VALUE SUM([0]) FROM TXT()" >b.txt
agershun
  • 4,077
  • 38
  • 41
1

You can use your preferred 'expr' command; you just need to finagle the input a little first:

seq 10 | tr '[\n]' '+' | sed -e 's/+/ + /g' -e's/ + $/\n/' | xargs expr

The process is:

  • "tr" replaces the eoln characters with a + symbol,
  • sed pads the '+' with spaces on each side, and then strips the final + from the line
  • xargs inserts the piped input into the command line for expr to consume.
Alan Dyke
  • 855
  • 6
  • 14
1

Just for completeness, there is also an R solution

seq 1 10 | R -q -e "f <- file('stdin'); open(f); cat(sum(as.numeric(readLines(f))))"
drmariod
  • 11,106
  • 16
  • 64
  • 110
1

UPDATED BENCHMARKS

So I synthetically generated 100 million integers, randomly distributed

between

0^0 - 1 

and

8^8 - 1

GENERATOR CODE

mawk2 '
BEGIN {
     __=_=((_+=_^=_<_)+(__=_*_*_))^(___=__)
     srand()
     ___^=___
     do  { 
           print int(rand()*___) 
  
     } while(--_)  }' | pvE9 > test_large_int_100mil_001.txt

     out9:  795MiB 0:00:11 [69.0MiB/s] [69.0MiB/s] [ <=> ]

  f='test_large_int_100mil_001.txt'
  wc5 < "${f}"

    rows = 100000000. | UTF8 chars = 833771780. | bytes = 833771780.

Odd / Even distribution of Last Digit

Odd  49,992,332
Even 50,007,668

AWK - Fastest, by a good margin (maybe C is faster I dunno)

in0:  795MiB 0:00:07 [ 103MiB/s] [ 103MiB/s] [============>] 100%            
( pvE 0.1 in0 < "${f}" | mawk2 '{ _+=$__ } END { print _ }'; )  

 7.64s user 0.35s system 103% cpu 7.727 total
     1  838885279378716

Perl - Quite Decent

 in0:  795MiB 0:00:10 [77.6MiB/s] [77.6MiB/s] [==============>] 100%            
( pvE 0.1 in0 < "${f}" | perl -lne '$x += $_; END { print $x; }'; )  
 
10.16s user 0.37s system 102% cpu 10.268 total

     1  838885279378716

Python3 - Slightly behind Perl

 in0:  795MiB 0:00:11 [71.5MiB/s] [71.5MiB/s] [===========>] 100%            
( pvE 0.1 in0 < "${f}" | python3 -c ; )  

 11.00s user 0.43s system 102% cpu 11.140 total
     1  838885279378716

RUBY - Decent

 in0:  795MiB 0:00:13 [61.0MiB/s] [61.0MiB/s] [===========>] 100%            
( pvE 0.1 in0 < "${f}" | ruby -e 'puts ARGF.map(&:to_i).inject(&:+)'; )  
15.30s user 0.70s system 101% cpu 15.757 total

     1  838885279378716

JQ - Slow

 in0:  795MiB 0:00:25 [31.1MiB/s] [31.1MiB/s] [========>] 100%            
( pvE 0.1 in0 < "${f}" | jq -s 'add'; )  

 36.95s user 1.09s system 100% cpu 37.840 total

     1  838885279378716

DC

(had to kill it after minutes with no response)
RARE Kpop Manifesto
  • 2,453
  • 3
  • 11
0
#include <iostream>

int main()
{
    double x = 0, total = 0;
    while (std::cin >> x)
        total += x;
    if (!std::cin.eof())
        return 1;
    std::cout << total << '\n';
}
Tony Delroy
  • 102,968
  • 15
  • 177
  • 252
0

...and the PHP version, just for the sake of completeness

cat /file/with/numbers | php -r '$s = 0; while (true) { $e = fgets(STDIN); if (false === $e) break; $s += $e; } echo $s;'
Ivan Krechetov
  • 18,802
  • 8
  • 49
  • 60
0

One-liner in Rebol:

rebol -q --do 's: 0 while [d: input] [s: s + to-integer d] print s' < infile.txt

Unfortunately the above doesn't work in Rebol 3 just yet (INPUT doesn't stream STDIN).

So here's an interim solution which also works in Rebol 3:

rebol -q --do 's: 0 foreach n to-block read %infile.txt [s: s + n] print s'
draegtun
  • 22,441
  • 5
  • 48
  • 71
0

Using env variable tmp

tmp=awk -v tmp="$tmp" '{print $tmp" "$1}' <filename>|echo $tmp|sed "s/ /+/g"|bc

tmp=cat <filename>|awk -v tmp="$tmp" '{print $tmp" "$1}'|echo $tmp|sed "s/ /+/g"|bc

Thanks.

Fruchtzwerg
  • 10,999
  • 12
  • 40
  • 49
김상헌
  • 1
  • 1
0

One simple solution would be to write a program to do it for you. This could probably be done pretty quickly in Python, something like:

sum = 0
file = open("numbers.txt", "r")
for line in file.readlines(): sum += int(line)
file.close()
print sum

I haven't tested that code, but it looks right. Just change numbers.txt to the name of the file, save the code to a file called sum.py, and in the console type in "python sum.py"

Matt Boehm
  • 1,894
  • 1
  • 18
  • 21
  • calling readlines() reads the entire file into memory - using 'for line in file' could be better – orip Jan 16 '09 at 16:11
0

The beauty of awk is that, from a single stream of integers (generated here with jot), it can simultaneously generate multiple concurrent (and possibly cross-interacting) sequences with barely any code at all:

jot - -10 399 | 

mawk2 '__+=($++NF+=__+=-($++NF+=(--$!_)*9^9-1)+($!_^=2))' CONVFMT='%.20g'

121     4261625501                  -4261625380
100     12397455993                 -3874204891
81      28281696469                 -3486784402
64      59662756915                 -3099363913
49      122037457303                -2711943424
36      246399437577                -2324522935
25      494735977625                -1937102446
16      991021637223                -1549681957
9       1983205535923               -1162261468
4       3967185912829               -774840979
1       7934759246149               -387420490
0       15869518492299              -1
1       31738649564111              387420488
4       63476524287249              774840977
9       126951886313041             1162261466
16      253902222944143             1549681955
25      507802508785867             1937102444
36      1015602693048837            2324522933
49      2031202674154301            2711943422
64      4062402248944755            3099363911
81      8124801011105191            3486784400

This is a less commonly known feature, but mawk-1 can directly generate formatted output without using printf() or sprintf() :

 jot - -11111111555359 900729999999999 49987777777556 | 
 
 mawk '$++NF=_+=$!__' CONVFMT='%+\047\043 30.f' OFS='\t' 

-11111111555359           -11,111,111,555,359.
38876666222197            +27,765,554,666,838.
88864443999753           +116,629,998,666,591.
138852221777309          +255,482,220,443,900.

188839999554865          +444,322,219,998,765.
238827777332421          +683,149,997,331,186.
288815555109977          +971,965,552,441,163.

338803332887533        +1,310,768,885,328,696.
388791110665089        +1,699,559,995,993,785.
438778888442645        +2,138,338,884,436,430.
488766666220201        +2,627,105,550,656,631.

538754443997757        +3,165,859,994,654,388.
588742221775313        +3,754,602,216,429,701.
638729999552869        +4,393,332,215,982,570.
688717777330425        +5,082,049,993,312,995.

738705555107981        +5,820,755,548,420,976.
788693332885537        +6,609,448,881,306,513.
838681110663093        +7,448,129,991,969,606.
888668888440649        +8,336,798,880,410,255.

With nawk, one even more obscure feature is being able to print out the exact IEEE 754 double precision floating point hex :

 jot - .00001591111137777 \
       9007299999.1111111111 123.990333333328 | 

nawk '$++NF=_+=_+= cos(exp(log($!__)/1.1))' CONVFMT='[ %20.13p ]' OFS='\t' \_=1 

0.00001591111137777     [   0x400fffffffbf27f8 ]
123.99034924443937200   [   0x401f1a2498670bcc ]
247.98068257776736800   [   0x40313bd908775e35 ]
371.97101591109537821   [   0x4040516a505a57a3 ]
495.96134924442338843   [   0x4050b807540a1c3a ]

619.95168257775139864   [   0x4060f800d1abb906 ]
743.94201591107935201   [   0x407112ffc8adec4a ]
867.93234924440730538   [   0x40810bab4a485ad9 ]
991.92268257773525875   [   0x4091089e1149c279 ]

1115.91301591106321212  [   0x40a10ac8cfb09c62 ]
1239.90334924439116548  [   0x40b10a7bfa7fa42d ]
1363.89368257771911885  [   0x40c109c2d1b9947c ]
1487.88401591104707222  [   0x40d10a2644d5ab3b ]

gawk w/ GMP is even more interesting - they're willing to provide comma-formatted hex on your behalf, plus strangely padding extra commas in the empty space to its left

=

jot -  .000591111137777 90079.1111111111 123.990333333328 | 

gawk -v PREC=20000 -nMbe '
              $++NF  = _ +=(15^16 * log($!__)/log(sqrt(10)))' \
              CONVFMT='< 0x %\04724.12x >' OFS=' | '   \_=1 

# rows skipped in the middle for illustration clarity
 
4339.662257777619743 | < 0x    ,   ,4e6,007,2f4,08a,b93,8b3 >
4463.652591110947469 | < 0x    ,   ,50f,967,27f,e5a,963,518 >
4835.623591110930647 | < 0x    ,   ,58d,250,b65,a8d,45d,b79 >
7315.430257777485167 | < 0x    ,   ,8eb,b36,ee9,fe6,149,da5 >
11779.082257777283303 | < 0x    ,   ,f4b,c34,a75,82a,826,abb >

12151.053257777266481 | < 0x    ,   ,fd7,3c2,25e,1ab,a09,bbf >
16738.695591110394162 | < 0x    ,  1,6b0,f3b,350,ed3,eca,c58 >
17978.598924443671422 | < 0x    ,  1,894,2f2,aba,a30,f63,bae >
20458.405591110225942 | < 0x    ,  1,c64,a40,87e,e35,4d4,896 >
23434.173591110091365 | < 0x    ,  2,108,186,96e,0dc,2ef,d46 >

31741.525924443049007 | < 0x    ,  2,e45,bae,b73,24f,981,637 >
32857.438924442998541 | < 0x    ,  3,014,3a7,b9e,daf,18c,c3e >
33849.361591109620349 | < 0x    ,  3,1b0,9b7,5f1,536,49c,74e >
41536.762257775939361 | < 0x    ,  3,e51,7c1,9b2,e74,516,220 >
45876.423924442409771 | < 0x    ,  4,58c,52d,078,edb,db4,4ba >

53067.863257775417878 | < 0x    ,  5,1aa,cf3,eed,33c,638,456 >
59391.370257775131904 | < 0x    ,  5,c73,38a,54d,b41,98d,a02 >
61127.234924441720068 | < 0x    ,  5,f6d,ce2,c40,117,6d2,6e7 >
66830.790257774875499 | < 0x    ,  6,944,fe1,378,9ea,235,7b0 >
71170.451924441600568 | < 0x    ,  7,0ce,de6,797,df3,009,35d >

76254.055591108335648 | < 0x    ,  7,9b0,f6d,03d,878,edf,97d >
83073.523924441760755 | < 0x    ,  8,5b0,aa9,7f7,a31,89a,f2e >
86669.243591108475812 | < 0x    ,  8,c0d,678,fa3,3b1,aad,f26 >
89149.050257775175851 | < 0x    ,  9,074,278,19d,4c7,443,a00 >
89769.001924441850861 | < 0x    ,  9,18e,464,ff9,0eb,ee4,4e1 >

but be wary of syntactical errors -

  • this is a selection of what's being printed out to STDOUT,
  • all 256 byte choices have been observed in what's being printed out, even when it's a terminal window

=

   jot 3000 | 
   gawk -Me ' _=$++NF=____+=$++NF=___-= $++NF=__+=$++NF=\
             _^= exp(cos($++NF=______+=($1) %10 + 1))'   \
                                  ____=-111111089 OFMT='%32c`' 


 char >>[  --[ U+ 2 | 2 (ASCII) freq >>[ 8 sumtotal >>[ 45151 
 char >>[  --[ U+ 4 | 4 (ASCII) freq >>[ 11 sumtotal >>[ 45166 
 char >>[  --[ U+ 14 | 20 (ASCII) freq >>[ 9 sumtotal >>[ 45301 
 char >>[ + --[ U+ 2B | 43 (ASCII) freq >>[ 9 sumtotal >>[ 60645 
 char >>[ --[ U+ 9 | 9 (ASCII) freq >>[ 12 sumtotal >>[ 45216 
 char >>[ 8 --[ U+ 38 | 56 (ASCII) freq >>[ 1682 sumtotal >>[ 82522 
 char >>[ Q --[ U+ 51 | 81 (ASCII) freq >>[ 6 sumtotal >>[ 85040 
 char >>[ Y --[ U+ 59 | 89 (ASCII) freq >>[ 8 sumtotal >>[ 85105 
 char >>[ g --[ U+ 67 | 103 (ASCII) freq >>[ 10 sumtotal >>[ 85212 
 char >>[ p --[ U+ 70 | 112 (ASCII) freq >>[ 7 sumtotal >>[ 85411 
 char >>[ v --[ U+ 76 | 118 (ASCII) freq >>[ 7 sumtotal >>[ 85462 
 char >>[ ? --[ \216 \x8E | 142 (8-bit byte) freq >>[ 15 sumtotal >>[ 85653 
 char >>[ ? --[ \222 \x92 | 146 (8-bit byte) freq >>[ 13 sumtotal >>[ 85698 
 char >>[ ? --[ \250 \xA8 | 168 (8-bit byte) freq >>[ 9 sumtotal >>[ 85967 
 char >>[ ? --[ \307 \xC7 | 199 (8-bit byte) freq >>[ 7 sumtotal >>[ 86345 
 char >>[ ? --[ \332 \xDA | 218 (8-bit byte) freq >>[ 69 sumtotal >>[ 86576 
 char >>[ ? --[ \352 \xEA | 234 (8-bit byte) freq >>[ 6 sumtotal >>[ 86702 
 char >>[ ? --[ \354 \xEC | 236 (8-bit byte) freq >>[ 5 sumtotal >>[ 86713 
 char >>[ ? --[ \372 \xFA | 250 (8-bit byte) freq >>[ 11 sumtotal >>[ 86823 
 char >>[ ? --[ \376 \xFE | 254 (8-bit byte) freq >>[ 9 sumtotal >>[ 86859
RARE Kpop Manifesto
  • 2,453
  • 3
  • 11
0

Miller tool(s) are definitely overkill for this task, but 1) let's have them here for completeness' sake; 2) they might come in handy for further processing:

 % seq 10 | mlr stats1 -a sum -f 1
1_sum=55
Nikolaj Š.
  • 1,457
  • 1
  • 10
  • 17
0

If the numbers that require summing up happen to fall within a contiguous range, you can use the old trick

   f(n) := n * (n + 1) / 2

and compute

  f( high-side ) - \
  f(  low-side  - 1 )

As fast as awk is, a direct computation is still muuuuch faster than summation; the caveat being that, without any big-int library, this approach is constrained by 64-bit fp precision:

    ( time ( jot - 19495729 21895729 | pvE0 |

    mawk2 '{ __+=$_ } END { print __ }' )) 
    
    sleep 0.5

   ( time ( echo '19495729 21895729' | pvE0 | 

    mawk2 '
    function __(___, _) {
         return \
         _ * ((_ + (_^=_<_)) * (\
             _/= _+_)) - ___ * _ * --___ 
    }
    BEGIN {
        CONVFMT = "%.250g"
        OFS = ORS } ($!NF = __($1, $2))^_'  )) 

      in0: 20.6MiB 0:00:00 [49.4MiB/s] [49.4MiB/s] [ <=>]
  ( jot - 19495729 21895729 | pvE 0.1 in0 | mawk2 '{ __+=$_ } END { print __ }')

     0.59s user 0.03s system 141% cpu 0.440 total
     1  49669770295729 0x2D2CA503B9B1


      in0: 18.0 B 0:00:00 [ 462KiB/s] [ 462KiB/s] [<=>]
  ( echo '19495729 21895729' | pvE 0.1 in0 | mawk2 ; )

     0.00s user 0.01s system 62% cpu 0.024 total
     1  49669770295729 0x2D2CA503B9B1

** the code saved the / 2 from high-side so low-side could be performed as * 0.5

RARE Kpop Manifesto
  • 2,453
  • 3
  • 11
-1

Simple PHP

  cat numbers.txt | php -r "echo array_sum(explode(PHP_EOL, stream_get_contents(STDIN)));"
131
  • 3,071
  • 31
  • 32
-6

Ok, here is how to do it in PowerShell (PowerShell Core, so it should work on Windows, Linux and Mac):

Get-Content aaa.dat | Measure-Object -Sum
Severin Pappadeux
  • 18,636
  • 3
  • 38
  • 64
  • This question is tagged [[shell]]: "Without a specific tag, a portable (POSIX-compliant) solution should be assumed" not PowerShell – retnikt Mar 31 '20 at 10:31