
I have a tab-delimited file that has over 200 million lines. What's the fastest way in Linux to convert this to a CSV file? This file does have multiple lines of header information which I'll need to strip out down the road, but the number of header lines is known. I have seen suggestions for sed and gawk, but I wonder if there is a "preferred" choice.

Just to clarify, there are no embedded tabs in this file.

andrewj
  • For csv to tsv and embedded delimiter issues, see also answers to http://stackoverflow.com/questions/13475535/replace-every-comma-not-enclosed-in-a-pair-of-double-quotes-with (also relevant : http://unix.stackexchange.com/questions/48672/remove-comma-between-the-quotes-only-in-a-comma-delimited-file). – Skippy le Grand Gourou Dec 26 '15 at 21:18

11 Answers


If you're worried about embedded commas then you'll need to use a slightly more intelligent method. Here's a Python script that takes TSV lines from stdin and writes CSV lines to stdout:

import sys
import csv

tabin = csv.reader(sys.stdin, dialect=csv.excel_tab)
commaout = csv.writer(sys.stdout, dialect=csv.excel)
for row in tabin:
  commaout.writerow(row)

Run it from a shell as follows:

python script.py < input.tsv > output.csv
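Since the question mentions a known number of header lines, those can be stripped before the script ever sees them. A minimal sketch, assuming 3 header lines and the script above saved as `script.py`:

```shell
# Skip the 3 known header lines, then convert the rest (assumes 3 headers).
tail -n +4 input.tsv | python script.py > output.csv
```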
Ignacio Vazquez-Abrams
  • Unless you know for sure that there are no embedded commas and no embedded tabs, this is a very reliable way to do it. Even though it probably doesn't meet the criteria for being 'the fastest'. – leedm777 Mar 29 '10 at 03:00
  • It may not be "the fastest", but it does handle embedded tabs and commas for me. – anshuman Apr 08 '14 at 19:27
  • This rocks. I was writing a cryptic sed script to take care of this in bash - but that can't compete with the completeness of the python csv library. Thank you Ignacio, for this offering. Regarding speed - ease of use and reliability far outweigh speed - this is certainly fast enough. :-) – dlink Oct 28 '14 at 14:35
  • Ignacio, with your permission I'd like to add your script, with attribution, as tab2csv in my [vbin](http://crowfly.net/vbin) library. – dlink Oct 28 '14 at 14:42
  • @IgnacioVazquez-Abrams: Thanks, Ignacio. [vbin/t2c](http://crowfly.net/vbin/#t2cHelp) – dlink Oct 28 '14 at 20:15
  • Excel 2010 didn't like this tsv->csv conversion for at least one of my .tsv files until specifically quoting all the fields: `commaout = csv.writer(sys.stdout, dialect=csv.excel, quotechar='"', quoting=csv.QUOTE_ALL)`. Possibly due to my tsv not being "properly" quoted for Excel? – Johnny Utahh May 14 '16 at 15:55
  • Indeed, to me that was the best choice! Other options might fail whenever there were tabs or commas inside the file. This Python library is a flexible yet seamless solution! – Rodrigo Hjort Mar 12 '17 at 23:49
  • If you're doing this in windows and getting extra lines between each row try adding `lineterminator='\n'` to the commaout line: ```commaout = csv.writer(sys.stdout, lineterminator='\n', dialect=csv.excel)``` – GregS Aug 06 '19 at 01:34

If all you need to do is translate all tab characters to comma characters, tr is probably the way to go.

The blank space here is a literal tab:

$ echo "hello   world" | tr "\\t" ","
hello,world

Of course, if you have embedded tabs inside string literals in the file, this will incorrectly translate those as well; but embedded literal tabs would be fairly uncommon.
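Since the number of header lines is known, `tr` also combines naturally with `tail` to skip them before translating. A sketch, assuming 3 header lines:

```shell
# Skip the 3 known header lines, then translate tabs to commas.
tail -n +4 input.tsv | tr '\t' ',' > output.csv
```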

Mark Rushakoff
  • More common are embedded commas in the source, which then require wrapping with quotes. Which is troublesome if there are embedded quotes... – kibibu Mar 29 '10 at 01:09
  • Thanks for the `tr` suggestion. How does it compare to `sed` with speed? Suppose you wanted to skip the header start at line number x and continue to the rest of the file. Is there a way to implement this with `tr`? (I should also clarify that there are no embedded commas in the file.) – andrewj Mar 29 '10 at 01:10
  • @andrewj: `tr` should be much faster, as it's just doing character-by-character replacement instead of regex matching. As for skipping header, the easiest thing is to just process in two chunks - if you know the length, `head -n input > output; tail -n + input | tr ... >> output`; if you don't know the length, probably something with `grep -n`... – Cascabel Mar 29 '10 at 01:13
  • @andrew, sed has support for transliteration, also you can use address range. – ghostdog74 Mar 29 '10 at 01:37
  • This is an incomplete answer; `"` must be converted into `""` for CSV, fields must be wrapped in `"` quotes if they contain quotes or commas or newlines (though newlines will not exist in data presented as TSV). – Steven Lu Aug 15 '13 at 18:16
perl -lpe 's/"/""/g; s/^|$/"/g; s/\t/","/g' < input.tab > output.csv

Perl is generally faster at this sort of thing than sed, awk, and Python.

pabs
  • If you want to convert the whole tsv file into a csv file:

    $ cat data.tsv | tr "\\t" "," > data.csv
    

  • If you want to omit some fields:

    $ cat data.tsv | cut -f1,2,3 | tr "\\t" "," > data.csv
    

    The above command will convert the data.tsv file to a data.csv file containing only the first three fields.

Supun Wijerathne
Gopal Kumar
sed -e 's/"/""/g' -e 's/<tab>/","/g' -e 's/^/"/' -e 's/$/"/' infile > outfile

Damn the critics, quote everything, CSV doesn't care.

<tab> is the actual tab character. \t didn't work for me. In bash, use ^V to enter it.
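With GNU sed, `\t` is generally recognized, so on Linux the literal-tab workaround may not be needed. A sketch (BSD/macOS sed typically still requires the literal tab, and quotes are doubled per standard CSV escaping):

```shell
# GNU sed understands \t; embedded quotes are doubled for CSV.
sed -e 's/"/""/g' -e 's/\t/","/g' -e 's/^/"/' -e 's/$/"/' infile > outfile
```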

John Carter
Will Hartung

@ignacio-vazquez-abrams's Python solution is great! For people who are looking to parse delimiters other than tab, the library actually allows you to set an arbitrary delimiter. Here is my modified version to handle pipe-delimited files:

import sys
import csv

pipein = csv.reader(sys.stdin, delimiter='|')
commaout = csv.writer(sys.stdout, dialect=csv.excel)
for row in pipein:
  commaout.writerow(row)
jtlai

Assuming you don't want to change the header, and assuming you don't have embedded tabs:

# cat file
header  header  header
one     two     three

$ awk 'NR>1{$1=$1}1' OFS="," file
header  header  header
one,two,three

NR>1 skips the first header line. You mentioned you know how many lines of header there are, so use the correct number for your own case. With this, you also do not need to call any other external commands; just one awk command does the job.

Another way, if you have blank columns and you care about preserving them:

awk 'NR>1{gsub("\t",",")}1' file

Using sed:

sed '2,$y/\t/,/' file #skip 1 line header and translate (same as tr)
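If the known header lines should be dropped entirely during conversion rather than passed through, one awk pass can do both. A sketch, assuming 3 header lines:

```shell
# Drop the first 3 lines and convert the remainder (assumes 3 headers).
awk 'NR>3 {gsub("\t",","); print}' file > output.csv
```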
ghostdog74

You can also use xsv for this

xsv input -d '\t' input.tsv > output.csv

In my test on a 300MB tsv file, it was roughly 5x faster than the python solution (2.5s vs. 14s).

mloughran

The following awk one-liner supports quoting + quote-escaping (each embedded `"` must be doubled, and the `gsub` belongs inside the field loop):

printf "flop\tflap\"" | awk -F '\t' '{ for(i = 1; i <= NF; i++) { gsub(/"/,"\"\"",$i); printf "\"%s\"",$i; if( i < NF ) printf "," }; printf "\n" }'

gives

"flop","flap"""
coderofsalvation

Right-click the file, click rename, delete the 't' and put a 'c'. I'm actually not joking: most csv parsers can handle tab delimiters. I had this issue just now, and for my purposes renaming worked just fine.

Neil

I think it is better not to cat the file, because it may create problems in the case of a large file. A better way may be

$ tr ',' '\t' < csvfile.csv > tabdelimitedFile.txt

The command will get input from csvfile.csv and store the result, tab-separated, in tabdelimitedFile.txt.
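For the direction the question actually asks about (tab-delimited to CSV), the same redirection pattern applies:

```shell
# Read tab-delimited input via redirection, write CSV output.
tr '\t' ',' < tabdelimitedFile.txt > csvfile.csv
```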

Dr. Mian