The following program has been running for about 22 hours on two text files (~10 MB each, roughly 100K rows per file). Can someone give me an indication of how inefficient my code is, and perhaps suggest a faster method? The input files are ordered, and preserving that order in the output is necessary:
import collections

def uniq(input):
    output = []
    for x in input:
        if x not in output:
            output.append(x)
    return output
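# Note: the "x not in output" test above scans a list, so de-duplicating
# ~200K keys this way is O(n^2). A sketch of an order-preserving alternative
# that tracks seen items in a set (assumes the items are hashable, which
# these "chr:pos" key strings are):
def uniq_fast(items):
    seen = set()
    output = []
    for x in items:
        if x not in seen:   # O(1) set lookup instead of an O(n) list scan
            seen.add(x)
            output.append(x)
    return output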
Su = {}
with open('Sucrose_rivacombined.txt') as f:
    for line in f:
        (key, val) = line.split('\t')
        Su[key] = val
Su_OD = collections.OrderedDict(Su)
Su_keys = Su_OD.keys()
Et = {}
with open('Ethanol_rivacombined.txt') as g:
    for line in g:
        (key, val) = line.split('\t')
        Et[key] = val
Et_OD = collections.OrderedDict(Et)
Et_keys = Et_OD.keys()
merged_keys = Su_keys + Et_keys
merged_keys = uniq(merged_keys)
d3 = collections.OrderedDict()
output_doc = open("compare.txt", "w+")
for chr_local in merged_keys:
    line_output = chr_local
    if Et.has_key(chr_local):
        line_output = line_output + "\t" + Et[chr_local]
    else:
        line_output = line_output + "\t" + "ND"
    if Su.has_key(chr_local):
        line_output = line_output + "\t" + Su[chr_local]
    else:
        line_output = line_output + "\t" + "ND"
    output_doc.write(line_output + "\n")
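For reference, here is a sketch of the same merge done in a single pass per file. It reads each file straight into a collections.OrderedDict (under Python 2, filling a plain dict first and wrapping it in OrderedDict afterwards does not recover the file order), and it skips duplicate keys with a set instead of the list scan in uniq(). The file names and the two-column tab-separated format are taken from the script above; the helper name read_ordered is only illustrative, and the output columns follow the desired layout shown below (key, Sucrose value, Ethanol value).

import collections
import itertools

def read_ordered(path):
    # Build an OrderedDict directly so the file's row order is preserved.
    od = collections.OrderedDict()
    with open(path) as fh:
        for line in fh:
            key, val = line.rstrip('\n').split('\t')
            od[key] = val
    return od

Su = read_ordered('Sucrose_rivacombined.txt')
Et = read_ordered('Ethanol_rivacombined.txt')

with open('compare.txt', 'w') as out:
    seen = set()
    # Iterate Sucrose keys first, then Ethanol keys, writing each key once.
    for key in itertools.chain(Su, Et):
        if key in seen:
            continue
        seen.add(key)
        out.write('%s\t%s\t%s\n' % (key, Su.get(key, 'ND'), Et.get(key, 'ND')))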
The input files look like this (not every key is present in both files):
Su:
chr1:3266359 80.64516129
chr1:3409983 100
chr1:3837894 75.70093458
chr1:3967565 100
chr1:3977957 100
Et:
chr1:3266359 95
chr1:3456683 78
chr1:3837894 54.93395855
chr1:3967565 100
chr1:3976722 23
I would like the output to look as follows:
chr1:3266359 80.645 95
chr1:3456683 ND 78