I have written a script that splits every word in a sentence into parts; for instance:
"geldigim" -> "gel" "di" "g" "i" "m"
While some words are split as above, others may be split like this:
"bildi" -> "bil" "di"
and some words may not be split at all:
"kos" -> "kos"
Whether and how a word is split is decided entirely by the segmentation function.
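Concretely, that function is Morfessor's Viterbi segmentation. The call looks like this (a sketch, assuming a model loaded as in the script below; the example output is what my model produces):

# viterbi_segment returns (segments, score); I only use the segment list
segments, score = model.viterbi_segment(u"geldigim")
# with my model: segments == [u'gel', u'di', u'g', u'i', u'm']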
What I want to do is the following:
geldigim -> /gel* *di* *g* *i* *m/
bildi -> /bil* *di/
kos -> /kos/
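In other words, the formatting rule is: join adjacent morphemes with "* *" and wrap the whole word in slashes. A compact sketch of that rule (format_segmentation is my own name; segmentation is the list returned by the segmenter):

def format_segmentation(segmentation):
    # ['gel', 'di', 'g', 'i', 'm'] -> '/gel* *di* *g* *i* *m/'
    # ['kos']                      -> '/kos/'
    return '/' + '* *'.join(segmentation) + '/'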
Here is what I did: I have a corpus of 37,251,512 sentences, and I wrote the following script:
import codecs
import morfessor

if __name__ == "__main__":
    io = morfessor.MorfessorIO()

    print "Importing corpus ..."
    f = codecs.open("corpus/corpus_tr_en/corpus.tr", encoding="utf-8").readlines()

    print "Importing morphology model ..."
    model = io.read_binary_model_file('seg/tr/model.bin')

    corpus = open('dataset/dataset_tr_en/full_segmented.tr', 'w')
    for a in range(len(f)):
        print str(a) + ' : ' + str(len(f))
        words = f[a].replace('\n', '').split()
        line_str = ''
        for word in words:
            # viterbi_segment returns (segments, score); keep the segments
            segmentation = model.viterbi_segment(word)[0]
            if len(segmentation) == 1:
                line_str += '/' + segmentation[0] + '/'
            if len(segmentation) == 2:
                line_str += '/' + segmentation[0] + '* *' + segmentation[1] + '/'
            if len(segmentation) > 2:
                for b in range(len(segmentation)):
                    if b == 0:
                        line_str += '/' + segmentation[b] + '*'
                    if b != 0 and b != (len(segmentation) - 1):
                        line_str += ' *' + segmentation[b] + '*'
                    if b == (len(segmentation) - 1):
                        line_str += ' *' + segmentation[b] + '/'
            line_str += ' '
        corpus.write(line_str.encode('utf-8'))
        corpus.write('\n')
    corpus.close()
This script loops over every sentence, and over every word in each sentence, and splits the word into parts with model.viterbi_segment (the model itself is loaded only once, via io.read_binary_model_file).
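Since many surface forms repeat across 37 million sentences, I suspect most of those viterbi_segment calls recompute segmentations for words already seen. One direction I am considering is a per-word cache; a minimal sketch (segment_cached and cache are my own hypothetical names):

cache = {}

def segment_cached(model, word):
    # run the Viterbi search only on a cache miss
    if word not in cache:
        cache[word] = model.viterbi_segment(word)[0]
    return cache[word]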
As it stands, though, the script is far too slow for a corpus of this size.
Could you suggest a way to make the process much faster?
Thanks,