So I thought this title would produce good search results. Anyway, given the following code: it takes one word at a time (yielded as word) from text_file_reader_gen() and iterates in a while loop until the generator is exhausted and StopIteration is raised (is there a better way to do that than try/except?), and the interlock function just mixes the words up.
def wordparser():
    #word_freq={}
    word=text_file_reader_gen()
    word.next()
    wordlist=[]
    index=0
    while True: #for word in ftext:
        try:
            #print 'entered try'
            current=next(word)
            wordlist.append(current) #Keep adding new words
            #word_freq[current]=1
            if len(wordlist)>2:
                while index < len(wordlist)-1:
                    #print 'Before: len(wordlist)-1: %s || index: %s' %(len(wordlist)-1, index)
                    new_word=interlock_2(wordlist[index],wordlist[index+1]) #this can be any do_something() function, irrelevant and working fine
                    new_word2=interlock_2(wordlist[index+1],wordlist[index])
                    print new_word,new_word2
                    '''if new_word in word_freq:
                        correct_interlocked_words.append(new_word)
                    if new_word2 in word_freq:
                        correct_interlocked_words.append(new_word2)'''
                    index+=1
                    #print 'After: len(wordlist)-1: %s || index: %s' %(len(wordlist)-1, index)
            '''if w not in word_freq:
                word_freq[w]=1
            else:
                word_freq[w]+=1'''
        except StopIteration,e:
            #print 'entered except'
            #print word_freq
            break
    #return word_freq
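For comparison, a plain for loop over the generator handles StopIteration implicitly, with no try/except needed. Here is a minimal sketch (Python 3 syntax); parse_pairs is a hypothetical stand-in for the wordparser logic, and words stands in for whatever text_file_reader_gen() yields:

```python
def parse_pairs(words):
    """Collect each pair of adjacent words, no manual index bookkeeping."""
    pairs = []
    prev = None
    for current in words:          # the for loop ends cleanly when the
        if prev is not None:       # generator raises StopIteration
            pairs.append((prev, current))
        prev = current
    return pairs

print(parse_pairs(iter(['a', 'b', 'c'])))  # [('a', 'b'), ('b', 'c')]
```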
text_file_reader_gen() code:
def text_file_reader_gen():
    path=str(raw_input('enter full file path \t:'))
    fin=open(path,'r')
    ftext=(x.strip() for x in fin)
    for word in ftext:
        yield word
Q1. Is it possible to iterate over word and append each word to the dictionary word_freq, while at the same time enumerating over 'for key in word_freq' even though keys (words) are still being added, and while new words are being mixed by the interlock function, so that most of these iterations happen in one go? Something like:
while word.next() is not StopIteration:
    word_freq[ftext.next()]+=1 if ftext not in word_freq #and
    for i,j in word_freq.keys():
        new_word=interlock_2(j,wordlist[i+1])
I just want something very simple with a hash (dict) lookup, and really fast, because the text file it takes words from is very long and may contain duplicates as well.
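If the goal is a fast single-pass frequency count, a plain dict already gives average O(1) lookups. A minimal sketch, assuming words is any iterable of words (duplicates allowed):

```python
def count_words(words):
    """One pass over the input, counting occurrences in a dict."""
    word_freq = {}
    for w in words:
        word_freq[w] = word_freq.get(w, 0) + 1  # O(1) average hash lookup
    return word_freq

print(count_words(['a', 'b', 'a']))  # {'a': 2, 'b': 1}
```

collections.Counter(words) does the same thing in one call.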
Q2. Are there ways to improve this existing code?

Q3. Is there a way to do 'for i,j in enumerate(dict.items())' so that I can reach dict[key] and dict[next_key] at the same time? They are unordered, but that's also irrelevant here.
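On Q3, one way to visit a key together with the next key is the standard pairwise recipe built on itertools.tee; iterating a dict yields its keys. A sketch (Python 3 syntax, where dicts preserve insertion order since 3.7):

```python
from itertools import tee

def pairwise(iterable):
    # Pairwise recipe: s -> (s0, s1), (s1, s2), (s2, s3), ...
    a, b = tee(iterable)
    next(b, None)       # advance the second copy by one element
    return zip(a, b)

d = {'cat': 1, 'dog': 2, 'owl': 3}
for key, next_key in pairwise(d):
    print(key, next_key)  # cat dog / dog owl
```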
UPDATE: After reviewing the answers here, this is what I came up with. It works, but I have a question regarding the following code:
def text_file_reader_gen():
    path=str(raw_input('enter full file path \t:'))
    fin=open(path,'r')
    ftext=(x.strip() for x in fin)
    return ftext #yield?
def wordparser():
    wordlist=[]
    index=0
    for word in text_file_reader_gen():
This works, but if I use yield ftext instead, it doesn't.
Q4. What is the basic difference and why does that happen?
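For context on Q4: a def that contains yield anywhere in its body is a generator function, so calling it returns a generator object without running the body; 'yield ftext' then yields the generator expression itself as a single item, instead of the words inside it. A minimal sketch (Python 3 syntax, with an in-memory list standing in for the file):

```python
def returns_lines():
    lines = (s.strip() for s in ['a\n', 'b\n'])
    return lines          # hands back the generator of words directly

def yields_lines():
    lines = (s.strip() for s in ['a\n', 'b\n'])
    yield lines           # a generator yielding ONE item: another generator

print(list(returns_lines()))                        # ['a', 'b']
print([type(x).__name__ for x in yields_lines()])   # ['generator']
```

So 'for word in text_file_reader_gen():' sees the words only in the return version; with yield ftext you would have to unwrap the inner generator first (or use 'yield from ftext' / 'for x in ftext: yield x').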