I have a program that lists and reads all the files in a directory and concurrently counts the total number of records present in those files.
When I run the code below, I get a list of worker process names with counts coming in chunks, since the records from multiple files are being counted in parallel.
import multiprocessing as mp
import time
import os

path = '/home/vaibhav/Desktop/Input_python'

def process_line(f):
    # f is a single line handed out by the pool, so this prints the
    # worker process followed by the number of characters in that line
    print(mp.current_process())
    # print("process id =", os.getpid())
    print(sum(1 for line in f))

start_time = time.time()

for filename in os.listdir(path):
    print(filename)

    if __name__ == "__main__":
        with open(os.path.join(path, filename), "r+") as source_file:
            # chunk the work into batches
            p = mp.Pool()
            results = p.map(process_line, source_file)

print("My program took", time.time() - start_time, "to run")
Current output:
<ForkProcess(ForkPoolWorker-54, started daemon)>
73
<ForkProcess(ForkPoolWorker-55, started daemon)>
<ForkProcess(ForkPoolWorker-56, started daemon)>
<ForkProcess(ForkPoolWorker-53, started daemon)>
73
1
<ForkProcess(ForkPoolWorker-53, started daemon)>
79
<ForkProcess(ForkPoolWorker-54, started daemon)>
<ForkProcess(ForkPoolWorker-56, started daemon)>
<ForkProcess(ForkPoolWorker-55, started daemon)>
79
77
77
Is there a way to get the total record count for each file, like this:

File1.txt Total_Recordcount
...
Filen.txt Total_Recordcount
UPDATE: I got the solution and posted the answer in the comments section.
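For reference, here is a minimal sketch of one way to get per-file totals (an illustration only, not necessarily the solution referred to above): map the pool over filenames instead of over the lines of a single file, so each worker opens one file and returns its line count. The count_records helper is a hypothetical name, not part of the original code.

import multiprocessing as mp
import os

path = '/home/vaibhav/Desktop/Input_python'

def count_records(filename):
    # Runs in a worker process: count the lines (records) in one file
    with open(os.path.join(path, filename)) as f:
        return filename, sum(1 for _ in f)

if __name__ == "__main__":
    with mp.Pool() as pool:
        # Each worker returns a (filename, count) pair, so the results
        # arrive as complete per-file totals instead of interleaved prints
        for name, count in pool.map(count_records, os.listdir(path)):
            print(name, count)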