
Right now, I'm basically running through an Excel sheet.

I have about 20 names and 50k total values, each matching one of those 20 names, so the Excel sheet is 50k rows long: column B holds some arbitrary value, and column A holds one of the 20 names.

I'm trying to get a string for each of the names that shows all of its values:

Name A: 123,244,123,523,123,5523,12505,142... etc etc. 
Name B: 123,244,123,523,123,5523,12505,142... etc etc. 

Right now, I build a dictionary while running through the Excel sheet: I check whether the name is already in the dictionary, and if it is, I do a

strA = strA + "," + foundValue

Then it inserts strA back into the dictionary for that particular name. If the name doesn't exist, it creates that dictionary key and then adds that value to it.

Now, this was working well at first, but after about 15 or 20 minutes it has only added 5k values to the dictionary, and it seems to get slower the longer it runs.

I wonder if there is a better or faster way to do this. I was thinking of building a new dictionary every 1k values and then combining them all at the end, but that would be 50 dictionaries total and it sounds complicated. Maybe it could work better that way, I'm not sure, but this approach clearly isn't working.

I DO need the string that shows each value separated by commas. That is why I am doing the string thing right now.

  • How are you determining that it's only processed 5k values? Are you doing a print statement on each iteration of the loop? – Brendan Abel Feb 25 '16 at 02:16
  • yes.. i am doing a print statement on each iteration.. right now it is only at 5.5k.. the last 500 are taking forever – king Feb 25 '16 at 02:19
  • it's like 2 seconds per entry now. before it was like 500 in 30 seconds when the value was only at 1500 total – king Feb 25 '16 at 02:21
  • It sounds like you're looping over the same data again and again and again. Is it possible to paste the code that iterates through the excel sheet and adds to the dictionary? – Josh Smeaton Feb 25 '16 at 03:08
  • May I ask what do you do after the dictionary is built? It's relevant because unless some code needs the whole dict, it may be wiser (and probably way faster) to not build the dict but, say, write directly to files and then use the files as needed. – gil Feb 25 '16 at 04:23
  • i add it to an excel sheet, so I take the KEY, which is the name, and put it in A1, then I take the VALUE, which is 1345,345,135,346,3451,35.. etc etc, and put that into A2. then I do the rest of my programming with that information...... but i need those values separated by commas and accessible inside that excel sheet like that! – king Feb 25 '16 at 04:26
  • i build the dictionary separately, then i put it into the excel sheet.. – king Feb 25 '16 at 04:27
  • wait.. so you are saying to search the excel sheet, then paste it into the cell, like append it to the end of the cell? maybe that could work... search the excel sheet up and down to find the cell that it is in.. i'm using openpyxl – king Feb 25 '16 at 04:27
  • Well, then I hope the list approach will work. Writing to files can be slow but shouldn't become *slower* as far as I understand -- maybe that should be a question in its own right. (Writing here because the comments under my answer are getting too long, and SO will complain). – gil Feb 25 '16 at 21:13

5 Answers


There are a number of things that are likely causing your program to run slowly.

String concatenation in Python can be extremely inefficient when used with large strings.

Strings in Python are immutable. This fact frequently sneaks up and bites novice Python programmers on the rump. Immutability confers some advantages and disadvantages. In the plus column, strings can be used as keys in dictionaries and individual copies can be shared among multiple variable bindings. (Python automatically shares one- and two-character strings.) In the minus column, you can't say something like, "change all the 'a's to 'b's" in any given string. Instead, you have to create a new string with the desired properties. This continual copying can lead to significant inefficiencies in Python programs.

Considering each string in your example could contain thousands of characters, every time you do a concatenation Python has to copy that giant string in memory to create the new object.

This would be much more efficient:

strings = []
strings.append('string')
strings.append('other_string')
...
','.join(strings)

In your case, instead of each dictionary key storing a massive string, it should store a list, and you would just append each match to the list, and only at the very end would you do a string concatenation using str.join.
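Putting that together, a minimal sketch of the dict-of-lists accumulation (the `rows` pairs here are made-up stand-ins for the 50k spreadsheet rows):

```python
from collections import defaultdict

# Hypothetical stand-in for the spreadsheet rows: (name, value) pairs.
rows = [("Name A", "123"), ("Name B", "244"), ("Name A", "523")]

values_by_name = defaultdict(list)
for name, value in rows:
    values_by_name[name].append(value)  # appending to a list is cheap

# Join once per name, only at the very end.
result = {name: ",".join(vals) for name, vals in values_by_name.items()}
print(result["Name A"])  # -> 123,523
```

Each append is amortized O(1), so the whole pass is linear in the number of rows instead of quadratic.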

In addition, printing to stdout is also notoriously slow. If you're printing to stdout on each iteration of your massive 50,000 item loop, each iteration is being held up by the unbuffered write to stdout. Consider only printing every nth iteration, or perhaps writing to a file instead (file writes are normally buffered) and then tailing the file from another terminal.
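For example, a throttled progress report might look like this (the loop body here is a stand-in for the real per-row work):

```python
rows = range(50000)  # stand-in for the 50k spreadsheet rows
processed = 0
for i, _row in enumerate(rows):
    processed += 1  # stand-in for the real per-row work
    if i % 1000 == 0:  # print only every 1000th iteration
        print("processed", i, "rows")
```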

  • the string concatenation, with a list, i need each of the values in the list to be separated by a comma, wouldn't i have to do the same thing over and over? – king Feb 25 '16 at 03:33
  • since a list technically already has a bunch of commas in it... i wonder if it's possible to just paste the list as it is inside somewhere? then that way, i'd have my entire thing separated by a list? i'm pasting each comma separated value into an excel sheet – king Feb 25 '16 at 03:34

This answer is based on OP's answer to my comment. I asked what he would do with the dict, suggesting that maybe he doesn't need to build it in the first place. @simon replies:

i add it to an excel sheet, so I take the KEY, which is the name, and put it in A1, then I take the VALUE, which is 1345,345,135,346,3451,35.. etc etc, and put that into A2. then I do the rest of my programming with that information...... but i need those values separated by commas and accessible inside that excel sheet like that!

So it looks like the dict doesn't have to be built after all. Here is an alternative: for each name, create a file, and store those files in a dict:

files = {}
name = 'John'  # let's say
if name not in files:
    files[name] = open(name, 'w')

Then when you loop over the 50k-row excel, you do something like this (pseudo-code):

for row in 50k_rows:
    name, value_string = row.split()  # or whatever
    file = files[name]
    file.write(value_string + ',')  # if already ends with ',', no need to add

Since your value_string is already comma separated, your file will be csv-like without any further tweaking on your part (except maybe you want to strip the last trailing comma after you're done). Then when you need the values, say, of John, just value = open('John').read().
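For illustration, here is a runnable version of that sketch, with the per-name files opened lazily and closed at the end (the sample rows, the temp directory, and the `.csv` suffix are my own assumptions, not part of the original answer):

```python
import os
import tempfile

# Hypothetical rows standing in for the spreadsheet data.
rows = [("John", "1345"), ("Mary", "345"), ("John", "135")]

out_dir = tempfile.mkdtemp()
files = {}
for name, value in rows:
    if name not in files:  # open each per-name file lazily
        files[name] = open(os.path.join(out_dir, name + ".csv"), "w")
    files[name].write(value + ",")

for f in files.values():  # close everything when done
    f.close()

john = open(os.path.join(out_dir, "John.csv")).read().rstrip(",")
print(john)  # -> 1345,135
```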

Now I've never worked with 50k-row excels, but I'd be very surprised if this is not quite a bit faster than what you currently have. Having persistent data is also (well, maybe) a plus.


EDIT:

Above is a memory-oriented solution. Writing to files is much slower than appending to lists (but probably still faster than recreating many large strings). But if the lists are huge (which seems likely) and you run into a memory problem (not saying you will), you can try the file approach.

An alternative, similar to lists in performance (at least for the toy test I tried) is to use StringIO:

from io import StringIO  # Python 2: from StringIO import StringIO

string_ios = {'John': StringIO()}  # a dict to store StringIO objects
for value in ['ab', 'cd', 'ef']:
    string_ios['John'].write(value + ',')
print(string_ios['John'].getvalue())

This will output 'ab,cd,ef,'

  • So no need for a dict because you're using a dict? – Stefan Pochmann Feb 25 '16 at 05:00
  • @StefanPochmann I said no need for **the** dict, the one OP is trying to build. I didn't say "no need for a/any dict". – gil Feb 25 '16 at 05:03
  • I know, but my point is that that's how your answer comes across. You're just saying *"The dict doesn't have to be built after all"* and then you go and right away build another one. With no explanation why this approach should be better. – Stefan Pochmann Feb 25 '16 at 05:08
  • @StefanPochmann I said "I'd be very surprised if this is not quite a bit faster". Does that count as an explanation of why it should be better? As to why I think it should be faster: same reason as why OP's current approach is slower, which you and others have explained well and I see no reason to repeat. – gil Feb 25 '16 at 05:13
  • so you are saying create 20 separate csv files and then combine them together ??? i don't know the names in advance, i scroll the list one by one and then i grab each name, it searches the dictionary for the current name, then if it cant find it, it will create that entry – king Feb 25 '16 at 15:15
  • @simon The names can be added dynamically too. I've edited my answer. – gil Feb 25 '16 at 15:50
  • oh so you are saying create a file called, Name1.txt, then inside that file, add the "," + VALUE to the end of it, then at the end, just copy paste that entire file to a csv file? maybe that could work................... can you just append to the end of a text file? – king Feb 25 '16 at 15:50
  • @simon Yes. If you have only 20 files, you can just copy-paste manually (into your result excel directly). To append to an existing file without erasing previous content, [open it in the `'a'` mode](https://docs.python.org/3/library/functions.html#open). But if you are going to create the file in one go, using `'w'` is just fine. – gil Feb 25 '16 at 15:54
  • it's pretty fast to append to a file you think? it is 50k things i need to append – king Feb 25 '16 at 16:00
  • @simon Writing 50k lines of `'qwertyuiopasdfghjklzxcvbnm,'` to a file takes less than 0.04 sec on my machine, timed with `timeit`. – gil Feb 25 '16 at 16:05
  • okay i will give this a go today.. will let you know, thanks gill – king Feb 25 '16 at 16:46
  • i can't manually copy paste, i need to automatically copy/paste it – king Feb 25 '16 at 16:47
  • i set it up.. it is running about 200 entries/per minute to add to the files. faster for sure.. but seems like it would take around 3-4 hours to complete.. i need to see if it is consistent. not bad though. i might try one last thing, i saw i can put them all into a list, then convert that list into a string via this.. i dk if it will be faster or not though: http://www.decalage.info/en/python/print_list – king Feb 25 '16 at 18:57
  • it might run slower when it hits 5k – king Feb 25 '16 at 18:58
  • same thing is happening.. it's getting way slower when it hits the higher numbers – king Feb 25 '16 at 19:54

Instead of building a string that looks like a list, use an actual list and make the string representation you want out of it when you are done.
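A minimal sketch of that idea (the values here are made up for illustration):

```python
# Collect values in an actual list, then join once at the end.
values = []
for v in (123, 244, 523):  # stand-ins for the found values
    values.append(str(v))
result = ",".join(values)
print(result)  # -> 123,244,523
```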

  • so make 20 different lists? but wouldn't that be exactly the same situation, if I do the whole... strA = strA + "," + currentListValue – king Feb 25 '16 at 02:12
  • it would be too slow making it as well? or it's the dictionary that is slowing this all down? – king Feb 25 '16 at 02:13
  • Building a string by repeated concatenation is slower than building a list by repeated appending. – Scott Hunter Feb 25 '16 at 02:16
  • but i need a string that is the entire list length for a particular name.. so what i was getting at was if i need to do that anyway, will it be just as slow if i do it from 20 different lists? i don't know, what do you think? – king Feb 25 '16 at 02:20
  • anyway this can't even work the way it is now, it is sooo slow at this point, it seems like it would take forever to finish and it is only at 5.5k – king Feb 25 '16 at 02:21

How you iterate depends on how you have read the Excel file, but let's say that lines are read as delimiter-separated tuples or something:

d = {}
for name, foundValue in line_tuples:
    try:
        d[name].append(foundValue)
    except KeyError:
        d[name] = [foundValue]
d = {k: ",".join(v) for k, v in d.items()}

Alternatively using pandas:

import pandas as pd
df = pd.read_excel("some_excel_file.xlsx")
d = df.groupby("A")["B"].apply(lambda x: ",".join(x)).to_dict()
  • Well I just did it as dictionaries so that it would be like a structured thing, like each name would have a key and then each one would have a value, just a way to keep track of everything, like thats jsut the way i thought to do it – king Feb 25 '16 at 02:31
  • OK. Try the above in my answer. Some general comments. Printing to stdout makes execution insanely slow, particularly for a large number of prints. Also look on try/except, which avoids checking for key existence 50k times (basically "do then ask for forgiveness" is always faster than "ask for permission then do"). In this case you would have something along the lines of `try: d[k] += ","+foundvalue except KeyError: d[k]=foundvalue` – John Smith Feb 25 '16 at 02:36
  • for large data sets.. how do you know how far it's gotten then or how fast it is running? i'm printing out on every for loop, it goes fast until it hits around 3k.. i'm not sure if it's because i'm printing every time or another reason – king Feb 25 '16 at 18:59
  • On the how fast the answer is the `timeit` module. On how far it depends on the method. In the pure python approach (first solution) you can print something, but certainly don't do it on every step but rather every 100/500/1000 probably in your case. Prints are very costly. For the pandas answer I would definitely try that. For large data sets in particular you want to be using matrix operations rather than element wise operations, they are the optimised goto thing to do. But in that case you don't know how far you have gone cause there is no notion of this in matrix operations. – John Smith Feb 26 '16 at 14:11
  • Ah yes and another thing, monitor your memory. On top of the prints slowing the execution, it might be that you are starting to use heap memory instead of your RAM. – John Smith Feb 26 '16 at 14:12

The proper way is to collect the values in lists and join at the end, but if for some reason you want to use strings, you can speed up the repeated concatenation: pop the string out of the dict so that there's only one reference to it, which lets CPython's in-place concatenation optimization kick in.

Demo:

>>> from timeit import timeit
>>> timeit('s = d.pop(k); s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
0.8417842664330237

>>> timeit('s = d[k];     s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
294.2475278390723