I run a Python 2.7 script that is computationally heavy and does some XML input/output. When I tested its pieces in Python Interactive in Visual Studio, they ran 2x (or more) faster than when I packed them into the main routine and ran the script by double-clicking the file.
What I have discovered so far:
Numerical work runs about 2x slower after double-clicking the file than when run separately in Python Interactive in Visual Studio. E.g. the code you can find here: [https://quant.stackexchange.com/questions/17364/implied-volatility-from-american-options-using-python][1] runs at roughly 1.6s/100 loops in Python Interactive and 3.2s/100 loops after double-clicking the file. The number of loops I need to perform runs into the thousands, so this heavily affects performance.
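For reference, this is roughly how I measure the loop times quoted above; `heavy_computation` here is just a stand-in for one iteration of the numerical routine, not my actual code:

```python
import timeit

def heavy_computation():
    # placeholder for one iteration of the numerical loop
    total = 0.0
    for i in range(1, 10000):
        total += 1.0 / (i * i)
    return total

# time 100 iterations, matching the "s/100 loops" figures above
elapsed = timeit.timeit(heavy_computation, number=100)
print("100 loops took %.2fs" % elapsed)
```

Running the same measurement in both environments is what gives the 1.6s vs 3.2s numbers.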
Another performance hit occurs when parsing XML files. Parsing the same file takes around 1.5s in Python Interactive and 8s+ when the script is run by double-clicking the file. I use the parse function of xml.etree.ElementTree.
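The XML step is essentially just the following; the temporary file here is a tiny stand-in I create so the snippet is self-contained, whereas the real script parses an existing (much larger) file:

```python
import os
import tempfile
import timeit
import xml.etree.ElementTree as ET

# create a tiny stand-in for the real XML input file
fd, path = tempfile.mkstemp(suffix=".xml")
os.write(fd, b"<root><item id='1'/><item id='2'/></root>")
os.close(fd)

def parse_file():
    # same call the script uses: xml.etree.ElementTree's parse
    return ET.parse(path).getroot()

# timed the same way in both environments
elapsed = timeit.timeit(parse_file, number=10)
print("10 parses took %.4fs" % elapsed)

os.remove(path)
```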
What could be the reason for this performance hit? I would be grateful for any help.
I thought this might be due to memory already being used by other parts of my script. However, the problem persists after moving the XML parsing and the numerically heavy part to the beginning of the script.
Comments from Mikko Ohtamaa led me to something that might be the solution to my problem. When the program was running slow it was using C:\Python27\python.exe; when it was running fast it was using C:\Windows\py.exe. I subsequently ran the script from a WScript shell using C:\Windows\py.exe and it ran faster (comparable to the times I got while testing my code in Python Interactive). However, I still do not know why this would cause such a difference.
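One way to confirm which interpreter actually executes the script is to have it report itself at startup; a quick diagnostic sketch:

```python
import sys

# report which interpreter binary is running this script and its version;
# double-clicking may launch a different python.exe than the py.exe launcher
print("executable: %s" % sys.executable)
print("version: %s" % sys.version)
print("64-bit: %s" % (sys.maxsize > 2**32))
```

If the two paths resolve to different installations (for example, a 32-bit Python under C:\Python27 versus a 64-bit interpreter that py.exe dispatches to), that alone might account for a difference of this size, but I am only guessing here.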