tl;dr: no.
The reference implementation of Python, CPython, uses reference counting for garbage collection. Other implementations use different GC strategies, and this affects the precise time at which `__del__` methods are called; they may not run reliably or promptly on PyPy, Jython, or IronPython. These differences don't matter unless you are dealing with file handles and other expensive system resources.
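For instance, here is a minimal sketch of how finalizer timing can differ (the `Resource` class is purely illustrative):

```python
class Resource:
    def __del__(self):
        # CPython runs this the moment the reference count hits zero;
        # PyPy, Jython, and IronPython may run it much later, so never
        # rely on __del__ to release scarce resources.
        print("finalized")

r = Resource()
del r                 # CPython prints "finalized" right here
print("after del")    # other implementations may reach this line first
```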
In CPython, objects are reclaimed as soon as their reference count drops to zero. For example, when you do `acumulator = mod.AcumulatorObject()` inside a `for` loop, a new object replaces the old one at the next iteration; since no other variable references the old object, its count hits zero and it is collected immediately, as the sketch below shows. CPython will spoil you with things like releasing resources automatically when they go out of scope, but YMMV with other implementations.
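A rough sketch of that rebinding behavior; `AcumulatorObject` here is a stand-in for the class in the question, and `weakref` is used only to observe when the old instances die:

```python
import weakref

class AcumulatorObject:      # stand-in for mod.AcumulatorObject
    pass

refs = []
for _ in range(3):
    acumulator = AcumulatorObject()        # rebinding orphans the previous instance
    refs.append(weakref.ref(acumulator))   # weak refs don't keep objects alive

# On CPython every instance but the last is already gone;
# other implementations may not have collected them yet.
print([r() is None for r in refs])         # [True, True, False] on CPython
```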
That is why many people commented that memory leaks are not a concern in Python.
You have complete control over CPython's garbage collector through the `gc` module. The default settings are pretty conservative, and in 10 years doing Python for a living I never had to trigger a GC cycle manually, but I've seen a situation where delaying collection helped performance:
> Yes, I had previously played with sys.setcheckinterval. I changed it to 1000 (from its default of 100), but it didn't make any measurable difference. Disabling garbage collection has helped - thanks. This has been the biggest speedup so far - saving about 20% (171 minutes for the whole run, down to 135 minutes) - I'm not sure what the error bars are on that, but it must be a statistically significant increase.
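A sketch of that pattern: disable the cyclic collector around an allocation-heavy phase, then sweep once afterwards (`run_batch` is a made-up workload):

```python
import gc

def run_batch():
    # hypothetical allocation-heavy workload
    data = [list(range(100)) for _ in range(10_000)]
    return sum(len(row) for row in data)

gc.disable()       # suspends the cyclic collector; reference counting still runs
try:
    result = run_batch()
finally:
    gc.enable()
    gc.collect()   # sweep any reference cycles created in the meantime
```

Note that reference counting never stops: `gc.disable()` only turns off the collector that hunts for reference cycles.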
Just follow best practices, like wrapping system resources in `with` statements or `try`/`finally` blocks, and you should have no problems.
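Both idioms below guarantee prompt release of the file handle on any implementation; the filename is arbitrary:

```python
# The with statement closes the file when the block exits, even on error.
with open("data.txt", "w") as f:
    f.write("hello\n")

# The equivalent manual form using try/finally.
f = open("data.txt")
try:
    contents = f.read()
finally:
    f.close()
```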