I have a class that follows the standard Python singleton form I use. It also inits a single value, but rather than initializing it in each instance, I init it in __new__() when I do the singleton handling. Is this the Pythonic way to do it? If not, how else should I do it?
class Foo(object):
    _inst = None

    def __new__(cls, val):
        if Foo._inst is None:
            Foo._inst = super(Foo, cls).__new__(cls)
            Foo._inst.val = val
        return Foo._inst
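
For concreteness, this is the behaviour I'm relying on from the class above (just a quick sketch exercising the simplified Foo):

a = Foo(1)
b = Foo(2)

print(a is b)   # True -- both names refer to the single instance
print(a.val)    # 1 -- val was set once in __new__ and is not overwritten by Foo(2)
print(b.val)    # 1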
This is obviously the simplified form. I have used the construction of waiting till __init__() to check and set Foo._inst to handle run-once init for a singleton (see the sketch after this paragraph), but that seems ripe for synchronization errors in a multithreaded situation. For situations where there are no values that could get corrupted in the existing instance if __init__() is run again, there's no harm in putting the init in __init__(), other than running code that effectively does nothing when you don't need to; but there are often corruptible state values I don't want to overwrite. I know singletons are evil too, but sometimes they are the best solution (like defining special-use types I want to be able to check with is).
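
For comparison, the check-in-__init__() construction I mean looks roughly like this (only a sketch; the hasattr guard on val stands in for whatever run-once check is used):

class Foo(object):
    _inst = None

    def __new__(cls, val):
        if Foo._inst is None:
            Foo._inst = super(Foo, cls).__new__(cls)
        return Foo._inst

    def __init__(self, val):
        # __init__ runs again on every Foo(...) call, so guard the run-once init;
        # two threads can both pass this check, which is the race I'm worried about
        if not hasattr(self, 'val'):
            self.val = val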
The __new__() version seems like the best solution I can come up with, but I feel dirty initializing instance values in __new__(). Does anyone have ideas on how to handle this better?