There's a bit of an XY question here.
Your true difficulty seems to be "it takes 20 minutes for the `__init__` ctor to complete."
There are several ways out of that predicament.
- Spend many minutes computing a new instance, and then arrange to rapidly serialize it to disk. If deserializing takes less than twenty minutes, you win! (See the `pickle` sketch after this list.)
- Use a level of indirection, so the object that has so many interesting behaviors is lightweight, consuming very little memory. Let it contain a pointer: an RDBMS id, an S3 bucket, a URL, or a filename. (Also sketched below.)
- Store the expensive object in a global or equivalent, such as with `@lru_cache`, and let your updated / reloaded class find it that way. Use `importlib.reload()`. (Sketched below.)
- Similar to serializing, offer a cheap `.copy()`, and exploit inheritance.
- Monkeypatch.
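Here is a minimal sketch of the serialize-to-disk approach, using `pickle`; the `Foo` class, `my_module`, and the `foo.pickle` filename are hypothetical stand-ins for your own:

```python
import pickle
from pathlib import Path

from my_module import Foo        # hypothetical home of the slow class

CACHE = Path("foo.pickle")       # hypothetical cache file

def load_foo():
    """Return a Foo, paying the 20-minute __init__ cost at most once."""
    if CACHE.exists():
        return pickle.loads(CACHE.read_bytes())   # fast: just deserialize
    foo = Foo()                                   # slow: minutes of compute
    CACHE.write_bytes(pickle.dumps(foo))          # cache it for next time
    return foo
```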
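For the indirection approach, a sqlite row id stands in here for whatever pointer fits your setup (an S3 key, a URL, a filename); the `foo` table and `payload` column are made up:

```python
import sqlite3

class FooHandle:
    """Lightweight object: holds a pointer to the data, not the data itself."""

    def __init__(self, db_path, row_id):
        self.db_path = db_path    # where the heavy data actually lives
        self.row_id = row_id      # RDBMS id; could just as well be an S3 key

    def payload(self):
        # Fetch on demand; this object stays tiny and cheap to construct.
        with sqlite3.connect(self.db_path) as conn:
            row = conn.execute(
                "SELECT payload FROM foo WHERE id = ?",
                (self.row_id,),
            ).fetchone()
        return row[0] if row else None
```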
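The `@lru_cache` variant might look like this, again assuming a hypothetical `my_module` that holds the slow class; the trick is to park the cache in a module you never reload:

```python
# expensive_cache.py : lives in a module you do NOT reload
from functools import lru_cache

from my_module import Foo       # hypothetical module you are busy editing

@lru_cache(maxsize=1)
def get_foo():
    return Foo()                # the 20-minute ctor runs at most once

# In your session, reload only the module you are editing:
#     import importlib, my_module, expensive_cache
#     importlib.reload(my_module)            # picks up your edits
#     foo = expensive_cache.get_foo()        # instant after the first call
```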
Here is one way to exploit inheritance (copying isn't necessary, but makes it cleaner):
```python
class FooBase:
    ...                        # the expensive __init__ lives here

    def copy(self):
        ...                    # cheap: duplicate already-computed state

# Global variable for the expensive computation:
foo = FooBase()

class FooFeature(FooBase):
    def __init__(self, foo):
        ...                    # cheap: adopt state from the copy

    def some_new_feature(self):
        ...

foo = FooFeature(foo.copy())   # This happens quickly.
```
Here is one way to monkeypatch:
```python
class Foo:
    ...                        # the expensive __init__ lives here

foo = Foo()                    # pays the 20-minute cost
FooOriginal = Foo              # keep a handle on the original class object

# Now you edit in a brand new Foo feature and reload the class definition:
class Foo:
    ...

    def new_feature(self):
        ...

# Graft the new method onto the original class:
FooOriginal.new_feature = Foo.new_feature
```
When monkeypatching, note that `foo` holds a reference (via its `__class__`) to the same class object that `FooOriginal` holds. Upon reloading, the name `Foo` binds to a brand new class object, one with an additional method. The final monkeypatch assignment makes that method available to `FooOriginal`, and hence available to `foo`.