I have a huge pickle file (around 6 GB) of training samples for a RandomForestClassifier, generated with joblib.dump(). Every script execution has to load these objects with joblib.load() before it can process the input data. The load time is very high and dominates the overall run time of the script.
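Roughly what I am doing (the file name and array shapes below are illustrative placeholders; my real arrays serialize to about 6 GB):

```python
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in for my real training samples.
X_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=1000)

# One-time dump of the samples to disk.
joblib.dump((X_train, y_train), "samples.pkl")

# Every subsequent script execution starts by reloading the whole
# file, and this joblib.load() call dominates the run time.
X_train, y_train = joblib.load("samples.pkl")

clf = RandomForestClassifier().fit(X_train, y_train)
```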
Is there a way to keep the object in memory once it has been loaded, so that subsequent Python executions can reuse it without calling joblib.load() again?
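To make the requirement concrete, here is a minimal sketch of the semantics I want. The cache below is just an in-process dict, so it only avoids repeated loads within a single interpreter; the missing piece, persistence across separate executions, is exactly what I am asking about:

```python
import joblib

# Stand-in cache: lives only inside this one Python process.
# I want these semantics to hold across separate script executions.
_cache = {}

def get_samples(path="samples.pkl"):
    # Load once, then serve from memory on every later call
    # *within the same process*.
    if path not in _cache:
        _cache[path] = joblib.load(path)
    return _cache[path]
```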
Would using a database such as SQLite help load the data faster?