Consider the following script, which generates plots with Matplotlib and Cartopy:
from matplotlib import pyplot as plt, gridspec
import cartopy.crs as crs
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature

# State boundaries read from a shapefile on disk
states = '/some/path/usstates.shp'
states_feature = ShapelyFeature(Reader(states).geometries(), crs.LambertConformal(),
                                facecolor='none', edgecolor='black')

fig = plt.figure(figsize=(14, 9))

# Two rows: a thin strip for text at the top, the map below
gs = gridspec.GridSpec(ncols=1, nrows=2, width_ratios=[1],
                       height_ratios=[0.15, 3.00], wspace=0.00, hspace=0.00)

ax1 = fig.add_subplot(gs[0, :])
ax1.text(0.00, 0.50, "Some Fun Text", fontsize=15)
ax1.text(1.00, 0.50, "Some Other Fun Text", fontsize=15, ha='right')

ax2 = fig.add_subplot(gs[1, :], projection=crs.LambertConformal())
ax2.set_extent(['region coords, not required for question'], crs=crs.LambertConformal())
ax2.add_feature(states_feature, linewidth=1.25)

plt.show()
I now need to generate these plots over multiple domains, i.e. ax2.set_extent() has to be called with many different sets of lat/lon bounds. The number of domains is large enough that generating the plots one at a time is grossly inefficient.
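Concretely, each domain is just a set of bounds destined for ax2.set_extent(); the values below are made up, but the structure is something like this:

# Hypothetical [lon_min, lon_max, lat_min, lat_max] bounds, one plot per entry
extents = [
    [-105.0, -93.0, 33.0, 41.0],
    [-112.0, -100.0, 36.0, 44.0],
    [-90.0, -78.0, 30.0, 38.0],
    # ...many more
]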
My current solution is to run this script multiple times in parallel, passing the domain bounds in pre-compiled groups. However, this has become inefficient and consumes a large amount of memory, particularly when the shapefile is several MB in size, because every script execution loads the shapefile again from disk.
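For reference, the launcher currently looks roughly like this (the make_plots.py wrapper name and the group size are placeholders, not my real values):

import json
import subprocess

# Split the domains into pre-compiled groups and start one process per group;
# each process runs the full plotting script above, so each one re-reads usstates.shp.
# 'extents' is the full list of domain bounds shown above.
chunks = [extents[i:i + 10] for i in range(0, len(extents), 10)]
procs = [subprocess.Popen(['python', 'make_plots.py', json.dumps(chunk)])
         for chunk in chunks]
for p in procs:
    p.wait()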
Is there an effective way to generate these plots with pooled parallel jobs, where components like the shapefile are loaded into memory only once, to speed up completion?
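Something along the lines of multiprocessing.Pool is what I have in mind, with the geometries read once before the pool starts so each worker only receives an extent, but I am not sure this actually avoids the repeated loads. A rough sketch of the idea (worker count, output names and the sample extents are arbitrary):

import multiprocessing as mp
import matplotlib
matplotlib.use('Agg')  # workers only save figures, no interactive display
import matplotlib.pyplot as plt
import cartopy.crs as crs
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature

# Read the shapefile once, before the pool is created, hoping the workers inherit it
STATE_GEOMS = list(Reader('/some/path/usstates.shp').geometries())

# Made-up sample domains; the real list is much longer
extents = [
    [-105.0, -93.0, 33.0, 41.0],
    [-112.0, -100.0, 36.0, 44.0],
]

def make_plot(args):
    idx, extent = args
    feature = ShapelyFeature(STATE_GEOMS, crs.LambertConformal(),
                             facecolor='none', edgecolor='black')
    fig = plt.figure(figsize=(14, 9))
    ax = fig.add_subplot(projection=crs.LambertConformal())
    # lat/lon bounds, hence PlateCarree as the extent CRS here
    ax.set_extent(extent, crs=crs.PlateCarree())
    ax.add_feature(feature, linewidth=1.25)
    fig.savefig(f'domain_{idx}.png')  # arbitrary output name
    plt.close(fig)

if __name__ == '__main__':
    with mp.Pool(4) as pool:
        pool.map(make_plot, list(enumerate(extents)))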