There is a 3d sphere filled with many particles distributed in some specific manner within the sphere. The sphere has been divided into very small cubes of a known, fixed size, with the sides of these cells aligned with the x, y and z axes of the system. Since the position and mass of every constituent particle are known, one can build a 3d histogram representing the distribution of the particles within the sphere: each cube acquires a finite mass by summing the masses of all the particles inside it.
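For reference, the inputs fed into the code below look roughly like this (a synthetic stand-in with made-up numbers; the names positions, weights, position_bin_number and position_limits are just my own variable names):

import numpy as np

rng = np.random.RandomState(0)
n_particles = 1000000
radius = 1.0

# Uniformly distributed points inside a sphere: random directions scaled by radius * u^(1/3)
directions = rng.normal(size=(n_particles, 3))
directions /= np.linalg.norm(directions, axis=1)[:, None]
radii = radius * rng.uniform(size=n_particles) ** (1.0 / 3.0)

positions = directions * radii[:, None]             # (N, 3) particle coordinates
weights = rng.uniform(0.5, 2.0, size=n_particles)   # particle masses
position_bin_number = 3000                           # bins per axis (the problematic value)
position_limits = [(-radius, radius)] * 3            # (min, max) along each axis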
One can calculate the maximum local density along the projected dimension by (i) defining a 3d histogram function, then (ii) converting it into a 3d density by dividing the mass in each cell by the volume of a cubic cell, and finally (iii) taking the maximum of the histogram array along the mentioned projection. Here is the Python code for that:
import numpy as np

dimensions_plot = [0, 1]  # dimensions kept for plotting (0 = x, 1 = y, 2 = z); here the x-y plane

# (i) calculate the 3d histogram of particle masses
hist_values, (hist_xs, hist_ys, hist_zs) = np.histogramdd(
    positions, position_bin_number, position_limits, weights=weights, normed=False)

# (ii) convert the summed masses to a 3d density by dividing by the volume of one cell
hist_values /= (np.diff(hist_xs)[0] * np.diff(hist_ys)[0] * np.diff(hist_zs)[0])

# (iii) take the maximum along the remaining (projected) dimension
dimension_project = np.setdiff1d([0, 1, 2], dimensions_plot)
hist_values = np.max(hist_values, axis=tuple(dimension_project))
And here is the error message I receive when calculating the 3d histogram over the sphere:
Traceback (most recent call last):
File "/some/path/to/the/my_code.py", line 6, in np.histogramdd()
positions, position_bin_number, position_limits, weights=weights, normed=False,
File "/usr/local/anaconda3/lib/python3.5/site-packages/numpy/lib/function_base.py", line 977, in histogramdd
hist = zeros(nbin, float).reshape(-1)
MemoryError
According to the following post (How to deal with “MemoryError” in Python code), the conclusion would be that either some memory has to be freed or the interpreter simply cannot allocate an array this large. Since there are about 3,000 bins along each axis, cubing this number gives roughly 2.7 × 10^10 cells, so the histogram array becomes far too large to hold in memory. After increasing the cell size (i.e. using fewer bins per axis), I noticed that the code seems to run forever.
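A rough estimate of the memory the full-resolution histogram needs (my own back-of-the-envelope numbers) shows why the allocation fails:

n_bins = 3000                 # bins per axis
n_cells = n_bins ** 3         # about 2.7e10 cells in the 3d grid
print(n_cells * 8 / 1e9)      # ~216 GB: np.histogramdd allocates a float64 array (zeros(nbin, float))
print(n_cells * 4 / 1e9)      # ~108 GB even if the grid were stored as float32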
However, I do need the smaller cell size. Is there any way to speed up the code regardless of the grid size? Can I avoid this error while making sure that my results are stored with at least float32 precision?
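One idea I have considered (only a rough sketch, untested at the full resolution) is to compute the bin indices myself and accumulate the masses directly into a float32 array, which halves the memory compared with the float64 grid that np.histogramdd allocates; but a 3000^3 float32 grid is still roughly 100 GB, so this alone does not solve the problem:

import numpy as np

def histogram3d_float32(positions, weights, bins, limits):
    """Accumulate particle masses into a float32 grid (rough sketch only)."""
    limits = np.asarray(limits, dtype=np.float64)   # (3, 2): (min, max) per axis
    lows = limits[:, 0]
    cell = (limits[:, 1] - lows) / bins              # cell edge length along each axis
    # integer cell index of every particle along each axis
    idx = np.floor((positions - lows) / cell).astype(np.intp)
    np.clip(idx, 0, bins - 1, out=idx)
    # flatten the 3d indices and accumulate particle masses into a float32 grid
    flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (bins, bins, bins))
    hist = np.zeros(bins ** 3, dtype=np.float32)
    np.add.at(hist, flat, weights.astype(np.float32))
    return hist.reshape(bins, bins, bins)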
Your help is greatly appreciated.