I want to plot a grid where each node in the grid is drawn as a dot, with a certain color code (taken from a vector stored in the node).
The grid varies in size depending on the simulations I am running, and I have not yet figured out the relationship between the canvas size and the marker size. Right now I use the following formula:
markersize = (figsize*figsize*dpi*dpi)/(xdim*ydim)
plt.scatter(X, Y, s=markersize/3, marker='s', c=Z, cmap=cm.rainbow)
plt.show()
That is, I square the figsize and the dpi (I use 15 and 80 respectively) and divide by the number of nodes in the grid. Finally I divide this by 3, as I found that to work in practice.
But I haven't been able to work out how to get the right marker size analytically. What I want is a grid where each square uses as much space as it can without infringing on the other nodes' space, so that the markers do not overlap.
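For reference, here is a minimal self-contained version of what I am doing now; the grid dimensions and the random Z values below are just placeholders for my simulation data:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

figsize, dpi = 15, 80    # figure size in inches, dots per inch
xdim, ydim = 100, 60     # placeholder grid dimensions; these vary per simulation

# node coordinates and a stand-in for the per-node color values
xs, ys = np.meshgrid(np.arange(xdim), np.arange(ydim))
X, Y = xs.ravel(), ys.ravel()
Z = np.random.rand(X.size)

markersize = (figsize * figsize * dpi * dpi) / (xdim * ydim)

plt.figure(figsize=(figsize, figsize), dpi=dpi)
plt.scatter(X, Y, s=markersize / 3, marker='s', c=Z, cmap=cm.rainbow)
plt.show()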
As far as I can read, figsize is given in inches, dpi is dots per inch, and markersize is specified in points. If a point is the same as a dot, you would then have dpi*figsize dots along each axis. xdim*ydim is the number of nodes in my grid, so pow(dpi*figsize, 2) / (xdim*ydim) should give the number of dots per node, which I thought would be the right size for each marker. But that made the markers too big. Dividing by 3 sort of worked in practice for the sizes I usually run, but not for all of them. (I am guessing a point and a dot are not the same, but what is the relation?)
How do I work out the correct marker size analytically? Ideally I would like a very granular picture where I can zoom in on certain areas to get a finer look at the color nuances.
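To make that reasoning concrete, this is the arithmetic I am doing, with my usual figsize=15 and dpi=80 and a hypothetical 100x100 grid:

figsize, dpi = 15, 80
xdim, ydim = 100, 100                               # hypothetical grid size

dots_per_axis = figsize * dpi                       # 15 * 80 = 1200 dots along each axis
dots_per_node = dots_per_axis ** 2 / (xdim * ydim)  # 1200**2 / 10000 = 144 dots per node
print(dots_per_node)                                # 144.0 -> roughly a 12x12 dot square per node
print(dots_per_node / 3)                            # 48.0  -> the value that "sort of works" for me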