
I've implemented a simple direct N-body simulation in Python. I'm looking to parallelize it, since we perform the same operation over and over. In C++ I would have used OpenMP, but Python doesn't have it.

So I was thinking of using the multiprocessing module. From what I understand, I would need a manager to manage the class (and the list of particles?), and I was thinking of using a starmap pool.

I'm quite lost on how to use these functions to achieve any semblance of parallelization, so any help is appreciated.

PS: I'm open to using other modules too; the easier the better. The class is ditchable: if using numpy arrays (for position, velocity, mass) solves the problem, I'll go with it.
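For reference, the class-free numpy-array route mentioned above could look something like this. This is a minimal sketch under my own naming (`pairwise_forces`, `pos`, `mass` are hypothetical, not from the code below); it computes all pairwise forces at once via broadcasting instead of the double loop:

```python
import numpy as np

def pairwise_forces(pos, mass, G=6.67e-11, eps=1e8):
    """Net gravitational force on each body, computed via broadcasting.

    pos:  (n, 2) array of positions
    mass: (n,) array of masses
    eps:  softening length, avoids the singularity at zero separation
    """
    dr = pos[None, :, :] - pos[:, None, :]       # (n, n, 2): dr[i, j] points from i to j
    dist2 = np.sum(dr**2, axis=-1) + eps**2      # softened squared distances
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                # no self-interaction
    mm = mass[:, None] * mass[None, :]           # (n, n) mass products m_i * m_j
    return G * np.sum((mm * inv_d3)[:, :, None] * dr, axis=1)  # (n, 2)
```

This trades O(n^2) memory for speed: numpy's C loops replace the Python-level double loop, which is often a bigger win than multiprocessing for a few hundred particles.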

Code:

import numpy as np
import matplotlib.pyplot as plt
import multiprocessing as mp


class particle:
    def __init__(self, xy, uv, m):
        self.xy = xy  # position
        self.uv = uv  # velocity
        self.m = m    # mass
        self.force = np.zeros(2)  # at t=0s, force = 0

    def update(self, dt):
        self.uv += self.force / self.m * dt
        self.xy += self.uv * dt
        self.force = np.zeros(2)

    def interaction(self, p, jj, eps):
        dr = p[jj].xy - self.xy
        dr_norm = np.linalg.norm(dr) + eps  # softened distance
        f = G * self.m * p[jj].m / dr_norm**2 * dr / dr_norm
        self.force += f
        p[jj].force -= f  # Newton's third law


def init_world(n_part):
    p = []
    for ii in range(n_part):
        p.append(particle(np.random.uniform(0, 50, size=2) * 1e15,
                          np.random.uniform(-10, 10, size=2) * 0,
                          np.random.uniform(2, 25) * 1e28))
    return p

G = 6.67e-11  # in SI units
dt = 1e5  # in seconds, 86400 s = one day
niter = 10000
n_part = 300
eps = 1e8  # softening to avoid infinite force at small distances

p = init_world(n_part)

xy = np.asarray([p[ii].xy for ii in range(n_part)])

fig, ax1 = plt.subplots()
im = ax1.scatter(xy[:,0],xy[:,1])
plt.show()

for tt in range(niter):
    for ii in range(n_part):
        for jj in range(ii+1,n_part):
            p[ii].interaction(p,jj,eps)

    for ii in range(n_part):
        p[ii].update(dt)

    xy = np.asarray([p[ii].xy for ii in range(n_part)])

    ax1.set_title(tt)
    im.set_offsets(xy)
    plt.pause(0.01)

  • There are a lot of tools for parallelization in Python. Some that might be particularly useful would be numba and cupy (if you want GPU acceleration). Since NumPy has a C layer, a lot of NumPy operations are parallelized if you compile it to be. But what's your purpose in n-body simulation? Learning to code it? Keep going, and have fun. Doing science with it? Maybe look at the many open source packages for dynamics that already exist. So many things have Python interfaces; save your time and "stand on the shoulders of giants"! – dwhswenson Jan 28 '21 at 00:47
  • There's a example of managing a class with a manager in [How can I update class members in processes?](https://stackoverflow.com/questions/65451457/how-can-i-update-class-members-in-processes) – martineau Jan 28 '21 at 01:56
  • @dwhswenson Yeah, the main goal is to code it as a way to improve my skills and for fun. I prefer to code it without too many specialized modules, as I feel it will give me a better understanding of the underlying process – Staufenbache Jan 29 '21 at 14:35
  • @martineau Thx for the ref! – Staufenbache Jan 29 '21 at 14:36

1 Answer


If you want to share a list of custom objects (such as `particle` in the question) among processes, consider this simplified example:

import multiprocessing
from multiprocessing.managers import BaseManager


TOTAL_PROCESS = 3

class Particle():
    def __init__(self, x, y):
        self.x = x
        self.y = y
        
    def multiply(self, z):
        self.x *= z
        self.y *= z

    def __repr__(self):
        return f'(x={self.x},y={self.y})'


def worker(sharedList, ix):
    # Call multiply() for the specific item in the list
    sharedList[ix].multiply(2)


def main():
    BaseManager.register('Particle', Particle)  # Register your custom class
    clsManager = BaseManager()  # A manager to manage Particle objects
    clsManager.start()
    manager = multiprocessing.Manager()  # Another manager to manage a shared list

    # Create a list of Particle objects
    sharedList = manager.list([clsManager.Particle(x, x+1) for x in range(0, TOTAL_PROCESS)])
    
    # Print the original list
    for x in sharedList:
        print(x, end=' ')
    else:
        print()

    # Run multiple processes, each working on one specific object in the list
    processes = []
    for i in range(TOTAL_PROCESS):
        p = multiprocessing.Process(target=worker, args=[sharedList, i])
        p.start()
        processes.append(p)

    for p in processes:
        p.join()
    
    # Print the updated list of Particle objects
    for x in sharedList:
        print(x, end=' ')
    else:
        print()
    
if __name__ == '__main__':
    main()
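Since the question also mentioned a starmap pool: an alternative to mutating shared objects is to have each worker compute and *return* a plain array, which avoids managers entirely. This is a hedged sketch (the function name `force_on` and the particle counts are my own, not from the question's code); the module-level `G` matches the question's value:

```python
import numpy as np
from multiprocessing import Pool

G = 6.67e-11  # gravitational constant, SI units

def force_on(ix, pos, mass, eps):
    """Net force on particle ix from all others; returns a plain (2,) array."""
    dr = np.delete(pos, ix, axis=0) - pos[ix]   # displacements to the other bodies
    m = np.delete(mass, ix)                     # masses of the other bodies
    d2 = np.sum(dr**2, axis=1) + eps**2         # softened squared distances
    return G * mass[ix] * np.sum((m / d2**1.5)[:, None] * dr, axis=0)

if __name__ == '__main__':
    n = 8
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 50, size=(n, 2)) * 1e15
    mass = rng.uniform(2, 25, size=n) * 1e28
    with Pool(4) as pool:
        forces = pool.starmap(force_on, [(i, pos, mass, 1e8) for i in range(n)])
    print(np.asarray(forces).shape)  # (8, 2)
```

Note that this computes each pair twice (it drops the Newton's-third-law shortcut), but each task is independent, so no locking or shared state is needed; the main process then applies the returned forces in the update step.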

  • Thanks, your answer is exactly what I was looking for! And it serves as a great entry point to understanding more complex answers regarding multiprocessing. Unfortunately, I was unable to make it work, so I switch to Julia when I need to add parallelism and keep Python for developing quick solutions – Staufenbache Feb 01 '21 at 17:12