You can use the RawArray functionality of multiprocessing: define the variable that the process needs to access as a RawArray before you start the process, and after the process has finished, reinterpret the buffer as a (reshaped) numpy array.
Here is an example:
import numpy as np
import multiprocessing as mp
n_elements = 1000  # how many elements your numpy array should have

def myProc( shared_var ):
    '''
    Here you convert your shared variable from mp.RawArray to numpy,
    then treat it as a regular numpy array, e.g. fill it with some
    random numbers for demonstration purposes.
    '''
    var = np.reshape( np.frombuffer( shared_var, dtype=np.uint32 ), -1 )
    for i in range( n_elements ):
        var[i] = np.random.randint( 0, 2**16 )
    print( 'myProc var.mean() = ', var.mean() )

# buffer that holds the shared memory
mp_var = mp.RawArray( 'I', n_elements )
p = mp.Process( target=myProc, args=(mp_var,) )
p.start()
p.join()
# after the process has ended, you convert the buffer that was passed to it
var = np.reshape( np.frombuffer( mp_var, dtype=np.uint32 ), -1 )
# and again, you can treat it like a numpy array
print( 'out var.mean() = ', var.mean() )
The output is:

myProc var.mean() = 32612.403
out var.mean() = 32612.403
Hope that helps!
Please note that if you access this buffer from several processes concurrently, you need to organise a proper locking mechanism so that no two processes modify the same piece of memory at the same time, as sketched below.
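For example, here is a minimal sketch of how an mp.Lock could be shared between workers writing to the same RawArray. The worker function, the per-process slicing, and the __main__ guard are illustrative assumptions only, not part of the example above:

import numpy as np
import multiprocessing as mp

n_elements = 1000

def worker( shared_var, lock, offset, length ):
    # hypothetical worker: fills its own slice of the shared buffer,
    # holding the lock while it touches the shared memory
    var = np.frombuffer( shared_var, dtype=np.uint32 )
    with lock:
        var[offset:offset + length] = np.random.randint( 0, 2**16, length )

if __name__ == '__main__':
    mp_var = mp.RawArray( 'I', n_elements )
    lock = mp.Lock()
    half = n_elements // 2
    procs = [ mp.Process( target=worker, args=(mp_var, lock, i * half, half) )
              for i in range( 2 ) ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print( 'mean = ', np.frombuffer( mp_var, dtype=np.uint32 ).mean() )

Here the lock only guarantees that the two writes do not interleave; since each worker writes a disjoint slice, it is strictly speaking optional in this sketch, but it becomes necessary as soon as processes touch overlapping regions.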