
I think the answer is no. (See one hypothesis as to why in the comments to the question here.) But it would be really nice to be able to build a new shared (raw) array between processes after they have forked, perhaps using a Pipe/Queue/Manager to handle the setup. I'm not knowledgeable about operating systems; is there any prospect of this ever happening?

Are there any clever workarounds (maybe memmap?) that provide the same read and write speeds as a true shared array?
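For concreteness, this is roughly the kind of memmap-based workaround I have in mind (just a rough sketch, assuming NumPy; the file name, dtype, and shape are placeholders):

import numpy as np

# Any process can create the backing file...
writer = np.memmap("shared_array.dat", dtype=np.float64, mode="w+", shape=(1024,))
writer[0] = 42.0
writer.flush()

# ...and any other, already-running process can attach to the same
# file later, long after both processes have forked.
reader = np.memmap("shared_array.dat", dtype=np.float64, mode="r+", shape=(1024,))
print(reader[0])  # 42.0 once the writer has flushed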

Adam S.
  • Oops this was already [answered](http://stackoverflow.com/q/7419159/6442723) years ago...must get better at searching! – Adam S. Jan 20 '17 at 00:12

1 Answer


I think it can be done by sharing a memory map backed by an existing file. Just run the example below more than once and watch the output. Once all processes have opened the shared file, you can delete the file on disk and keep using the shared memory. A file lock is used here to serialize access between processes, but it may not be the best method.

#!/usr/bin/env python3

import fcntl
import mmap
import os
import time
from contextlib import contextmanager, suppress

# Open (and create if necessary) a file of shared_size.
# Every sleep_timeout seconds:
#   - acquire an exclusive lock,
#   - read the data, and
#   - write new data

# 1kiB for example
shared_size = 1024
filename = "shared_data.bin"
sleep_timeout = 1

# Context manager to grab an exclusive lock on the
# first length bytes of a file and automatically
# release the lock.
@contextmanager
def lockf(fileno, length=0):
  try:
    fcntl.lockf(fileno, fcntl.LOCK_EX, length)
    yield
  finally:
    fcntl.lockf(fileno, fcntl.LOCK_UN, length)

def ensure_filesize(f, size):
  # make sure file is big enough for shared_size
  f.seek(0, os.SEEK_END)
  if f.tell() < size:
    f.truncate(size)

def read_and_update_data(f):
  # read the previous message (stripping the NUL padding) and print it
  f.seek(0)
  print(f.readline().decode().rstrip("\x00"))
  # then overwrite it with a fresh message from this process
  f.seek(0)
  message = "Hello from process {} at {}".format(os.getpid(), time.asctime(time.localtime()))
  f.write(message.encode())

# Ignore Ctrl-C so we can quit cleanly
with suppress(KeyboardInterrupt):
  # open file for binary read/write and create if necessary
  with open(filename, "a+b") as f:
    ensure_filesize(f, shared_size)

    # Once all processes have opened the file, it can be removed
    #os.remove(filename)

    with mmap.mmap(f.fileno(), shared_size) as mm:
      while True:
        with lockf(f.fileno(), length=shared_size):
          read_and_update_data(mm)
        time.sleep(sleep_timeout)
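
If you want array-style reads and writes on top of the shared bytes, a zero-copy NumPy view over the same mapping is one option (a minimal sketch, assuming NumPy is installed; it reuses the shared_data.bin file created by the example above):

import mmap
import numpy as np

# Assumes shared_data.bin already exists (e.g. created by the example above).
with open("shared_data.bin", "r+b") as f:
  with mmap.mmap(f.fileno(), 0) as mm:
    # Zero-copy view: reads and writes go straight to the shared mapping.
    arr = np.frombuffer(mm, dtype=np.uint8)
    arr[0] = 255   # visible to every process that maps the file
    del arr        # release the view before the mmap is closed

Because np.frombuffer shares memory with the mmap rather than copying it, access is as fast as ordinary array indexing; just make sure no view outlives the mapping, or closing the mmap will raise a BufferError.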
Harvey