
In my application, there is one process which writes data to a file, and then, in response to receiving a request, will send some of that data via the network to the requesting process. The basis of this question is to see whether we can speed up communication when both processes happen to be on the same host. (In my case, the processes are Java, but I think this discussion can apply more broadly.)

There are a few projects out there which use the MappedByteBuffers returned by Java's FileChannel.map() as a way to have shared memory IPC between JVMs on the same host (see Chronicle Queue, Aeron IPC, etc.).
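For concreteness, here is a minimal sketch of what such a mapping looks like (the file name and size are illustrative, not from my actual application):

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MapSketch {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch = FileChannel.open(Paths.get("/tmp/ipc.dat"),
                    StandardOpenOption.READ, StandardOpenOption.WRITE,
                    StandardOpenOption.CREATE)) {
                // Map the first 4 KiB of the file into this process' address space.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                buf.putInt(0, 42); // visible to any other process mapping the same region
            }
        }
    }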

One approach to speeding up same-host communication would be to have my application use one of those technologies to provide the request-response pathway for same-host communication, either in conjunction with the existing mechanism for writing to the data file, or by providing a unified means of both communication and writing to the file.

Another approach would be to allow the requesting process to have direct access to the data file.

I tend to favor the second approach - assuming it would be correct - as it would be easier to implement, and seems more efficient than copying/transmitting a copy of the data for each request (assuming we didn't replace the existing mechanism for writing to the file).

Essentially, I'd like to understand what exactly occurs when two processes have access to the same file and use it to communicate, specifically with Java (1.8) on Linux (3.10).

From my understanding, it seems like if two processes have the same file open at the same time, the "communication" between them will essentially be via "shared memory".

Note that this question is not concerned with the performance implications of using a MappedByteBuffer or not - it seems highly likely that using mapped buffers, with the reduction in copying and system calls, will reduce overhead compared to reading and writing the file, but that might require significant changes to the application.

Here is my understanding:

  1. When Linux loads a file from disk, it copies the contents of that file to pages in memory. That region of memory is called the page cache. As far as I can tell, it does this regardless of which Java method (FileInputStream.read(), RandomAccessFile.read(), FileChannel.read(), FileChannel.map()) or native method is used to read the file (observed with "free" and monitoring the "cache" value).
  2. If another process attempts to load the same file (while it is still resident in the cache), the kernel detects this and doesn't need to reload the file. If the page cache gets full, pages will be evicted, dirty ones being written back out to disk. (Pages also get written back out if there is an explicit flush to disk, and periodically by a kernel thread.)
  3. Having a (large) file already in the cache is a significant performance boost, much more so than the differences based on which Java methods we use to open/read that file.
  4. If a file is loaded using the mmap system call (C) or via FileChannel.map() (Java), essentially the file's pages (in the cache) are loaded directly into the process' address space. Using other methods to open a file, the file is loaded into pages not in the process' address space, and then the various methods to read/write that file copy some bytes from/to those pages into a buffer in the process' address space. There is an obvious performance benefit avoiding that copy, but my question isn't concerned with performance.
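To illustrate point 4, here's a minimal sketch (file name illustrative) of the non-mapped path, where FileChannel.read() copies bytes from the kernel's pages into a buffer owned by my process:

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class ReadSketch {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch = FileChannel.open(Paths.get("/tmp/ipc.dat"),
                    StandardOpenOption.READ)) {
                ByteBuffer copy = ByteBuffer.allocate(4096); // process-private buffer
                ch.read(copy); // the kernel copies page-cache bytes into 'copy'
            }
        }
    }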

So in summary, if I understand correctly - while mapping offers a performance advantage, it doesn't seem to offer any "shared memory" functionality that we don't already get just from the nature of Linux and the page cache.

So, please let me know where my understanding is off.

Thanks.

3 Answers


My question is, on Java (1.8) and Linux (3.10), are MappedByteBuffers really necessary for implementing shared memory IPC, or would any access to a common file provide the same functionality?

It depends on why you want to implement shared memory IPC.

You can clearly implement IPC without shared memory; e.g. over sockets. So, if you are not doing it for performance reasons, it is not necessary to do shared memory IPC at all!

So performance has to be at the root of any discussion.

Accessing files via the classic Java io or nio APIs does not provide shared memory functionality or shared memory performance.

The main difference between regular file I/O or socket I/O and shared memory IPC is that the former requires the applications to explicitly make read and write syscalls to send and receive messages. This entails extra syscalls, and it entails the kernel copying data. Furthermore, if there are multiple threads, you either need a separate "channel" between each thread pair or something to multiplex multiple "conversations" over a shared channel. The latter can lead to the shared channel becoming a concurrency bottleneck.

Note that these overheads are orthogonal to the Linux page cache.

By contrast, with IPC implemented using shared memory, there are no read and write syscalls, and no extra copy step. Each "channel" can simply use a separate area of the mapped buffer. A thread in one process writes data into the shared memory and it is almost immediately visible to the second process.

The caveat is that the processes need to 1) synchronize, and 2) implement memory barriers to ensure that the reader doesn't see stale data. But these can both be implemented without syscalls.
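To make that concrete, here is a toy polling sketch of one such "channel" (not from any particular library; the file name and layout are illustrative). The writer stores the payload first and then publishes a length field that doubles as the ready flag. Note that it glosses over the memory-barrier subtleties just mentioned: Java 8 has no VarHandle, so production code typically relies on Unsafe or a library such as Chronicle Queue or Aeron.

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Run "java ShmDemo writer" in one process and "java ShmDemo reader" in another.
    public class ShmDemo {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch = FileChannel.open(Paths.get("/tmp/shm-demo.dat"),
                    StandardOpenOption.READ, StandardOpenOption.WRITE,
                    StandardOpenOption.CREATE)) {
                MappedByteBuffer shm = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                if (args[0].equals("writer")) {
                    byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);
                    shm.position(4);
                    shm.put(msg);              // write the payload first ...
                    shm.putInt(0, msg.length); // ... then publish its length
                } else {
                    int len;
                    while ((len = shm.getInt(0)) == 0) {
                        Thread.sleep(1);       // poll shared memory; no read() syscall
                    }
                    byte[] out = new byte[len];
                    shm.position(4);
                    shm.get(out);
                    System.out.println(new String(out, StandardCharsets.UTF_8));
                }
            }
        }
    }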

In the wash-up, shared memory IPC using memory mapped files >>is<< faster than using conventional files or sockets, and that is why people do it.


You also implicitly asked if shared memory IPC can be implemented without memory mapped files.

  • A practical way would be to memory-map a file that lives in a memory-only file system; e.g. a "tmpfs" in Linux (see the sketch after this list).

    Technically, that is still a memory-mapped file. However, you don't incur the overheads of flushing data to disk, and you avoid the potential security concern of private IPC data ending up on disk.

  • You could in theory implement a shared segment between two processes by doing the following:

    • In the parent process, use mmap to create a segment with MAP_ANONYMOUS | MAP_SHARED.
    • Fork child processes. These will all end up sharing the segment with each other and with the parent process.

    However, implementing that for a Java process would be ... challenging. AFAIK, Java does not support this.
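Going back to the first option: the only change from a normal memory-mapped file is where the file lives. A sketch (the path is illustrative and assumes /dev/shm is mounted as tmpfs, which is typical on Linux):

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import static java.nio.file.StandardOpenOption.*;

    public class TmpfsMap {
        public static void main(String[] args) throws Exception {
            // Same FileChannel.map() call as usual; because the file is on tmpfs,
            // the pages are memory-only and are never flushed to disk.
            try (FileChannel ch = FileChannel.open(Paths.get("/dev/shm/ipc-demo"),
                    READ, WRITE, CREATE)) {
                MappedByteBuffer shm = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            }
        }
    }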


Stephen C
  • I'm not sure you understood my question, presumably because it was poorly written. I've attempted to make it clearer by re-writing. Essentially, I'm trying to understand what happens when two processes have the same file open at the same time, and if one could use this to safely and performantly offer communication between two processes. – dan.m was user2321368 Jun 01 '20 at 20:27

Essentially, I'm trying to understand what happens when two processes have the same file open at the same time, and if one could use this to safely and performantly offer communication between two processes.

If you are accessing regular files using read and write operations (i.e. not memory mapping them), then the two processes do not share any memory.

  • User-space memory in the Java Buffer objects associated with the file is NOT shared across address spaces.
  • When a write syscall is made, data is copied from pages in one process's address space to pages in kernel space. (These could be pages in the page cache. That is OS specific.)
  • When a read syscall is made, data is copied from pages in kernel space to pages in the reading process's address space.

It has to be done that way. If the operating system shared the pages associated with the reader's and writer's buffers behind their backs, that would be a security / information-leakage hole:

  • The reader would be able to see data in the writer's address space that had not yet been written via write(...), and maybe never would be.
  • The writer would be able to see data that the reader (hypothetically) wrote into its read buffer.
  • It would not be possible to address the problem by clever use of memory protection, because the granularity of memory protection is a page, whereas the granularity of read(...) and write(...) can be as little as a single byte.

Sure: you can safely use reading and writing files to transfer data between two processes. But you would need to define a protocol that allows the reader to know how much data the writer has written. And the reader knowing when the writer has written something could entail polling; e.g. to see if the file has been modified.
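As a hedged illustration of such a protocol (the file name and polling interval are illustrative), the reader below polls the file length and reads only the bytes it has not yet seen:

    import java.io.RandomAccessFile;

    public class PollingReader {
        public static void main(String[] args) throws Exception {
            long readSoFar = 0;
            try (RandomAccessFile raf = new RandomAccessFile("/tmp/ipc.dat", "r")) {
                while (true) {
                    long len = raf.length();         // poll: has the writer appended?
                    if (len > readSoFar) {
                        byte[] chunk = new byte[(int) (len - readSoFar)];
                        raf.seek(readSoFar);
                        raf.readFully(chunk);        // copies from kernel space
                        readSoFar = len;
                        // ... hand 'chunk' to the application ...
                    } else {
                        Thread.sleep(10);
                    }
                }
            }
        }
    }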

If you look at this in terms of just the data copying in the communication "channel":

  • With memory mapped files you copy (serialize) the data from application heap objects to the mapped buffer, and a second time (deserialize) from the mapped buffer to application heap objects.

  • With ordinary files there are two additional copies: 1) from the writing process's (non-mapped) buffer to kernel space pages (e.g. in the page cache), and 2) from the kernel space pages to the reading process's (non-mapped) buffer.

The article below explains what is going on with conventional read / write and memory mapping. (It is in the context of copying a file and "zero-copy", but you can ignore that.)

Reference:

Stephen C
  • "These could be pages in the page cache. That is OS specific" Agreed, my question is about recent-ish Linux (I specified v 3.10 in the question). This is where I'm confused, if the OS did all read/writed via the page cache, wouldn't these processes be sharing that memory (in the cache)? – dan.m was user2321368 Jun 02 '20 at 01:46
  • Nope. That is not how it works. Look at the diagrams in the reference. (Oh ... I see I didn't finish a key sentence in this answer.) – Stephen C Jun 02 '20 at 05:32
  • (Fixed that. Added explanation of why `read` and `write` have to copy.) – Stephen C Jun 02 '20 at 05:42
  • I understand that my read buffer and your write buffer are private (and I thought that had been clear in my question since inception, but sorry if it wasn't), but aren't the pages in the cache shared? I think you're saying yes, but please confirm. If they are shared, isn't this "shared memory IPC", or why not, or are we now just dealing with semantics? – dan.m was user2321368 Jun 02 '20 at 11:45
  • Pages in the page cache are not shared with processes ... UNLESS you are doing shared memory IPC. I have said this about 5 times now, in different ways. – Stephen C Jun 02 '20 at 13:27
  • As you are claiming that "Pages in the page cache are not shared with processes", I now understand your answer. However this intuitively doesn't make sense to me, as it doesn't explain how things like shared libraries work, nor does it explain why, if two processes load the same file, one after the other, the second one loads much more quickly. My fundamental question is how to reconcile my observations/intuition with your statement. – dan.m was user2321368 Jun 02 '20 at 16:26
  • 1) Shared libraries are different. These are **read only** and therefore can be safely shared between multiple processes. – Stephen C Jun 03 '20 at 01:35
  • 2) The reason that the page cache makes repeated file access by the same process or different processes faster is that the file system data is being cached in a page in memory. If it is already in memory, it doesn't need to be read from disk or SSD. Reading data from disk / SSD into memory takes a significant amount of time. Caching makes it faster. (And note that file and directory metadata is also cached ...) – Stephen C Jun 03 '20 at 01:39
  • Last question - if a file is cached in a page in memory, and two processes can access that same page, and it is not *read only* - isn't that the definition of memory shared between two processes? Either there is contradiction between your comments that "pages in the page cache are not shared" and "access by ... different processes [are] faster ... [because] the ... data is being cached", or there is some subtlety that I am missing. – dan.m was user2321368 Jun 03 '20 at 12:01
  • You are taking my comments out of context. What I said was "Pages in the page cache are not shared with processes ... **UNLESS you are doing shared memory IPC**." Strictly speaking, that should be unless you are using `mmap`. But the point is that regular I/O without fancy `mmap` stuff does not involve sharing pages between the page cache and user processes. – Stephen C Jun 03 '20 at 13:17
  • " that regular I/O without fancy mmap stuff does not involve sharing pages between the page cache and user processes" - if what you say is correct, then if there are two processes that open the same file, without using mmap, one after the other, the second one should show no improvement - this doesn't agree with my observations. – dan.m was user2321368 Jun 03 '20 at 13:22
  • And no there is no contradiction. The page cache gets its performance by having data in memory so that you don't need to read it from disk. Shared memory IPC gets performance by processes sharing pages. The two mechanisms for speedup are orthogonal ... conceptually. – Stephen C Jun 03 '20 at 13:24
  • I don't really see there is much point in continuing this. You are clearly disbelieving what I am saying. That's OK. It is not my concern whether what you believe to be the explanation is correct or not. It is not like anyone is likely to be hurt :-) – Stephen C Jun 03 '20 at 13:26
  • I agree. I awarded you the bounty because you've been very helpful, but I agree that something is being "lost in translation" using SO as a medium to communicate. Thanks for the help. – dan.m was user2321368 Jun 03 '20 at 13:31
  • I think to summarize, you're saying that because process A can't directly change a byte in process B's address space, that isn't "shared memory"; and I'm asking: if process A changes a variable in A's address space, copies that changed variable to a common area (the page cache), and then B updates its copy of the variable (from the common area), is that a correct form of IPC (which I was calling "shared memory")? I now think that the term "shared memory" is defined as you were using it, not as I did. – dan.m was user2321368 Jun 03 '20 at 13:42
  • It would be. I even described that in my answer ... and pointed out the downsides of that approach. I am using "shared memory" in the way that it is used in Linux documentation, etc. It may well be that you have a different interpretation of that term, but I strongly suggest that you stick to the standard meanings / definitions ... if you want to understand the documentation and hold meaningful discussions about it. Doing IPC by doing conventional file I/O or socket I/O is NOT shared memory ... according to the standard terminology. – Stephen C Jun 03 '20 at 13:56
  • Yes. I didn't notice that we were defining the term differently (me incorrectly/non-standard) until a few minutes ago. I'm sure in front of a whiteboard we would have figured that out much earlier. Thanks for the help. – dan.m was user2321368 Jun 03 '20 at 14:01

Three points are worth mentioning: performance, concurrent changes, and memory utilization.

You are correct in the assessment that mmap-based IO will usually offer a performance advantage over file-based IO. In particular, the advantage is significant if the code performs lots of small IOs at arbitrary points in the file.

Consider changing the N-th byte: with mmap it is just buffer[N] = buffer[N] + 1, while with file-based access you need (at least) four system calls plus error checking:

   seek() + error check
   read() + error check
   update value
   seek() + error check
   write() + error check

It is true that the number of actual IOs (to the disk) will most likely be the same.
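Sketched in Java (variable names are illustrative; error handling elided), the two versions of that single-byte update look like this:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;

    class ByteUpdate {
        // mmap version: one in-memory operation, no syscalls on the hot path
        static void increment(MappedByteBuffer mapped, int n) {
            mapped.put(n, (byte) (mapped.get(n) + 1));
        }

        // file-based version: seek/read/seek/write, each a syscall
        static void increment(RandomAccessFile raf, long n) throws IOException {
            raf.seek(n);
            int b = raf.read();
            raf.seek(n);
            raf.write(b + 1);
        }
    }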

The second point worth noting is concurrent access. With file-based IO, you have to worry about potentially concurrent access: you need to lock explicitly (before the read) and unlock (after the write) to prevent two processes from accessing the value incorrectly at the same time. With shared memory, atomic operations can eliminate the need for an additional lock.
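For the file-based side, the explicit locking could look roughly like this in Java, using an advisory FileLock on just the region being updated (the path and region are illustrative):

    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Paths;
    import static java.nio.file.StandardOpenOption.*;

    public class LockedUpdate {
        public static void main(String[] args) throws Exception {
            try (FileChannel ch = FileChannel.open(Paths.get("/tmp/ipc.dat"), READ, WRITE);
                 FileLock lock = ch.lock(0, 4, false)) { // exclusive lock on bytes 0..3
                // safe read-modify-write of the locked region goes here
            }
        }
    }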

The third point is actual memory usage. For cases where the size of the shared objects is significant, using shared memory can allow a large number of processes to access the data without allocating additional memory. For systems constrained by memory, or systems that need to provide real-time performance, this could be the only way to access the data.

dash-o
  • I'm not sure you understood my question, presumably because it was poorly written. I've attempted to make it clearer by re-writing. Essentially, I'm trying to understand what happens when two processes have the same file open at the same time, and if one could use this to safely and performantly offer communication between two processes. – dan.m was user2321368 Jun 01 '20 at 20:34