
I have a Java 8 application that receives messages over the network and writes them to multiple memory-mapped files using Java NIO's MappedByteBuffer. A reader simultaneously reads the messages from these files in order and then deletes each file once it has been fully read, again via MappedByteBuffer. Everything runs smoothly until I have written and read about 246 GB of data, at which point the application crashes with the following:

[thread 139611281577728 also had an error]
[thread 139611278419712 also had an error]
[thread 139611282630400 also had an error]
[thread 139611277367040 also had an error]
[thread 139611283683072 also had an error]
[thread 139611279472384 also had an error]
[thread 139611280525056 also had an error]
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x00007f02d10526de, pid=44460, tid=0x00007ef9c9088700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64 )
# Problematic frame:
# v  ~StubRoutines::jint_disjoint_arraycopy
#
# Core dump written. Default location: /home/user/core or core.44460
#
# An error report file with more information is saved as:
# /home/user/hs_err_pid44460.log

The hs_err_pid44460.log is empty and the core dump core.44460 is about 246 GB in size and full of the messages I am trying to write.

I am running with a max heap size of 32 GB. According to JConsole, I run out of free physical memory and then crash. (JConsole screen capture)

Why am I running out of RAM? Am I forgetting to close a file handle, or not closing my memory-mapped files correctly?

Chinmay Nerurkar
3 Answers


Even if your program uses MappedByteBuffers correctly, at a high allocation rate you can run into problems caused by their untimely deallocation. Releasing the mapped memory is ultimately the JVM's responsibility, and it normally happens only when the buffer object is garbage collected. In short, the memory will eventually be freed, but exactly when that happens is hard to predict.

You can, however, force deallocation ("cleaning") of the memory backing a buffer using the JVM's Cleaner facility (class sun.misc.Cleaner). See this SO question for directions, but the short version is: call Cleaner.clean() on your throwaway buffers as soon as you are done with them, to keep the amount of mapped memory down and support your use case.
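On Oracle/OpenJDK 8 the cleaner can be reached through the internal sun.nio.ch.DirectBuffer interface. A minimal sketch of such a helper (the unmap name is mine; this relies on internal APIs, and the buffer must never be touched after it has been cleaned or the JVM may crash):

    import java.nio.MappedByteBuffer;
    import sun.misc.Cleaner;
    import sun.nio.ch.DirectBuffer;

    // Oracle/OpenJDK 8 only: uses internal APIs that may change in later releases.
    final class Buffers {
        static void unmap(MappedByteBuffer buffer) {
            Cleaner cleaner = ((DirectBuffer) buffer).cleaner();
            if (cleaner != null) {
                cleaner.clean();   // releases the mapping immediately
            }
            // Do not read from or write to the buffer after this point.
        }
    }

Call it only once the reader is completely finished with a file, right before you delete that file.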

logtwo

  • The solution was to use the `Cleaner` to get the job done. Specifically this helped -> http://stackoverflow.com/a/19447758/1041963 – Chinmay Nerurkar Sep 06 '16 at 19:49
  • Do note that using the Cleaner incorrectly can lead to crashes, and that it is an internal API that may change from version to version. – the8472 Sep 07 '16 at 17:56
  • @the8472, you are right, especially about the API changing. I will try to update my answer ASAP with some directions for Java 9, which is starting to clear up the matter of "internal" (yet widely used!) APIs; see also: http://openjdk.java.net/jeps/260 – logtwo Sep 08 '16 at 09:56

You'll have to look at the virtual memory footprint or memory mapping of the process to figure out whether direct buffers might be the culprit.

If it is indeed crashing due to mapped or direct buffers then you're either leaking them (running heap dumps through a memory analyzer can identify those) or the GC is running too infrequently to release them.
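On Linux a quick way to check this is to look at /proc/<pid>/maps or pmap. As an illustration only, here is a rough sketch the process could run against itself to count its mappings and total mapped bytes (the class name is mine, Linux-specific):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Linux-only sketch: summarise the JVM's own memory mappings by
    // parsing /proc/self/maps ("pmap <pid>" shows the same information).
    public class MappingReport {
        public static void main(String[] args) throws IOException {
            long count = 0;
            long bytes = 0;
            for (String line : Files.readAllLines(Paths.get("/proc/self/maps"))) {
                // Each line starts with "start-end" as hexadecimal addresses.
                String[] range = line.split("\\s+")[0].split("-");
                bytes += Long.parseUnsignedLong(range[1], 16)
                       - Long.parseUnsignedLong(range[0], 16);
                count++;
            }
            System.out.printf("mappings: %d, mapped bytes: %d%n", count, bytes);
        }
    }

A mapping count or mapped size that keeps growing while heap usage stays flat points at unreleased MappedByteBuffers rather than ordinary heap objects.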

the8472

You might also find that more aggressive garbage collection helps.

You might also like to try the G1 collector, which was introduced in Java 7: -XX:+UseG1GC -XX:ParallelGCThreads=4. This allows the GC to use 4 parallel threads.
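If you try these flags, one way to confirm from inside the application that G1 is actually active, and to watch how often it runs, is the standard GarbageCollectorMXBean API. A small sketch (nothing beyond the JDK is assumed; with G1 enabled the bean names are "G1 Young Generation" and "G1 Old Generation"):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Prints the collectors the running JVM uses, plus their activity so far.
    public class GcInfo {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }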

There are a number of good articles about tuning your garbage collector; here's one I found useful: https://blogs.oracle.com/java-platform-group/entry/g1_from_garbage_collector_to

Hope this helps.