68

On my system I can't run a simple Java application that starts a process, and I don't know how to solve it.

Could you give me some hints on how to solve this?

The program is:

[root@newton sisma-acquirer]# cat prova.java
import java.io.IOException;

public class prova {

    public static void main(String[] args) throws IOException {
        Runtime.getRuntime().exec("ls");
    }

}

The result is:

[root@newton sisma-acquirer]# javac prova.java && java -cp . prova
Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:474)
        at java.lang.Runtime.exec(Runtime.java:610)
        at java.lang.Runtime.exec(Runtime.java:448)
        at java.lang.Runtime.exec(Runtime.java:345)
        at prova.main(prova.java:6)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
        at java.lang.ProcessImpl.start(ProcessImpl.java:81)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:467)
        ... 4 more

Configuration of the system:

[root@newton sisma-acquirer]# java -version
java version "1.6.0_0"
OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386)
OpenJDK Client VM (build 14.0-b15, mixed mode)
[root@newton sisma-acquirer]# cat /etc/fedora-release
Fedora release 10 (Cambridge)

EDIT (solution): This solves my problem, though I don't know exactly why:

echo 0 > /proc/sys/vm/overcommit_memory

Up-votes for whoever is able to explain :)

Additional information, top output:

top - 13:35:38 up 40 min,  2 users,  load average: 0.43, 0.19, 0.12
Tasks: 129 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.5%us,  0.5%sy,  0.0%ni, 94.8%id,  3.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1033456k total,   587672k used,   445784k free,    51672k buffers
Swap:  2031608k total,        0k used,  2031608k free,   188108k cached

Additional information, free output:

[root@newton sisma-acquirer]# free
             total       used       free     shared    buffers     cached
Mem:       1033456     588548     444908          0      51704     188292
-/+ buffers/cache:     348552     684904
Swap:      2031608          0    2031608
– Andrea Francia
  • It's either a bug in the linux version or you have some privilege issues. You could look into the UnixProcess:164 in the source to find out what it tries to allocate. – akarnokd Jul 14 '09 at 11:22
  • You can always try the sun jdk – wds Jul 14 '09 at 11:32
  • I had posted a link to a free library that solves your problem but a moderator deleted my answer without explanation. To the benefit of the community, I give it another try as comment: Your memory problem is solved by Yajsw which on Linux uses calls to a C library for the process creation. Read about it here: http://sourceforge.net/projects/yajsw/forums/forum/810311/topic/4423982 – kongo09 Sep 20 '11 at 09:57
  • I've encountered this with openjdk, after I replaced it with the official sun jdk, forking works fine... If you don't want to replace openjdk, the 'overcommit_memory' hack works as well – Dzhu Nov 22 '12 at 09:47

10 Answers

37

This is the solution but you have to set:

echo 1 > /proc/sys/vm/overcommit_memory
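
As a side note, on most Linux systems this /proc change does not survive a reboot; if you want it to be permanent you can put `vm.overcommit_memory = 1` into /etc/sysctl.conf (or apply it with `sysctl -w vm.overcommit_memory=1`).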
– Michael
  • Beware! With overcommit_memory set to 1 every malloc() will succeed. Linux will start randomly killing processes when you're running out of memory. http://www.win.tue.nl/~aeb/linux/lk/lk-9.html – Dan Fabulich Aug 10 '11 at 18:49
  • Is it possible to restrict this to be per-process, rather than system-wide? – Mark McDonald Sep 06 '12 at 05:56
  • Using this solution in development in a Vagrant box. – François Beausoleil Sep 04 '13 at 13:42
  • Yes, this worked for me too in a local Vagrant/JDK environment, while trying to build [dom-distiller](https://github.com/chromium/dom-distiller). Had to `sudo su -` to gain root to adjust the proc filesystem. – Big Rich Jul 28 '15 at 00:10
21

What's the memory profile of your machine? E.g. if you run top, how much free memory do you have?

I suspect UnixProcess performs a fork() and it's simply not getting enough memory from the OS (if memory serves, it will fork() to duplicate the running process and then exec() to run ls in the child, and it isn't getting as far as the exec()).

EDIT: Re. your overcommit solution: it permits overcommitting of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available. So I guess the fork() duplicates the Java process's memory, as discussed in the comments below. Of course you don't actually use that memory, since the 'ls' replaces the duplicated Java process.
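
If you want to see the effect directly, here is a small sketch (the ForkDemo class and its ballast argument are my own illustration, not code from the question) that holds on to a chunk of heap and then calls exec(); assuming the fork()-then-exec() picture above, running it with a large ballast (and a heap big enough to hold it, e.g. `java -Xmx700m ForkDemo 512`) should be much more likely to hit error=12 on a strict-overcommit machine than running it with a small one:

import java.io.IOException;

public class ForkDemo {

    public static void main(String[] args) throws IOException, InterruptedException {
        // How many MB of heap to hold on to before exec'ing (default 256).
        int mb = args.length > 0 ? Integer.parseInt(args[0]) : 256;
        byte[] ballast = new byte[mb * 1024 * 1024]; // keep the reference so the memory stays live
        // The fork() behind this call has to be able to reserve roughly the
        // parent's committed memory again (copy-on-write, so it is reserved
        // rather than physically copied).
        Process p = Runtime.getRuntime().exec("ls");
        int exit = p.waitFor();
        System.out.println("held " + mb + " MB (" + ballast.length + " bytes), child exited with " + exit);
    }
}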

– Brian Agnew
  • I once read that fork() call actually duplicates the entire memory of the currently running process. Is it still true? If you have a java program with 1.2 GB memory and 2GB total, I guess it will fail? – akarnokd Jul 14 '09 at 11:36
  • Yes. I was going to mention this, but I vaguely remember that modern OSes will implement copy-on-write for memory pages, so I'm not sure of this – Brian Agnew Jul 14 '09 at 11:37
  • If she runs the app with the default settings, it shouldn't be a problem to dupe 64MB memory I guess. – akarnokd Jul 14 '09 at 11:51
  • I think Andrea's a "he". It's a masculine name in Italy :-) – Brian Agnew Jul 14 '09 at 12:03
  • @kd304 yes this is still true, though only the memory mappings are copied and the memory is made copy-on-write in the new process, meaning memory is only actually copied if it's written to. Still, it's quite a big problem in big application servers using *a lot* of memory, as those servers tend to cause a lot of memory to be copied in the small window between fork and exec. – nos Aug 01 '10 at 21:07
9

Runtime.getRuntime().exec allocates the new process with the same amount of memory as the parent JVM. If you have your heap set to 1GB and try to exec, it will allocate another 1GB for that process to run.

– Attila Bukta
  • I had this problem with Maven. My machine had 1GB memory, and it was running Hudson, Nexus and another Maven process. The Maven process crashed since we set -Xms512m by mistake on MAVEN_OPTS. Fixing it to -Xms128m solved it. – Asaf Mesika Jan 03 '11 at 13:10
9

This is solved in Java version 1.6.0_23 and upwards.

See more details at http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7034935

– Alf Høgemark
  • Any idea if it applies to OpenJDK or equivalent non-Sun JVMs? – Mark McDonald Sep 06 '12 at 05:31
  • I am not getting this issue after upgrading to 1.6.0_37-b06. Still confused about the bug fix. So how much memory does the JVM allocate for `Runtime.exec`? – Satish Pandey Nov 11 '12 at 05:19
  • Excellent point. Upgrading the JVM does fix the issue as they now use a different (lighter) system call. – neesh May 02 '13 at 21:27
  • Still getting this with 1.7.0_91, seems to be more a memory restriction on my machine (when other apps are closed I don't get this error). Plus that `exec` spawns new processes with the same RAM usage as the origin process – Karussell Jan 25 '16 at 15:01
  • @Karussell: Did you get to resolve this issue? I am on 1.7.0_111 and facing the same. Upgrading to jdk8 is not an option. – saurabheights Jan 10 '17 at 16:51
8

I came across these links:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2009-May/001689.html

http://www.nabble.com/Review-request-for-5049299-td23667680.html

It seems to be a bug; using a spawn() trick instead of the plain fork()/exec() is advised.
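
For what it's worth, newer OpenJDK builds expose this choice as a system property: if your JDK supports `jdk.lang.Process.launchMechanism`, you can try `java -Djdk.lang.Process.launchMechanism=POSIX_SPAWN -cp . prova` to use posix_spawn instead of a full fork(); whether the property is honoured depends on the JDK version and platform.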

– akarnokd
8

I solved this using JNA: https://github.com/twall/jna

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;

public class prova {

    // Binds the C runtime's system(3) function, so the command is launched by
    // libc itself instead of going through the JVM's Runtime.exec() path.
    private interface CLibrary extends Library {
        CLibrary INSTANCE = (CLibrary) Native.loadLibrary((Platform.isWindows() ? "msvcrt" : "c"), CLibrary.class);
        int system(String cmd);
    }

    // Returns the exit status reported by system().
    private static int exec(String command) {
        return CLibrary.INSTANCE.system(command);
    }

    public static void main(String[] args) {
        exec("ls");
    }
}
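
A minimal way to try it, assuming you have downloaded a jna.jar locally (adjust the file name to whatever version you use): compile with `javac -cp jna.jar prova.java` and run with `java -cp jna.jar:. prova`. Keep in mind that system() blocks until the command finishes and only returns its exit status, not its output, so it is not a drop-in replacement for Runtime.exec() in every situation.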
– kongo09
5

If you look into the source of java.lang.Runtime, you'll see that exec eventually calls the protected method execVM, which means it allocates virtual memory. On a Unix-like system, the available virtual memory depends on the amount of swap space plus some ratio of physical memory.

Michael's answer does solve the problem, but it may (or, rather, will eventually) get the O.S. into memory-allocation trouble, since 1 tells the O.S. to be less careful when granting memory and 0 just makes it guess; you were simply lucky that the O.S. guessed you could have the memory THIS TIME. Next time? Hmm.....

A better approach is to experiment with your own case, provide a good amount of swap space, tune the ratio of physical memory that may be used, and set the value to 2 rather than 1 or 0.

– Scott Chu
4

As weird as this may sound, one workaround is to reduce the amount of memory allocated to the JVM. Since fork() duplicates the process and its memory, if your JVM process does not really need as much memory as is allocated via -Xmx, the memory allocation for the forked process will succeed.

Of course you can try the other solutions mentioned here (like over-committing or upgrading to a JVM that has the fix). Reducing the memory is an option if you are desperate for a solution that keeps all software intact with no environment impact. Keep in mind that reducing -Xmx too aggressively can cause OOMs. I'd recommend upgrading the JDK as a long-term stable solution.
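
For the program in the question that would simply mean starting it with a smaller heap, for example `java -Xmx128m -cp . prova`; the 128m is only an illustrative figure, pick whatever your application actually needs.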

– Deepak Bala
4

overcommit_memory

Controls overcommit of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available.

0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

1 - Always overcommit. Appropriate for some scientific applications.

2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap plus a configurable percentage (default is 50) of physical RAM. Depending on the percentage you use, in most situations this means a process will not be killed while attempting to use already-allocated memory but will receive errors on memory allocation as appropriate.
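
These are the modes that the `echo 0` / `echo 1` commands earlier in the thread switch between by writing to /proc/sys/vm/overcommit_memory; for mode 2 the percentage of physical RAM that may be committed is the companion setting /proc/sys/vm/overcommit_ratio.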

4

You can use the Tanuki wrapper to spawn a process with POSIX spawn instead of fork. http://wrapper.tanukisoftware.com/doc/english/child-exec.html

The WrapperManager.exec() function is an alternative to Java's Runtime.exec(), which has the disadvantage of using fork(), a call that can be very memory-expensive on some platforms when creating a new process.

– Dan Fabulich
  • The Tanuki wrapper is quite impressive. Unfortunately, the `WrapperManager` is part of the Professional Edition, which is quite expensive if this is the only thing you need. Do you know of any free alternative? – kongo09 Sep 19 '11 at 21:19
  • @kongo09 It's available as part of the Free (GPLv2) community edition as well. You can even download the source and use it in GPL products. – Dan Fabulich Sep 20 '11 at 00:07
  • I don't think this is part of the community edition. If you try a quick test, you'll get the following exception: `Exception in thread "main" org.tanukisoftware.wrapper.WrapperLicenseError: Requires the Professional Edition.` – kongo09 Sep 20 '11 at 09:51