
I get this error on my UNIX server when running my Java server:

Exception in thread "Thread-0" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at [... where ever I launch a new Thread ...]

It happens every time I have about 600 threads running.

I have set up this variable on the server:

$> ulimit -s 128

What looks strange to me is the result of this command, which I ran when the bug occurred the last time:

$> free -m
              total       used       free     shared    buffers     cached
Mem:          2048        338       1709          0          0          0
-/+ buffers/cache:        338       1709
Swap:            0          0          0

I launch my java server like this:

$> /usr/bin/java -server -Xss128k -Xmx500m -jar /path/to/myJar.jar

My debian version:

$> cat /etc/debian_version
5.0.8

My java version:

$> java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

My question: I have read on the Internet that my program should be able to handle something like 5000 threads. So what is going on, and how can I fix it, please?


Edit: this is the output of ulimit -a when I open a shell:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 794624
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 100000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 794624
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

I run the script as a daemon from init.d, and this is what I run:

DAEMON=/usr/bin/java
DAEMON_ARGS="-server -Xss128k -Xmx1024m -jar /path/to/myJar.jar"
ulimit -s 128 && ulimit -n 10240 && start-stop-daemon -b --start --quiet --chuid $USER -m -p $PIDFILE --exec $DAEMON -- $DAEMON_ARGS \
    || return 2

Edit 2: I have come across this Stack Overflow question with a Java test for threads: how-many-threads-can-a-java-vm-support

    public class DieLikeADog { 
        private static Object s = new Object(); 
        private static int count = 0; 
        public static void main(String[] argv){ 
            for(;;){ 
                new Thread(new Runnable(){ 
                        public void run(){ 
                            synchronized(s){ 
                                count += 1; 
                                System.err.println("New thread #"+count); 
                            } 
                            for(;;){ 
                                try { 
                                    Thread.sleep(100); 
                                } catch (Exception e){ 
                                    System.err.println(e); 
                                } 
                            } 
                        } 
                    }).start(); 
            } 
        } 
    } 

On my server, the program crashes after 613 threads. Now I'm certain this is not normal and only related to my server configuration. Can anyone help, please?


Edit 3: I have come across this article, and many others, explaining that Linux can't create 1000 threads, but you guys are telling me that you can do it on your systems. I don't understand.

I have also run this script on my server: threads_limits.c, and the limit is around 620 threads.

My website is now offline, and this is the worst thing that could have happened to my project. I don't know how to recompile glibc and all that; it's too much work, IMO.

I guess I should switch to Windows Server, because none of the settings proposed on this page made any difference: the limit on my system is between 600 and 620 threads, no matter the program involved.

Joel
  • It's a game server. I open 2 threads for each client: 1 for TCP-read and 1 for TCP-write. The server crashes when I reach 300 clients, but I have tons of memory left, so why? – Joel Nov 20 '11 at 17:30
  • @Brian Roach: Why not? Real server hardware can churn through pretty high numbers of threads without many problems, when the settings are correct. – esaj Nov 20 '11 at 17:33
  • And what would these correct settings be, please? – Joel Nov 20 '11 at 17:37
  • @esaj - Because the context switching and overhead of doing so is horrible and not necessary? – Brian Roach Nov 20 '11 at 17:37
  • @Brian Roach: we're running at around 2000-4000 threads, most of them active, and the server load is around 2%-5% (on a 4-core Xeon). Of course this is specific to our software and server setup. – esaj Nov 20 '11 at 17:42
  • Any reason why you are capping the `-Xmx` to a mere `500M` for a high performance server app and that too on a 64 bit VM? – Sanjay T. Sharma Nov 20 '11 at 18:36
  • The reason is that it seems not to change anything, since I'm not reaching this limit. – Joel Nov 20 '11 at 18:58
  • Right, my point was, have you tried it with different values of `Xmx` like `4GiB`? Does the number of threads drop in that case? Also, can you post the output of `ulimit -a`? – Sanjay T. Sharma Nov 20 '11 at 19:40
  • I can't post the output since it is set in a daemon script in init.d, but I have raised every value of the limit except the thread stack size, which I lowered to 128, and it's always crashing at 600 threads. – Joel Nov 20 '11 at 20:23
  • Yes, I have tried every possible value for Xmx, but it doesn't change anything, since the server app consumes less than 200 MB of memory even for 300 clients – Joel Nov 20 '11 at 20:23
  • @BrianRoach There were tests that showed better performance with Java IO when compared to NIO - and contrary to the general wisdom ("nio much faster than io") that guy actually provided benchmarks for his claims. So it's by far not that clear cut. The actual page is down atm, but there's a google docs version: – Voo Nov 20 '11 at 20:35
  • cont. [rather long link that](http://docs.google.com/viewer?a=v&q=cache:W9_5P9_-_FgJ:www.mailinator.com/tymaPaulMultithreaded.pdf+tymaPaulMultithreaded&hl=en&gl=at&pid=bl&srcid=ADGEESji9fNd5gye3tZ5LEYyv5OD86uPFsRk5i4Kez2braJqhzQdtWJFi59DZPlbjFP7c1Z20enK15Kvk01wQM73f_qr8KL6G-ex_3ekHyHFU_xcQyA2q3aAnOvlKwWA9pfDd4b6nxCT&sig=AHIEtbSCeSyvl7NHiOdAw79AgiR24re27w&pli=1) – Voo Nov 20 '11 at 20:36
  • @Joel: You are right, this doesn't seem Java specific but more of a config issue. To try few things out: can you run the same code without involving `init.d`? i.e. a plain simple `ulimit -s 128; ulimit -n 10240; java -server -Xss128k -Xmx1G -jar /path/to/myJar.jar` at the `bash` shell? Because it seems that your `ulimit` changes are not being picked up by your `java` process... – Sanjay T. Sharma Nov 21 '11 at 05:38
  • @Joel This could be your culprit: "stack size (kbytes, -s) 10240" (From your ulimit -a output). That's 10megabytes of stack per thread. Try to edit your '/etc/security/limits.conf' and set the stack size and other parameters there. I don't think ulimit -changes are picked up in the same session (although not sure), but editing /etc/security/limits.conf should be permanent (requires you to at least start a new session, which loads the modified limits.conf, or in the worst case, a reboot each time the file is changed, can't remember which). Take a backup of the original settings before modifying – esaj Nov 21 '11 at 06:42
  • `ulimit` changes are picked up in the same `bash` session. The problem might be the way in which the daemon process is picking up those limits – Sanjay T. Sharma Nov 21 '11 at 07:07
  • What kernel are you using? Any output from dmesg ? – KarlP Nov 21 '11 at 07:11
  • @sanjay I have tried to run the command in a bash session and had the same problem – Joel Nov 21 '11 at 12:10
  • @esaj I have already tried with this limits.conf file and still the same exact limit of 600 threads – Joel Nov 21 '11 at 12:11
  • @KarlP I use Debian 5.0.8 and the command dmesg outputs nothing. – Joel Nov 21 '11 at 12:12
  • 1) Did you run the program in edit 2 directly from the prompt where you typed ulimit -a? I have copied your settings and I have no problem. 2) What does uname -a say? – KarlP Nov 21 '11 at 14:17
  • @KarlP Yes I have run the edit2 directly from the prompt. `uname -a` returns `Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux` – Joel Nov 21 '11 at 14:25
  • See my post, verify that you have NTPL and not the old and obsolete "Linux Threads" that could handle only 1500 threads or so. – KarlP Nov 21 '11 at 19:19
  • let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/5207/discussion-between-karlp-and-joel) – KarlP Nov 21 '11 at 20:39
  • Thanks for your help. I am on the chat. – Joel Nov 21 '11 at 21:14

7 Answers


Just got the following information: this is a limitation imposed by my host provider. This has nothing to do with programming or Linux.
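For anyone hitting the same wall on a virtualized host, a diagnostic sketch for checking container-imposed limits (the `/proc/user_beancounters` file only exists on OpenVZ/Virtuozzo-style kernels, so the existence check is deliberate):

```shell
# On Virtuozzo/OpenVZ guests the per-container limits are exposed here;
# on a normal (non-container) kernel this file simply does not exist.
if [ -r /proc/user_beancounters ]; then
    # numproc is the process/thread cap; compare the 'held' and 'limit' columns
    grep -E 'uid|numproc' /proc/user_beancounters
else
    echo "no /proc/user_beancounters: not an OpenVZ/Virtuozzo container"
fi
```

If `numproc` has a limit near 600, that matches the observed ceiling exactly.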

Joel
  • It will be interesting to know how he imposed this limit ! – souser Nov 24 '11 at 04:47
  • Here's the answer: the Nuxit.com host provider sells virtual servers under the denomination "Dedicated server". They virtualize fake servers using software called "Parallels® Virtuozzo Containers". They can control how the processes run and impose limits on threads, memory, and so on. It took me a week to find out, but I have changed hosts, and now my program finally works fine. – Joel Nov 24 '11 at 14:58
  • Thanks for getting back. Appreciate it ! – souser Nov 25 '11 at 21:10
  • @Joel: I am not able to create more than 375 threads per process on an Amazon EC2 machine. Is this limit imposed by Amazon? https://forums.aws.amazon.com/thread.jspa?threadID=86751 – codersofthedark Feb 08 '12 at 19:37
  • I'm pretty sure this limit is imposed by Amazon. If you need more threads, rent a real dedicated server, not a virtual private server or a cloud. You can also try to contact their support and ask for more thread slots. Good luck. – Joel Feb 08 '12 at 23:35
  • @Joel: Thanks, Joel, for your response. I agree that at times the hosting service can limit the max number of threads per process, but it is wrong to say that Linux does not implement a per-process thread limit. Check my answer for details. – codersofthedark Feb 09 '12 at 13:50
  • Thanks for this hint, this was it in my case! I checked the numproc value with cat /proc/user_beancounters; it was 768, too low for me. – mr.simonski Oct 08 '14 at 00:28

The underlying operating system (Debian Linux in this case) does not allow the process to create any more threads. See here for how to raise the maximum: Maximum number of threads per process in Linux?

I have read on Internet that my program should handle something like 5000 threads or so.

This depends on the limits set in the OS, the number of running processes, etc. With the correct settings you can easily reach that many threads. I'm running Ubuntu on my own computer, and I can create around 32000 threads in a single Java program before hitting the limit, with all my "normal stuff" running in the background (this was done with a test program that just created threads that went to sleep immediately in an infinite loop). Naturally, that many threads actually doing something would probably grind consumer hardware to a halt pretty fast.
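As a starting point for finding which limit actually bites, these are the usual kernel-side knobs to inspect (a read-only diagnostic sketch; none of these commands change anything):

```shell
# system-wide cap on threads across all processes
cat /proc/sys/kernel/threads-max
# cap on PIDs; on Linux each thread consumes one
cat /proc/sys/kernel/pid_max
# per-user process/thread cap for the current shell session
ulimit -u
# per-process memory-mapping cap; each thread stack needs at least one mapping
cat /proc/sys/vm/max_map_count
```

If all of these are far above 600 (as in the question), the ceiling is being imposed from outside the kernel's default limits.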

esaj
  • This is the threads-max number on my system: 1589248; and this is the maximum number of processes available to a single user: 794624; so I don't see any matching limitation here – Joel Nov 20 '11 at 17:28
  • Maybe you're really running out of memory then. I've only seen the "java.lang.OutOfMemoryError: unable to create new native thread" -exception when we hit the limit on a production system at around 20000 threads. – esaj Nov 20 '11 at 17:32
  • @esaj: yes it's possible, but did you see the output of `free -m` ? Doesn't it say that I have 75% of free memory ? – Joel Nov 20 '11 at 17:33
  • You could start with these: up the heap size of the JVM as per @Sid Malani's post (try -Xmx1024M for example and go up from there). The amount of free memory in the system can be anything, but the JVM will only get what you give to it. Also check your ulimits (ulimit -a); remember that pretty much everything in Linux/Unix is a file, so also check how many open files are allowed per user. ulimit changes don't become 'active' straight away; the session needs to be restarted (starting a new console session should be enough, use ulimit -a to check that the changes really took). – esaj Nov 20 '11 at 17:38
  • I have checked ulimit -a and set the only remaining limit (the file descriptors) to 100000, and I get the same exception when I reach 600 threads. Any other setting possible? – Joel Nov 20 '11 at 18:59
  • Nothing I can think of at the moment. @Adam Zalcman's point on dropping the stack size and heap size to leave more memory for stacks sounded like a good candidate, but if the limit seems to be the same 600 threads regardless of these settings, it has to be something non-memory related. – esaj Nov 20 '11 at 19:26
  • What is your operating system and version please ? I mean the one where you can run 2000 threads. – Joel Nov 21 '11 at 01:43
  • I run Ubuntu 10.10 on my homebox, I get the "OOM: unable to create new native thread"-exception at around 32000 threads. At work we have a Blade-server with Gentoo Linux, actually running thousands of simultaneous threads serving clients. With that one, the limits with current configuration were hit around 20000 threads. – esaj Nov 21 '11 at 06:28

Can you try the same command with a smaller stack size, "-Xss64k", and post the results?

souser

Related to the OP's self-answer, but I do not yet have the reputation to comment: I had the identical issue when hosting Tomcat on a V-Server.

All standard means of system checks (process amount/limit, available RAM, etc) indicated a healthy system, while Tomcat crashed with variants of "out of memory / resources / GCThread exceptions".

Turns out some V-Servers have an extra configuration file that limits the number of allowed threads per process. In my case (Ubuntu V-Server with Strato, Germany) this was even documented by the host, and the restriction can be lifted manually.

Original documentation by Strato (German) here: https://www.strato.de/faq/server/prozesse-vs-threads-bei-linux-v-servern/

tl;dr: How to fix:

Inspect the thread limit per process:

systemctl show --property=DefaultTasksMax

In my case the default was 60, which was insufficient for Tomcat. I changed it to 256:

vim /etc/systemd/system.conf

Change the value for:

DefaultTasksMax=60

to something higher, e.g. 256. (The HTTPS connector of Tomcat has a default thread pool of 200, so it should be at least 200.)

Then reboot, to make the changes take effect.
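If you'd rather not raise the global default for every process, systemd also supports a per-unit override via a drop-in file (a sketch; `tomcat.service` is a placeholder for your actual unit name, and the directive is `TasksMax=` from systemd's resource-control settings):

```shell
# create a drop-in that overrides TasksMax for just this one service
mkdir -p /etc/systemd/system/tomcat.service.d
cat > /etc/systemd/system/tomcat.service.d/tasks.conf <<'EOF'
[Service]
TasksMax=256
EOF
# pick up the drop-in and restart the service; no full reboot needed
systemctl daemon-reload
systemctl restart tomcat.service
```

This keeps the tighter default for everything else on the machine.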

m5c

Your JVM fails to allocate stack or some other per-thread memory. Lowering the stack size with -Xss will help increase the number of threads you can create before OOM occurs (but JVM will not let you set arbitrarily small stack size).

You can confirm this is the problem by seeing how the number of threads created changes as you tweak -Xss, or by running strace on your JVM (you'll almost certainly see an mmap() returning ENOMEM right before the exception is thrown).

Check also your ulimit on virtual memory size, i.e. ulimit -v. Increasing this limit should let you create more threads with the same stack size. Note that the resident set size limit (ulimit -m) is ineffective in current Linux kernels.

Also, lowering -Xmx can help by leaving more memory for thread stacks.
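As a sanity check on the memory theory, the arithmetic for the case in the question (128 KB stacks, ~600 threads) can be done directly; the stacks alone account for well under 100 MB, which is why stack memory exhaustion by itself doesn't explain a 600-thread ceiling:

```shell
# total stack space for 600 threads started with -Xss128k, in MB
echo $(( 600 * 128 / 1024 ))    # prints 75
```

So if the ceiling doesn't move when -Xss and -Xmx change, look at process/thread count limits rather than memory.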

Adam Zalcman

I am starting to suspect that the "Native POSIX Thread Library" is missing.

$> getconf GNU_LIBPTHREAD_VERSION

Should output something like:

NPTL 2.13

If not, the Debian installation is messed up. I am not sure how to fix that, but installing Ubuntu Server seems like a good move...

With ulimit -n 100000 (open file descriptors), the following program should be able to handle 32,000 threads or so.

Try it:

package test;

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.concurrent.Semaphore;

public class Test {

    final static Semaphore ss = new Semaphore(0);


    static class TT implements Runnable {

        @Override
        public void run() {
            try {
                Socket t = new Socket("localhost", 47111);
                InputStream is = t.getInputStream();
                for (;;) {
                    is.read();
                }

            } catch (Throwable t) {
                System.err.println(Thread.currentThread().getName() + " : abort");
                t.printStackTrace();
                System.exit(2);
            }

        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        try {

            Thread t = new Thread() {
                public void run() {
                    try {
                        ArrayList<Socket> sockets = new ArrayList<Socket>(50000);
                        ServerSocket s = new ServerSocket(47111,1500);
                        ss.release();

                        for (;;) {
                            Socket t = s.accept();
                            sockets.add(t);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                        System.exit(1);

                    }
                }
            };


            t.start();
            ss.acquire();


            for (int i = 0; i < 30000; i++) {

                Thread tt = new Thread(new TT(), "T" + i);
                tt.setDaemon(true);
                tt.start();
                System.out.println(tt.getName());
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    return;
                }
            }

            for (;;) {
                System.out.println();
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }

        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
KarlP
  • Very interesting program, knowing your results running it. On my server, I get an error at `T255 : abort | java.net.SocketException: No buffer space available | at java.net.Socket.createImpl(Socket.java:397) | ...` Is it possible that it's a Debian/Java bug? – Joel Nov 21 '11 at 00:03
  • What is your operating system and version please ? – Joel Nov 21 '11 at 01:43
  • Added info above. I also tested with openJDk 1.6 and I got the same results. – KarlP Nov 21 '11 at 06:35
  • The output to `getconf GNU_LIBPTHREAD_VERSION` is `NPTL 2.7`. – Joel Nov 21 '11 at 20:43

It's going out of memory.

You also need to change ulimit. If your OS does not give your app enough memory, I suppose -Xmx will not make any difference.

I guess the -Xmx500m is having no effect.

Try

ulimit -m 512m with -Xmx512m

Sid Malani
  • Every manual I have read said there was no connection between this error and the Xmx option. And I have tried with 1.5GB for Xmx and got the same error, so I guess it's not related – Joel Nov 20 '11 at 17:29
  • I have reproduced a similar situation, and increasing -Xmx makes the problem worse, i.e. increasing the maximum size of the memory allocation pool leaves less memory for the stacks. – Adam Zalcman Nov 20 '11 at 18:41