14

I'm using a NUMA machine (an SGI UV 1000) to run a large number of numerical simulations at the same time, each of which is an OpenMP job using 4 cores. However, running more than around 100 of these jobs results in a significant performance hit. Our theory is that the shared libraries required by the software are loaded only once into the machine's global memory, and that the system then experiences a communication bottleneck as all processes access memory on a single node.

The software is old, with little to no scope for modification, and its static make option does not statically link all the libraries it needs. The most convenient solution, as far as I can see, would be to somehow force the system to load a fresh copy of the required shared libraries for each process or on each node (I run 3 processes per node), but I haven't been able to find out how to do this. Can anyone tell me how, or suggest another way to solve this problem?

acroz
  • This is a very interesting question you are bringing up. Can you profile your code using the hardware counters and see how many L1 instruction cache misses it generates? Nehalem CPUs have 32 KB of L1 instruction cache per core, which should be enough to hold even some of the largest compute kernels. Also, do you use process and thread binding? It is very important on NUMA systems. – Hristo Iliev Sep 13 '12 at 10:34
  • Assuming the library is loaded into a given bank of memory, it is obvious that the first time a processor fetches it you'll suffer a penalty on the CPUs that are 'far away' from that bank. But won't the library thereafter stay in the instruction cache? If the library is not frequently used, you probably shouldn't worry about NUMA. If it is, you have the cache. – Genís Sep 14 '12 at 09:27
  • I have the same problem except with multiple threads accessing the same memory-mapped data file (read-only). – Tim Cooper Sep 20 '12 at 13:12
  • Bus, memory or other resource contention seems like a more likely first culprit. Why did you look here -- did you already rule stuff like that out? – Brian Cain Sep 23 '12 at 16:51
  • I'm curious - how do you handle cpusets for this machine? Do you have the memory locked as well as the cpus? I really doubt that the machine has it in one place in global memory as this would defeat the purpose of the machine. – dbeer Sep 26 '12 at 21:49

1 Answer

12

the shared libraries required by the software are loaded only once into the machine's global memory,

As far as I know, this is the current behavior of Linux: a shared library is loaded into only one set of physical pages, on a single node.
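You can check this on a running job by looking at /proc/<pid>/numa_maps, which shows per-node page counts for every mapping. A minimal sketch; "mysim" and "libfoo" are placeholder names for your binary and library:

    # Show which NUMA node(s) hold the pages of a mapped shared library.
    pid=$(pgrep -o mysim)
    grep libfoo /proc/$pid/numa_maps
    # Each mapping line contains per-node counts such as "N0=412 N3=2";
    # a single N<x>= entry means all pages of that mapping sit on one node.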

and the system is then experiencing a communication bottleneck as all processes are accessing memory on a single node.

As said in the comments, instructions from the library should be cached by every processor, so there can only be a bottleneck if the library's active code keeps being evicted from the caches (e.g. because a lot of different code is running).

You should verify your theory using hardware performance counters (cache misses, inter-node NUMA memory access counts).
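For example, a rough sketch with perf and numastat, assuming those tools are available on your system; <pid> is a placeholder and the generic event aliases may or may not be supported by your CPU and kernel:

    # Attach to one running simulation for 10 seconds and count
    # instruction-cache misses and remote-node memory accesses.
    perf stat -e L1-icache-load-misses,node-loads,node-load-misses -p <pid> sleep 10

    # System-wide per-node NUMA hit/miss statistics (numactl package).
    numastat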

The mechanism of keeping several copies of the same data on different NUMA nodes is called "replication" on Linux, and the code of the kernel, of an executable, or of its shared libraries is called "text". So what you want is "text replication for shared libraries". (Text replication is easier to do for kernel code.)

I was able to find some experimental patches from 2003 for doing such text replication, e.g. http://lwn.net/Articles/63512/ ([RFC][PATCH] NUMA user page replication) by Dave Hansen, IBM. This patch seems to have been rejected.

A more modern (2007) variant of this technique is replication of the pagecache: http://lwn.net/Articles/223056/ (mm: replicated pagecache) by Nick Piggin, SUSE. There is also a presentation about his method: http://ondioline.org/~paul/pagecachereplication.pdf. This would cover your case, because all files, both executables and shared libraries, are stored in the pagecache. But I can't find this patch in the current kernel either.

SGI has a greater need for replication (they have more NUMA machines than the typical kernel developer), so there may be additional patches on their side. SGI's application tuning manual for NUMA, http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi/linux/bks/SGI_Developer/books/LX_86_AppTune/sgi_html/ch05.html, mentions the dplace utility in the section "Using the dplace Command". It has an option for text replication:

-r: Specifies that text should be replicated on the node or nodes where the application is running. In some cases, replication will improve performance by reducing the need to make offnode memory references for code. The replication option applies to all programs placed by the dplace command. See the dplace(5) man page for additional information on text replication. The replication options are a string of one or more of the following characters:

l Replicate library text

b Replicate binary (a.out) text

t Thread round-robin option

Man dplace(1): http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linux&db=man&fname=/usr/share/catman/man1/dplace.1.html

Man dplace(5): http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linux&db=man&fname=/usr/share/catman/man5/dplace.5.html
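Putting it together, a hedged example of launching one of your 4-thread jobs with library and binary text replicated on the node where it runs; the binary name and input file are placeholders, and the exact syntax should be checked against the dplace man pages above:

    # Replicate library ("l") and binary ("b") text on the node(s) running the job.
    export OMP_NUM_THREADS=4
    dplace -r lb ./mysim input.dat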

osgx