All setrlimit() limits are upper limits. A process is allowed to use as much of a resource as it needs, as long as it stays under the soft limits. From the setrlimit() manual page:
The soft limit is the value that the kernel enforces for the corresponding resource. The hard limit acts as a ceiling for the soft limit: an unprivileged process may only set its soft limit to a value in the range from 0 up to the hard limit, and (irreversibly) lower its hard limit. A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability) may make arbitrary changes to either limit value.
Practically this means that the hard limit is an upper limit for both the soft limit and itself. The kernel only enforces the soft limits during the operation of a process - the hard limits are checked only when a process tries to change the resource limits.
In your case, you specify an upper limit of 320MB for the address space, and your process uses about 180MB of it - well within its resource limits. If you want your process to grow, you need to do that in its code.
BTW, resource limits are intended to protect the system - not to tune the behaviour of individual processes. If a process runs into one of those limits, it's often doubtful that it will be able to recover, no matter how good your fault handling is.
If you want to tune the memory usage of your process by e.g. allocating more buffers for increased performance you should do one or both of the following:
ask the user for an appropriate value. This is in my opinion the one thing that should always be possible. The user (or a system administrator) should always be able to control such things, overriding any and all guesswork from your application.
check how much memory is available and try to guess a good amount to allocate.
As a side note, you can (and should) deal with 32-bit vs 64-bit at compile time. Runtime checks for something like this are prone to failure and waste CPU cycles. Keep in mind, however, that the CPU "bitness" does not have any direct relation to the available memory:
32-bit systems do indeed impose a limit (usually in the 1-3 GB range) on the memory that a process can use. That does not mean that this much memory is actually available.
64-bit systems, being relatively newer, usually have more physical memory. That does not mean that a specific system actually has it or that your process should use it. For example, many people have built 64-bit home file servers with 1GB of RAM to keep the cost down. And I know quite a few people that would be annoyed if a random process forced their DBMS to swap just because it thinks only of itself.