My task definition is configured with these limits:
"cpu": "1024",
"memory": "8192"
I'm running the jar inside a Docker container, using the JVM's container-support (cgroup) flags:
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -XX:InitialRAMPercentage=70 -cp /myjar.jar foo.Main
But ECS is killing my service with OOM errors.
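To sanity-check that the 80% cap is actually being applied, I can print the heap ceiling the JVM derives from the container limit (a throwaway snippet run with the same -XX flags as above, not part of the service code):

// Quick check: what maximum heap did the JVM compute inside the container?
val maxHeapMiB = Runtime.getRuntime.maxMemory / (1024 * 1024)
println(s"JVM max heap: $maxHeapMiB MiB") // I would expect roughly 80% of the 8192 MiB task limit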
As a debugging measure, I measure the JVM memory usage from within my application and report it using the following:
import java.lang.management.{ManagementFactory, MemoryMXBean, MemoryUsage}

val bean: MemoryMXBean = ManagementFactory.getMemoryMXBean
val hmu: MemoryUsage = bean.getHeapMemoryUsage
val nhu: MemoryUsage = bean.getNonHeapMemoryUsage
... reporting these metrics ...
In the image, the top graph is what CloudWatch reports as the used memory; as you can see, it is at 100%.
The bottom graph shows the application-reported memory, computed as:
val pc = (1.0 * hmu.getUsed) / hmu.getCommitted
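Putting those reads together, the in-app reporting is roughly equivalent to this sketch (the object and method names here are just illustrative; the real code ships the values to our metrics backend instead of printing them):

import java.lang.management.{ManagementFactory, MemoryMXBean, MemoryUsage}

object MemoryReporter {
  def report(): Unit = {
    val bean: MemoryMXBean = ManagementFactory.getMemoryMXBean
    val hmu: MemoryUsage = bean.getHeapMemoryUsage
    val nhu: MemoryUsage = bean.getNonHeapMemoryUsage

    // what the bottom graph plots: used heap as a fraction of committed heap
    val pc = (1.0 * hmu.getUsed) / hmu.getCommitted

    println(s"heap: used=${hmu.getUsed} committed=${hmu.getCommitted} max=${hmu.getMax}")
    println(s"non-heap: used=${nhu.getUsed} committed=${nhu.getCommitted}")
    println(f"heap used/committed = ${pc * 100}%.1f%%")
  }
}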
From the MemoryUsage documentation:
* Below is a picture showing an example of a memory pool:
*
* <pre>
*        +----------------------------------------------+
*        +////////////////           |                  +
*        +////////////////           |                  +
*        +----------------------------------------------+
*
*        |--------|
*           init
*        |---------------|
*               used
*        |---------------------------|
*                  committed
*        |----------------------------------------------|
*                        max
* </pre>
/**
* Returns the amount of memory in bytes that is committed for
* the Java virtual machine to use. This amount of memory is
* guaranteed for the Java virtual machine to use.
*
* @return the amount of committed memory in bytes.
*
*/
public long getCommitted() {
    return committed;
}
/**
* Returns the amount of used memory in bytes.
*
* @return the amount of used memory in bytes.
*
*/
public long getUsed() {
    return used;
}
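To relate that diagram to concrete numbers, all four values can be read off the same MemoryUsage object (again, just a throwaway sketch):

import java.lang.management.ManagementFactory

val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage
println(s"init=${heap.getInit} used=${heap.getUsed} committed=${heap.getCommitted} max=${heap.getMax}")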
My Dockerfile is very simple:
FROM openjdk:10-jdk
COPY service.jar /service.jar
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
and start.sh is:
#!/bin/bash
set -x
OPTS=""
#... setting flags from ENV values...
#...
#...
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -XX:InitialRAMPercentage=70 ${OPTS} -jar /service.jar com.....Service