
We are developing a system based on a microservices architecture with Spring Boot.

Everything works fine and we love it, but we have a concern about the resources each service consumes.

Our system is split into 8-12 microservices, and each of them uses at least about 550 MB of RAM.

We tried to limit the resources through system variables, but the performance drops seriously, so physically limiting resources is not an option:

mem_limit: 200m
memswap_limit: 400m
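
For context, a hedged sketch of the kind of Compose service these keys belong to, with an explicit JVM heap cap added via JAVA_TOOL_OPTIONS so the heap stays inside the container limit (service name and values are illustrative, not a recommendation):

services:
  order-service:              # illustrative name
    image: openjdk:11
    mem_limit: 512m           # hard RAM limit for the container
    memswap_limit: 1g         # RAM + swap the container may use
    environment:
      # picked up automatically by the JVM; keeps the heap below the container limit
      JAVA_TOOL_OPTIONS: "-Xmx384m"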

Regarding this, I would like to know:

Is it normal for services implemented with Spring Boot to consume so much memory? Is there anything we can do to optimize this?

Configurations:

We're currently using CentOS 7.5 as the host OS, running Docker.

openjdk:11

qenndrimm
  • Each service is a world of its own. Are you caching data? Is it on-heap or off-heap? Which services are called a lot and which are not? Which GC algorithm are you using (probably G1)? You would probably need to run each service with VisualVM or another tool that lets you poke at the memory and figure out what to do with each service. A small Spring Boot app that is not called often (3-4 concurrent requests) can run with 192 MB of RAM, but it depends on many factors. Do a memory dump and see what is actually consuming the memory. – Augusto Sep 03 '19 at 09:19
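
A rough sketch of the kind of heap-dump workflow the comment above suggests, assuming the JDK tools are available in the image (container name and PID are placeholders):

# list the JVM processes inside the container to find the PID
docker exec my-service jcmd
# write a heap dump that can be opened in VisualVM or Eclipse MAT
docker exec my-service jcmd <pid> GC.heap_dump /tmp/service.hprof
# copy it to the host for analysis
docker cp my-service:/tmp/service.hprof .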

1 Answer


That's pretty normal. We are running into the same issues. After researching, I came up with the following conclusions:

Strict memory limitations caused more errors where there were none before (too-strict memory limits on both the container and the Java heap space).

The basic idea is to give the services more memory than you have physically available as RAM. To handle peaks, which can eventually cause memory problems, you can use swap on the host system. From my point of view, you don't need swap limitations inside the Docker containers; when memory gets tight, the kernel of the host system will manage it.

Let's say you have 32 GB of RAM and 16 GB of swap. You can assign memory limits (Docker and JVM heap space) whose sum is more than 32 GB but less than 32 GB + 16 GB (to be safe). This over-provisioning is only there to handle load peaks of your services. The limits are only there so that if something goes wrong, you fail fast and only one service fails, not the whole system.
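
As a hedged illustration of that arithmetic (service count, names, and sizes are made up): ten services limited to 4 GB each add up to 40 GB of limits, above the 32 GB of RAM but below the 48 GB of RAM plus swap:

# 10 services x 4g = 40g of limits on a 32g RAM / 16g swap host
services:
  service-1:
    mem_limit: 4g
    environment:
      JAVA_TOOL_OPTIONS: "-Xmx3g"   # heap cap kept below the container limit
  # service-2 through service-10 configured the same way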

You can adjust the swappiness value to prevent too much swapping, and you can tell the JVM to release allocated heap space back to the OS sooner by decreasing -XX:MaxHeapFreeRatio from its default of 70.
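
A hedged sketch of both knobs (the values are examples, not recommendations, and how eagerly heap is actually returned also depends on the GC in use):

# host: swap only under real memory pressure
sysctl -w vm.swappiness=10
echo "vm.swappiness=10" >> /etc/sysctl.conf   # make it persistent

# JVM: shrink the heap more eagerly than the default MaxHeapFreeRatio of 70
JAVA_TOOL_OPTIONS="-XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=20"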

But in the long run you need something that monitors memory usage across all services (Prometheus + Grafana?) so you can readjust the memory limits to fit within your physical limits.
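
One hedged way to get those numbers out of Spring Boot, assuming the standard Actuator/Micrometer setup: add the Prometheus registry and expose the scrape endpoint, then point Prometheus at /actuator/prometheus and graph jvm_memory_used_bytes in Grafana:

# build.gradle (Maven equivalents exist)
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'io.micrometer:micrometer-registry-prometheus'

# application.properties
management.endpoints.web.exposure.include=health,prometheus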

d3rbastl3r
  • Additionally, I played with the idea of sharing the JVM between containers to save memory, but in the end it was not that good an idea: https://stackoverflow.com/questions/57604913/using-java-from-host-to-share-heapspace – d3rbastl3r Sep 03 '19 at 09:00