38

Look at this picture showing GitLab CE memory consumption (screenshot).

I really don't need all of those workers, Sidekiq or Unicorn or all of those daemons. This is at idle. I mean, I installed this to manage 1 project with about 4 people; I don't need all those daemons. Is there any way to reduce this?

delmalki
  • 1,326
  • 1
  • 13
  • 31

9 Answers

43

I also had problems with GitLab's high memory consumption, so I ran the Linux tool htop.

In my case I found out that the postgresql service used most of the memory.

With the postgres service running, 14.5G of 16G were used (htop screenshot).

I stopped one GitLab service after the other and found out that when I stopped postgres, a lot of memory was freed (htop screenshot).

You can try it yourself: stop the service with

gitlab-ctl stop postgresql

and start the service again with

gitlab-ctl start postgresql

Finally, I came across the following configuration in /etc/gitlab/gitlab.rb:

##! **recommend value is 1/4 of total RAM, up to 14GB.**
# postgresql['shared_buffers'] = "256MB"

I just set the shared buffers to 256MB by removing the comment character #, because 256MB is sufficient for me:

postgresql['shared_buffers'] = "256MB"

and executed gitlab-ctl reconfigure, which restarts the affected services. The memory consumption is now very moderate (htop screenshot).
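For reference, the whole sequence looks roughly like this (a sketch; use whatever editor you prefer and pick a buffer size that fits your machine):

sudo editor /etc/gitlab/gitlab.rb   # set postgresql['shared_buffers'] = "256MB"
sudo gitlab-ctl reconfigure         # applies the change and restarts the affected services
sudo gitlab-ctl status              # optional: check that all services came back up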

Hopefully that helps someone else.

René Link
  • 48,224
  • 13
  • 108
  • 140
  • Remember to restart the server or you will get this error https://stackoverflow.com/questions/49528292/gitlab-ctl-reconfigure-unable-to-determine-node-name – Sorter Dec 25 '18 at 07:12
19

From your image it looks like Sidekiq and all its workers are using a total sum of 257mb of memory, which is normal. Remember that all the Sidekiq workers use the same memory pool, so they're using 257mb total, not 257mb each. As you've seen from your own answer, decreasing the number of Sidekiq workers will not drastically decrease the memory usage, but will cause background jobs to take longer because they have to wait around for a Sidekiq process to be available. I would leave this value at the default, but if you really want to decrease it then I wouldn't decrease it below 4 since you have 4 cores.

The Unicorn processes also share a memory pool, but each worker has 1 pool that is shared between its 2 processes. In your original image it looks like you have 5 workers, which is recommended for a 4-core system, and each is using about 250mb of memory. You shouldn't notice any performance difference if you decrease the number of workers to 3.

Also, you might want to read this doc on how to configure Unicorn. You definitely don't want the number of workers to be less than 2 because it causes issues when editing files from within the GitLab UI, as discussed here, and it also disables cloning over HTTPS according to this quote from the doc I linked:

With one Unicorn worker only git over ssh access will work because the git over HTTP access requires two running workers (one worker to receive the user request and one worker for the authorization check).

Finally, recent versions of GitLab seem to allocate more memory to the postgresql database cache. I'd recommend configuring the postgresql['shared_buffers'] property in /etc/gitlab/gitlab.rb to be 1/4 of your total free RAM. See René Link's answer for more information on that.
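A sketch of how those recommendations could look in /etc/gitlab/gitlab.rb (the values below are assumptions for a 4-core box with 4GB of RAM, not figures from this answer; tune them to your hardware):

# /etc/gitlab/gitlab.rb -- sketch for a small 4-core instance
unicorn['worker_processes'] = 3        # keep this at 2 or more
sidekiq['concurrency'] = 4             # roughly one per core; 25 is the default
postgresql['shared_buffers'] = "1GB"   # about 1/4 of total RAM

followed by sudo gitlab-ctl reconfigure to apply.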

BrokenBinary
  • 7,731
  • 3
  • 43
  • 54
18

Since GitLab 9.0, Prometheus is enabled by default. I noticed it was using a lot of memory, over 1.5GB in my case. It can be disabled with prometheus_monitoring['enable'] = false.
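For reference, that is a one-line change in /etc/gitlab/gitlab.rb followed by a reconfigure (a sketch):

# /etc/gitlab/gitlab.rb
prometheus_monitoring['enable'] = false

sudo gitlab-ctl reconfigure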

anon
  • 181
  • 1
  • 2
  • 3
    When running gitlab on a system with 2Gb ram, this made "idle" ram consumption drop from 1.7Gb to 1.2Gb - So this definitely makes a large difference. I notice that prometheus is for database monitoring. Would it be possible to expand this answer with some information about the implications of disabling it. – sdfgeoff Nov 19 '18 at 11:51
10

Two options I found browsing gitlab.rb:

  1. sidekiq['concurrency'] = 1 #25 is the default
  2. unicorn['worker_processes'] = 1 #2 is the default

And this, which needs understanding according to their warning:

## Only change these settings if you understand well what they mean
## see https://about.gitlab.com/2015/06/05/how-gitlab-uses-unicorn-and-unicorn-worker-killer/
## and https://github.com/kzk/unicorn-worker-killer
# unicorn['worker_memory_limit_min'] = "300*(1024**2)"
# unicorn['worker_memory_limit_max'] = "350*(1024**2)"
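If you do decide to change those memory limits, here is a sketch of what the uncommented lines could look like for a small instance (the 200/250 figures are assumptions, not values recommended by GitLab; unicorn-worker-killer gracefully restarts a worker once its memory exceeds a limit picked between min and max):

# /etc/gitlab/gitlab.rb -- only if you understand the linked articles above
unicorn['worker_memory_limit_min'] = "200*(1024**2)"   # ~200MB lower bound for the restart threshold
unicorn['worker_memory_limit_max'] = "250*(1024**2)"   # ~250MB upper bound for the restart threshold

and then sudo gitlab-ctl reconfigure.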

This is after changing options 1 and 2 above (screenshot: memory usage of GitLab CE).

Still WAY too much in my opinion.

Wernight
  • 36,122
  • 25
  • 118
  • 131
delmalki
  • 1,326
  • 1
  • 13
  • 31
  • 7
    Setting worker_processes to 1 had the adverse effect of making my commits via the web UI fail. – Wernight Sep 21 '16 at 13:10
  • 5
    Do not set worker_processes to anything less than 2 as of GitLab 10.1. https://gitlab.com/gitlab-org/gitlab-ce/issues/18771 – Xunnamius Oct 22 '17 at 23:53
  • 1
    @Wernight: I can confirm the commit-via-web-UI issue when worker_processes is set to only 1. So this is not recommended. Set it to 2. – Phlogi Apr 16 '19 at 05:46
  • Want to note that `unicorn` is not used by gitlab anymore. Now `puma` is used. – Eugen Konkov Oct 30 '22 at 11:30
5

Fast forward to 2022: my GitLab v15 instance was using up its entire allotment of memory. I checked & tested some of the recommendations from this guide: Running GitLab in a memory-constrained environment. The changes that in my case reduced memory usage were:

################################################################################
## GitLab Puma
################################################################################

puma['worker_timeout'] = 120
puma['worker_processes'] = 1

################################################################################
## GitLab Sidekiq
################################################################################

sidekiq['max_concurrency'] = 10

I verified the effectiveness of the changes by checking the Service Level Indicators metrics in Grafana's dashboard (/-/grafana) (screenshot).
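Without the bundled Grafana, a rough shell-level check works too (a sketch; the process names may differ between GitLab versions):

sudo gitlab-ctl status                                        # confirm the services restarted after reconfigure
ps aux --sort=-rss | grep -E 'puma|sidekiq|postgres' | head   # biggest GitLab processes by resident memory
free -h                                                       # overall memory picture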

luissquall
  • 1,740
  • 19
  • 14
3

I have already fixed this case.
What used the most memory was Unicorn!
My GitLab version was "GitLab Community Edition 10.6.3".
It was deployed on my server, whose CPU is an Intel Core i5 8400 with six cores,
so GitLab allocated 7 processes for Unicorn, and each process occupied 6% of memory.

Method:
vim /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
(screenshot: how to edit unicorn.rb)
Edit and save the changes, then execute "gitlab-ctl restart unicorn".
(screenshot: htop after the unicorn.rb changes)
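A sketch of the edit itself (the value 2 is an assumption for a small instance; note that files under /var/opt/gitlab are regenerated by gitlab-ctl reconfigure, so the corresponding setting in /etc/gitlab/gitlab.rb is the more durable place for this):

# /var/opt/gitlab/gitlab-rails/etc/unicorn.rb
worker_processes 2   # was 7 on this six-core machine

sudo gitlab-ctl restart unicorn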

王海吉
  • 31
  • 1
  • Excellent thank you! I'm running on 2x6 Xeon (24 threads) and could not figure out why it would fork so many processes. There's no indicator in config comment that this is a multiplier of CPU count! Just "default 2." I finally uncommented the "2" and it worked fine. – BoeroBoy Jul 30 '18 at 19:49
2

I had the same problem: a vanilla GitLab on vanilla Ubuntu 20.04 would last maybe a day before crashing, without any load. Bare metal EPYC, 8c/16t and 64 GB of RAM.

Postgresql was taking its 15G share as mentioned in BrokenBinary's answer, but even "fixing" that to 2G did not suffice.

I also had to fix the amount of Puma workers:

puma['worker_processes'] = 2

It seems that newer GitLab installations can also leak memory with Puma, the replacement for Unicorn, which itself had memory leaks.

Update: Crashed again. Next try:

sidekiq['max_concurrency'] = 6
sidekiq['min_concurrency'] = 2
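Taken together, the changes amount to a few lines in /etc/gitlab/gitlab.rb, applied with a reconfigure (a sketch of the combined result):

# /etc/gitlab/gitlab.rb
puma['worker_processes'] = 2
sidekiq['max_concurrency'] = 6
sidekiq['min_concurrency'] = 2

sudo gitlab-ctl reconfigure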
Jan
  • 21
  • 2
2

I'm running gitlab-ce on a Raspberry Pi 4B with 8GB of RAM.

Setting:

sidekiq['max_concurrency'] = 4
postgresql['shared_buffers'] = "256MB"

Did help.

martinerk0
  • 403
  • 1
  • 4
  • 17
1

When I changed /etc/gitlab/gitlab.rb as mentioned in other answers, it did not work for me.

This is what I did, I edited the following file:

/var/opt/gitlab/gitlab-rails/etc/unicorn.rb (the path to the file on your machine may be different)

And changed worker_processes 9 to worker_processes 2.

David Valdivieso
  • 449
  • 1
  • 5
  • 11