74

I have a dashboard with a number of panels (around 6) that display data-point charts, each querying a dockerised instance of a PostgreSQL database.

The panels were working fine until very recently, when some of them stopped working and started reporting an error like this:

pq: could not resize shared memory segment "/PostgreSQL.2058389254" to 12615680 bytes: No space left on device

Any idea why this happens, and how to work around it? The Docker container runs on a remote host accessed via SSH.

EDIT

Disk space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       197G  140G   48G  75% /
devtmpfs        1.4G     0  1.4G   0% /dev
tmpfs           1.4G  4.0K  1.4G   1% /dev/shm
tmpfs           1.4G  138M  1.3G  10% /run
tmpfs           1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/dm-16       10G   49M   10G   1% /var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4
/dev/dm-1        10G  526M  9.5G   6% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5
shm              64M     0   64M   0% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm
/dev/dm-4        10G  266M  9.8G   3% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb
shm              64M   50M   15M  78% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm
/dev/dm-2        10G  383M  9.7G   4% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e
shm              64M     0   64M   0% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm
/dev/dm-3        10G   99M  9.9G   1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571
shm              64M     0   64M   0% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm
/dev/dm-7        10G  276M  9.8G   3% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2
shm              64M     0   64M   0% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm
/dev/dm-8        10G  803M  9.3G   8% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a
shm              64M     0   64M   0% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm
/dev/dm-6        10G   10G   20K 100% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5
shm              64M     0   64M   0% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm
/dev/dm-5        10G  325M  9.7G   4% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3
shm              64M     0   64M   0% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm
/dev/dm-9        10G  1.2G  8.9G  12% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08
shm              64M     0   64M   0% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm
/dev/dm-10       10G  146M  9.9G   2% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062
shm              64M     0   64M   0% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm
tmpfs           285M     0  285M   0% /run/user/0

EDIT-2

$ df -ih
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/vda1         13M  101K   13M    1% /
devtmpfs         354K   394  353K    1% /dev
tmpfs            356K     2  356K    1% /dev/shm
tmpfs            356K   693  356K    1% /run
tmpfs            356K    16  356K    1% /sys/fs/cgroup
/dev/dm-16        10M  2.3K   10M    1% /var/lib/docker/devicemapper/mnt/a0f3c5ab84aa06d5b2db00c4324dd6bf7141500ff4c83e23e9aba7c7268bcad4
/dev/dm-1         10M   19K   10M    1% /var/lib/docker/devicemapper/mnt/8623a774d736ed3dc0d2db89b7d07cae85c3d1bcafc245180eec4ffd738f93a5
shm              356K     1  356K    1% /var/lib/docker/containers/260552ebcdf2bf0961329108d3d975110f8ada0a41325f5e7dd81b8ddad9d18b/mounts/shm
/dev/dm-4         10M   11K   10M    1% /var/lib/docker/devicemapper/mnt/6f873e62607e7cac4c4b658c72874c787b90290f74d1159eca81af61cb467cfb
shm              356K     2  356K    1% /var/lib/docker/containers/84c66d9fb5b6ae023d051766f4d35ced87a519a1fee68ca5c89d61ff87cf1e5a/mounts/shm
/dev/dm-2         10M  5.6K   10M    1% /var/lib/docker/devicemapper/mnt/cb3df1ae654ed78802c2e5bd7a51a1b0bdd562855a7c7803750b80b33f5c206e
shm              356K     1  356K    1% /var/lib/docker/containers/22ba2ae2b6859c24623703dcb596527d64257d2d61de53f4d88e00a8e2335211/mounts/shm
/dev/dm-3         10M  4.6K   10M    1% /var/lib/docker/devicemapper/mnt/492a19fc8f3e254c4e5cc691c3300b5fee9d1a849422673bf0c19b4b2d1db571
shm              356K     1  356K    1% /var/lib/docker/containers/39abe855a9b107d4921807332309517697f024b2d169ebc5f409436208f766d0/mounts/shm
/dev/dm-7         10M  7.5K   10M    1% /var/lib/docker/devicemapper/mnt/55c6a6c17c892d149c1cc91fbf42b98f1340ffa30a1da508e3526af7060f3ce2
shm              356K     1  356K    1% /var/lib/docker/containers/bf2e7254cd7e2c6000da61875343580ec6ff5cbf40c017a398ba7479af5720ec/mounts/shm
/dev/dm-8         10M   12K   10M    1% /var/lib/docker/devicemapper/mnt/4e51f48d630041316edd925f1e20d3d575fce4bf19ef39a62756b768460d1a3a
shm              356K     1  356K    1% /var/lib/docker/containers/72d4ae743de490ed580ec9265ddf8e6b90e3a9d2c69bd74050e744c8e262b342/mounts/shm
/dev/dm-6        7.9K  7.3K   623   93% /var/lib/docker/devicemapper/mnt/3dcddaee736017082fedb0996e42b4c7b00fe7b850d9a12c81ef1399fa00dfa5
shm              356K     1  356K    1% /var/lib/docker/containers/9f2bf4e2736d5128d6c240bb10da977183676c081ee07789bee60d978222b938/mounts/shm
/dev/dm-5         10M   27K   10M    1% /var/lib/docker/devicemapper/mnt/65a2bf48cbbfe42f0c235493981e62b90363b4be0a2f3aa0530bbc0b5b29dbe3
shm              356K     1  356K    1% /var/lib/docker/containers/e53d5ababfdefc5c8faf65a4b2d635e2543b5a807b65a4f3cd8553b4d7ef2d06/mounts/shm
/dev/dm-9         10M   53K   10M    1% /var/lib/docker/devicemapper/mnt/3216c48346c3702a5cd2f62a4737cc39666983b8079b481ab714cdb488400b08
shm              356K     1  356K    1% /var/lib/docker/containers/5cd0774a742f54c7c4fe3d4c1307fc93c3c097a861cde5f611a0fa9b454af3dd/mounts/shm
/dev/dm-10        10M  5.2K   10M    1% /var/lib/docker/devicemapper/mnt/6a98acd1428ae670e8f1da62cb8973653c8b11d1c98a8bf8be78f59d2ddba062
shm              356K     1  356K    1% /var/lib/docker/containers/a878042353f6a605167e7f9496683701fd2889f62ba1d6c0dc39c58bc03a8209/mounts/shm
tmpfs            356K     1  356K    1% /run/user/0

EDIT-3: postgres container service:

version: "3.5"
services:

#other containers go here..

 postgres:
    restart: always
    image: postgres:10
    hostname: postgres
    container_name: fiware-postgres
    expose:
      - "5432"
    ports:
      - "5432:5432"
    networks:
      - default
    environment:
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_USER=postgres"
      - "POSTGRES_DB=postgres"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      shm_size: '4gb'

Database size:

postgres=# SELECT pg_size_pretty( pg_database_size('postgres'));
 pg_size_pretty
----------------
 42 GB
(1 row)

EDIT-4

Sorry, but none of the workarounds related to this question actually work, including this one. On the dashboard, I have 5 panels intended to display data points. The queries are similar, except that each one selects a different parameter: temperature, relativeHumidity, illuminance, particles and O3. This is the query:

SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
avg(attrvalue::float) as illuminance
FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;

The difference is in the WHERE attrname=#parameterValue clause. I modified the postgresql.conf file to write logs, but the logs don't seem to provide helpful hints. Here are the logs:

$ vim postgres-data/log/postgresql-2019-06-26_150012.log
.
.
2019-06-26 15:03:39.298 UTC [45] LOG:  statement: SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as o3
        FROM urbansense.airquality WHERE attrname='O3' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.903 UTC [41] ERROR:  could not resize shared memory segment "/PostgreSQL.1197429420" to 12615680 bytes: No space left on device
2019-06-26 15:03:40.903 UTC [41] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.905 UTC [42] FATAL:  terminating connection due to administrator command
2019-06-26 15:03:40.905 UTC [42] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.909 UTC [43] FATAL:  terminating connection due to administrator command
2019-06-26 15:03:40.909 UTC [43] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800) as time,
        avg(attrvalue::float) as illuminance
        FROM urbansense.weather WHERE attrname='illuminance' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:03:40.921 UTC [1] LOG:  worker process: parallel worker for PID 41 (PID 42) exited with exit code 1
2019-06-26 15:03:40.922 UTC [1] LOG:  worker process: parallel worker for PID 41 (PID 43) exited with exit code 1
2019-06-26 15:07:04.058 UTC [39] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp39.0", size 83402752
2019-06-26 15:07:04.058 UTC [39] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time,
        avg(attrvalue::float) as relativeHumidity
        FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:07:04.076 UTC [40] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp40.0", size 83681280
2019-06-26 15:07:04.076 UTC [40] STATEMENT:  SELECT to_timestamp(floor((extract('epoch' from recvtime)/ 1800 )) * 1800)as time,
        avg(attrvalue::float) as relativeHumidity
        FROM urbansense.weather WHERE attrname='relativeHumidity' AND attrvalue<>'null' GROUP BY time ORDER BY time asc;
2019-06-26 15:07:04.196 UTC [38] LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp38.0", size 84140032

Does anyone have an idea how to solve this?

arilwan

6 Answers

148

This is because Docker, by default, restricts the size of shared memory to 64 MB.

You can override this default value with the --shm-size option of docker run:

docker run -itd --shm-size=1g postgres

or in docker-compose:

db:
  image: "postgres:11.3-alpine"
  shm_size: 1g
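After recreating the container, you can check which limit it was actually created with. A minimal sketch, assuming the container name fiware-postgres from the question (docker inspect reports the value in bytes):

docker inspect --format '{{ .HostConfig.ShmSize }}' fiware-postgres

For shm_size: 1g this should print 1073741824.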

Check this out. More info here.

mchawre
  • I modified the compose file, adding `shm_size: '4gb'`; however, this doesn't solve the problem (question edit 3). Does this have to do with the container volume or Docker working memory? Because my database size is above 40 GB, as can be seen in edit-3. – arilwan Jun 25 '19 at 15:16
  • But after this change, did you get the same `pq: could not resize shared memory segment` error? – mchawre Jun 25 '19 at 15:26
  • 2
    Yes, I still get it. Even after setting `shm_size: '50gb'` there is no change; most recent: `pq: could not resize shared memory segment "/PostgreSQL.1336373456" to 12615680 bytes: No space left on device` – arilwan Jun 25 '19 at 15:34
  • Have you gone through this similar question? https://stackoverflow.com/questions/55803015/google-cloud-sql-pg11-could-not-resize-shared-memory-segment Try out the solution mentioned there if possible. – mchawre Jun 25 '19 at 15:37
  • I am having the same problem, and it does not seem to be a problem with database settings (workers, memory, etc.) – PCamargo Oct 04 '19 at 16:05
  • @PCamargo try increasing the shared memory size of your `postgres` container (which is 64 MB by default). – arilwan Oct 04 '19 at 17:10
  • @arilwan it would be good if you posted a separate question with all the details. – mchawre Oct 07 '19 at 09:08
  • @mchawre I'm just trying to tell PCamargo how my problem was solved, not a new question altogether. – arilwan Oct 07 '19 at 12:07
  • 1
    It actually worked, @arilwan. I had to create a new container (no success in changing the existing one), but it worked like a beauty! – PCamargo Oct 07 '19 at 12:11
  • @arilwan sorry, that was supposed to be asked of PCamargo – mchawre Oct 07 '19 at 14:39
28

Sorry for the late reply. Building a new image is not necessary, but you must make sure the container is recreated and that you are not still using the old one. The docker-compose file must be changed by adding shm_size at the service level; the build section is not necessary.

version: "3.5"
services:

#other containers go here..

 postgres:
    restart: always
    image: postgres:10

    #THIS MUST BE ADDED AT SERVICE LEVEL
    shm_size: 1gb 

    hostname: postgres
    container_name: fiware-postgres
    expose:
      - "5432"
    ports:
      - "5432:5432"
    networks:
      - default
    environment:
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_USER=postgres"
      - "POSTGRES_DB=postgres"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data

Then you must completely recreate the container

docker-compose rm postgres 
# alternatively you can docker-compose down to destroy all containers
docker-compose up -d

to destroy the old container and create a new one.

You can check the change inside the container (enter it with docker-compose exec postgres bash) and run df -h | grep shm.
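For example, a minimal sketch of that check, using the service name postgres from the question's compose file (-T disables pseudo-TTY allocation so the output can be piped):

docker-compose exec -T postgres df -h | grep shm

The shm line should now show the size you configured instead of the default 64M.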

Reference: docker SHM_SIZE /dev/shm: resizing shared memory

Javier Dottori
  • 1
    Excellent. Instead of `docker-compose down` you can also do `docker-compose rm postgres` to remove the pre-existing Postgres container. – The Alchemist May 22 '20 at 16:57
  • @Javier Dottori, the solution is OK, but not the shm check command. It should be `docker exec postgres df -h | grep shm`, or `docker exec -it postgres bash` and, once the terminal is connected, run `df -h | grep shm`. – André Carvalho Dec 10 '21 at 13:07
  • Hi André, thanks for the suggestion. Anyway, if the container is created with `docker-compose` it's better to use its wrappers so you don't need to assign a name to it. If you use named containers, you need to be careful that multiple docker-compose files don't share names across your machine. `docker-compose exec` is mostly a wrapper for `docker exec -ti`, and if you want to use the native `docker` command you should use the `container_name`, which is `fiware-postgres` (I kept it because it was in the original question). – Javier Dottori Dec 13 '21 at 21:23
  • It's good. You update the docker-compose file and run docker-compose up -d. – LokiAlice Dec 22 '22 at 08:04
9

You can increase the shm size by remounting it, without restarting or rebuilding the container:

mount -o remount,size=256m -t tmpfs /var/lib/docker/containers/your-container-id/mounts/shm

Change your-container-id and the needed size (256m).
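To find the full container id, something along these lines should work (fiware-postgres is the container name from the question; substitute your own):

docker ps --no-trunc --filter "name=fiware-postgres" --format '{{ .ID }}'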

  • Works nicely! I was inspecting a long-running process I did not want to interrupt and ran out of shared memory. Gave it a try and it worked. – Stephan Jul 30 '20 at 08:10
  • 1
    It's better to use this together with adding `shm_size: 256m` to docker-compose.yml. Sometimes after restarting the container I needed to remount it again. Don't know why, but it happens. – Николай Агеев Jul 30 '20 at 08:53
  • 1
    But then my task crashed anyway; for the moment it was a quick solution. – Stephan Jul 31 '20 at 19:18
1

After increasing shm_size did not help me, I turned to DB optimization and found that adding some missing indexes solved this error in my particular case.

Specifically, I had several views that used joins on fields without indexes, and once the DB reached its typical size (1M rows on the "many" side of the joins) these errors began.
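Purely as an illustration of the idea, using the schema from the question (the index name here is made up; pick the columns that your own joins and filters actually use):

docker-compose exec postgres psql -U postgres -c "CREATE INDEX IF NOT EXISTS weather_attrname_idx ON urbansense.weather (attrname);"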

Greg Lyon
1

This error message about shared memory refers to Postgres's own work_mem setting, whose default is just 4 MB.

You can specify it in the command:

version: '3.3'
services:
  db:
    image: postgres:12
    command:
     - -c
     - work_mem=64MB

It looks like you need at least 13 MB there.
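To double-check what the running server ended up with, something like this can be used (service name db as in the snippet above; the image's default postgres superuser is assumed):

docker-compose exec db psql -U postgres -c "SHOW work_mem;"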

0

You need to build a new image with an increased shm_size.

Dockerfile

FROM postgres:10.7

docker-compose.release.yml

version: '3.7'
services:
  postgres:
    image: registry.my-site.com/postgres:latest
    build:
      shm_size: '4gb'

Run:

docker-compose -f docker-compose.release.yml build

Then you can use your image registry.my-site.com/postgres:latest to deploy a container with an increased shm_size.
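For instance, a possible way to (re)create the container from that image, assuming the same compose file:

docker-compose -f docker-compose.release.yml up -d postgres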

Ryabchenko Alexander
  • 3
    Building a custom image is not necessary, because it's a setting during container creation time (`docker run --rm -it --shm-size 2gb....`). – The Alchemist May 22 '20 at 16:58