
By default, Docker uses a shm size of 64MB if not specified, but that can be increased in Docker with --shm-size=256m.

How should I increase the shm size of a Kubernetes container, or use the equivalent of Docker's --shm-size in Kubernetes?
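For reference, the Docker flag mentioned above works like this (the image and size are just examples; running it requires a local Docker daemon):

```
# Start a container with a 256 MB /dev/shm instead of Docker's 64 MB default,
# and print the mounted size to confirm
docker run --rm --shm-size=256m alpine df -h /dev/shm
```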

anandaravindan

3 Answers


I originally bumped into this post coming from Google and went through the whole Kubernetes issue and the OpenShift workaround, only to later find the much simpler solution listed in another Stack Overflow answer.

Glenn Vandamme
  • this should be the accepted answer. Very infuriating that we are greeted with an accepted answer that says "IMPOSSIBLE" – YoniXw Nov 09 '20 at 15:06

Adding the lines below to deployment.yaml brought up my container, which had been failing.
Basically, it mounts an emptyDir volume at /dev/shm and sets the volume's medium to Memory:

   spec:
     containers:
       - name: solace-service
         image: solace-pubsub-standard:latest
         volumeMounts:
           - mountPath: /dev/shm
             name: dshm
         ports:
           - containerPort: 8080
     volumes:
       - name: dshm
         emptyDir:
           medium: Memory
Sanoj
  • it should be without asterisks (medium: Memory) – diman82 Mar 11 '21 at 23:58
  • done..i guess it was formatted for some reason – Sanoj Mar 12 '21 at 08:15
  • You can usually also add `sizeLimit: 1Gi` to the `emptyDir:` to set a fixed size. By default I think it gets 50% as a limit, but there is a beta feature gate that allows you to specify your own limit. – Greg Bray Oct 12 '22 at 01:16
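Following the comment above, a sketch of the same volume with an explicit cap on /dev/shm (the 1Gi value is just an example; pick a limit that suits your workload):

   volumes:
     - name: dshm
       emptyDir:
         medium: Memory
         sizeLimit: 1Gi   # /dev/shm is evicted/limited at 1 GiB instead of the default

Note that since the medium is Memory, whatever the pod writes to /dev/shm counts against its memory usage, so the limit should fit within the container's memory limits.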

It's not possible to do this in a Kubernetes pod. See this issue.

There is a workaround from OpenShift mentioned in the comments, but it may not be ideal.

EDIT: at the time this question was asked, this was not possible. It now is; see https://stackoverflow.com/a/47921100/645002 for the correct answer.

jaxxstorm