
I am dealing with CRDs and creating custom resources. I need to keep a lot of information about my application in the custom resource. According to the official docs, etcd works with requests up to 1.5 MB. I am hitting errors like:

"error": "Request entity too large: limit is 3145728"

I believe the limit specified in the error is 3 MB. Any thoughts on this? Is there any way around this problem?
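As a quick sanity check, you can measure how close a manifest is to that limit before applying it. This is just a sketch: the limit value comes from the error message above, and `my-custom-resource.yaml` is a placeholder for your CR manifest.

```shell
# Compare a manifest's byte size against the apiserver's request limit
# (3145728 bytes, per the error message) before applying it.
LIMIT=3145728
size=$(wc -c < my-custom-resource.yaml)
if [ "$size" -gt "$LIMIT" ]; then
  echo "too large: $size bytes (limit $LIMIT)"
else
  echo "ok: $size bytes"
fi
```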

Penny Liu
Yudi
    The way Argo solves that problem is by using compression on the stored entity, but the real question is whether you **have** to have all 3MB worth of that data at once, or if it is merely more convenient for you and they could be decomposed into separate objects with relationships between each other. The kubernetes API is not a blob storage, and shouldn't be treated as one – mdaniel Mar 01 '20 at 22:47
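The compression approach mentioned in that comment can be roughly illustrated in shell: gzip the large payload and base64-encode it before storing it in a resource field. The file names and sample payload here are made up, and Argo's real implementation does this inside the controller rather than in a script.

```shell
# Generate a 1 MiB sample payload, then compress and encode it the way
# a controller might before storing it in a resource field.
head -c 1048576 /dev/zero | tr '\0' 'x' > payload.json
gzip -c payload.json | base64 > payload.gz.b64
echo "original: $(wc -c < payload.json) bytes"
echo "compressed+encoded: $(wc -c < payload.gz.b64) bytes"
```

Compression only buys headroom, though; it doesn't change the underlying limit, which is why decomposing the data into separate objects is the more robust fix.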

4 Answers

  • The "error": "Request entity too large: limit is 3145728" is most likely the default response from the Kubernetes handler for objects larger than 3 MB, as you can see at L305 of the source code:
expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
  • etcd does indeed have a 1.5 MB limit for processing a request, and you will find in the etcd documentation a suggestion to try the --max-request-bytes flag, but it would have no effect on a GKE cluster because you don't have that permission on the master node.

  • But even if you did, it would not be ideal, because this error usually means you are embedding objects instead of referencing them, which degrades performance.

I highly recommend that you consider instead these options:

  • Determine whether your object includes references that aren't used;
  • Break up your resource;
  • Consider a volume mount instead;
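The "break up your resource" suggestion can be sketched at the file level with `split`: divide the large payload into chunks below the limit, each of which could then be stored as its own object (e.g. one ConfigMap per chunk) with references between them. The file and prefix names here are only examples.

```shell
# Split a large data file into 1 MiB pieces: chunk-00, chunk-01, ...
# Each piece is safely under the 3145728-byte request limit.
split -b 1048576 -d large-data.json chunk-
ls chunk-*
```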

There's a request for a new API resource: File (or BinaryData) that could apply to your case. It's very fresh, but it's worth keeping an eye on.

If you still need help let me know.

Will R.O.F.
    Thanks. yes, setting the --max-request-byte is not really a good idea and k8s cluster is not in my control so that option is pretty much not possible for me. I am working on breaking up the structure. – Yudi Mar 03 '20 at 05:29
  • @Yudi if my answer was useful to you, consider upvoting/accepting. Thanks and good luck! – Will R.O.F. Mar 03 '20 at 10:00
  • the source code https://github.com/kubernetes/kubernetes/blob/d2c5779dadc9ed7a462c36bc280b2f9a200c571e/staging/src/k8s.io/apiserver/pkg/server/config.go#L367 – wow qing Jun 25 '22 at 14:21
  • You should not try this as the first step to fix the issue, check if your repo has large files that are getting packaged. – Vishrant Aug 24 '22 at 14:56

This happened to me when I put some large files in my Helm chart directory. Removing those files helped me resolve my issue.

d3vpasha

Check the size of the files in the directory that contains the templates and values.yaml of your chart's release (the directory is usually named charts).

du <directory-path> --max-depth=1
# if you want it to be more readable add -h switch
du -h <directory-path> --max-depth=1

Make sure you do not have any irrelevant files there if the total size exceeds 3145728 bytes. (source)
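To spot the oversized files quickly, you can sort the du output by size; this is just a convenience one-liner, and CHART_DIR is a placeholder for your chart directory.

```shell
# List the ten largest entries under the chart directory, biggest first,
# so unexpectedly large files stand out.
CHART_DIR=.
du -ah "$CHART_DIR" | sort -rh | head -n 10
```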

Mostafa Ghadimi

If you are using Helm, check whether you have large files, like log files, in your chart directory. Add a .helmignore file:

.DS_Store
# Common VCS dirs
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

# Log files
*.log
Vishrant