
I have a small server which has to handle a lot of files. The files are sent to the server via PUT requests. I am using the net/http package for the server. I know that a goroutine is started for every request, but the problem is that after a request has finished, the memory used by the handler is not released.
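
For illustration, here is a minimal sketch of this kind of handler (made-up names, not my actual code); it reads the whole body into memory, which is why each request costs as much RAM as the file is big:

```go
package main

import (
	"io/ioutil"
	"net/http"
)

// handlePut receives one file per PUT request. net/http runs each
// request in its own goroutine. This sketch reads the whole body
// into memory, so a 100 MB upload costs roughly 100 MB of RAM.
func handlePut(w http.ResponseWriter, r *http.Request) {
	data, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	_ = data // process the file here
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handlePut)
	http.ListenAndServe(":8080", nil)
}
```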

The server is supposed to run on a Raspberry Pi 3 with 1 GB of memory. The problem is that it runs out of memory when I send a lot of files, and at that point I cannot wait for the garbage collector to release the memory.

These two questions are about the problem:

Why is the memory block not cleaned by the garbage collector?

Go 1.3 Garbage collector not releasing server memory back to system

But neither of them solves my problem.

Now the question is: is there a way to mark an HTTP handler as completely finished, so that the garbage collector releases the memory of the calling goroutine? I tried adding a return at the end of the handler, but that doesn't work; I am still running out of memory.

  • Also check out this: [Golang - Cannot free memory once occupied by bytes.Buffer](http://stackoverflow.com/questions/37382600/golang-cannot-free-memory-once-occupied-by-bytes-buffer), there is [`debug.FreeOSMemory()`](https://golang.org/pkg/runtime/debug/#FreeOSMemory). You could also limit concurrent requests / file uploads/downloads. – icza May 27 '16 at 21:12
  • To be clear, are you sure the goroutine has returned and the handler is the issue? I think you can leak a goroutine much like if you fail to free an object in non-GC'd languages. – evanmcdonnal May 27 '16 at 21:13
  • Get a stack trace as you're running out of memory and see what goroutines are still running. It also can't hurt to try the current dev version of Go if you're having trouble on ARM. – JimB May 27 '16 at 21:26
  • Can you show the code? – Mark May 27 '16 at 23:25
  • If a goroutine is opened for every request, that may very well end in disaster: with 40,000 concurrent requests there will be 40,000 goroutines, and if each goroutine consumes a lot of memory, an out-of-memory error may occur. Ideally you should have a pool of goroutines executing the requests, with the pool size decided by analyzing the resource constraints. You may submit the requests to a buffered channel, from which the goroutines of the pool pull and execute them (see the sketch after these comments). – Nipun Talukdar May 28 '16 at 04:40
  • First of all, thank you for the helpful comments. I updated the Go version: the problem was that only Go 1.3 was installed (my fault for not checking that). After updating to 1.6.2 the memory usage is fine. If a 200 MB file is uploaded, the server takes that amount of memory; after the request finishes, the 200 MB stay reserved, but when another 150 MB file is uploaded the server reuses the reserved memory. I am still testing and will write an answer when I know more. – apxp May 28 '16 at 08:50
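
For reference, here is a rough sketch of what the comments above suggest (made-up names and limit, not code from the question): limit concurrent uploads with a buffered channel used as a counting semaphore, and optionally call `debug.FreeOSMemory()` to push freed memory back to the OS sooner:

```go
package main

import (
	"io"
	"io/ioutil"
	"net/http"
	"runtime/debug"
)

// sem is a buffered channel used as a counting semaphore:
// at most cap(sem) uploads are handled at the same time.
var sem = make(chan struct{}, 4)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	sem <- struct{}{}        // acquire a slot; blocks while 4 uploads run
	defer func() { <-sem }() // release the slot when the handler returns

	// This sketch just discards the body; a real handler would store it.
	if _, err := io.Copy(ioutil.Discard, r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Optionally ask the runtime to return freed memory to the OS.
	debug.FreeOSMemory()
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}
```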

1 Answer


The short answer: I updated to Go 1.6.2. Now there is no additional memory allocation for each goroutine.

Thank you all for your comments; they brought me back on the right track.

The memory consumption came from a function which creates an MD5 hash for each uploaded file, so a 100 MB file also needs 100 MB of memory. Go 1.3 allocated new memory for every request (goroutine), so after at most 1 GB of uploaded files the Raspberry Pi ran out of memory.
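
As a sketch of the streaming alternative (not my original code; names made up): feeding the request body through the hash with io.Copy keeps per-request memory constant regardless of file size:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// handlePut hashes the upload as a stream: io.Copy feeds the body
// through the hash in small chunks, so per-request memory stays
// constant no matter how big the file is.
func handlePut(w http.ResponseWriter, r *http.Request) {
	h := md5.New()
	if _, err := io.Copy(h, r.Body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, hex.EncodeToString(h.Sum(nil)))
}

func main() {
	http.HandleFunc("/", handlePut)
	http.ListenAndServe(":8080", nil)
}
```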

Where do all the goroutines come from?

One comment was about the logic of opening a goroutine for every request. This logic is not implemented by me; it is built into the way Go's net/http package handles requests.

More about this can be found in the great open source book about building web applications with Go: Get into http package
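
You can observe this behaviour yourself with a tiny demo (made up for illustration): run the server below and fire several concurrent requests at it; the reported goroutine count grows with the number of in-flight requests:

```go
package main

import (
	"fmt"
	"net/http"
	"runtime"
	"time"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second) // keep the request in flight for a moment
		fmt.Fprintf(w, "goroutines alive: %d\n", runtime.NumGoroutine())
	})
	http.ListenAndServe(":8080", nil)
}
```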

What happens now in version 1.6.2?

For every upload the server still allocates memory. The difference is that Go 1.6.2 keeps that memory reserved after the request finishes and reuses it for other goroutines.

Example for version 1.6.2:

1st file 10 MB -> 10 MB RAM is used by the server
2nd file 5 MB -> 10 MB RAM is used by the server
3rd file 100 MB -> 100 MB RAM is used by the server
4th file 50 MB -> 100 MB RAM is used by the server

So as long as no single file is bigger than the available memory, the server should work.
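
If you want to watch this reserve-and-reuse behaviour yourself, you can log the runtime's memory statistics after each upload (a made-up diagnostic helper, not part of my server):

```go
package main

import (
	"fmt"
	"runtime"
)

// printMemStats shows how much heap is currently in use versus how
// much memory the runtime has reserved from the OS. After a large
// upload finishes, HeapInuse drops while Sys stays high: the memory
// is kept reserved and reused for later requests.
func printMemStats() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapInuse = %d MB, Sys = %d MB\n",
		m.HeapInuse/1024/1024, m.Sys/1024/1024)
}

func main() {
	printMemStats()
}
```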

apxp