
I'm trying to build a simple Golang/App Engine app which uses a channel to handle each HTTP request. The reason is that each request performs a reasonably large in-memory calculation, and it's important that each request is handled in a thread-safe manner (i.e. calculations from concurrent requests don't get mixed).

Essentially I need a synchronous queue which will only process one request at a time, and channels look like a natural fit.
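For reference, a minimal sketch of that pattern (the names `job` and `startWorker` are illustrative, not from the app): a single goroutine owns the calculation and drains a channel, so jobs are processed strictly one at a time and each caller blocks on its own reply channel.

```go
package main

import "fmt"

// job carries the inputs plus a reply channel, so the caller can
// block until the single worker has processed this job.
type job struct {
	a, b   int64
	result chan int64
}

// startWorker launches one goroutine that drains the jobs channel.
// Because only this goroutine touches the calculation, access to
// any state it owns is serialized.
func startWorker() chan<- job {
	jobs := make(chan job)
	go func() {
		for j := range jobs {
			j.result <- j.a + j.b // stand-in for the large calculation
		}
	}()
	return jobs
}

func main() {
	jobs := startWorker()
	res := make(chan int64)
	jobs <- job{a: 1, b: 2, result: res}
	fmt.Println(<-res) // prints 3
}
```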

Is it possible to use Go's buffered channel as a thread-safe queue?

However, I can't get my simple hello-world example to work. It seems to fail on the line `go process(w, cr)`; I get a 200 response from the server, but no content. It works fine if I remove `go` from that line, but then I'm guessing I'm not using the channel correctly.

Anyone point out where I'm going wrong ?

Thanks!

// curl -X POST "http://localhost:8080/add" -d "{\"A\":1, \"B\":2}"

package hello

import (
    "encoding/json"
    "net/http"  
)

type MyStruct struct {
    A, B, Total int64
}

func (s *MyStruct) add() {
    s.Total = s.A + s.B
}

func process(w http.ResponseWriter, cr chan *http.Request) {
    r := <- cr
    var s MyStruct
    json.NewDecoder(r.Body).Decode(&s)
    s.add()
    json.NewEncoder(w).Encode(s)
}

func handler(w http.ResponseWriter, r *http.Request) {  
    cr := make(chan *http.Request, 1)
    cr <- r
    go process(w, cr) // doesn't work; no response :-(
    // process(w, cr) // works, but blank response :-(
}

func init() {
    http.HandleFunc("/add", handler)
}
Justin
    Just an FYI - the one you said returns a blank response works fine for me. Your issue though is that Go will flush the response given that it thinks it has finished doing its job. Using `go` here to fire off a goroutine will mean your processing code is running _after_ the request has been flushed. That said, `ListenAndServe` will handle concurrency for you. It literally fires a goroutine in a loop per request (that is, your handler has been fired as a goroutine). So what else are you trying to add to that? – Simon Whitehead Sep 08 '14 at 10:43
  • ListenAndServe sounds like it might be what I'm looking for; but are the goroutines it fires for each request thread safe ? Or do I need to use them with channels ? – Justin Sep 08 '14 at 11:31
  • 1
    No they aren't thread safe - they just fire off goroutines and continue listening. That is really only an issue though if you have shared global state - which it doesn't appear you do in your example above. That said, your large calculation could have some global state.. so it could be an issue for you. – Simon Whitehead Sep 08 '14 at 11:53
  • Very useful, thx. There is no global state, but the calc contains some maps which I know aren't thread-safe; using http.HandleFunc in conjunction with the calc appears to lead to threading errors, i.e. calc results are all wrong/mixed up. You think using ListenAndServe/goroutines might fix it? [maybe because http.HandleFunc is doing everything in the same thread/goroutine?] – Justin Sep 08 '14 at 12:25
  • 1
    I think you're making this more complicated than it needs to be. Why not just use maps local to the request-handler? Synchronising access to global state doesn't seem like it would actually stop things getting mixed up in this case. – Greg Sep 08 '14 at 13:24
  • I agree with Greg. You're over thinking this. Utilise the built-in concurrency of the `net/http` package web server and stress test it with something like Apache's `ab` tool. If its a problem, then work towards more concurrency within your actual operation. For now though, you get concurrent request handling out of the box. – Simon Whitehead Sep 08 '14 at 23:05

2 Answers


I'm not sure this is the right design, but I suspect the issue is that when you start the second goroutine, the first goroutine continues, finishes writing the response and closes the connection.

To stop this you can make the first goroutine wait using a sync.WaitGroup (http://golang.org/pkg/sync/#WaitGroup).

This defeats the whole reason you're trying to push this onto a goroutine in the first place (hence why I think you've got a design issue).

Here is some untested code that should work, or at least point in the right direction.

package main

import (
    "encoding/json"
    "net/http"
    "sync"  
)

type MyStruct struct {
    A, B, Total int64
}

func (s *MyStruct) add() {
    s.Total = s.A + s.B
}

func process(w http.ResponseWriter, cr chan *http.Request) {
    r := <- cr
    var s MyStruct
    json.NewDecoder(r.Body).Decode(&s)
    s.add()
    json.NewEncoder(w).Encode(s)
}

func handler(w http.ResponseWriter, r *http.Request) {  
    cr := make(chan *http.Request, 1)
    cr <- r
    var pleasewait sync.WaitGroup
    pleasewait.Add(1)

    go func() {
        defer pleasewait.Done()
        process(w, cr)
    }()

    pleasewait.Wait()
}

func main() {
    http.HandleFunc("/add", handler)
    http.ListenAndServe(":8080", nil)
}
DanG
  • Doesn't this just make it synchronous anyway? The handler will already be fired off in a goroutine via the `net/http` package.. Also, the channel access here is just a blocking pull from the channel.. – Simon Whitehead Sep 08 '14 at 10:48
  • Yep. It's the right answer to the wrong solution. I think the "enterprise" solution for this sort of thing is to use message queues. An alternative is that the request is made and the response is a 200 thanks-for-the-request sending back an id. The client then polls the server with the id, and when the result is ready the server sends it; if it's not ready, it sends a please-try-later. – DanG Sep 08 '14 at 10:55
  • 1
    A message queue is certainly not something you would use to serve a simple GET request. At least I hope not. – Simon Whitehead Sep 08 '14 at 10:56
  • It's not the simple GET request that will be the issue; it depends on what large calculation is being made. What would stop you kicking off 100 client connections and bringing the server to its knees? I still stick to the belief that this is the incorrect design. – DanG Sep 08 '14 at 11:07
  • For design assistance I'd read through: http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html – DanG Sep 08 '14 at 11:16
  • You're both right. A task queue is the right way to do it. However the calculation is reasonably small (couple of 100ms), there's only one client (a mapreduce process) and doing it in Go is a load more performant than doing it in Python. But yes, it's a hacked design. DanG, appreciate the possible solution, will take a look. Thx – Justin Sep 08 '14 at 12:40
  • What is the difference between process(w http.ResponseWriter, r *http.Request) and process(w http.ResponseWriter, cr chan *http.Request)? – cheks Apr 18 '18 at 12:18

If the large computation does not use shared mutable state, then write a plain handler. There's no need for channels and whatnot.

OK, suppose the large computation does use shared mutable state. If there's only one instance of the application running, then use sync.Mutex to control access to the mutable state. This is simple compared to shuffling the work off to a single goroutine to process the computations one at a time.
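A minimal sketch of that approach (the `cache` type and its methods are hypothetical, standing in for whatever maps the calculation shares): the mutex ensures only one goroutine touches the map at a time, which is all a concurrent handler needs.

```go
package main

import (
	"fmt"
	"sync"
)

// cache is hypothetical shared mutable state; the mutex guards the
// map, which is not safe for concurrent use on its own.
type cache struct {
	mu   sync.Mutex
	seen map[string]int64
}

func (c *cache) record(key string, v int64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.seen[key] = v
}

func (c *cache) get(key string) int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.seen[key]
}

func main() {
	c := &cache{seen: make(map[string]int64)}
	var wg sync.WaitGroup
	// 100 concurrent writers, as concurrent handlers would be.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int64) {
			defer wg.Done()
			c.record("total", i) // safe under concurrent access
		}(int64(i))
	}
	wg.Wait()
	fmt.Println(len(c.seen)) // prints 1
}
```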

Are you running on App Engine? You might not be able to guarantee that there's a single instance of the application running. You will need to use the datastore or memcache for mutable state. If the computation can be done offline (after the request completes), then you can use App Engine Task Queues to process the computations one at a time.

A side note: The title proposes a solution to the problem stated in the body of the question. It would be better to state the problem directly. I would comment above on this, but I don't have the juice required.

Simon Fox