
I am building a high-load web statistics system based on embedding an <img> tag in client sites. What I want to do is:

  1. nginx receives a request for an image from some host
  2. it answers the host with a small 1px static image from the filesystem
  3. at the same time it somehow passes the request's headers to an application and closes the connection to the host

I am working with Ruby and I'm going to make a pure-Rack app to get the headers and put them into a queue for further processing.

The problem I can't solve is: how can I configure nginx to hand the headers to the Rack app, and return the static image as the reply without waiting for a response from the Rack application?

Also, Rack is not required if there is a more common Ruby solution.
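The pure-Rack side can stay very small. A minimal sketch (the app name and the in-memory queue are illustrative; a real deployment would push to something durable like Redis or RabbitMQ):

```ruby
require "json"

# In-memory, thread-safe FIFO from Ruby's stdlib; stands in for a real
# message queue (Redis, RabbitMQ, ...) in this sketch.
HEADER_QUEUE = Queue.new

# A Rack app is just an object that responds to #call(env).
TrackerApp = lambda do |env|
  # Rack exposes the request headers as env keys prefixed with HTTP_
  headers = env.select { |k, _| k.start_with?("HTTP_") }
  HEADER_QUEUE << headers.to_json

  [200, { "Content-Type" => "text/plain" }, ["ok"]]
end
```

A `config.ru` containing `run TrackerApp` lets you start it with `rackup`; a worker thread (or separate process) would then drain `HEADER_QUEUE` for the actual calculations.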

sandrew
    This is not really an answer to your _specific_ question, but if you don't find another solution, you may consider logging the requests to a file and then parsing that file later with Ruby – Michelle Tilley Dec 04 '11 at 08:27
  • Brandon, thanks, this solution is rather interesting, but I'm afraid it won't scale well. – sandrew Dec 04 '11 at 12:14
  • Why do you need an immediate response from the server? I mean the client is waiting for a transparent 1x1px GIF image, so for the end user experience it is unnoticeable... – lwe Dec 19 '11 at 08:43
  • @lwe For the case when the page cannot be shown until all of its content is downloaded. You have probably noticed that sometimes sites take 5-6 seconds or more to open because of a slow CDN or Google Analytics. – sandrew Dec 21 '11 at 01:52

4 Answers


A simple option is to terminate the client connection ASAP while proceeding with the backend process.

server {
    location /test {
        # map 402 error to backend named location
        error_page 402 = @backend;

        # returning 402 triggers the error_page above,
        # which hands the request off to @backend
        return 402;
    }

    location @backend {
        # close client connection after 1 second
        # Not bothering with sending gif
        send_timeout 1;

        # Pass the request to the backend.
        proxy_pass http://127.0.0.1:8080;
    }
}

The option above, while simple, may result in the client receiving an error response when the connection is dropped. The ngx.say call (a Lua API function, not a directive) ensures a "200 OK" header is sent first, and since it only buffers output asynchronously, it does not hold things up. This needs the ngx_lua module.

server {
    location /test {
        content_by_lua '
            -- send a dot to the user and transfer request to backend
            -- ngx.say is an async call so processing continues after without waiting
            ngx.say(".")
            res = ngx.location.capture("/backend")
        ';
    }

    location /backend {
        # named locations not allowed for ngx.location.capture
        # needs "internal" if not to be public
        internal;

        # Pass the request to the backend.
        proxy_pass http://127.0.0.1:8080;
    }

}

A more succinct Lua based option:

server {
    location /test {
        rewrite_by_lua '
            -- send a dot to the user
            ngx.say(".")

            -- exit rewrite_by_lua and continue the normal event loop
            ngx.exit(ngx.OK)
        ';
        proxy_pass http://127.0.0.1:8080;
    }
}

Definitely an interesting challenge.

Dayo

After reading about post_action here and the post "Serving Static Content Via POST From Nginx" (http://invalidlogic.com/2011/04/12/serving-static-content-via-static-from-nginx/ — sic, http://invalidlogic.com/2011/04/12/serving-static-content-via-post-from-nginx/), I accomplished this using:

server {
  # this is to serve a 200.txt static file 
  listen 8888;
  root /usr/share/nginx/html/;
}
server {
  listen 8999;
  location / {
    rewrite ^ /200.txt break;
  }

  error_page 405 =200 @405;
  location @405 {
    # post_action, after this, do @post
    post_action @post;
    # this nginx serving a static file 200.txt
    proxy_method GET;
    proxy_pass http://127.0.0.1:8888;
  }

  location @post {
    # this will go to an apache-backend server.
    # it will take a long time to process this request
    proxy_method POST;
    proxy_pass http://127.0.0.1$request_uri;  # $request_uri already starts with "/"
  }
}
javierwilson

You may be able to accomplish this with post_action (I'm not entirely sure it will work, but it's the only thing I can think of):

server {
  location / {
    post_action @post;
    rewrite ^ /1px.gif break;
  }

  location @post {
    # Pass the request to the backend.
    proxy_pass http://backend$request_uri;

    # Using $request_uri with the proxy_pass will preserve the original request,
    # if you use (fastcgi|scgi|uwsgi)_pass, this would need to be changed.
    # I believe the original headers will automatically be preserved.
  }
}
kolbyjack
  • I do this with post_action. http://wiki.nginx.org/HttpEmptyGifModule is faster to serve a 1x1 gif. – greg Jan 04 '13 at 08:44
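Combining the comment's suggestion with the answer above, an untested sketch: the embedded ngx_http_empty_gif_module serves the pixel straight from memory, while post_action replays the request to the backend (the `backend` upstream name is illustrative):

```nginx
location / {
    # answer immediately with a built-in 1x1 transparent GIF from memory
    empty_gif;

    # after the response is complete, replay the request to the backend
    post_action @post;
}

location @post {
    proxy_pass http://backend$request_uri;
}
```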

Why not make use of X-Accel-Redirect (http://wiki.nginx.org/XSendfile)? You could forward the request to your Ruby app, which then just sets a response header, and nginx returns the file.

Update: for a 1x1px transparent GIF it's probably easier to store the data in a variable and return it to the client directly (honestly, it's that small), so I think X-Accel-Redirect is overkill in this case.
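The inline-data idea can be sketched as a pure-Rack app (the base64 string is the widely used 1x1 transparent GIF; the app name is illustrative):

```ruby
# The well-known 1x1 transparent GIF, decoded once and kept in memory;
# no filesystem access or X-Accel-Redirect needed for something this small.
# String#unpack1("m") decodes base64 without requiring the base64 gem.
PIXEL = "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7".unpack1("m")

PixelApp = lambda do |_env|
  [200,
   { "Content-Type"   => "image/gif",
     "Content-Length" => PIXEL.bytesize.to_s,
     "Cache-Control"  => "no-cache" },
   [PIXEL]]
end
```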

lwe