
Let's say I am going to deploy a server application that's likely to be placed behind a NAT/firewall, and I don't want to ask users to tweak their NAT port mappings. In other words, inbound connections to the server are impossible, yet my app is a server application by nature, i.e. it sends back objects in response to URIs.

Now, I'm thinking about initiating connections from the server periodically to see what requests are waiting to be responded to. I'm going to use HTTP over port 80, as that is likely to work through NAT/firewalls from virtually anywhere.

The question is, are there any standard considerations and common practices of implementing a client that can act as a server at the application level, specifically using HTTP? Any special HTTP headers? Design patterns?

E.g. I am thinking about the following scheme:

  • The client (which is my logical server) sends a dummy HTTP request to the server
  • The server responds with non-standard headers X-Request-URI:, X-Host:, X-If-Modified-Since:, etc. — in other words, the request headers wrapped into X-xxx form, since they are not standard in this situation; it also asks to keep the connection alive
  • The client replies with a POST request that sends the requested object, again using wrapped headers (e.g. X-Status:, etc.)
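The wrapping/unwrapping step above could be sketched roughly like this (a minimal sketch; the X- header names follow the scheme in the question, and the function names are made up for illustration):

```python
X_PREFIX = "X-"

def unwrap_request(relay_headers):
    """Turn the relay's X-* response headers back into an ordinary
    HTTP request description for the logical server to handle.
    Non-wrapped headers (e.g. Connection:) are ignored."""
    return {
        name[len(X_PREFIX):]: value
        for name, value in relay_headers.items()
        if name.startswith(X_PREFIX)
    }

def wrap_response(status, headers):
    """Wrap the logical server's response status and headers into
    X-* request headers for the POST back to the relay."""
    wrapped = {"X-Status": str(status)}
    for name, value in headers.items():
        wrapped[X_PREFIX + name] = value
    return wrapped
```

For example, a relay response carrying `X-Request-URI: /index.html` would unwrap to a request for `/index.html`, and a `200` reply with `Content-Type: text/html` would wrap into `X-Status: 200` plus `X-Content-Type: text/html` on the outgoing POST.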

Unless there is a more "standard" way of doing something like this, do you think my approach is plausible?

Edit: an interesting discussion took place on reddit here

mojuba

1 Answer


I've done something similar; this is very common. The client initiates the connection to the server and keeps the connection alive. If the session shuts down, the client re-initiates it. While the session is up, the server can push anything to the client, since the connection was client-initiated.
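The reconnect behavior described above could look something like this (a sketch only; `connect` and `handle` are hypothetical callables standing in for the real transport and the logical server's request handler):

```python
def run_client(connect, handle, attempts=3):
    """Keep a client-initiated session alive. `connect()` returns an
    iterable of messages pushed by the server over the open session;
    if the session drops (ConnectionError), re-initiate it, up to
    `attempts` total connection attempts. Returns the number of
    messages handled."""
    received = 0
    for _ in range(attempts):
        try:
            for message in connect():   # client initiates; server pushes
                handle(message)
                received += 1
        except ConnectionError:
            continue                    # session dropped: re-initiate
    return received
```

In a real deployment the retry loop would want backoff and jitter so that many clients reconnecting at once don't stampede the server.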

Kenny Lim