71

If a webserver can send a gzip response, why can't a browser send a gzip request?

Herman
  • 3,004
  • 5
  • 37
  • 49

6 Answers

66

The client and server have to agree on how to communicate; part of this is whether the communication can be compressed. HTTP was designed as a request/response model, and its creators almost certainly envisioned small requests and potentially large responses. Compression is not required to implement HTTP; there are both servers and clients that don't support it.

HTTP compression is implemented by the client saying it can support compression; if the server sees this in the request and it supports compression, it can compress the response. To compress the request, the client would either need a "pre-request" that negotiated sending the request compressed, or compression would have to be a required encoding for all requests.
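As a concrete illustration of that one-way negotiation, here is a minimal sketch, assuming Node.js with its built-in http and zlib modules (example.com is a placeholder host): the client advertises gzip in Accept-Encoding, and only inflates the body if the server actually chose that encoding.

```typescript
import * as http from "node:http";
import * as zlib from "node:zlib";

const req = http.request(
  {
    host: "example.com",
    path: "/",
    // The client advertises which encodings it can decode...
    headers: { "Accept-Encoding": "gzip" },
  },
  (res) => {
    // ...and the server compresses only if it honoured that advertisement.
    if (res.headers["content-encoding"] === "gzip") {
      res.pipe(zlib.createGunzip()).pipe(process.stdout);
    } else {
      res.pipe(process.stdout); // server didn't compress: plain body
    }
  }
);
req.end();
```

Note that nothing in this exchange tells the client what the server could have decoded, which is exactly why the request itself travels uncompressed.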

* UPDATE Feb '17 * It's been 8 years, but as @Phil_1984_ notes, a third possible solution would be for the client and server to negotiate compression support and then use it for subsequent requests. In fact, things like HSTS work just this way, with the client caching the fact that the server expects to speak only TLS and ignoring any unencrypted links. HTTP was explicitly designed to be stateless, but we've moved beyond that at this point.
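A speculative sketch of that third option, assuming Node.js 18+ (global fetch plus the built-in zlib) and a server that answers 415 Unsupported Media Type when it can't inflate a request body; none of this is standard HTTP/1.1 behaviour, it simply mirrors how an HSTS-style per-host cache might look:

```typescript
import * as zlib from "node:zlib";

// Remember, per host, whether a gzipped request body was accepted,
// much as browsers cache an HSTS header per domain.
const acceptsGzip = new Map<string, boolean>();

async function post(url: string, body: string): Promise<Response> {
  const host = new URL(url).host;
  if (acceptsGzip.get(host) !== false) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Encoding": "gzip", "Content-Type": "text/plain" },
      body: zlib.gzipSync(body),
    });
    if (res.status !== 415) {
      acceptsGzip.set(host, true); // the server coped; keep compressing
      return res;
    }
    acceptsGzip.set(host, false); // the server balked; remember that
  }
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "text/plain" },
    body,
  });
}
```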

Peter Oehlert
  • 16,368
  • 6
  • 44
  • 48
  • So a server will simply fail if it doesn't support compression then? – jjxtra Jul 03 '14 at 04:10
  • 2
    "Compression is not required by the spec, there are both servers and clients that don't support it." The client starts by saying, "hey, I speak French, do you?" The server responds, and answers in either English or French, depending on whether it knows French or not. French in this example is compression. If as the OP asked, the client were able to start talking in French immediately, all servers would have to speak French or the system would break. The system only allows compressed responses precisely because it needs to negotiate and both systems agree. – Peter Oehlert Jul 03 '14 at 15:41
  • 1
    Very well explained. Not compressing a small request will (usually) outperform compression that requires pre-request negotiation. – Ron Apr 29 '16 at 01:56
  • 1
    A third option (conveniently missed) would be for the client/browser to remember that the server Accepts compression and then post compressed data later on in the connection if it needs to. POSTing large payloads is never the very first thing a browser does when connecting to a server anyway. – Phil Jan 29 '17 at 22:02
  • @Phil_1984_ It very much goes against the stateless design of HTTP, but at this point given all the other things that ignore that part I think you're absolutely right in adding this as an option. I'll edit the answer to note it. – Peter Oehlert Feb 17 '17 at 05:45
  • @PeterOehlert It was more of a grumble at browser devs or spec writers, I'm not sure who to blame exactly. Browsers do complicated stateful things like remembering HSTS headers per domain (for example), but they don't remember acceptable transfer encodings. Thinking about it more, I think when a client starts remembering things about a domain, you get in to the realm of caching and for how long. The spec designers just never thought of having a cached acceptable transfer encoding header. – Phil Feb 17 '17 at 15:41
  • 1
    @Phil_1984_ I think the historic context is helpful; it can be easy to forget how far we've come. In 1989 when HTTP was being designed as stateless, the 486 running at a whopping 20MHz had been announced though wouldn't really be available until the following spring. The internet hadn't really grown out of an academic space connecting universities and governments. Stateless made a lot of sense at the time. As agents (browsers) got so much more complex in the last 28 years, it's made sense to add more stateful features, especially to implement specific use cases which is where HSTS comes from. – Peter Oehlert Feb 17 '17 at 23:29
27

A client can't know in advance that a server would understand a gzipped request, but the server can know that the client will accept one.
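The comments below quote the RFC's escape hatch for this: a server that understands gzipped request bodies can simply inflate them, and one that doesn't SHOULD answer 415 Unsupported Media Type. A minimal sketch of such a server, assuming Node.js with its built-in http and zlib modules:

```typescript
import * as http from "node:http";
import * as zlib from "node:zlib";

http
  .createServer((req, res) => {
    const encoding = req.headers["content-encoding"];
    if (encoding && encoding !== "gzip") {
      res.writeHead(415); // we only know gzip; anything else is unsupported
      return res.end();
    }
    // Inflate a gzipped body transparently; pass a plain one straight through.
    const body = encoding === "gzip" ? req.pipe(zlib.createGunzip()) : req;
    const chunks: Buffer[] = [];
    body.on("data", (chunk: Buffer) => chunks.push(chunk));
    body.on("end", () => {
      res.writeHead(200);
      res.end(`got ${Buffer.concat(chunks).length} bytes\n`);
    });
  })
  .listen(8080);
```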

Paul Dixon
  • 295,876
  • 54
  • 310
  • 348
  • 26
    Not true. Content-Encoding is a permissible header for the client to supply. The RFC says: "If the content-coding of an entity in a request message is not acceptable to the origin server, the server SHOULD respond with a status code of 415 (Unsupported Media Type)." - per [Nick Johnson](http://stackoverflow.com/questions/2395440/sending-gzipped-form-data#comment-2375417) – Pacerier Jul 04 '12 at 06:59
  • 11
    What you're saying is a little different to what I was driving at. You can *try* to send a gzipped request as you suggest, but there's no way of knowing beforehand that the server will accept it (without talking to the server). That said, your point is well made: if you try to send a gzipped request, you may find the server can support it. – Paul Dixon Jul 04 '12 at 09:43
  • 6
    There are a lot of places where you know in advance that the server supports it. For example, a mobile app talking to its backend. – Guillermo Jul 10 '12 at 13:24
  • 1
    Is there a list of servers that actually support a gzipped request? – Eric Dec 27 '12 at 01:52
  • are there *any* browsers that would support it? – Brady Moritz May 15 '13 at 14:38
  • compression mostly refers to form data, in which case, the client will *always* know the server wants it compressed because the server provided the form asking for it that way. – cnd Aug 18 '16 at 14:11
7

It could, provided it could guarantee that the server would accept it. This might mean using an OPTIONS request.
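A speculative sketch of that probe, assuming a fetch-capable runtime; reading Accept-Encoding out of the *response* follows the convention later standardized in RFC 7694, and at the time this answer was written no standard header carried this information, so treat the capability check as an assumption:

```typescript
// Ask the server about /upload (a placeholder URL) without sending a body.
const probe = await fetch("https://example.com/upload", { method: "OPTIONS" });

// Hypothetical capability check: a server following RFC 7694 can list the
// request encodings it supports in a response Accept-Encoding header.
const supported = probe.headers.get("Accept-Encoding") ?? "";
const canGzipRequest = supported.split(",").some((e) => e.trim() === "gzip");
```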

There are a lot of things that web browsers could do (for example, pipelining) that they don't do. Web browser developers consider the compatibility implications of a change.

In a heterogeneous environment, there are a lot of different web servers and configurations. Making a change to the way a client works could break some of them.

Perhaps only 1% of servers might accept gzipped requests, and some of those might advertise support yet not accept it correctly, so users would find themselves unable to upload files to those sites.

Historically there have been a lot of broken client / server implementations - for a long time, gzipped responses were broken in major web browsers (thankfully those are now mostly gone).

So you'd end up with blacklists of user-agents or servers (or domain names) where those options were automatically turned off, which is nasty.

MarkR
  • 62,604
  • 14
  • 116
  • 151
3

Because it doesn't know that the server can accept it. An HTTP transaction has a single request sent by the client followed by a response. One of the things the client sends is what encoding/compression it can support. The server can then decide how to compress the response. The client does not have this luxury.

Yuliy
  • 17,381
  • 6
  • 41
  • 47
  • Well, if the server can determine whether the browser supports it or not, there COULD be an implementation in which the browser tries to find out whether the server understands gzipped content, if developers worked on that. – David Refoua May 29 '14 at 16:27
  • The server determines that the browser supports gzip because the browser just told it via an Accept-Encoding request header. You need some other way of the browser knowing a priori what the server's capabilities are. Doing that is outside of what HTTP/1.1 gives you. – Yuliy May 30 '14 at 06:48
  • @Yuliy You mean (for example) using browser memory to remember a server's Accept-Encoding response? – Phil Jan 29 '17 at 22:10
2

If you're writing a web application, I'm assuming that you're in control of what is sent to the client and what is sent back from the client.

It would be easy enough to write a gzip implementation in JavaScript that compresses the POST data being sent to the server. The server could have a filter (J2EE term) that knows the client sends its data compressed; this filter would decompress the data and then pass it to the servlet (or Action classes in Struts), which read the data as normal, e.g. request.getParameter(...).

This seems perfectly logical and doable if you're in control. As other posts mention, you couldn't rely on the browser to do this automatically, but since you're writing the web pages, you can get the browser to do the compression you're after (with a little work).
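A sketch of the client half of that idea, assuming a modern browser with CompressionStream (a library such as pako would serve in older ones); the /upload URL and the X-Body-Encoding header are illustrations, not standards, and the matching server-side filter is yours to write:

```typescript
async function postGzipped(url: string, payload: object): Promise<Response> {
  const raw = new Blob([JSON.stringify(payload)]);
  // Compress on the client before the body leaves the page.
  const gzipped = await new Response(
    raw.stream().pipeThrough(new CompressionStream("gzip"))
  ).arrayBuffer();
  return fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Custom marker the decompressing filter on the server looks for.
      "X-Body-Encoding": "gzip",
    },
    body: gzipped,
  });
}
```

On the server, the filter would check that header, wrap the request, inflate the body, and hand the result on so the servlet never notices the difference.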

Andy.

0

HTTP is designed in this way:

  • The client sends its request in plain text (stating, among other things, whether it can understand compressed answers)
  • The server responds with the proper encoding (compressed or not)

But in this design the client cannot send compressed requests, because it doesn't know in advance whether the server will understand them.