The HTTP RFCs define a GET request as a request line containing the URL, followed by headers, and terminated by a blank line (two consecutive newline sequences). The URL syntax is defined by the RFCs to include a special format, the query string, for passing parameters as part of the URL. Web servers and browsers are specifically designed to comply with these standards, so they are both defined standards and de facto standards.
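As a concrete illustration, here is roughly what such a GET request looks like on the wire, sent with a raw Python socket. The path and query parameters are made up for this sketch, and example.com is just a placeholder host:

    import socket

    # Request line with a query string, then headers, then the blank line that ends them.
    request = (
        "GET /search?q=widgets&page=2 HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096).decode("iso-8859-1", errors="replace"))

The parameters ride along in the URL itself (after the "?"), which is exactly the standard mechanism servers and proxies expect for GET.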
In theory, headers could be used to pass parameters between the client and the server. Except for cookies, though, this is nonstandard behavior that most clients don't expose.
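For instance, a client that builds its own requests can stuff a value into a custom header, alongside a cookie. This is only a sketch; the "X-Account-Id" header name is invented for illustration and carries no standard meaning:

    import http.client

    # A custom header and a cookie both travel in the header section of the request.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/profile",
                 headers={"X-Account-Id": "12345", "Cookie": "session=abc123"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()

A browser will send cookies for you automatically, but it gives you no general way to attach arbitrary parameter-carrying headers, which is why this approach is mostly limited to custom clients.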
HTTP POST is defined by the RFCs to use the same request line and headers as GET, plus a body (the POSTDATA section), which is separated from the headers by a blank line and whose length is indicated by the Content-Length header (or by chunked transfer encoding). The POSTDATA section is specifically defined to allow the client to pass parameters to the server.
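Here is a minimal sketch of such a POST on the wire, again using a raw Python socket. The /submit path and the form fields are made up for the example:

    import socket
    from urllib.parse import urlencode

    # Form-encoded parameters go in the body; Content-Length tells the server where the body ends.
    body = urlencode({"name": "Alice", "city": "Springfield"})
    request = (
        "POST /submit HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"        # the blank line that separates headers from the body
        f"{body}"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096).decode("iso-8859-1", errors="replace"))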
You could try to add a POSTDATA section to your GET requests, but proxies and servers wouldn't interpret it correctly: the blank line that terminates the GET request's headers will be interpreted as terminating the GET request entirely, and the POSTDATA section that follows will be treated as a malformed request and thrown out. You could write a custom web server to handle your custom GET format, but that wouldn't fix the proxy servers in between, and no existing clients would use it. You could create a custom client to communicate with your server using your custom GET requests, and that would probably work as long as no proxy servers are in between.
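For what it's worth, the nonstandard "GET with a data section" described above would look something like the sketch below. A standards-compliant server or proxy stops parsing the request at the blank line and, with no Content-Length to tell it otherwise, either ignores the trailing bytes or treats them as a separate, malformed request:

    import socket

    # Nonstandard: data appended after the blank line that ends a GET request's headers.
    # Only a custom server written to expect this format would ever read the trailing bytes.
    request = (
        "GET /search HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
        "q=widgets&page=2"   # the "POSTDATA-style" tail a standard server will not honor
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096).decode("iso-8859-1", errors="replace"))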
But then you would not be communicating over HTTP; you would be using a different, custom protocol that you invented. And only your software would speak this custom protocol, so you wouldn't gain the interoperability that is typically desired when using HTTP.