Short Answer: You can't do origin failover in CloudFront for request methods other than `GET`, `HEAD`, or `OPTIONS`. Period.
TL;DR

CloudFront always caches `GET` and `HEAD` requests. It can be configured to cache `OPTIONS` requests too. However, it doesn't cache `POST`, `PUT`, `PATCH`, `DELETE`, ... requests, which is consistent with most of the public CDNs out there. That said, some of them let you write custom hooks through which you can cache `POST`, `PUT`, `PATCH`, `DELETE`, ... requests. You might be wondering why that is. Why can't I cache `POST` requests? The answer is RFC 2616 (the HTTP/1.1 specification; its caching rules now live in RFC 7234). Since `POST` requests are not idempotent, the specification advises against caching them and requires that they always be forwarded to the intended origin server. There's a very nice SO thread here which you can read to get a better understanding.
CloudFront fails over to the secondary origin only when the HTTP method of the viewer request is `GET`, `HEAD`, or `OPTIONS`. CloudFront does not fail over when the viewer sends a different HTTP method (for example `POST`, `PUT`, and so on).
OK. So `POST` requests are not cached by CloudFront. But why does CloudFront not provide failover for `POST` requests?
Let's see how CloudFront handles requests in the case of a primary origin failure. See below:

CloudFront routes all incoming requests to the primary origin, even when a previous request failed over to the secondary origin. CloudFront only sends requests to the secondary origin after a request to the primary origin fails.
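For reference, this failover pairing is configured through an origin group on the distribution. Below is a minimal sketch (the origin IDs are hypothetical) of a helper that builds the `OriginGroups` fragment of a CloudFront `DistributionConfig`, in the shape you would pass to boto3's `create_distribution`:

```python
def origin_group_config(group_id, primary_id, secondary_id,
                        status_codes=(500, 502, 503, 504)):
    """Build the OriginGroups fragment of a CloudFront DistributionConfig.

    CloudFront retries against the secondary member only when the primary
    returns one of these status codes (or times out), and -- as discussed
    above -- only for GET/HEAD/OPTIONS requests.
    """
    return {
        "Quantity": 1,
        "Items": [{
            "Id": group_id,
            "FailoverCriteria": {
                "StatusCodes": {
                    "Quantity": len(status_codes),
                    "Items": list(status_codes),
                }
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": primary_id},    # tried first, every time
                    {"OriginId": secondary_id},  # tried only after a failure
                ],
            },
        }],
    }
```

The member order matters: CloudFront always tries the first member and only falls back to the second.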
Now, since `POST` requests are not cached, CloudFront has to go to the primary origin each time, come back with an invalid response or, worse, a timeout, and then hit the secondary origin in the origin group. We're talking about region failures here. The volume of failover requests from primary to secondary would be ridiculously high, and we might expect a cascading failure due to the load. This could lead to CloudFront PoP failures, which defeats the whole purpose of high availability, doesn't it? Again, this explanation is only my assumption. Of course, I'm sure the folks at CloudFront will come up with a solution for handling `POST` request region failover soon.
So far so good. But how are other AWS customers able to guarantee high availability to their users in the case of AWS region failures?
Well, other AWS customers only use CloudFront region failover to make their static websites, SPAs, and static content like videos (live and on-demand), images, etc. failure-proof, which, by the way, only requires `GET`, `HEAD`, and occasionally `OPTIONS` requests. Imagine a SaaS company that drives its sales and discoverability via a static website. If you could reduce your downtime with the method above and ensure your sales/growth doesn't take a hit, why wouldn't you?
Got the point. But I really do need region failover for my backend APIs. How can I do it?
One way would be to write a custom Lambda@Edge function: CloudFront hits the intended primary origin, the code inside checks for timeouts/response codes/etc., and if failover has to be triggered, it hits the other origin's endpoint and returns that response. This again runs contrary to CloudFront's current scheme of things.
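A minimal sketch of what such a function could look like, written as a Python origin-response handler. The standby hostname is a placeholder, and the logic is deliberately simplified: it only replays the URI (the request body isn't available at the origin-response stage, so a true `POST` replay would need more plumbing), and Lambda@Edge caps the size of generated response bodies, so this only suits small API payloads.

```python
import urllib.request

# Hypothetical standby endpoint in another region -- replace with your own.
SECONDARY_ORIGIN = "https://api-standby.example.com"


def should_fail_over(status):
    """Trigger failover on 5xx answers from the primary origin."""
    return status.startswith("5")


def handler(event, context):
    """Origin-response Lambda@Edge handler (a sketch, not production code).

    If the primary origin answered with a 5xx, replay the request URI
    against the secondary origin and return its body as a generated
    response; otherwise pass the primary's response through untouched.
    """
    cf = event["Records"][0]["cf"]
    response = cf["response"]
    if not should_fail_over(response["status"]):
        return response  # primary answered fine; pass it through

    request = cf["request"]
    url = SECONDARY_ORIGIN + request["uri"]
    with urllib.request.urlopen(url, timeout=3) as resp:
        body = resp.read().decode("utf-8")
    return {
        "status": "200",
        "statusDescription": "OK",
        "headers": {"content-type": [{"key": "Content-Type",
                                      "value": "application/json"}]},
        "body": body,
    }
```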
Another solution, which in my opinion is much cleaner, is to make use of Route 53's latency-based routing support. You can read about how to do that here. While this method would surely work for your backend APIs if you had different subdomain names for your S3 files and your APIs (with those subdomains pointing to different CloudFront distributions), since it leverages CloudFront canonical names, I'm a bit skeptical whether it would work in your setup. You can try and test it out, anyway.
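If you go down this road, the records themselves are ordinary latency records. Here's a hedged sketch (the hostnames are placeholders) of a helper that builds the `ChangeBatch` you'd hand to boto3's Route 53 `change_resource_record_sets` call:

```python
def latency_change_batch(name, region_targets, ttl=60):
    """Build a Route 53 ChangeBatch with one latency record per region.

    region_targets maps an AWS region (e.g. "us-east-1") to the regional
    endpoint that the subdomain should resolve to for viewers nearest
    that region. Route 53 picks the record with the lowest latency, and
    health checks can shift traffic away from a failed region.
    """
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "CNAME",
                    "SetIdentifier": region,  # must be unique per record
                    "Region": region,         # enables latency-based routing
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": target}],
                },
            }
            for region, target in region_targets.items()
        ]
    }
```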
Edit: As suggested by the OP, there is a third approach: handle failover on the client side. Whenever the client receives an unexpected response code or a timeout, it makes the API call to another endpoint hosted in another region. This solution is cheaper, simpler, and easier to implement with the current scheme of things.
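A bare-bones sketch of that client-side logic (the endpoint URLs are placeholders, and `fetch` stands in for whatever HTTP client you actually use):

```python
def call_with_failover(endpoints, fetch, retriable=(502, 503, 504)):
    """Try each regional endpoint in order; return the first good response.

    `fetch` is any callable that takes a URL and returns an object with a
    `status` attribute, raising on timeouts or connection errors.
    """
    last_error = None
    for url in endpoints:
        try:
            response = fetch(url)
        except Exception as exc:  # timeout / connection error -> next region
            last_error = exc
            continue
        if response.status not in retriable:
            return response
        last_error = RuntimeError(f"{url} returned {response.status}")
    raise last_error or RuntimeError("no endpoints given")
```

Keep the list short and the timeouts tight, since every failover attempt adds the full timeout of the region ahead of it to the user's wait.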