
I attended an AWS training where they explained that a good practice is to serve all dynamic content via CloudFront, setting the TTL to 0, even if you have a Load Balancer in front of your application. So it could be like:

Route 53 -> CloudFront -> Application LB

I cannot see any advantage of this architecture over going directly (only for dynamic content):

Route 53 -> Application LB

I do not see the point, since CloudFront will always forward all traffic to the LB, so you will have:

  • Two HTTPS negotiations (client <-> CloudFront, and CloudFront <-> LB)
  • No caching at all (it is dynamic content, so it should not be cached; that is what "dynamic" means)
  • You will not have the client IP, since your LB will only see the CloudFront IP (I know this can be fixed to recover the client IP, but then you will have issues with the next bullet).
  • As extra work, you need to update your LB security groups frequently to match the CloudFront IP ranges (for this region), since I assume you want to accept traffic only from your CloudFront distribution, and not directly on the LB's public endpoint.
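For reference, the bookkeeping in the last bullet can be automated by filtering the ip-ranges document AWS publishes for the CLOUDFRONT service (the sample data below is illustrative, shaped like the real file); note that AWS now also publishes a managed prefix list for CloudFront's origin-facing IPs, which a security group can reference directly instead.

```python
import json

def cloudfront_ranges(ip_ranges_doc: str) -> list:
    """Extract CloudFront CIDR blocks from an ip-ranges.amazonaws.com-style document."""
    data = json.loads(ip_ranges_doc)
    return [p["ip_prefix"] for p in data["prefixes"] if p["service"] == "CLOUDFRONT"]

# Sample shaped like https://ip-ranges.amazonaws.com/ip-ranges.json (values illustrative)
sample = json.dumps({
    "prefixes": [
        {"ip_prefix": "120.52.22.96/27", "region": "GLOBAL", "service": "CLOUDFRONT"},
        {"ip_prefix": "3.5.140.0/22", "region": "ap-northeast-2", "service": "AMAZON"},
    ]
})

print(cloudfront_ranges(sample))  # only the CLOUDFRONT entries
```

The resulting CIDR list is what you would push into the LB's security group on each update.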

So, probably, I am missing something important about this Route 53 -> CloudFront -> Application LB architecture.

Any ideas?

Thanks!

MTG
    Does this answer help? https://stackoverflow.com/questions/10621099/what-is-a-ttl-0-in-cloudfront-useful-for – jarmod Dec 07 '18 at 13:46
  • Hi! Well, reading that answer I still do not understand a single advantage of adding CloudFront on top of ELB for an application that does not need caching at all, but thanks for your comments! – MTG Dec 08 '18 at 19:06
  • Yeah, I think the linked answer actually does answer your question. This bit: "the origin server decides whether or not, and if so for how long, CloudFront caches the objects." So by default, no requests will be cached. BUT, you can configure your application server-side to instruct CF to cache certain routes, e.g. if it really wants to return a 304. – haz Jun 30 '20 at 07:07
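A minimal sketch of what that last comment describes: the origin attaches a Cache-Control header per route, so CloudFront caches only what the application marks cacheable (the routing rules here are illustrative, not a real framework API):

```python
def make_response(path: str, body: bytes) -> dict:
    """Attach a Cache-Control header so the CDN caches only selected routes."""
    headers = {"Content-Type": "text/html"}
    if path.startswith("/static/"):
        # Safe to cache at the edge for a day
        headers["Cache-Control"] = "public, max-age=86400"
    else:
        # Dynamic content: tell CloudFront (and browsers) not to cache it
        headers["Cache-Control"] = "no-store"
    return {"status": 200, "headers": headers, "body": body}

print(make_response("/static/logo.css", b"...")["headers"]["Cache-Control"])
print(make_response("/cart", b"...")["headers"]["Cache-Control"])
```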

4 Answers


Here are some of the benefits of having CloudFront on top of your ALB:

  • For a web application or other content that's served by an ALB in Elastic Load Balancing, CloudFront can cache objects and serve them directly to users (viewers), reducing the load on your ALB.

  • CloudFront can also help to reduce latency and even absorb some distributed denial of service (DDoS) attacks. However, if users can bypass CloudFront and access your ALB directly, you don't get these benefits. But you can configure Amazon CloudFront and your Application Load Balancer to prevent users from directly accessing the Application Load Balancer (Doc).

  • Outbound data transfer charges from AWS services to CloudFront are $0/GB. The cost coming out of CloudFront is typically half a cent less per GB than data transfer for the same tier and Region. What this means is that you can take advantage of the additional performance and security of CloudFront by putting it in front of your ALB, AWS Elastic Beanstalk, S3, and other AWS resources delivering HTTP(S) objects for next to no additional cost (Doc).

  • The CloudFront global network, which consists of over 100 points of presence (PoPs), reduces the time to establish viewer-facing connections because the physical distance to the viewer is shortened. This reduces overall latency for serving both static and dynamic content (Doc).

  • CloudFront maintains a pool of persistent connections to the origin, thus reducing the overhead of repeatedly establishing new connections to the origin. Over these connections, traffic between CloudFront and AWS origins is routed over a private backbone network for reliability and performance. This reduces overall latency for serving both static and dynamic content (Doc).

  • You can use geo restriction, also known as geo blocking, to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront distribution (Doc).
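On the second bullet (blocking direct access to the ALB), the approach the linked doc describes is to have CloudFront add a custom header to every origin request and reject requests without it, either with an ALB listener rule or in the application. A minimal application-side sketch (the header name and secret value are illustrative):

```python
import hmac

# Configured as a CloudFront custom origin header; rotate it periodically
SHARED_SECRET = "rotate-me-regularly"

def request_came_via_cloudfront(headers: dict) -> bool:
    """Check the custom header CloudFront was configured to add (e.g. X-Origin-Verify)."""
    supplied = headers.get("X-Origin-Verify", "")
    # Constant-time comparison to avoid leaking the secret via timing
    return hmac.compare_digest(supplied, SHARED_SECRET)

print(request_came_via_cloudfront({"X-Origin-Verify": "rotate-me-regularly"}))  # True
print(request_came_via_cloudfront({}))  # False
```

In practice the same check is usually expressed declaratively as an ALB listener rule that returns 403 when the header does not match.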
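As a rough worked example of the pricing bullet above (the per-GB prices here are illustrative assumptions; check the current AWS pricing pages):

```python
gb = 1024  # 1 TiB of monthly egress

# Illustrative first-tier prices (assumptions, not current AWS quotes):
alb_direct_per_gb = 0.09    # ALB -> internet data transfer
cloudfront_per_gb = 0.085   # CloudFront -> internet, North America tier
origin_to_cf_per_gb = 0.0   # ALB -> CloudFront transfer is free

direct = gb * alb_direct_per_gb
via_cf = gb * (cloudfront_per_gb + origin_to_cf_per_gb)
print(f"direct: ${direct:.2f}, via CloudFront: ${via_cf:.2f}")
```

Under these assumptions, adding CloudFront is slightly cheaper, not more expensive, which is the "next to no additional cost" point.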

In other words, you can use CloudFront to add these capabilities in front of your origin (ALB, Elastic Beanstalk, S3, EC2), but if you don't need these features, it is better not to add this configuration to your architecture.

fabidick22
  • CloudFront enables you to deliver content faster, because CloudFront edge locations are closer to the requesting user and are connected to the AWS Regions through the AWS network backbone.
  • You can terminate SSL at CloudFront and make the load balancer listen on port 80.
  • CloudFront allows you to apply geo-location restrictions easily, in two clicks.
Farooq Butt
  • Your points are true, but they have already been written by others before (https://stackoverflow.com/a/67815119/2692292). Try avoiding duplicated answers. – romainsalles Jun 06 '21 at 16:45
  • I don't see anyone else mentioning that you can terminate SSL at CloudFront and make your load balancer listen on port 80. – Farooq Butt Jun 06 '21 at 16:53
  • Indeed, that's why I didn't downvote your answer :) But since the two other points were already explained by fabidick22, I supposed (maybe wrongly) that you didn't read the other answers before writing your own. I just wanted to point out that it's always better to do so. But contributions are always welcome, so thank you! – romainsalles Jun 06 '21 at 17:00

I think another reason you may want to use CloudFront in front of an ALB is that you could have a better experience with WAF (if you are already using, or planning to use, WAF, of course).

Even though WAF is available for both ALB and CloudFront, they attach to different WAF scopes: CloudFront is a global service, while an ALB exists per region.

Without CloudFront, that can mean more complex management and duplicated ACLs across regions (and probably more cost).

Claudio

CloudFront is a really amazing CDN (content delivery network) service, like Akamai etc. Now, if your web application has lots of content such as media files, or even your static code, you can put it into an S3 bucket (another object storage service by AWS).

Once you have your content in the S3 bucket, you can create a CloudFront distribution with that bucket as the origin. This will cache your content across AWS's many edge locations, and it will be delivered to the client faster.

Now, if we talk about the Load Balancer, it has its own purpose to serve. Imagine you are running an application that gets unpredictable traffic; here your Load Balancer (Application or Classic) accepts requests from Route 53 and passes them on to your servers.

For high availability and scalability, we consider an architecture like this for the application:

  • We create an Auto Scaling group of our EC2 instances and put them behind a load balancer, with a scaling policy, for example: if my CPU or memory utilization goes above 70%, launch another instance.
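The scaling rule in that bullet can be sketched as a toy decision function (real Auto Scaling policies are declarative configuration, and the threshold and cap here are illustrative):

```python
def desired_capacity(current: int, cpu_utilization: float,
                     threshold: float = 70.0, maximum: int = 10) -> int:
    """Toy step-scaling rule: add one instance while average CPU exceeds the threshold."""
    if cpu_utilization > threshold and current < maximum:
        return current + 1
    return current

print(desired_capacity(2, 85.0))  # 3  (over threshold: scale out)
print(desired_capacity(2, 40.0))  # 2  (under threshold: hold steady)
```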

You can also set a routing policy on the load balancer to distribute traffic to your EC2 servers, for example round robin or based on availability.

I have just shared the best practices recommended by AWS for a fault-tolerant and highly available architecture. I hope this helps you get a better idea to decide. Please let me know if I can help you with more suggestions.

Thanks, and happy learning!

Prabhat Singh
    The question is "why would you use CloudFront if you're going to set a TTL of 0 on all content, essentially disabling all edge caching". – jarmod Dec 07 '18 at 00:36