I'd like to make sure I'm interpreting AWS's ECS Fargate pricing model correctly when compared to an m4.large EC2 instance (2 vCPU, 8 GB memory) running non-stop for an entire month (730 hrs), even if CPU/memory utilization drops to 1%.
# Monthly cost estimates
Fargate:
cpu   = 730 hrs * 2 vCPU * $0.0506 = $73.88
mem   = 730 hrs * 8 GB * $0.0127 = $74.17
total = $73.88 + $74.17 = $148.05
EKS EC2 node (1-yr reserved, no upfront):
total = 730 hrs * $0.062 = $45.26
EKS EC2 node (on-demand):
total = 730 hrs * $0.10 = $73.00
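To make the arithmetic explicit, here's the same calculation as a small Python sketch. The per-hour rates are the ones I quoted above (they vary by region and change over time, so treat them as illustrative), and the function names are just for this example:

```python
# Sanity-check the monthly estimates above. Rates are assumptions taken
# from this question; they vary by region and over time.

HOURS_PER_MONTH = 730

def fargate_monthly(vcpu, mem_gb, vcpu_rate=0.0506, mem_rate=0.0127):
    """Fargate bills on the task's *allocated* size: vCPU-hours + GB-hours."""
    return HOURS_PER_MONTH * (vcpu * vcpu_rate + mem_gb * mem_rate)

def ec2_monthly(hourly_rate):
    """EC2 bills a flat per-instance rate for every hour the node runs."""
    return HOURS_PER_MONTH * hourly_rate

fargate   = fargate_monthly(vcpu=2, mem_gb=8)  # ~$148.04 ($148.05 above rounds each term first)
reserved  = ec2_monthly(0.062)                 # $45.26, m4.large 1-yr reserved, no upfront
on_demand = ec2_monthly(0.10)                  # $73.00, m4.large on-demand

print(f"Fargate   ${fargate:.2f}  ({fargate / reserved:.1f}x reserved, {fargate / on_demand:.1f}x on-demand)")
print(f"Reserved  ${reserved:.2f}")
print(f"On-demand ${on_demand:.2f}")
```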
It appears Fargate would be roughly 3x the cost of a reserved EC2 instance (and roughly 2x an on-demand one). Does my Fargate pricing look accurate? I'm assuming Fargate isn't intended for something like a 24/7 website, but rather for one-off jobs, analogous perhaps to a Lambda function that runs a container image.
Am I correct that I'm billed for the entire Fargate task's CPU & memory allocation, regardless of whether I'm utilizing 1% or 100% of those resources?