> - Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1, so interacting with artifacts, cache, container registries, or the internet at large is quick.
do you actually get the promised 12.5 Gbps? I've been doing some experiments and it's really hard to get over 2.5 Gbit/s upstream from AWS EC2, even on large 64-vCPU machines. Intra-AWS traffic (e.g. within a VPC) is another matter, and that seems fine.
We do get the promised throughput, but it depends on the destination as you've discovered. AWS actually has some docs on this[0]:
- For instances with >= 32 vCPUs, traffic to an internet gateway can use up to 50% of the aggregate throughput
- For instances with < 32 vCPUs, traffic to an internet gateway is capped at 5 Gbps
- Traffic inside the VPC can use the full instance throughput
So for us, that means traffic outbound to the public internet can use up to 5 Gbps, but for things like our distributed cache or pulling Docker images from our container builders, we can get the full 12.5 Gbps.
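Those rules can be sketched as a quick sanity-check helper. This is purely illustrative (the function name and shape are mine, not an AWS API) and assumes the documented caps quoted above:

```python
def egress_cap_gbps(vcpus: int, aggregate_gbps: float, to_internet_gateway: bool) -> float:
    """Rough expected ceiling on a single instance's egress throughput,
    per the AWS bandwidth rules quoted above (illustrative sketch only)."""
    if not to_internet_gateway:
        # Traffic inside the VPC can use the full instance bandwidth.
        return aggregate_gbps
    if vcpus >= 32:
        # >= 32 vCPUs: internet-gateway traffic can use up to 50% of aggregate.
        return aggregate_gbps * 0.5
    # < 32 vCPUs: internet-gateway traffic is capped at 5 Gbps
    # (or the aggregate bandwidth, if that happens to be lower).
    return min(5.0, aggregate_gbps)

# A 12.5 Gbps instance with fewer than 32 vCPUs:
print(egress_cap_gbps(16, 12.5, to_internet_gateway=True))   # 5.0
print(egress_cap_gbps(16, 12.5, to_internet_gateway=False))  # 12.5
```

So a measurement well under these ceilings (like the 2.5 Gbit/s mentioned upthread) usually points at something else, e.g. per-flow limits or the receiving end, rather than the instance cap itself.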
> > - Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1
with that pull quote, I thought you were going to point out their use of us-fail-1. I struggle to think of a service whose availability I care so little about that I'd host it there, and CI/CD certainly wouldn't be one.