> And computers are big, too. You can buy a 1000MHz machine with 2 gigabytes of RAM and an 1000Mbit/sec Ethernet card for $1200 or so. Let's see - at 20000 clients, that's 50KHz, 100Kbytes, and 50Kbits/sec per client. It shouldn't take any more horsepower than that to take four kilobytes from the disk and send them to the network once a second for each of twenty thousand clients. (That works out to $0.08 per client, by the way. Those $100/client licensing fees some operating systems charge are starting to look a little heavy!) So hardware is no longer the bottleneck.
It seems to me that there are far fewer problems nowadays with trying to figure out how to serve a tiny bit of data to many people with those kinds of resources, and more problems with understanding how to make a tiny bit of data relevant.
It still absolutely can be. We've just lost touch.
This particular case, with the numbers given, would work as a server for profile pictures, for instance. Or for package signatures. Or for status pages of a bunch of services (generated statically, since the status rarely changes).
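For the status-page case, here is a minimal sketch of the "generated statically" idea: a small job that checks a few services and rewrites one HTML file, which a plain file server then hands out to everyone. The service names, health-check URLs, output path, and refresh interval are all made up for illustration.

```go
// statusgen.go: periodically render a static status page that a plain
// file server can then hand out to any number of clients.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// Hypothetical services to check; replace with whatever you actually run.
var services = map[string]string{
	"api":  "http://localhost:8081/healthz",
	"auth": "http://localhost:8082/healthz",
}

func renderStatus() []byte {
	client := &http.Client{Timeout: 2 * time.Second}
	page := "<html><body><h1>Status</h1><ul>"
	for name, url := range services {
		state := "down"
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				state = "up"
			}
		}
		page += fmt.Sprintf("<li>%s: %s</li>", name, state)
	}
	page += fmt.Sprintf("</ul><p>Updated %s</p></body></html>",
		time.Now().UTC().Format(time.RFC3339))
	return []byte(page)
}

func main() {
	for {
		// Overwrite the file that the static file server serves.
		if err := os.WriteFile("public/status.html", renderStatus(), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, "write failed:", err)
		}
		time.Sleep(30 * time.Second) // status rarely changes, so regenerate slowly
	}
}
```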
Yes, an RPi 4 might be adequate to serve 20k client requests in parallel without crashing or breaking much of a sweat. If you care about tail latency, you usually want to plan for 5-10% of that load as the norm, but a 20k spike shouldn't kill it.
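As a rough sketch of the serving side (the directory name and port are assumptions), Go's net/http can hold tens of thousands of mostly idle keep-alive connections on modest hardware, as long as timeouts keep slow clients from pinning resources forever:

```go
// staticserve.go: serve a directory of small static files (profile pictures,
// package signatures, a generated status page) to many concurrent clients.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Each response is a small file read from disk (or the page cache) and
	// written straight to the socket -- essentially the C10K workload.
	mux.Handle("/", http.FileServer(http.Dir("public")))

	srv := &http.Server{
		Addr:    ":8080",
		Handler: mux,
		// Timeouts keep slow or stalled clients from holding goroutines
		// and file descriptors open indefinitely during a spike.
		ReadHeaderTimeout: 5 * time.Second,
		WriteTimeout:      10 * time.Second,
		IdleTimeout:       60 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```

In practice the main knob on a Pi-class box is usually the open file descriptor limit (the default is often 1024, so it needs raising for 20k concurrent sockets), not CPU.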