When you are compute bound, threads are just better. Async shines when you are I/O bound and need to wait on a lot of I/O concurrently. I'm usually compute bound, and I've never needed to wait on more I/O connections than I could handle with threads. Typically all the input and output IP addresses are known in advance, live in the Helm chart, and are countable on one hand.
Oh, right, sure. In Rust the async code and async executor are decoupled. So it's your _executor_ that decides how/whether tasks are mapped to threads and all that jazz.
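To make the decoupling concrete, here's a minimal sketch of a toy executor, using only the standard library. The names `block_on` and `noop_waker` are mine, not from any particular library; real executors like tokio or `futures::executor` do the same job with actual wake-up plumbing instead of a busy-wait.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing when woken. Fine for a toy executor that
// just re-polls in a loop; a real executor uses the waker to know
// *when* to poll again instead of spinning.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// The "executor": drives one future to completion on the current thread.
// This is where the how/whether-threads decision lives, not in the async code.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => std::thread::yield_now(), // busy-wait; real executors sleep
        }
    }
}

fn main() {
    // The async block compiles to an inert state machine; nothing runs
    // until some executor decides to poll it.
    let answer = block_on(async { 40 + 2 });
    println!("{}", answer); // prints 42
}
```

The same async block would run unchanged on a single-threaded or multi-threaded executor; that choice is entirely the caller's.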
Meanwhile the async _code_ itself is just a new(ish), lower-level way of writing code that lets you peek under an abstraction. Traditional ‘blocking’ I/O tries to pretend that I/O is an active, sequential process like a normal function call, and the OS is responsible for providing that abstraction by in fact pausing the calling thread until the I/O event you're waiting on occurs. That's a pretty nice high-level abstraction in a lot of cases, but sometimes you want to take advantage of those extra cycles. Async code is a bit more powerful and ‘closer to the metal’ in that it exposes to your code which operations are going to result in your code being suspended, and so gives you an opportunity to do something else while you wait.
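You can see that exposed suspension point directly in the `Future` trait: `poll` returning `Poll::Pending` is the "I'd block here" signal that blocking I/O hides from you. A minimal sketch, with a made-up `SlowRead` future standing in for real I/O:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Fake I/O: reports "not ready" twice before the data "arrives".
struct SlowRead {
    polls_left: u32,
}

impl Future for SlowRead {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polls_left == 0 {
            Poll::Ready("payload")
        } else {
            self.polls_left -= 1;
            Poll::Pending // the suspension point, visible to the caller
        }
    }
}

// No-op waker so we can poll by hand (a real waker would schedule a re-poll).
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let mut fut = SlowRead { polls_left: 2 };
    let mut fut = Pin::new(&mut fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let data = loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(d) => break d,
            // Blocking I/O would have parked us here; instead we get
            // the cycles back and can do something else.
            Poll::Pending => println!("not ready yet, doing other work"),
        }
    };
    println!("got {}", data);
}
```

In ordinary async code you never poll by hand like this; `.await` marks the same suspension points and the executor does the polling.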
Of course if you're not spending a lot of time doing I/O then the performance improvements probably aren't worth dropping the nice high-level abstraction — if you're barely doing I/O then it doesn't matter if it's not ‘really’ a function call! But even so, async functions can provide a nice way of writing things that are kind of like function calls but might not return immediately. For example, request-response–style communication with other threads.
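That request-response pattern can be sketched with just the standard library: send a request plus a reply channel to a worker thread, and hand back a `Future` that resolves when the reply lands. The `Response`/`request` names are illustrative; in practice you'd reach for something like `tokio::sync::oneshot` rather than hand-rolling this.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::thread;

// A pending reply from the worker, usable "kind of like a function call".
struct Response(mpsc::Receiver<u64>);

impl Future for Response {
    type Output = u64;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u64> {
        match self.0.try_recv() {
            Ok(v) => Poll::Ready(v),
            Err(_) => Poll::Pending, // no reply yet (sketch: ignores disconnects)
        }
    }
}

// Send a request to the worker; return a future for its reply.
fn request(worker: &mpsc::Sender<(u64, mpsc::Sender<u64>)>, x: u64) -> Response {
    let (reply_tx, reply_rx) = mpsc::channel();
    worker.send((x, reply_tx)).unwrap();
    Response(reply_rx)
}

fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // Worker thread: doubles every number it's asked about.
    let (req_tx, req_rx) = mpsc::channel::<(u64, mpsc::Sender<u64>)>();
    thread::spawn(move || {
        for (x, reply) in req_rx {
            let _ = reply.send(x * 2);
        }
    });

    // Fire off the request; the reply arrives whenever the worker gets to it.
    let mut resp = request(&req_tx, 21);
    let mut resp = Pin::new(&mut resp);

    // Drive it by hand with a spin-poll; normally an executor awaits it.
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let answer = loop {
        match resp.as_mut().poll(&mut cx) {
            Poll::Ready(v) => break v,
            Poll::Pending => std::thread::yield_now(),
        }
    };
    println!("{}", answer); // prints 42
}
```

From the caller's side it reads like a slightly delayed function call, which is exactly the point.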