For modern large-scale production systems I agree, except where performance is critical. To pick a deliberately extreme example: a microservice serving geospatial queries over millions of objects that can sit anywhere in the world has plenty of parallelism within each individual query, while concurrency across queries is handled by scaling horizontally with multiple instances of the service. A rough sketch of that split follows.
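Here is a minimal Python sketch of what I mean, purely illustrative: the shard names, query_shard(), and the bounding-box query are hypothetical placeholders, not a real geospatial library. The point is that the parallelism lives inside one query, while more traffic just means more instances behind the load balancer.

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical spatial partitions; a real service would derive these
    # from an index (geohash buckets, an R-tree, etc.).
    SHARDS = ["na", "eu", "apac", "latam"]

    def query_shard(shard, bbox):
        """Pretend lookup of objects whose location falls inside bbox for one shard."""
        return []  # placeholder

    def run_query(bbox):
        # Parallelism lives inside a single query: fan out over the shards,
        # then merge the partial results. Concurrency *across* queries is
        # handled by running more instances of the whole service.
        with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
            partials = pool.map(lambda shard: query_shard(shard, bbox), SHARDS)
        return [obj for part in partials for obj in part]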
Instead of "just throw it on a thread and forget about it" - in a production environment, use the job queue. You gain isolation and observability - you can see the job parameters and know nothing else came across, except data from the DB etc.
Instead of "just throw it on a thread and forget about it" - in a production environment, use the job queue. You gain isolation and observability - you can see the job parameters and know nothing else came across, except data from the DB etc.