Hacker News

Memory and PCIe lanes in larger systems can be attached to particular CPUs, or to subsections of a single CPU (AMD Threadrippers / Epycs in particular), where traversing the inter-CPU / CCX links can cause latency or bandwidth issues.

The software is pinned to CPU cores close to the RAM or PCIe device it is using.

I've only really seen it become an issue in very large-scale systems, or where you have 4 CPUs, but I haven't spent a huge amount of time on microsecond-critical workloads.
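The pinning described above can be sketched in a few lines on Linux. This is a minimal, illustrative sketch using Python's `os.sched_setaffinity`; the core set `{0}` is a placeholder, since the real "close" cores come from the machine's actual NUMA topology (e.g. `numactl --hardware` or `lstopo`):

```python
import os

# Hypothetical set of cores on the NUMA node nearest the RAM / PCIe
# device this workload uses (illustrative only; discover the real
# mapping from the machine's topology).
near_cores = {0}

# Restrict this process (pid 0 = self) to those cores, so the kernel
# never migrates it across the inter-CPU / CCX links.
os.sched_setaffinity(0, near_cores)

# Confirm the new affinity mask.
print(os.sched_getaffinity(0))
```

Production setups often do the same thing from outside the process with `taskset` or `numactl --cpunodebind`, which also lets you bind memory allocation to the same node.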



Isn’t this particular issue partially solved with proper NUMA support in whatever kernel or scheduler is being used?





