
Any serious LLM work isn't going to use that. It'll use the GPU via Metal. No one is going to run inference on the NPU on a Mac.

OP said "workstations," which implies MacBook Pros and Mac Studios.



> Any serious LLM work isn't going to use that.

That’s my point.

One would expect the platform owner (especially one that owns both the hardware AND the software) to provide a reasonable, easy path to using LLMs if they are going to ship a framework for doing so. But Apple can't, because of how slowly they ship updates.



