Hacker News
Davidzheng | 5 months ago | on: Running GPT-OSS-120B at 500 tokens per second on N...
If I have a Mac with 128 GB of integrated RAM and I want to try this model, should I be using llama.cpp, MLX, or vLLM, or something else? Sorry, but I literally don't understand how I'm supposed to decide. Is it just a matter of comparing inference speeds?
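For reference, a minimal sketch of what trying the model via mlx-lm might look like on Apple Silicon. This assumes mlx-lm is installed (pip install mlx-lm) and that a quantized MLX conversion of GPT-OSS-120B is published on Hugging Face; the repo name below is a hypothetical placeholder, not a confirmed model path.

    # Minimal sketch: load a quantized MLX model and generate a reply.
    # Assumes: pip install mlx-lm; the Hugging Face repo name below is
    # hypothetical and should be replaced with a real MLX conversion.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/gpt-oss-120b-4bit")  # hypothetical repo

    # Build a chat-formatted prompt using the model's own template.
    messages = [{"role": "user", "content": "Summarize KV caching in one paragraph."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Generate and print the completion.
    print(generate(model, tokenizer, prompt=prompt, max_tokens=256))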