If I have a Mac with 128 GB of integrated RAM and I want to try this model, should I be using llama.cpp, MLX, vLLM, or something else? Sorry, but I genuinely don't understand how I'm supposed to decide. Is it just a matter of comparing inference speeds?
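
For reference, a minimal sketch of trying a model on Apple Silicon via mlx-lm, assuming an MLX-converted checkpoint exists on Hugging Face (the repo id below is a placeholder, not the actual model):

  # Minimal mlx-lm sketch for Apple Silicon (assumes: pip install mlx-lm).
  from mlx_lm import load, generate

  # Placeholder repo id; substitute the real MLX conversion of the model.
  model, tokenizer = load("mlx-community/SomeModel-4bit")

  # verbose=True prints generation stats, including tokens per second,
  # which is the number you'd compare across runtimes.
  text = generate(model, tokenizer, prompt="Hello", max_tokens=128, verbose=True)
  print(text)

To compare against llama.cpp, its llama-bench tool reports prompt-processing and token-generation speeds for a given GGUF file, which gives a like-for-like number on the same hardware.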

