Hacker News

NVIDIA will probably give us nice, coding-focused fine-tunes of these models at some point, and those might compare more favorably against the smaller Qwen3 Coder.


What is the best local coder model that can be used with ollama?

Maybe too open-ended a question? I can run the DeepSeek model locally really nicely.


Probably Qwen3-Coder 30B, unless you have a titanic enough machine to handle a serious 480B model.
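To make the sizing trade-off concrete, here's a minimal sketch of that recommendation as a helper that picks an ollama model tag from available memory. The tag names and the memory thresholds are assumptions for illustration, not confirmed ollama library tags; check `ollama list` or the model library for the real ones.

```python
# Hypothetical helper reflecting the thread's advice: Qwen3-Coder 30B for
# typical machines, the 480B variant only for very large multi-GPU boxes.
# Tags and thresholds below are illustrative assumptions, not verified.

def pick_coder_model(vram_gb: float) -> str:
    """Pick an ollama model tag based on a rough VRAM budget (GB)."""
    if vram_gb >= 300:  # "titanic" machine: the full 480B model
        return "qwen3-coder:480b"
    return "qwen3-coder:30b"  # default recommendation for most setups

if __name__ == "__main__":
    print(pick_coder_model(24))   # a single consumer GPU
    print(pick_coder_model(512))  # a large multi-GPU server
```

You would then run the chosen tag with something like `ollama run qwen3-coder:30b` (again, assuming that tag exists in the library).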


Is the DeepSeek model you're running a distill, or is it the 671B parameter model?



