
Qwen3.5-122B-A10B at BF16 in GGUF is 224GB. The "80GB VRAM" mentioned here will barely fit Q4_K_S (~70GB), which will NOT perform as shown on the benchmarks.

Quite misleading, really.
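As a sanity check on these file sizes, here is a rough back-of-envelope sketch: a GGUF file is approximately parameter count times bits per weight, divided by 8, ignoring metadata and the handful of tensors kept at higher precision. The ~4.6 bits/weight figure for Q4_K_S is an approximation, not an exact llama.cpp constant.

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in decimal GB: params * bits / 8.
    Ignores metadata and mixed-precision tensors, so real files
    differ by a few percent."""
    return n_params * bits_per_weight / 8 / 1e9

# 122B parameters at BF16 (16 bits/weight) and at ~4.6 bits/weight (Q4_K_S-ish)
print(round(gguf_size_gb(122e9, 16)))   # full-precision estimate, low 200s of GB
print(round(gguf_size_gb(122e9, 4.6)))  # ~70GB, in line with the Q4_K_S figure above
```

The second estimate lands around 70GB, consistent with the Q4_K_S size quoted here; the BF16 estimate lands in the low-to-mid 200s of GB, the same ballpark as the full file.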

The larger 3.5 quants are actually pretty close to the full 397B model's performance, at least going by the published numbers. Qwen 3.5 seems more tolerant of quantization than most.


