It's a MoE model, and the A3B stands for 3 billion active parameters, like the recent Gemma 4.
You can try offloading the experts to the CPU with llama.cpp (--cpu-moe); that should give you quite a bit of extra context space, at a lower token generation speed.
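To put rough numbers on what that frees up, here's a back-of-the-envelope sketch; every figure in it (total parameter count, bits per weight, expert share) is an assumption for a generic 30B-A3B-style MoE, not a measurement of any particular model:

```python
# Rough estimate of what expert offload (--cpu-moe) frees up on the GPU.
# All figures below are assumptions for a generic ~30B-A3B MoE, not measurements.
total_params = 30e9   # total parameter count (assumed)
bits_per_w   = 4.5    # average bits/weight for a Q4_K-style quant (assumed)
expert_frac  = 0.9    # share of weights living in MoE expert tensors (assumed)

total_gb  = total_params * bits_per_w / 8 / 1e9
expert_gb = total_gb * expert_frac

print(f"total weights        : {total_gb:.1f} GB")
print(f"experts moved to CPU : {expert_gb:.1f} GB")              # what --cpu-moe keeps in RAM
print(f"left in VRAM         : {total_gb - expert_gb:.1f} GB")   # attention + shared layers
```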
CPU-MoE still helps with mmap. It shouldn't hurt token-gen speed too much on the Mac, since the CPU has access to most (though not all) of the unified memory bandwidth, which is the bottleneck.
For sure, I was running on autopilot with that reply. Though at Q4 I would expect it to fit, as the 24B-A4B Gemma model got up to 18GB of VRAM usage without CPU offloading.
No - this model has the weights memory footprint of a 35B model (you do save a little bit on the KV cache, which will be smaller than the total size suggests). The lower number of active parameters gives you faster inference, including lower memory bandwidth utilization, which makes it viable to offload the weights for the experts onto slower memory. On a Mac, with unified memory, this doesn't really help you. (Unless you want to offload to nonvolatile storage, but it would still be painfully slow.)
All that said, you could probably squeeze it onto a 36GB Mac. A lot of people run models of this size on 24GB GPUs, at 4-5 bits per weight and maybe with reduced context size.
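A quick sanity check on that arithmetic (35B total is the figure from above; the bits-per-weight values are just the usual Q4-Q5 range, not measurements of a specific GGUF):

```python
# Weight footprint of a ~35B-total-parameter model at common quant levels.
params = 35e9
for bpw in (4.0, 4.5, 5.0):
    print(f"{bpw} bits/weight -> {params * bpw / 8 / 1e9:.1f} GB of weights")
# ~17.5-21.9 GB for the weights alone; KV cache, compute buffers, and the OS
# come on top, which is why a 36GB Mac is tight but plausible.
```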
Correct, but it should be some ratio of the model size: if the model is x GB, max context would occupy x times some constant of RAM. For the quantized version, assuming it's 18GB at Q4, it should be able to support 64-128k context on this Mac.
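As a sketch of why it scales linearly with context (the layer/head numbers below are placeholders, not this model's actual config; the formula itself is just the standard dense-attention KV cache size):

```python
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem * tokens
n_layers, n_kv_heads, head_dim = 48, 4, 128   # assumed GQA-style config
bytes_per_elem = 2                            # fp16 cache; a q8_0 cache would halve this

def kv_cache_gb(ctx_tokens: int) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_tokens / 1e9

for ctx in (65_536, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_gb(ctx):.1f} GB of KV cache")
# ~12.9 GB at 64k and ~25.8 GB at 128k with these placeholder numbers,
# i.e. a fixed per-token cost on top of the ~18GB of weights.
```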
Have you ever gone to the model registry and seen that the model was recently updated? What got updated? What changed? Should I re-download this 20GB file?
I guess if you're not frustrated by things like this, then sure, no need to stop using it.
When the app only shows posts that are more than 10 hours old even when sorting by "hot", and shoves the algorithmic feed at you on the home page, how are people still using the app?
Lately I've only been visiting a few subs that I'm interested in and keeping them open in Safari with uBlock; it's been a far better experience. This has drastically cut my Reddit time, and if I do want to mindlessly scroll, I just use redlib (hosted in Docker, or one of their public instances) [0]. It has the same "sort" that's used on the desktop site.
Worked with retiming using Design Compiler in an ASIC implementation. I remember a lot of back and forth; sometimes the tool just doesn't add enough registers to meet the constraint, so I had to test variable register depths. This was a design that used Synopsys DesignWare for the FP ops lol.