Hacker News

> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

It was already tough to run flagship-class models locally, and it's only getting worse as the big players build around datacenter-scale compute. What happens when the model that works best needs 1TB of HBM and specialized TPUs?
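To make the scale concrete, here's a rough back-of-envelope memory estimate for serving model weights locally (the parameter counts are illustrative, not any specific model's specs, and this ignores KV cache and activation overhead):

```python
# Rough VRAM needed just to hold model weights.
# bytes_per_param depends on precision: fp16 = 2, int8 = 1, int4 = 0.5.
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model (a common size for large open models):
print(weight_memory_gb(70, 2))    # fp16: 140.0 GB -- multi-GPU territory
print(weight_memory_gb(70, 0.5))  # int4: 35.0 GB -- one high-end card

# A hypothetical 1T-parameter model at fp16: ~2 TB for weights alone,
# which is the "1TB of HBM" regime the comment is pointing at.
print(weight_memory_gb(1000, 2))  # 2000.0 GB
```

Quantization buys roughly a 4x reduction from fp16 to int4, but it can't bridge a gap where the frontier model is 10x larger than what commodity hardware holds.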

AI computation looks a lot like early Bitcoin: first CPUs, then GPUs, then ASICs, then ASICs made mostly by syndicates, for syndicates. We are speedrunning the same centralization.



It appears to me the early exponential gains from new models have plateaued. Current gains seem marginal, so the future best model that needs "1TB of HBM and specialized TPUs" may not be much better than the models we have today. All we need to do is wait for commodity hardware that can run current models, and OpenAI, Anthropic, et al. are done if their whole plan to monetize this is to inject ads into the responses. That is, unless they can actually create AGI that requires infrastructure they control, or make some other advancement.




