It's always been possible to "do (worse) AI on less compute". We've had years of open models! I also don't understand how anyone can see this as anything but good news for OpenAI. The ultimate value proposition of AI has always depended on whether it stretches to AGI and beyond, and R1 demonstrates that there are several orders of magnitude of hardware overhang. That makes it easier for OpenAI to succeed, not harder, because it makes it less likely that they'll scale to their financial limits and still fail to surpass humans.
The point is that this was developed outside of OpenAI.
So the real question is: why does anyone believe OpenAI will bring AGI, when the actual innovation was happening at some hedge fund in China while OpenAI was on an international tour trying to drum up a trillion dollars?
Okay, that argument makes no sense to me. I thought the whole point of VC money is that money is cheaper than time to market? So OpenAI didn't micro-optimize their training code, sure, but they didn't need to. The innovation of R1 is that they managed to match OpenAI's tech demo from like a year ago on considerably worse hardware by micro-optimizing the hell out of it. And that's cool, full credit to them, it's a mighty impressive model. But they did it like that because they had to. It's impressive given their constraints, but it doesn't actually advance the frontier.
The interesting part is that distilled models trained from reinforcement-learning-based teachers are performing so well. That brings the cost of certain tasks down dramatically.
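For anyone unfamiliar with what distillation actually optimizes: the standard (Hinton-style) objective trains the small student to match the large teacher's temperature-softened output distribution, rather than just the hard labels. A toy stdlib-only sketch below; the function names, the temperature value, and the example logits are all illustrative, not anything from DeepSeek's actual recipe.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about near-miss classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on the softened distributions -- the
    # classic knowledge-distillation objective. Zero iff they match.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))           # ~0.0 (perfect match)
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))   # positive (mismatch)
```

The point about cost falls out of this: once the expensive RL-trained teacher exists, the student only has to imitate its outputs, which is ordinary cheap supervised training, and the student is the thing you serve.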