Did you actually read the articles he wrote going through the finances of these companies? He definitely has a bone to pick, but his numbers don't lie. The returns these AIs need to generate to justify the spend are so ridiculous that unless they really do automate most jobs, they're screwed. There's a reason these companies only post AI revenue now, not profit.
Bubble doomerism is nothing novel. As is always the case, he's right vertically and wrong horizontally. Serious people in serious publications still speculated that the internet was a fad and would be over soon as late as 2008.
OpenAI will collapse, almost certainly. Anthropic might get by if they can make it to IPO before it all comes tumbling down. Google will buy up all the datacenters in a fire sale like they did with dark fiber after the .com bubble popped and continue building out stuff like NotebookLM.
Amazon and Microsoft will still be there selling server time to model providers and doing custom enterprise solutions like always. They already host the major proprietary models and sell API access.[0]
The top open models are already good enough. At this point prompting and coordination are the big bottlenecks. It would be nice if the bubble lasts long enough for open models to match at least the latest Opus.
His problem is the focus on the bubble and not on what usually happens after. People will bandy his pieces about insisting it's all short lived and they can just wait it out. Kimi K2.5, GLM 5, and MiniMax 2.5 aren't going away.
Rare opportunity for me to actually downplay frontier AI for a change. We can do a lot better. I think the next 6 months will bring a stream of releases that will leave all the current models in the dust. Opus 4.6 will be no more relevant than 3.5 Sonnet.
If this is the case, all bubble talk will have to be re-evaluated.
If you squint hard enough, every new thing is an example of "answer-from-search / answer-from-remix". Solving any Erdős problem in this manner was largely seen as unthinkable just a year ago.
>the problem does not matter.
Really? All of the other Erdős problems? Millennium Problems? Anything at all? This gets us directly into the territory of "nothing can convince us otherwise".
Tiresome. You're quoting me out of context, and generally assigning me the POV you want to argue with. You come across as pro-AI looking for anti-AI to do combat with. First, I'm not the right guy, and second, all I'm really saying above is that if we're going to do argument-from-authority, maybe let's engage with what the authority is actually saying in TFA.
It's a rapidly evolving story and I expect H1 2026 to bring much clarity on this topic. Especially with upcoming model releases and more professional mathematicians taking an interest.
>AGI advocates treat machine intelligence like some sort of God that will smite non-believers and reward the faithful.
>The real world is not composed of rewards and punishments.
Most "AGI advocates" say that AGI is coming, sooner rather than later, and that it will fundamentally reshape our world. On its own that's purely descriptive. In my experience, most of the alleged "smiting" comes from the skeptics simply being wrong about this. Rarely is there talk of explicit rewards and punishments.
I should be the target audience for this stuff, but I honestly can't name a single person who believes in this "Roko's basilisk" thing. To my knowledge, even the original author abandoned it. There probably are a small handful out there, but I've never seen 'em myself.
As per the conclusions of that great video, going back before Pong and defining a "first" video game depends heavily on your definition of both "video" and "game".
I don't think it is unreasonable to define a "video game" as one employing video graphics and real time input. Things like Tennis for Two (and the later Spacewar) are clearly video games in a sense that mere simulations of board games are not.
"AI fake, AI poo poo, AI going away!" is the only argument he ever had. Nothing more.