
>No, in fact I noticed a series of AI winters. In all things, progress is famously _not_ a straight line.

A series of winters? There was only one winter. After Geoffrey Hinton you can bullshit every 6-month lull into a "winter" if you want, but everyone knows what the actual winter was. In general, over a span of 10 years, the line is UP.

>Also I find it interesting that your argument seems to boil down to “I’m smart because line goes up, you’re dumb because you think line goes down.” Everyone clearly can see what would happen if the line went up; I just, looking at the broad history and totality of factors (that I’m aware of), don’t think it’s inevitable.

The crazy thing is, it's true. I never said that the line going up is inevitable. I said that's the most probable outcome, and you are dumb if you don't acknowledge the most probable outcome. There's no logical way around this. You can twist my argument into something that looks strange or stupid, but there's no logical counter to what I said, because it is factually the best answer.

>We literally stop progress all the time, every time we choose not to invest in something, crypto progress slowed from its height, Vr progress, green energy, I’d argue it’s relatively few technologies that progress forever.

You can't stop it. It can stop on its own, but you can't actually put your hand in front of it to stop it. That's what I mean. Nobody is choosing to stop progress, and nobody really has that choice.

That being said, you're right: no technology can progress forever. There is an upper bound. But AI? What's the upper bound? Do we have examples of the upper bound of intelligence? Do things physically exist in reality that we can use as physical examples of intelligence, to measure how far, in physical actuality, we can go with AI?

No. No such examples exist. LLMs are the forefront of intelligence. There is nothing in reality more intelligent than LLMs, and LLMs represent the physical limit in terms of evidence. Or is there something I'm missing here?

Yeah, for certain things like space travel it's possible we're hitting upper bounds, because we don't have physical examples of those technologies.

But again: intelligence? Do we have examples? What is the upper bound? Why don't you kick that brain (hint) into gear and think about it? One of the most realistic predictions of a continued upward trend in technology is in AI, BECAUSE a PHYSICAL ACTUALITY of what we want to achieve both EXISTS and is reading this comment right now.

So we have a trendline that points up, and the actuality of what we want to achieve ALREADY exists. What is the most probable bet, the one you can't refuse to acknowledge? The logic is inescapable. You must consider the outcome that AI continues to progress, as that is the most likely outcome.

I'll grant you that AI stalling and hitting another winter is not at so low a probability that we can't consider it. But most of HN is claiming we've 100% hit a wall when all the evidence says otherwise. In actuality, another AI winter is the lower-probability bet. Wait 10 years, come back to this comment, and we'll see if you're right.



I think VR is a great example of a technology we are currently choosing to stop. Very similarly with AI, all evidence suggests we’ll hit a cost/benefit wall before we get to superintelligent AI, much like the abandonment of VR progress currently in the works.

Contradictorily, though, I am near certain we will declare victory on AGI much sooner than 10 years from now. OpenAI’s contract with Microsoft nearly requires it, and Sam Altman recently said that by reasonable measures of 5 years ago, ChatGPT 4 is AGI. In some sense that may be the best evidence that things are stalling.

But really, 10 years from now, either one of us could declare victory, and we’d probably be right.


So you agree. And your conclusion looks like it's coming from the fact that the trendline goes up. Clearly, Sam Altman saying garbage and some contract with Microsoft don't mean shit unless there are trendlines behind them to back them up.


I disagree that progress will be meaningful, but I’m not stupid enough to think anyone will be able to agree on a definition of “meaningful”.


Then define it as what has already happened. If the trendline continues, the upward progress in the next decade will be as meaningful as we consider the last decade's to be.


I think that as more and more people offload their thinking onto LLMs, we are going to hit a plateau. Innovation will stall and maybe even stop, because LLMs need constant new input to improve, and we will no longer be producing humans who create high-quality things for LLMs to use as high-quality inputs.

Do you think constant growth is more or less likely than the situation that I outline?


The Stack Overflow Developer Survey suggests we're going to reach peak "offloading their thinking" sometime before a majority of people are doing it. It's going to be disastrous for those so afflicted, but it's not going to eliminate the production of training data.


Impossible to measure. Anyone can declare victory and find evidence to support it.


I believe there are still ways to engage in conversation and debate in good faith, and to match and rank things based on qualitative evidence. We may not be able to measure it, but most people can see a rough line of overall progress.


“but most people can see a rough line of overall progress.”

This idea is the core of my argument: the bias of what you can see is creating a false sense of progress. My core argument is that progress is an asymptote, so you might say I loosely agree with you (yes, of course there are always optimizations you can eke out). But at what cost? And is the asymptote approaching something that can solve all problems in theory but not in practice, getting better and better at solving problems in a laboratory, or at solving problems we already know the answer to, but never getting serious traction at solving novel problems or working in the real world (outside its core skill set: generating text)?


We can never know when an asymptote will occur, or whether we're on one. There's simply not enough information, and noise often makes it look like we're approaching an asymptote when in reality we are not. Asymptotes only exist in hindsight. By probability, almost all time on a curve is spent NOT approaching an asymptote, so the bet with the highest probability of being true is that we are not approaching one.
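The "asymptotes only exist in hindsight" point can be sketched numerically. A minimal illustration in plain Python (the logistic S-curve as a stand-in for a technology trend is my assumption, not anything from the thread): over the entire first half of an S-curve, the log of the value correlates almost perfectly with time, so the data is nearly indistinguishable from pure exponential growth even though the plateau is imminent.

```python
import math

# Toy model: sample the pre-inflection half of a logistic S-curve,
# logistic(t) = 1 / (1 + e^-t), on t in [-6, 0].
ts = [t / 10 for t in range(-60, 1)]
log_y = [-math.log(1 + math.exp(-t)) for t in ts]  # log(logistic(t))

# Pearson correlation between t and log(value). A value near 1 means
# the early data fits "pure exponential growth" almost perfectly,
# i.e. the coming flattening is invisible from inside the curve.
n = len(ts)
mt, my = sum(ts) / n, sum(log_y) / n
cov = sum((a - mt) * (b - my) for a, b in zip(ts, log_y))
var_t = sum((a - mt) ** 2 for a in ts)
var_y = sum((b - my) ** 2 for b in log_y)
r = cov / math.sqrt(var_t * var_y)
print(round(r, 4))  # correlation above 0.99
```

Note this cuts both ways: the same fit quality means you also can't rule out that you *are* mid-S-curve, which is exactly why both sides of the thread can read the trendline as support.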

>but never gets serious traction at solving novel problems or working in the real world(outside its core skill set; generating text)

This isn't true. Transformers now power self-driving. I've already stopped using Uber in SF; all my taxi rides are driven by AI.



