Hacker News

So you agree. And your conclusion looks like it's coming from the fact that the trendline goes up. Clearly Sam Altman saying garbage and some contract with Microsoft doesn't mean shit unless there are trendlines behind them to back it up.


I disagree progress will be meaningful, but I’m not stupid enough to think anyone will be able to agree on a definition of “meaningful”


Then define it as what has already happened. If the trendline continues the upward progress in the next decade will be as meaningful as we consider the last decade to be.


I think that as more and more people offload their thinking to LLMs, we are going to hit a plateau. Innovation will stall and maybe even stop, because LLMs need constant new input to improve, and we will no longer be producing humans who create high-quality things for LLMs to use as high-quality inputs.

Do you think constant growth is more or less likely than the situation that I outline?


The Stack Overflow Developer Survey suggests we're going to reach peak "offloading their thinking" before it ever reaches a majority of people. It's going to be disastrous for those so afflicted, but it's not going to eliminate the production of training data.


Impossible to measure. Anyone can declare victory and find evidence to support it.


I believe there are still ways to engage in conversation/debate in good faith and match and rank things based on qualitative evidence. We may not be able to measure it, but most people can see a rough line of overall progress.


“but most people can see a rough line of overall progress.”

This idea is the core of my argument: the bias of what you can see is creating a false sense of progress. My core argument is that progress is an asymptote, so loosely you might say I agree with you (yes, of course there are always optimizations you can eke out), but at what cost? And what is the asymptote approaching? Something that can solve all problems in theory but not in practice? Something that gets better and better at solving problems in a laboratory, or at solving problems we already know the answer to, but never gets serious traction at solving novel problems or working in the real world (outside its core skill set: generating text)?


We can never know when an asymptote will occur or whether we're in one. There's simply not enough information: noise often makes it look like we're approaching an asymptote when in reality we are not. Asymptotes only exist in hindsight. Probabilistically, almost all time on a curve is spent NOT approaching an asymptote, so the bet with the highest probability of being true is that we are not approaching one.

>but never gets serious traction at solving novel problems or working in the real world(outside its core skill set; generating text)

This isn't true. Transformers now power self-driving. I've already stopped using Uber in SF; all my taxi rides are driven by AI.



