
We have not yet entered the AI age, though I believe we will.

LLMs are not AI. Machine learning is more useful. Perhaps they will evolve or perhaps they will prove a dead end.



> LLMs are not AI. Machine learning is more useful.

LLMs are a particular application of machine learning, and as such they both benefit from and contribute to general machine learning techniques.

I agree that LLMs are not the AI we all imagine, but the fact that they broke a huge milestone is a big deal - natural language used to be one of the metrics of AGI!

I believe it is only a matter of time until we get to multi-sensory, self-modifying large models which can both understand and learn from all five human senses, and maybe even some senses we have no access to.


LLMs have shown no signs of understanding.


> natural language used to be one of the metrics of AGI

what if we chose the wrong metric there?


I don't think we have. Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.


> Semantic symbolic computation on natural languages still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.

But they do close a big gap - they're capable of "understanding" fuzzy, ill-defined sentences and "inferring" the context, to the point where they can help formalize it into a format parsable by another system.
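A minimal sketch of that hand-off (call_llm is a hypothetical stand-in for any chat-completion API, stubbed with a canned reply so the snippet runs):

    import json

    # Hypothetical stub for a real LLM call; the canned reply shows the
    # kind of output the prompt asks for.
    def call_llm(prompt: str) -> str:
        return ('{"action": "schedule_meeting", "day": "tuesday",'
                ' "time_hint": "afternoon", "attendees": ["sam"]}')

    fuzzy = "can u set smth up w/ sam sometime tues afternoon-ish?"
    prompt = ("Extract the intent from this message as JSON with keys "
              "action, day, time_hint, attendees:\n" + fuzzy)

    # The LLM handles the fuzzy part; a plain JSON parser (the "other
    # system") takes over once the output is structured.
    structured = json.loads(call_llm(prompt))
    print(structured["action"], structured["attendees"])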


The technique itself is good. And paired with a good amount of data and loads of training time, it’s quite capable of extending prompts in a plausible way.

But that’s it. Nothing here has justified the huge amounts of money still being invested. It’s nowhere near as useful as mainframe computing or as attractive as mobile phones.


They do not understand. They predict a plausible next sequence of words.
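Concretely, the core step is something like this toy sketch (vocabulary and scores invented for illustration): the model scores every candidate next token, and sampling from the softmax of those scores yields a plausible continuation, with no notion of truth anywhere in the loop.

    import math, random

    # Toy next-token step: logits the model assigns to candidate tokens
    # after, say, "The cat sat on the". Values are made up.
    logits = {"mat": 2.0, "moon": 0.5, "carburetor": -1.0}
    temperature = 0.8

    # Softmax with temperature turns scores into probabilities.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    probs = {t: w / total for t, w in weights.items()}

    # Sample the next token; real LLMs repeat this one token at a time.
    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)       # ~{'mat': 0.85, 'moon': 0.13, 'carburetor': 0.02}
    print(next_token)  # usually "mat": plausible, not "understood"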


I don't disagree with the conclusion, I disagree with the reasoning.

There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding" if it was the most efficient way to predict them.


The evidence so far is a definite no. LLMs will happily produce plausible gibberish, and are often subtly or grossly wrong in ways that betray a complete lack of understanding.


We keep moving the goalposts...


The goal remains the same - AGI is what we see in sci-fi movies: an infallible, human-like intelligence that has access to infinite knowledge, can navigate it without fail, and is capable of performing any digital action a human can.

What changed is how we measure progress. This is common in the tech world - sometimes your KPIs become their own goal, and you must design new KPIs.

Obviously NLP was not a good enough predictor of progress towards AGI, and we must find a better metric.


Maybe the progress is linear enough to figure out where the goalposts will be 10, 20, or 50 years from now.



