
Or maybe we've just moved on to the "then they laugh at you" part, which will continue for a decade or so before any actual advancements in AI happen beyond the creation of larger matrices of inputs to outputs, with equally poor top-level performance. I don't think anybody is laughing at LLMs or their potential, but rather laughing at and mocking the AI doomers who say everybody's job is kaput because LLMs can spit out empty copywriting and junior-level code samples.


People's freakouts are completely reasonable. The pace of development in this field is unreal and shows few signs of slowing down. The primary blocker to adoption in many fields is just software integration, which will come within a few years at most. At that point we'll see what the limits of LLM capabilities are, and which professions are no longer economically viable.


My position here is certainly debatable, but IMO the pace of development in AI as a whole is actually pretty lackluster; it only appears monumental because of the massive influx of VC dollars (and the development of computing hardware!) that allows for these increasingly monolithic models. Training larger and larger models doesn't really represent "development" to me in the sense I usually mean. The largest hurdle with AI, and one that LLMs have not solved and likely never will, is that it doesn't truly understand anything at all, but is instead (in simple terms) a large matrix of inputs to outputs. The areas where current AI falls flat on its face are areas it will never be able to encroach on as it works today, no matter how many TPU hours you spend on training.


Memory, internal iteration, and scratchpads can be and have been added. Foundation models don't include these yet, but it's not unexplored territory. Online learning is more difficult, but I'd argue that various fine-tuning and control approaches are slowly working toward it.

So what is a brain other than a very complex non-linear function which takes inputs, iterates on them, integrates them with memory and eventually generates output?

> is that it doesn't truly understand anything at all, but is just instead (in simple terms) a large matrix of inputs to outputs

The first step to defeating the tiger is to realize that it cannot hurt you, for it is only made of simple atoms.


> So what is a brain other than a very complex non-linear function which takes inputs, iterates on them, integrates them with memory and eventually generates output?

A thing that processes and embodies sensory input, makes informed guesses about other people's intentions and values, grows tumors, worries about the dirty dishes.... How many other "complex non-linear functions" are made of meat?


>> is that it doesn't truly understand anything at all, but is just instead (in simple terms) a large matrix of inputs to outputs

> The first step to defeating the tiger is to realize that it cannot hurt you, for it is only made of simple atoms.

This is now my favorite rebuttal when people claim LLMs are simply some form of math.


When a person uses the phrase "No true understanding" I can tell they don't know what they are talking about at all.

It can engage with, use, explain, transform, and apply highly sophisticated concepts, philosophies, styles, and models of the world in writing. If that isn't understanding...


There's a distinct difference between asking somebody to write/copy/show a complex concept and understanding it. What I don't understand (get it?) is how people still conflate an LLM's behavior with true understanding of concepts when they're obviously not the same thing.


Please provide the definition of "true understanding." I think the only good argument here is that it lacks "lived experiences" and has synthetic understanding. That being said, have you been to a university? Profs seem to be entirely based on a synthetic understanding of the world anyways.


Indeed.

It can also write non-trivial code that works. There's a saying "if you can't code it, you don't know it". Conversely, if you can code it, you do understand it, at some functional level.


"No true understanding" is the new "no true Scotsman." If you're still repeating this line in mid-2023, you're not meaningfully engaging with the technological developments that are occurring in front of your eyes.


Not at all. Being able to give an example of something, or to spit out something that resembles a concept, is not the same as understanding it, at least in my opinion. The keyword is true understanding, though I'll concede that it arguably has a basic understanding of such concepts.


You need to define "true understanding" in some concrete, measurable way, or accept that you're just using it to avoid thinking clearly about AI capabilities.


The difficulty with that is that trying to define "true understanding" crosses deeply into the philosophical realm. Moreover, a lack of ability to define a term does not invalidate my position.

That said, I'll instead present a question intended to display what I believe is the difference between the two. If I write an example of an integral, it clearly shows that I know what an integral is, but does that mean that I understand integrals? Even if I could concisely answer any question you asked about integrals (i.e. an all-knowing AI), that is not the same as my definition of "truly understanding" integrals.

To pose another philosophical question which highlights what I believe it is to "truly understand" something: does the Library of Babel understand? I would say that despite it having the answer to any question you ask, it does not "truly understand" anything. I wish I could put the difference between these two into concise wording, but alas I've been unable to quite put my finger on it despite having spent a good part of the day thinking about it.
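To make the integral example concrete (my own illustration, not the commenter's): one can produce a correct antiderivative purely by pattern-matching the power rule, without ever grasping that integration measures accumulated change under a curve.

```latex
% Mechanical application of the power rule:
\int x^n \, dx = \frac{x^{n+1}}{n+1} + C \quad (n \neq -1)

% e.g. a "correct answer" producible with no geometric insight:
\int x^2 \, dx = \frac{x^3}{3} + C
```

The question is whether reliably emitting the second line constitutes understanding, or merely knowing.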

Some of these concepts are explained/explored much better than I could do on the wikipedia[0] page for understanding, particularly the "Assessment" section.

[0]https://en.wikipedia.org/wiki/Understanding


What about the AP-Calc student that has "merely" practiced hundreds of integrals and is therefore equipped, not just to pass the examination, but to use integrals as part of more complex theories down the line?

If they know only enough to apply the concept successfully without knowing its fundamentals, have they "understood" it?


One reason I'm sympathetic to this view is how far behind pre-AI automation is. Whenever I work with small businesses and governments (and bigcos too, in different ways), I am amazed by how much make-work there actually is. Somebody is doing data entry but considers it valuable because some fraction of that work is identifying accuracy issues early, or translating between systems and domains. Then somebody is applying basically rote business logic to make a decision and route things to somebody else to do work. All of that could be automated. As an industry we've been automating that kind of thing for 50+ years, but there are still mountains of inefficient processes that are basically just shuffling paper. But the overhead cost of hiring someone to analyze your processes and adapt them to some kind of system, the cost of maintaining that system, and the cost of getting it wrong when the requirements missed edge cases embedded in employees' tacit knowledge, often make the project not worth it, or a failure.

If LLM capabilities just help provide better glue (or even just political/hype motivation) to enable easier integration/automation of basic white-collar workflows there could be significant disruption. There is a lot of low-hanging fruit out there ... every spreadsheet is a new potential SaaS business.


Yes, people keep saying LLM tech is laughable because it can't solve for cold-fusion or some nonsense in a single prompt.

In reality, all it has to do is replace a crap ton of white-collar "lgtm and pass it on" jobs that involve minimal transformation or logic. That's a crap ton of people.


A decade can still be quite short considering the types of problems that may need to be solved, so perhaps that is the timescale that some of the people you're referring to are thinking in?



