Hacker News

> What jobs will AI create that AI cannot itself do?

Part of the problem is that the definition of "AI" is extremely nebulous. In your case, you seem to be talking about an AGI which can self-improve, while also having some physical interface letting it interact with the real world. That reality may be 6 months away, 6 years away, or 600 years away.

Given the current state of LLMs, it's much more likely they will create jobs, or change workflows in existing jobs, rather than wholesale replace humans. The recent public spectacle of Microsoft's state-of-the-art GitHub Copilot agent[0] shows we're quite far from AI agents replacing even very junior positions in knowledge work.

[0] https://news.ycombinator.com/item?id=44050152



In a sense I think this is my question: is anyone writing these think pieces providing specific definitions?


Yeah, but LLMs won't stay at their current state. I don't understand this argument. Is there any particular reason to believe they'll stop getting better at this point?


Yes, I do think there is: LLMs are a paradigm with certain limited functionality, not a path to AGI. I actually find this assumption of constant, never-ending improvement of LLMs interesting. Almost all technology has diminishing returns in terms of improvements, so why would LLMs be the exception? Why not believe that all future iterations of LLMs will be gradual improvements on current behavior, rather than that LLMs can necessarily become super-intelligent AGI?


Most of the adults alive today have lived through a period when CPU speeds doubled every 12–24 months (see Moore's law). This has conditioned many to believe that all information technologies improve exponentially when, in reality, most do not.
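(For a sense of how strong that conditioning is, here's a quick sketch of what compound doubling implies. The 18-month doubling period below is an illustrative assumption within the 12–24 month range mentioned above, not a claim about any specific technology.)

```python
def growth_factor(years: float, doubling_months: float) -> float:
    """Total improvement factor if performance doubles every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# Doubling every 18 months compounds to roughly a 10,000x speed-up
# over 20 years -- the kind of curve that shapes exponential intuitions.
print(round(growth_factor(20, 18)))
```

Very few technologies sustain anything like that; most follow an S-curve and flatten.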


It kind of reminds me of the solar-power growth chart one institute publishes: every year they predict that growth will definitely flatten from here on out, and every year it fails to do so.

I don't have any reason to expect LLMs to be flattening or for the technology to be capped out. In fact, I have a lot of reasons to believe the opposite such as the plethora of papers and proposed techniques that haven't even been attempted at scale. (QuietSTaR my beloved, you will have your day.) It just doesn't look like a mature technology, rather the opposite. So my baseline assumption, on that evidential basis, is that number keeps going up, and if this prediction results in a strange-looking future then that says more about my taste than about the prediction.


The question, though, is the rate of growth. To get the kind of “end of scarcity AGI” advertised by Sam Altman probably requires continuous year-over-year exponential growth. Will that happen in the normal course of LLM research? Or will progress be more gradual, with the exponential growth coming from future paradigm shifts? I’m arguing the second.


I guess I'm arguing the first, yeah. LLMs are not anywhere close to tapped out as a technology.


> Is there any particular reason to believe that they'll stop getting better at this point?

Are they better now? When ChatGPT came out I could ask it for the lyrics to "Come Together" by The Beatles. Now the response is:

> Sorry, I can't provide the full lyrics to "Come Together" by The Beatles. However, I can summarize the song or discuss its meaning if you'd like.

You can argue that ChatGPT "knowing" that 9.11 is less than 9.9 or counting the 3 r's in strawberry means it's better now, but which am I more likely to ask an LLM?
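(Both of those checks are trivial in ordinary code, which is part of why they became famous stumbles for LLM arithmetic and tokenization; a quick sanity check:)

```python
# The two trivia checks from the comment above, verified in plain Python.
assert 9.11 < 9.9                      # 9.11 is numerically less than 9.90
assert "strawberry".count("r") == 3    # three r's in "strawberry"
print("both checks pass")
```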


I'm sure it still knows; it's just been explicitly told not to divulge the lyrics for copyright reasons. IIRC, song lyrics are even explicitly forbidden in the system prompt.



