I thought you were going to say that now we're back to bigger-than-room-sized computers that cost many millions just to perform the same tasks we could perform 40 years ago.
I of course mean we're using these LLMs for a lot of tasks they're ill-suited to, and a clever manually coded algorithm could do better and much more efficiently.
> and a clever manually coded algorithm could do better and much more efficiently.
Sure, but how long would it take to implement this algorithm, and would that be worth it for one-off cases?
Just today I asked Claude to create a jq query that looks for objects with a certain value in one field but which lack a certain other field. I could have spent a long time trying to make sense of jq's man page; instead I spent 30 seconds writing a short natural-language description of what I was looking for, and the AI returned the correct jq invocation within seconds.
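For anyone curious, a query of that shape looks something like this; the field names below are made-up stand-ins, not the real ones:

    # Objects whose "type" is "user" but which have no "email" key at all
    # ("type", "email", and data.json are placeholders for illustration).
    jq '.[] | select(.type == "user" and (has("email") | not))' data.json

select keeps only the objects passing the filter, and has("email") | not matches objects where the key is absent entirely, not merely null.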
Claude answers a lot of the questions it gets by first writing and then running code to generate the results. Its only limitations are its access to databases and the size of its context window, both of which will improve radically over the next five years.
Just ask the LLM to solve enough problems (even new problems), cache the best answers, do inference-time compute for the rest, figure out the best/fastest implementations, and boom, you have new training data for future AIs.
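The loop being described is simple enough to sketch. A minimal version, where ask_llm and score are hypothetical stand-ins for the model call and whatever evaluation harness (tests, benchmarks) you trust, and the cache is just a dict:

    import hashlib

    def solve_with_cache(problem, ask_llm, score, cache, n_candidates=4):
        # Key the cache on the problem statement itself.
        key = hashlib.sha256(problem.encode()).hexdigest()
        if key in cache:  # "cache the best"
            return cache[key]["solution"]
        # "inference-time compute for the rest": sample several candidates...
        candidates = [ask_llm(problem) for _ in range(n_candidates)]
        # ...and keep whichever one the harness scores highest.
        best = max(candidates, key=score)
        # Cached problem/solution pairs double as training data later.
        cache[key] = {"problem": problem, "solution": best}
        return best

Everything hard lives in score, of course; "figure out the best/fastest implementations" is doing a lot of work in that sentence.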
> "Those who invalidate caches know nothing; Those who know retain data." These words, as I am told, were spoken by Lao Tzi. If we are to believe that Lao Tzi was himself one who knew, why did he erase /var/tmp to make space for his project?
-- Poem by Cybernetic Bai Juyi, "The Philosopher [of Caching]"
The LLMs are now writing their own algorithms to answer questions. It won't be long before they can design a more efficient algorithm for any feasible computational task, in a millionth of the time the best human would need.
> The LLMs are now writing their own algorithms to answer questions
Writing a Python script because it can't do math or any more complex form of reasoning on its own is not what I would call "its own algorithm". At most it's an application of existing algorithms, or calling APIs.
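To make that concrete (prompt and output invented for illustration): ask one for the 1000th prime, and the "algorithm" it writes usually amounts to this:

    # The kind of script an LLM emits for "what is the 1000th prime?":
    # no new algorithm, just a call into a library that already has one.
    from sympy import prime

    print(prime(1000))  # 7919, computed by sympy, not by the model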
LLMs are probabilistic string blenders, pulling pieces from their training set, which unfortunately comes from us, humans.
Human knowledge is a superset of the LLM knowledge pool; they can't go beyond the boundaries of their training set.
I won't go into how humans have other processes that can expand both their own and collective human knowledge, but the rabbit hole starts with "emotions, opposable thumbs, language, communication and other senses".