
The LLM does democratize knowledge, but you have to be the user of the LLM, not the target of the user of the LLM.

The LLM is the most powerful knowledge tool ever to exist. It is a librarian in your pocket. It is an expert in everything: it has read everything, and can answer your specific questions on any conceivable topic.

Yes, it has no concept of human value, and the current generation hallucinates and/or is often wrong, but the responsibility for the output should be the user's, not the LLM's.

Do not let these tools be owned, crushed and controlled by the same people who are driving us towards WW3 and cooking the planet for cash. This is the most powerful knowledge tool ever. Democratize it.



Asking a statistics engine for knowledge is so unfathomable to me that it makes me physically uncomfortable. Your hyperbolic and relentless praise for a stochastic parrot or a "sentence written like a choose your own adventure by an RNG" seems unbelievably misplaced.

LLMs (current-generation, consumer UI/UX ones at least) will tell you all sorts of incorrect "facts", just because "these words go next to each other lots", with a great amount of gusto and implied authority.


My mind is blown that someone gets so little value out of an LLM. I get over software engineering stumbling blocks much faster by interrogating an LLM's knowledge about the subject. How do you explain that added value? Are you skeptical that I am actually moving and producing things faster?


My mind is also blown by how much people seemingly get out of them.

Maybe they’re just orders of magnitude more useful at the beginning of a career, when it’s more important to digest and distill readily-available information than to come up with original solutions to edge cases or solve gnarly puzzles?

Maybe I also simply don’t write enough code anymore :)


I'm very far from the beginning of my career, but maybe I see a point in your comment, because I frequently try technologies that I am not an expert in.

Just yesterday, I asked if TypeScript has the concept of a "late" type, similar to Dart's, because I didn't want to annotate a type with "| null" when I knew it would be bound before it was used. Searching for that information would have taken me much longer than asking the LLM, and the LLM was able to frame the answer from a Dart perspective.

I would say that that information is neither "important to digest" nor "readily available."
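
For anyone curious about the answer: as far as I know, the closest thing TypeScript offers to Dart's "late" is the definite assignment assertion ("!"). A rough sketch, with made-up names:

    // Dart:        late String user;   (initialization checked at runtime)
    // TypeScript:  the "!" assertion tells the compiler the field will be
    // assigned before use, so no "| null" annotation is needed -- but the
    // promise is on you; it is not checked at runtime.
    class Session {
      user!: string;              // no "string | null" needed

      init(name: string): void {
        this.user = name;         // assigned here, before any reads
      }

      greet(): string {
        return "hello, " + this.user;
      }
    }

    const s = new Session();
    s.init("ada");
    console.log(s.greet());

    // Also works on variables, e.g. when assignment happens somewhere the
    // compiler can't track, such as inside a callback:
    let token!: string;
    ["abc"].forEach(t => { token = t; });
    console.log(token.length);    // compiles because of the "!" above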


Ah yes, gathering information in a particular unfamiliar area probably describes it better.

For me, it's been able to give very good answers when they were within the first few Google results for the proper search terms (but the value is in giving you those terms in the first place!).

For questions from my field, it's been wildly hallucinating and producing half-truths, outdated information, or complete nonsense. Which is also fair, because the documentation where the answers could be found is often proprietary, and even then it's either outdated or outright wrong half of the time :)


This happened to me looking up an obscure C library. It just confidently made up a function that didn't actually exist in the library. It got me unstuck, but you can really fuck yourself if you trust it blindly.


I agree with you but at what point does it change? Aren’t we all just stochastic parrots? How do we ourselves choose the next word in a sentence?


In my view, one big lesson from LLMs is that yes, more often than not we are just stochastic parrots. And more often than not, that's enough!

But sometimes we're more than that: Some types of deep understanding aren't verbal or language-based, and I suspect that these are the ones that LLMs will have the hardest time getting good at. That's not to say that no AI will get there at all, but I think it'll need something fundamentally different from LLMs.

For what it's worth, I've personally changed my mind here: I used to think that the level of language proficiency that LLMs demonstrate easily would only be possible using an AGI. Apparently that's not the case.


"If you wish to make an apple pie from scratch, you must first invent the universe." (Carl Sagan)

We can generate thoughts that are spatially coherent, time-aware, and validated for correctness, along with a whole bunch of other qualities that LLMs lack.

Why would LLMs be the model for human thought, when they do not come close to the thinking humans do every minute of every day?

"Aren't we all just stochastic parrots?" is the kind of question that requires answering an awful lot about the universe before you get to an answer.


We use languages to express ideas. Sentences are always subordinate to the ideas. It's very obvious when you try to communicate in another language you're not fluent in: you have the thought, but you can't find the words. The same thing happens when writing code, taking ideas from the business domain and translating them into code.


God dammit please stop comparing these things to brains. Stop it. It's not even close.


> but the responsibility for the output should be the user's, not the LLM's.

The current iteration of the internet (more specifically, social media) has used the same rationale for its existence, but at a societal level, people have proven themselves too irresponsible and/or lazy to think for themselves rather than be fed by the machine. What makes you think LLMs are going to do anything but make the situation worse? If anything, they're going to reinforce whatever biases were baked into the training material, which is itself now legally dubious.


> and can answer your specific questions on any conceivable topic

Yeah, I mean, so can I, as long as you don't care whether the answers you receive are accurate or not. The LLM is just better at pretending it knows quantum mechanics than I am.


Even if a human expert responds about something in their domain of expertise, you have to think critically about the answer. Something that fails 1% of the time is often more dangerous than something that fails 10% of the time.

The best way to use an LLM for learning is to ask a question, assume it's getting things wrong, and use that assumption to probe your own knowledge, which you can then use to iteratively probe the LLM's. Human experts don't put up with that and are a much more limited resource.


For a librarian, they're confidently asserting factual statements suspiciously often, and referring me to the primary literature shockingly rarely.


In other words they behave like a human?



