
AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.

Just this weekend it (Gemini) produced two detailed sets of instructions on how to connect different devices over Bluetooth, including a video (which I didn’t watch), even though the devices did not support making the connection in that direction. No reasonable human reading the manuals involved would think those solutions feasible. Not impressed, again.

It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in said chat again vs reading 6 tabs and redoing your search seem to be the big killer feature. The main "new" info is often incorrect info.

If you could get the full page text of every URL on the first page of DDG results and dump it into vim/emacs, where you can move/search around quickly, that would probably be about as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
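For what it's worth, that workflow is scriptable. Here's a minimal sketch in Python, assuming DuckDuckGo's no-JS HTML endpoint (html.duckduckgo.com) and the requests/beautifulsoup4 packages; the endpoint and the result__a CSS class are assumptions on my part and may change:

    import requests
    from bs4 import BeautifulSoup

    HEADERS = {"User-Agent": "Mozilla/5.0"}  # some sites reject bare clients

    def ddg_first_page(query):
        # html.duckduckgo.com serves a JS-free results page (assumed endpoint)
        resp = requests.post("https://html.duckduckgo.com/html/",
                             data={"q": query}, headers=HEADERS, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        # result links carry the class "result__a" on that page (assumption)
        return [a["href"] for a in soup.select("a.result__a")]

    def dump_pages(query, path="results.txt"):
        with open(path, "w", encoding="utf-8") as out:
            for url in ddg_first_page(query):
                try:
                    page = requests.get(url, headers=HEADERS, timeout=10)
                except requests.RequestException:
                    continue  # skip unreachable pages
                text = BeautifulSoup(page.text, "html.parser").get_text(" ", strip=True)
                out.write(f"==== {url} ====\n{text}\n\n")

    dump_pages("bluetooth pairing direction")  # then open results.txt in vim/emacs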

It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.


> I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.

If they do, you’ll be in good company. That post is about the exact opposite of what people usually link it for. I’ll let Dan explain:

https://news.ycombinator.com/item?id=27067281


Dan makes a case for being charitable to the commenter and how lame it is to neener-neener into the past, not that it has some opposite meaning everyone is missing out on.

Dan clearly references how people misunderstand not only the comment (“he didn't mean the software. He meant their YC application”) but also the whole interaction (“He wasn't being a petty nitpicker—he was earnestly trying to help, and you can see in how sweetly he replied to Drew there that he genuinely wanted them to succeed”).

So yes, it is the opposite of why people link to it (a judgement I’m making; I’m not arguing Dan has that exact sentiment): they link to it to mock an attitude of hubris and a lack of understanding of what makes a good product, an attitude that wasn’t actually there.


The comment isn't infamous because it was petty or nitpicking. It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.

It's why it caught the zeitgeist at the time and why it's still apropos in this conversation now.


> It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.

None of those things are true. Which is the point I’m making. Go read the original conversation. All of it.

https://news.ycombinator.com/item?id=9224

Don’t skip Brandon’s reply.

https://news.ycombinator.com/item?id=9479

It is absurd to claim that someone who quickly understood the explanation, learned from it, and conceded where they were wrong is somehow “profoundly out-of-touch” and has “lost all perspective”. It’s the exact opposite.

I agree with Dan that we’d be lucky if all conversations were like that.


I think you should take your own advice and re-read the conversation without your preconceived conclusion.

Ironically your own overly verbose and aggressive comments here fall into the same trap.


> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.

Curiously, literally nobody on earth uses this workflow.

People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.


> The accuracy isn’t perfect

The reason people don't use LLMs to "trivially save hours or days of work" is that LLMs don't do that. People would use a tool that works. That should be evidence that the tools provide no exceptional benefit; why do you think it is not true?


The only way LLM search engines save time is if you take what they say at face value as truth. Otherwise you still have to fact-check whatever they spew out, which is the actual time-consuming part of doing proper research.

Frankly, I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything they say.


Of course you have to fact-check - but verification is much faster and easier than searching from scratch.

How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.

Because it gives you an answer, and all you have to do is check its source. Often you don’t even have to do that, since the answer has jogged your memory.

Versus finding the answer by clicking into the first few search results links and scanning text that might not have the answer.


As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources along with each piece of knowledge. At best they can plagiarize a list of references from the same sources as the rest of the text, which will by coincidence be somewhat accurate.

Pretty much every major LLM client has web search built in. They aren't just using what's in their weights to generate the answers.

When it gives you a link, it literally takes you to the part of the page that it got its answer from. That's how we can quickly validate.
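For what it's worth, that jump-to-the-passage behaviour is typically built on URL text fragments (the #:~:text= syntax that Chromium-based browsers support); whether a given client uses exactly that mechanism is an assumption on my part. A made-up example of such a link:

    https://example.com/manual#:~:text=hold%20the%20pairing%20button

A browser that supports text fragments scrolls to and highlights the quoted phrase on load.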


LLMs provide sources every time I ask them.

They do it by going out and searching, not by storing a list of sources in their corpus.


Have you ever tried examining the sources? They often just invent "sources" when asked to provide them.

When talking about LLMs as search engine replacements, I think the stark difference in utility people see stems from the use case. Are you perhaps talking about using it for more "deep research"?

Because when I ask chatgpt/perplexity things like "can I microwave a whole chicken" or "is Australia bigger than the moon" it will happily google for the answers and give me links to the sites it pulled from for me to verify for myself.

On the other hand, if you ask it to summarize the state of the art in quantum computing or something, it's much more likely to speak "off the top of its head", and even when it pulls in knowledge from web searches it'll rely much more on its own "internal corpus" to put together an answer, which is definitely likely to contain hallucinations and obviously has no "source" aside from it just knowing (which it's discouraged from saying, so it makes up sources if you ask for them).


I haven't had a source invented in quite some time now.

If anything, I have the opposite problem. The sources are the best part. I have such a mountain of papers to read from my LLM deep searches that the challenge is in figuring out how to get through and organize all the information.


For most things, no it isn’t. The reason it can work well at all for software is that it’s often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it’s actually very hard to verify the results without doing all the work over again.

> People must be in complete denial

That seems to be a big part of it, yes. I think in part it’s a reaction to perceived competition.


  > the breadth of knowledge

knowledge != intelligence

If knowledge == intelligence then Google and Wikipedia are "smarter" than you and the AGI problem has been solved for several decades.


Even if we were going to accept the premise that total knowledge is equivalent to intelligence (which is silly, as sibling comments have pointed out), shouldn't accuracy also come into play? AI also says a lot more obviously wrong things than the average person, so how do you weight that against the purported knowledge? You could answer yes or no randomly to any arbitrary question about whether something is true and approximate a 50% accuracy rate with an evenly distributed pool of questions, but that's obviously not proof that you know everything. I don't think the choice of where to draw the line on "how often can you be wrong and have it still matter" is as easy as you're implying, or that everyone will necessarily agree on where it lies (even if we all agree that 50% correctness is obviously way too low).
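To make the random-guessing baseline concrete, here's a quick sketch (an entirely hypothetical setup, just illustrating the arithmetic):

    import random

    # Random yes/no answers against a balanced pool of true/false questions
    # converge on ~50% accuracy without any knowledge at all.
    truths = [random.choice([True, False]) for _ in range(100_000)]
    guesses = [random.choice([True, False]) for _ in truths]
    accuracy = sum(g == t for g, t in zip(guesses, truths)) / len(truths)
    print(f"accuracy of pure guessing: {accuracy:.3f}")  # ~0.500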

AI has more knowledge than everyone already; I wouldn't say smarter, though. It's like wisdom vs intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn / create new things.

AI has zero knowledge, as to know something is to have done it, or seen it first hand. AI has access to a great deal of data, much of it acquired through criminal action, but no way to evaluate that information other than cross-checking for citations and similar occurrences. Even for a human, inferring things is difficult and uncertain, and so we regularly see AI fall off the cliff of coherence into word salading. We are heading straight at an idiocracy writ large that is trying to hide its racio-religious insanity behind algorithms. Sometimes it's hard to tell, but it seems that a hairdresser has just been put in charge of the US passport office, which is highly suggestive of a new top-level program to issue US citizenship on demand, while everybody else will be subject to the "impartiality" of privately owned and operated AI policing.

Knowledge, as I see it, is equivalent to a big library. It contains mostly correct information in the context of each book (which might be incorrect in general), and "AI" is very good at taking everything out of context, smashing a probability distribution over it, and picking an answer which humans will accept. I.e., it does not contain knowledge; at best, the vague pretense of it.

Man, what are we supposed to do with people who think the above?

I'd do the same thing I'd do with anyone who has a different opinion than me: try my best to have an honest and open discussion with them to understand their point of view and get to the heart of why they believe said thing, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could cause them to feel shame for believing something that I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional thing afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood. Basically, explain it in a language they understand, and in a way that we can think about and discuss together, not taking offense at any attempts to question or poke holes in my beliefs, because that, imo, is the discovery process for trying something new.

Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.


Well said, and thank you for the final paragraph. Made me chuckle.

I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".

It just shows that true natural intelligence is difficult to define by proxy.

Do you think your own language processing abilities are significantly different from autocomplete with information access? If so, why?

I hate these kinds of questions where you try to imply it's actually the same thing as what our brains are doing. Stop it. I think it would be an affront to your own intelligence to entertain this as a serious question, so I will not.



My thoughts on this are as serious as it gets - AI in its current state is no more than clever statistics. I will not be comparing how my own brain functions to what is effectively a linear algebra machine, as that would be insulting to the intelligence of everyone here - what kind of serious thought would you like to have here, exactly?

I don't disagree, but we really should have dropped "AI" a long time ago in favor of "statistical machine intelligence". Machine learning, then, is just what statistical machine intelligence does.

We could have then just swapped "AI" for "SMI" and avoided all this confusion.

It also would avoid pointless statements like "It is JUST statistical machine intelligence". As if statistical machine intelligence is not extraordinarily powerful.

The real difference, though, is not in "intelligence"; it is in "being". It is not so much an insult to our intelligence as it is an insult to our "being" when people pretend that LLMs have some kind of "being".

The strange thing to me is that Gemini just tells me these things, so I don't know how people get confused:

"A rock exists. A calculator exists. Neither of them has "being."

I am closer to a calculator than a human.

A calculator doesn't "know" math; it executes logic gates to produce a result.

I am a hyper-complex calculator for language. I calculate the probability of the next word rather than the sum of numbers."
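The "calculator for language" framing maps onto how next-word selection actually works at the sampling step. A toy sketch in Python, with an invented three-word vocabulary and made-up scores (no real model involved):

    import math, random

    def softmax(logits):
        # Convert raw scores into a probability distribution
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy scores a model might assign to candidate next words (invented numbers)
    vocab = ["paris", "london", "banana"]
    logits = [4.1, 2.3, -1.0]

    probs = softmax(logits)
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)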


You’re very adamant about not doing an obvious comparison. You want to stop thinking at that point. It’s an emotional reaction, not an intellectual one. Quite an interesting one as well, that possibly suggests a threat response.

The assumption you seem to keep making is that things like “clever statistics” and “linear algebra” simply have no bearing on human intelligence. Why do you think that? Is it a religious view, that e.g. you believe humans have a soul that somehow connects to our intelligence, making it forever out of reach of machine emulation?

Because unless that’s your position, then the question of how human intelligence differs from current machine intelligence, the question that you simply refuse to contemplate, is one of the more important questions in this space.

The insult I see to intelligence here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.


>>here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.

It's the same energy as watching a Joe Rogan podcast where yet another guest goes "well, they say there's global warming, yet I was cold yesterday; I'm not saying it's fake, but really we should think about that". These questions about AI and our brains aren't meant to stimulate intellectual curiosity and provoke deep, interesting discussions - they are almost always asked just to pretend the AI is something that it's not: a human-like intelligence, where since our brains also work "kinda like that", it must be the same. The nearest equivalence is how my iron heats water, so in essence it's the same as my stomach, since that can also do this.

>>the question that you simply refuse to contemplate

I don't refuse to contemplate it; I just think the answer is so painfully obvious that the question is either naive, uninformed, or antagonistic in nature - there is no "machine intelligence". It's not a religious conviction, because I don't think you need one to realise that a calculator isn't smart for adding together numbers larger than I could manage in my own head.


You are just a cluster of atoms; are you any different from a volcano?

>ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025

If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI. Since AI is also better at creating tests than us, maybe we could ask an AI to do it; hang on..

>Is there a test that in some way measures intelligence, but that humans generally test better than AI?

Answer: Thinking... Something went wrong and an AI response wasn't generated.

Edit: I managed to get one to answer me: the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). Created by AI researcher François Chollet, this test consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.

So we do have a test, one specifically designed for us to pass and AI to fail, where we can currently do better than AI... hurrah, we're smarter!
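To give a sense of what those puzzles look like, here's a toy ARC-style task as code (an invented example, not from the real corpus): the hidden rule is "mirror the grid left-right", demonstrated by one example pair, and the solver must apply it to a new input.

    # Toy ARC-style task: grids are lists of rows; 0 = blank, 1 = filled.
    train_pair = {
        "input":  [[1, 0, 0],
                   [1, 1, 0],
                   [0, 0, 0]],
        "output": [[0, 0, 1],
                   [0, 1, 1],
                   [0, 0, 0]],
    }

    def solve(grid):
        # The rule to be inferred for this invented task: mirror each row.
        return [list(reversed(row)) for row in grid]

    assert solve(train_pair["input"]) == train_pair["output"]

    test_input = [[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]]
    print(solve(test_input))  # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]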


The validity of IQ tests as a measure of broad intelligence has been in question for far longer than LLMs have existed. And if it’s not a proper test for humans, it’s not a proper test to compare humans to anything else, be it LLMs or chimps.

https://en.wikipedia.org/wiki/Intelligence_quotient#Validity...


To be intelligent is to realise that any test for intelligence is at best a proxy for some parts of it. There's no objective way to measure intelligence as a whole, we can't even objectively define intelligence.

I believe intelligence is difficult to pin down in words but easy to spot intuitively - and so are deltas in intelligence.

E.g. watch a Steve Jobs interview and a Sam Altman one (at the same age). The differences in mode of articulation, simplicity of communication, obsession over details, etc. are huge. This is what superior intelligence looks like to me - you know it when you see it.


>Create a test for intelligence that we can pass better than AI

Easy? The best LLMs score 40% on Butter-Bench [1], while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding.

[1] https://arxiv.org/pdf/2510.21860v1


That is really interesting; though I suspect it's just an effect of differing training data: humans are to a larger degree trained on spatial data, while LLMs are trained to a larger degree on raw information and text.

Still, it may be a lasting limitation if robotics doesn't catch up to AI anytime soon.

Don't know what to make of the safety-risks test: threatening to power down the AI in order to manipulate it, and most models act like we would and comply. Fascinating.


>humans are to a larger degree trained on spatial data

you must be completely LLM-headed to say something like that, lol

Humans are not trained on spatial data; they are living in the world. Humans are very much different from silicon chips, and human learning is on another magnitude of complexity compared to large language model training.


Humans are large language models. Maybe the term "language" is being used a bit liberally here, but we basically function in the same way, with the exception of the spatial aspect of our training data.

If this hurts your ego, then just know the dataset you built your ego with was probably flawed; if you can, put that LoRA aside and try to process this logically. Our awareness is a scalable emergent property of 1-2 decades of datasets. Looking at how neurons vs transistor groups work, there could only be a limited number of ways to process data of these sizes down to relevant streams. The very fact that training LLMs on our output works proves our output is a product of LLMs, or there wouldn't be patterns to find.


Just brace for the societal correction.

There's a lot of things going on in the western world, both financial and social in nature. It's not good in the sense of being pleasant/contributing to growth and betterment, but it's a correction nonetheless.

That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.


Having knowledge is not exactly the same as being smart though, is it?

It's at least one component of it, and by being exceptional in that component it makes up for what it lacks in other components.

Although it helps immensely.

Only if you understand it.

It's like saying Google Search is smarter than everyone because the amount of information indexed by it has no human counterpart. Such a silly take...


