Hacker News | samuellevy's comments

Funny that you mention "accessible"... Because most of these components are anything but.

Modern HTML and CSS are awesome tools on their own, and are able to do so much without needing to rely on massive JavaScript bundles, but you still end up with component libraries that are <div><div><div><div> all the way down.


That's covered by the very first line of the main body of the article:

> Because a VPN in this sense is just a glorified proxy.


Imagine telling someone about your tincd setup to do NAT traversal and access your home server, and upon hearing the word "VPN" they ask what provider you use.

I feel comment-grandparent's pain.


More than just needing an oracle - the keys and the house are both physical items. There's not really any practical way for a contract on the blockchain to validate that a particular physical item is in fact the item that it purports to be.

Are these ACTUALLY the keys to this house? Are they the only set? The original set? Were the locks changed, and this set in the contract is no longer valid?

Then putting aside all of that... How do you ENFORCE a "smart contract"? Probably through... Existing contract law. Because that's what it's there for. Smart contracts are just more convoluted paper, and we can do that already with DocuSign or any number of other digital contract options - all of which provide, so far as I can tell, precisely the same level of verification that a smart contract does. The only "advantage" of a smart contract over those platforms is that the history of the "document" is more or less baked into the chain, instead of trusting that the third party platform hasn't modified it... Which they will never have any motivation to do...

People have been initialing pages to mark them as read/accepted for more years than I've been alive. In the event of a contract dispute, smart contract or not, it's going to be up to a third party (mediator, judge, etc.) to decide on resolution anyway... At which point even the exact wording of the contract may well be discarded as being unenforceable because _contracts are not above the law_.


Thinking of a real estate transaction as an exchange of physical things is already a mistake. Most people expect to take possession of a structure in most deals, but it is sort of beside the point. What you're trading is a legal filing where you go to the county recorder (most states) and just claim to own something. What are you really buying? The promise from the other guy that they won't claim to own it in the future. But, under our deeply stupid title system, there really isn't a guarantee that the seller "owns" it in the first place. All kinds of people could have claims on it.

In a legal system this vague, smart contracts simply do not have a niche.


> In a legal system this vague, smart contracts simply do not have a niche.

This is an interesting point. The way I think about this is, if we can ignore for a second the bitcoin-related baggage of smart-contracts as a concept, then there's still a lot of overlap with related concepts like open government and automated legal reasoning. So I'm curious if you think of those things as also intractable. Also, blockchain isn't some magic wand that replaces the need for other datastructures. Why should partial or even doubtful ownership be impossible to model and do secure/verifiable/conditional compute on?


> All kinds of people could have claims on it.

That is not necessarily true. Often, one or more of the closing documents addresses this very issue, attesting that there are no such known claims and/or assigning any unknown ones to you as the new owner. Liability is part of ownership, after all, and all ownership is "just a legal filing" unless it's backed by force. While it's true that a real estate transaction is not the same as a transfer of a physical thing, dismissing such transactions as fictional is a bit sophistic.


> More than just needing an oracle - the keys and the house are both physical items. There's not really any practical way for a contract on the blockchain to validate that a particular physical item is in fact the item

Responding to you but this applies to lots of stuff in this thread. Quoting wikipedia, "a smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of need for trusted intermediators, arbitration costs, and fraud losses, as well as the reduction of malicious and accidental exceptions."

How can anyone possibly object to this technology as if it were a) impossible or b) useless? In the next sentence we get into "commonly associated with cryptocurrencies", but I think the main idea is already there in the opening. There is no strict requirement for whatever implementation details that you love to hate (blockchain, digital goods, digital titles, web3, etc).


> How can anyone possibly object to this technology as if it were a) impossible or b) useless?

Because it doesn't work, nor do I believe it ever really can work, at least as it's largely advertised. I mean, you just read the description from Wikipedia and are basically saying "How can people object to this idea?" That's like reading about all the great things flying cars can do and then saying "How can anyone object to flying cars?"

The point is that I (and many others, but I'll only speak for myself) do not believe that the utility the crypto boosters like to tout about smart contracts is technically feasible, at all, for most of the things we use contracts for in the real world.


Final will and testament: When I'm dead, move all the money from account A into account B. What is "not feasible" about a government API that answers whether a citizen is alive and a banking API that runs a funds transfer? Let's stick the code for this in some large cloud provider where it checks the credentials and conditions involved every minute.
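
As a minimal sketch of what that automation could look like (purely illustrative: the registry and bank endpoints, the response fields, and the `execute_will` helper are all hypothetical, and no such public APIs exist today):

    import time
    import requests

    # Hypothetical endpoints standing in for the "government API" and
    # "banking API" described above -- none of these URLs or fields exist.
    REGISTRY_URL = "https://registry.example.gov/citizens/{cid}/status"
    TRANSFER_URL = "https://bank.example.com/v1/transfers"

    def execute_will(cid: str, account_a: str, account_b: str, token: str) -> None:
        """Poll the registry every minute; once the citizen is reported dead, move the funds."""
        headers = {"Authorization": f"Bearer {token}"}
        while True:
            status = requests.get(REGISTRY_URL.format(cid=cid), headers=headers).json()
            if not status["alive"]:
                requests.post(TRANSFER_URL, headers=headers,
                              json={"from": account_a, "to": account_b, "amount": "all"})
                return
            time.sleep(60)  # the "checks the conditions every minute" part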

We could debate whether this is a cheaper/easier/safer approach than trusting a law firm/banks/clergy/clerks to execute things on your behalf. But it's absurd to say that this is not possible (because every part of this is already done), or that it is not useful (it has exactly the same use-case as a classic will, but moves trust from a law firm to a cloud provider).


Because what you are describing is not a smart contract, at least how it is nearly always commonly understood.

What you are describing is simple API automation. Nobody describes what IFTTT or Zapier can do as a smart contract, yet that is literally exactly what you have described.


>Because what you are describing is not a smart contract, at least how it is nearly always commonly understood.

Shrug. So now we've moved through your criticisms of "it's not possible" and "it's not useful", and we're splitting hairs about whether it's in the right category. It seems like you want to have a conversation where "smart contracts" means exactly/only Ethereum as it exists today. If you're asking about the use-cases of abstract technology, and then pivot to insist that discussion revolves around existing brands/implementations, it feels like you're moving the goal-posts. You're of course free to insist that smart-contracts ARE ethereum and vice versa, but ironically when you do that you're a clear victim of marketing, and you're essentially endorsing the branding that you claim to dislike!

If "mere API automation" is disqualified as "smart contracts" according to your definitions because it isn't blockchainy enough, and if everything that IS blockchainy is disqualified as stupid or a scam, then I guess you win debates before they start. But that's just not a very interesting conversation for any one else.

FWIW, ethereum does have a concept of oracles ( https://ethereum.org/en/developers/docs/oracles/ ). I wonder if ethereum and zapier did have a lovechild, would you call it a smart-contract then? Do we need the contract AND the decision-data AND the assets to be blockchained, or can we blockchain a subset and still call it a smart-contract?

A mix between zapier, plus something like ethereum, together with legislation that requires open-APIs for critical services is probably exactly what we need to satisfy tons of practical real world use-cases. That's what you claimed to be interested in, right?


This conversation has probably been the biggest waste of time I've ever had on HN. "Moving the goalposts"??? This whole thread is on an article titled "Smart Contract Security Field Guide". To be clear, what you are talking about has absolutely nothing to do with the concept of smart contracts as discussed in that article, and honestly I have never heard someone in the past 10 years or so discuss smart contracts in a way that doesn't include distributed consensus. It has very little to do with ethereum specifics, but if you want "smart contract" to mean any automated behavior, then sure.

Feel free to call a giraffe a dog and then get upset when people point out that nobody else calls that thing a dog.


How big of a problem are "snake bite victims"? I live in Australia, one of those countries that people would class as "dangerous" with regards to snakes, and... There's only been about 40 deaths in the past 20 years...

It's just a really strange thing to ping as a "big problem to solve", and such a bizarrely expensive solution to the problem, too. I think that a much better solution would be improving development of and access to antivenin.


"130,000 deaths and over 300,000 paralyzing injuries and amputations last year." https://www.dw.com/en/snakebites-kill-at-least-80000-people-...

Dangerous in terms of crime rate and false arrests that human pilots won't accept hazard pay for. Rural people have either no clinic or a poorly trained nurse with limited supplies, and urgently need to get to the nearest hospital or a refrigerated pharmacy. Also other medical emergencies such as a heart attack, or moving into a city lacking roads.

I've been prescribed Xanax since 2004 and at worst used low dose 3 times a day when around negative strangers, the rest of the time none. Certainly alcohol's worse.


The 130K per annum figure is a global death count.

This is serious; other causes of death, more so, e.g.:

    The Global status report on road safety 2018, launched by WHO in December 2018, highlights that the number of annual road traffic deaths has reached 1.35 million. Road traffic injuries are now the leading killer of people aged 5-29 years.
That's 10 motor vehicle deaths for every snake bite death.

It's certainly possible for both to be addressed and it's certain there are parts of the world with more snake caused deaths than road fatalities.


I did read the article, and it only vaguely describes the "problem" with smartphones in one paragraph, then it spends the rest of the article talking about the effects of the pandemic. The problem that it attributes to smartphones is that people will get distracted, then expect instantaneous communication, relationships require time & attention...

But the thing that they're blaming smartphones for is nothing new. Communication has always been a difficult thing, and before people had their heads buried in smartphones, they had their heads buried in TV, or newspapers/magazines/books, or they just simply went to bars/pubs.

The whole article seems like a vague "hot take" about nothing. It's an opinion backed up with zero research or evidence other than "I'm a couples therapist, trust me, I know."


I read the article afterwards and it kind of petered out without taking much of a stance other than "go to couples therapy earlier than too late".


They have strict specifications about the types of nails and wood that they can hammer, but they're not documented anywhere. If you send them the wrong type of nails or wood, they'll put them both into an industrial shredder and send you back the dust, because technically the nails have now been integrated with the wood.

Their free plan will let you hammer in 5 nails per month into a single piece of wood, but you can't use a different piece of wood each month. For $30/month you get 50 nails, and up to 5 pieces of wood, or for $60/month you can get 120 nails and unlimited pieces of wood, and two-factor (they'll call you before they hammer in the nails, and ask where you actually want the nails hammered). If you want to have unlimited nails, you have to contact them for enterprise pricing.

They will also sell the measurements of your wood and the nail positions to other carpenters.


Ok.

I don't like React.


https://blog.samuellevy.com/ - I haven't posted in a few years, and I really need to upgrade it/clean up everything. It's not remotely mobile friendly.

I've had a few relatively popular posts over the years:

https://blog.samuellevy.com/post/41-php-is-the-right-tool-fo... A kind of response to a certain post about PHP that still makes the rounds...

https://blog.samuellevy.com/post/46-do-i-look-like-i-give-a-... "Do I Look Like I Give A Shit Public Licence" an alternative to the WTFPL


There are definitely a few echo chambers around AI, but it's not something that "just techies" are onto.

ChatGPT made some waves at the end of last year. My in-laws were wanting to talk to (at) me about it at Christmas. There's plenty of awareness outside of tech circles, but most of the discussion (both in and out of the tech world) seems to miss what LLMs actually _are_.

The reason why ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write because that's all it ever does.

If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers because it's the exact same mechanism that produces them - there's just better probabilities.


In your opinion, how does the "hallucination" issue differ from the same behaviour we see in humans?

I don't claim or believe that any LLM is actually intelligent. It just seems that we (at least on an individual basis) can also meet the criteria outlined above. I know plenty of people who are confidently incorrect and appear unwilling to learn or accept their own limitations, myself included.

In my opinion, even if we did have AGI it would still exhibit a lot of our foibles given that we'd be the only ones teaching it.


> In your opinion, how does the "hallucination" issue differ from the same behaviour we see in humans?

I feel like if you have any belief in philosophy then LLMs can only be interpreted as a parlour trick (on steroids). Perhaps we are fanciful in believing we are something greater than LLMs, but there is the idea that we respond using rhetoric based on trying to find reason within what we have learned and observed. From my primitive understanding, LLMs' rhetoric and reasoning is entirely implied, based on an effectively infinite (compared to the limitations of human capacity to store information) amount of knowledge they've consumed.

I think if LLMs were equivalent to human thinking then we'd all be a hell of a lot stupider, given our lack of "infinite" knowledge compared to LLMs.


> if you have any belief in philosophy [...]

You're going to have to explain which part of philosophy you mean, because what came after this doesn't follow from that premise at all. It's like saying a Chinese Room is fundamentally different from a "real" solution even though nobody can tell the difference. That's not a "belief in philosophy", that's human exceptionalism and perhaps a belief in the soul.


The belief that your thoughts are constructed based on an understanding of principles such as logic, rationality, ethics. That your interactions are built from a solid understanding of these ideas. As opposed to every train of thought just being glued together from pertinent fragments you can recall from your knowledge in response to a prompt provided by the circumstances of reality.

> that's human exceptionalism and perhaps a belief in the soul.

I would also argue that LLMs are not proven to be equivalent to what's going on in our minds. Is it really "human exceptionalism" to state that LLMs are not yet and perhaps never will be what we are? I feel like from their construction it is somewhat evident that there are differences, since we don't raise humans the same way we raise LLMs. In terms of CPU years babies require significantly less time to train.


Yeah, I've never gotten this argument at all. "Humans aren't actually intelligent, they're just machines designed to optimize their probability of reproducing."


> how does the "hallucination" issue differ from the same behaviour we see in humans?

In humans “hallucination” means observing false inputs. In GPT it means creating false outputs.

Completely different with massively different connotations.


Great point, perhaps “confabulation” is a better way of describing it, which means “the replacement of a gap in a person's memory by a falsification that they believe to be true”. For example, the term is sometimes used to describe dementia patients, who might wander somewhere and forget how they got there. The patient then might confabulate a story about why they are there, e.g. they were getting their keys so they could drive to the store to run an errand, despite the fact they no longer have a car.


That's kind of the point, but also kind of not.

GPT isn't making true or false outputs. It's just making outputs. The truthiness or falseness of any output is irrelevant because it has no concept of true or false. We're assigning those values to the outputs ourselves, but like... it doesn't know the difference.

It's like blaming a die for a high or a low roll - it's just doing rolls. It has no knowledge of a good or a bad roll. GPT is like a Rube Goldberg machine for rolling dice that's _more likely_ to roll the number that you want, but really it's just rolling dice.


> It's just making outputs.

Yeah, one way to conceive of the issue is that GPT doesn't know when to shut up. Intuitively, you can kind of understand how this might be the case: the training data reflects when someone did produce output, not when they didn't, which is going to bias strongly toward producing confident output.

A lot of the conversation about GPT hallucinations has felt like an extended rehash of the conversations we've been having about the difference between plausible and accurate machine translations since like, 2016ish.


You could apply the same logic to humans.

Whenever a human speaks, it's just vibrations of air molecules, triggered by the mouth and throat, which in turn are controlled by electric signals in the human's neural network. Those neurons, they just make muscles move. They don't have any concept of true or false. At least nobody has found a "true or false" neuron in the brain.


All of it coheres into consciousness; we know what it's like to be a human, but I think it'd be hubris to think we've cracked the code and made a blueprint of anything other than a word calculator.


Hubris goes both ways. It is also hubris to assume our intelligence is special, instead of a boring neural network with a sufficient number of neurons to exhibit emergent properties.


There are probably more dimensions to hubris, but typically I understand it as flying too close to the sun; the other way, for me, is humility.


It’s more than next-word prediction, though. The supervised fine-tuning and RLHF steps are ways to possibly train it to favor truthful answers. Not sure whether this is currently the emphasis of ChatGPT…


> In humans “hallucination” means observing false inputs.

How do you know that? You can only observe the output of the humans (other than yourself).


A person can hallucinate under the effects of drugs or mental disorder and then tell you about it after they've recovered from it.

This experience is available to you and is well documented.


How do you know they are observing false inputs, as opposed to creating false outputs? (acting as if they have seen hallucinations)

How do you know that the LLM is not observing false inputs but creating false outputs? Would an LLM which tells you very convincingly about how it obtained false information make you change your mind?

> This experience is available to you and is well documented.

You are misunderstanding what I'm asking. Sure, drug-induced hallucinations in humans are very well documented. What I'm asking is whether this purported difference between "hallucinating on the inputs" and "creating false outputs" is a meaningful distinction.


So humans have a level of knowledge, understanding, and reasoning ability that LLMs simply don't have. I'm writing a response to you right now, and I "know" a certain amount of information about the world. That knowledge has limits, and I can expand it, I can forget it, all sorts of things...

"Hallucination" is a term that works well for actual intelligence - when you "know" something that isn't true, and has no path of reasoning, you might have hallucinated the base "knowledge".

But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there's a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.

As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes that are in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.

Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"... but you still don't know what any of these things _mean_.

When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.

That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...
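
If it helps, here's a tiny sketch of that shape game in code (the sequences and the pair-sized context window are invented purely for illustration; real models use far longer contexts and learned weights rather than raw counts, but the "no meaning, only frequencies" point is the same):

    from collections import Counter, defaultdict

    # "Training data": sequences of symbols with no meaning attached to them.
    training = [
        ["red circle", "blue square", "green triangle"],
        ["red circle", "blue square", "green triangle"],
        ["yellow square", "grey circle", "red circle", "blue square", "black triangle"],
    ]

    # Count which symbol follows each pair of symbols.
    counts = defaultdict(Counter)
    for seq in training:
        for i in range(len(seq) - 2):
            counts[(seq[i], seq[i + 1])][seq[i + 2]] += 1

    def guess_next(a, b):
        # Same process whether the guess turns out "right" or "wrong":
        # just return the most frequent continuation we've seen.
        seen = counts.get((a, b))
        return seen.most_common(1)[0][0] if seen else None

    print(guess_next("red circle", "blue square"))  # -> "green triangle"

Nothing in that table "knows" what a triangle is; it just holds counts, and the answer falls out of them.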

It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means"


"Picking the most likely token based on probabilities" doesn't accurately describe their architecture. They are not intrinsically statistical, they are fully deterministic. The next word is scored (and then normalized to give something interpretable as a probability), But the calculation performed to determine the score for the next token considers the full context window and features therein, while leveraging the meaning of the terms by way of semantic embeddings and its trained knowledge base. It is not obvious that the network does not engage with the meaning of the terms in the context window when scoring the next word, and it certainly can't be dismissed by characterizing it as just engaging with probabilities. There is reason to believe that the network does understand to some degree in some cases. I go into some detail here: https://www.reddit.com/r/naturalism/comments/1236vzf/on_larg...


What you're describing is very close to the thought experiment of the Chinese Room (https://en.wikipedia.org/wiki/Chinese_room).

But yes, it's unfortunate that when the next tokens are joined together and laid out in the form of a sentence it appears "intelligent" to people. However, if you instead lay out the individual probabilities of each token then it'll be more obvious what ChatGPT/LLMs actually do.


What do you think your brain does when deciding the next word to speak? It is scoring words based on the appropriateness considering context and all the relevant known facts, as well as your communicative intent. But it is not obvious that there is nothing like communicative intent in LLMs. When you prompt it, you are engaging some subset of the network relevant to the prompt that induces a generative state disposed to produce a contextually appropriate response. But the properties of this "disposition to contextually appropriate responses" is sensitive to the context. In a Q&A context, the disposition is to produce an acceptable answer, in a therapeutic context, the disposition is to produce a helpful or sensitive response. The point is that communicative intent is within the solution space of text prediction when the training data was produced with communicative intent. We should expect communicative intent to improve the quality of text prediction, and so we cannot rule out that LLMs have recovered something in the ballpark of communicative intent.


> What do you think your brain does when deciding the next word to speak? It is scoring words based on the appropriateness considering context and all the relevant known facts

I mean, it's not. It's visualizing concepts internally and then using a grammar model to turn those into speech.


>It's visualizing concepts internally and then using a grammar model to turn those into speech.

First off, not everyone "visualizes" thought. Second, what do you think "using a grammar model to turn those into speech" actually consists of? Grammar is the set of rules by which sequences of words are mapped to meaning and vice-versa. But this is implemented mechanistically in terms of higher activation for some words and lower activation for other words. One such mechanism is scoring each word explicitly. Brains may avoid explicitly scoring irrelevant words, but that's just an implementation detail. All such mechanisms are computationally equivalent.


Yep, the "chinese room" is the classic thought experiment, but I feel like it fails to get the point across because the characters still represent language, so you could conceivably "learn" the language. I prefer the idea of symbols that aren't inherently language, as it really nails in the idea that it doesn't matter how long you spend, there's not something that you can ever learn to "speak" fluently.


> I'm writing a response to you right now, and I "know" a certain amount of information about the world.

How do you know? And more importantly, how do you prove it to others? The only way to prove it is to say: "OK, you are human, I am human, each of us know this is true for ourselves, let's be nice and assume it's true for each other as well".

> But that doesn't really work for LLMs, because there's no knowledge at all.

How do you know? I know your argument saying that the LLM "is just" guessing probabilities, but surely, if the LLM can complete the sentence "The Harry Potter book series was written by ", the knowledge is encoded in its sea of parameters and probabilities, right?

Asserting that it does not know things is pretty absurd. You're conflating "knowledge" with the "feeling" of knowing things, or the ability to introspect one's knowledge and thoughts.

> As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares.

I'd argue thought experiments are pretty useless here. The smaller models are qualitatively different from the larger models, at least from a functional perspective. GPT with hundreds of parameters may be very similar to the one you're describing in your thought experiment, but it's well known that GPT models with billions of parameters have emergent properties that make them exhibit much more human-like behavior.

Does your thought experiment scale to hundreds of thousands of tokens, and billions of parameters?

Also, as with the Chinese Room argument, the problem is that you're asserting the computer, the GPU, the bare metal does not understand anything. Just like how our brain cells don't understand anything either. It's _humans_ that are intelligent, it's _humans_ that feel and know things. Your thought experiment would have the human _emulate_ the bare metal layer, but nobody said that layer was intelligent in the first place.

Intelligence is a property of the _whole system_ (whether humans or GPT), and apparently once you get enough "neurons" the behavior is somewhat emergent. The fact that you can reductively break down GPT and show that each individual component is not intelligent does not imply the whole system is not intelligent -- you can similarly reductively break down the brain into neurons, cells, even atoms, and they aren't intelligent at all. We don't even know where our intelligence resides, and it's one of the greatest mysteries.

Imagine trying to convince an alien species that humans are actually intelligent and sentient. Aliens opens a human brain and looks inside: "Yeah I know these. Cells. They're just little biological machines optimized for reproduction. You say humans are intelligent? But your brains are just cleverly organized cells that handles electric signals. I don't see anything intelligent about that. Unlike us, we have silicon-based biology, which is _obviously_ intelligent."

You sound like that alien.


You can figure out if someone knows what they’re talking about or not by asking them questions about a subject. A bullshitter will come up with plausible answers; an honest person will say they don’t know.

ChatGPT isn’t even a bullshitter when it hallucinates – it simply does not know when to stop. It has no conceptual model that guides its output. It parrots words but does not know things.


(Unless you're intentionally going on a tangent --)

The discussion is whether LLMs have "knowledge, understanding, and reasoning ability" like humans do.

Your reply suggests that a bullshitter has the same cognitive abilities as an LLM, which seems to validate that LLMs are on-par with some humans. The claim that "it simply does not know when to stop" is wrong (it does stop, of course, it has a token limit -- human bullshitters don't). The claim that "It has no conceptual model that guides its output." is just an assertion. "It parrots words but does not know things." is just begging the question.

Lots of assertions without back up. Thanks for your opinion, I guess?


Yes, you may be. But you still have an internal world model - through conditioning or otherwise - that you're playing off against.

An LLM doesn't have that. It's a very impressive parlour trick (and of course a lot more), but its use is hence limited (albeit massive) to that.

Chaining and context assists resolving that to some extent, but it's a limited extent.

That's the argument, anyway; that doesn't mean it's not incredibly impressive, but comparing it to human self-awareness, however small, isn't a fair comparison.

It's next token prediction, which is why it does classification so well.


AlphaGo is not aware that it’s playing a game either, but it’s better than humans at it. Awareness is not necessary to make people lose their jobs.


I don't really know anything about AlphaGo. There's more types of "AI" than LLMs, but that's not really the point. You don't need AI for people to lose their jobs... but nobody is losing their jobs to AlphaGo, and in the grand scheme of things it's unlikely that people are going to lose their jobs to GPT, too.


If you make people who produce text 25% more productive you can fire one in four and increase your profits.


> Awareness is not necessary

Wasn't it the plot of a sci-fi novel by Vernor Vinge or someone at least as popular?


You might be thinking of Blindsight by Peter Watts. Great book.


> It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying.

Conflating intelligence and awareness seems to me the biggest confusion around this topic.

When non-technical people ask me about it, I ask them to consider three questions:

- Is it alive?

- Does it think?

- Can it speak (and understand)?

A plant, microbe, primitive animals... are alive, don't think, can't speak.

A dog, a monkey... are alive, think, can't speak.

A human is alive, thinks, can speak.

These things aren't alive, think, can speak.

I know some of the above will be controversial, but it clicks for most people, who agree: if you have a dog, you know what I mean with "a dog thinks". Not with words, but they're capable of intricate reasoning and strategies.

Intelligence can be mechanical, the same as force. For a man from ancient times, the concept of an engine would have been weird. Only live beings were thought to move on their own. When a physical process manifested complex behaviour, they said that a spirit was behind it.

Intelligence doesn't need awareness. You can have disembodied pieces of intelligence. That's what Google, Facebook, etc. have been doing for a long time. They're AI companies.

It doesn't help with the confusion that speaking is a harder condition than thinking and thinking seems to be harder than being alive: "these things aren't alive so they can't think" but they speak, so...


Ehh... my dog is alive, thinks, and "speaks" in a manner - not a cute term for barking, but he communicates (with relatively high effectiveness) his wants and desires. Maybe not using human words, but he certainly has his own sort of crude language, as does my cat.

The problem is that LLMs aren't alive, and they _don't think_. The speaking is arguable.


You might be onto something (or not, I'm not sure), but it's extremely well-documented that both dogs and monkeys can speak.

They can't speak English like a human, but they both can understand a good deal of English, and they both can speak in their own ways (and understand the speaking of others).

I think the key thing about these LLMs is that they upend the notion that speaking requires thinking/understanding/intelligence.

They can "speak", if you mean emit coherent sentences and paragraphs, really well. But there is no understanding of anything, nor thinking, nor what most people would understand as intelligence behind that speaking.

I think that is probably new. I can't think of anything that could speak on this level, and yet be completely and obviously (if you give it like, an hour of back and forth conversation) devoid of intelligence or thinking.

I think that's what makes people have fantastical notions about how intelligent or useful LLMs are. We're conditioned by the entirety of human history to equate such high-quality "speech" with intelligence.

Now we've developed a slime mold that can write novels. But I think human society will adapt quickly, and recalibrate that association.


> I can't think of anything that could speak on this level, and yet be completely and obviously (if you give it like, an hour of back and forth conversation) devoid of intelligence or thinking.

It's not devoid of intelligence or thinking. You're just using "what I'm doing right now" as the definition of intelligence and thinking. It isn't alive so it can't be the same. You are noticing that its intelligence is not centralized in the same way as your own mind.

But that's not the same as saying it's dumb. Try an operational definition that involves language and avoid vague criteria that try to judge internal states. Your dog might understand some words, associate them with the current situation and react, but can't understand a phrase.

These things can analyze the syntax of a phrase, can follow complex instructions, can do what you tell them to do. How is that not "understanding"?

If that isn't intelligence for you, I don't know what else to say.


Not to be difficult, but wouldn't "confabulating" be a preferable description for this behaviour? Hallucinating doesn't quite feel right, but I can't exactly articulate why confabulate is superior in this context.


"Hallucinating" (normally) means having a subjective experience of the same type as a sensory perception, without the presence of a stimulus that would normally cause such a perception. I agree it's weird to apply this term to an LLM because it doesn't really have sensory perception at all.

Of course it has text input, but if you consider that to be equivalent to sensory perception (which I'd be open to) then a hallucination would mean to act as if something is in the text input when it really isn't, which is not how people use the term.

You could also consider all the input it got during training as its sensory perception (also arguable IMHO), but then a proper hallucination would entail some mistaken classification of the input resulting in incorrect training, which is also not really what's going on I think.

Confabulation is a much more accurate term indeed, going by the first paragraph of wikipedia.


Nah, my issue with both terms is that they imply that when the answer is "correct" that's because the LLM "knows" the correct answer, and when it's wrong it's just a brain fart.

It doesn't matter if the output is correct or not, the process for producing it is identical, and the model has the exact same amount of knowledge about what it's saying... which is to say "none".

This isn't a case of "it's intelligent, but it gets muddled up sometimes". It's more of the case that it's _always_ muddled up, but it's accidentally correct a lot of the time.


>It doesn't matter if the output is correct or not, the process for producing it is identical

I don't see how this differs from a human earnestly holding a mistaken belief.


You haven't seen them already? The "AI Lawyer", all of the people trying to sell LLMs as search engines, and just generally hundreds of projects that are outright dangerous uses of LLMs but seem like they might be feasible.

