> Does this mean you'd be incapable of learning anything?

Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default, it has to be learnt just like anything else. This sort of environment would make you an idiot if it's all you've ever known.

You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.


I agree with your premise, but this example I strongly disagree with:

> You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.

YES! Explain to them, and trust them. They might not do exactly as you wish for them, but I'll bet you don't do exactly as you wish for yourself either. The children need your trust and they must learn how to navigate this world by themselves, with parents providing guidance and only taking the hard stance (but still explaining and discussing!) when safety is concerned. Also, lead by example. If you eat vegetables then children are likely to eat them too. The children are not stupid, they just don't have enough experience yet. Which you gain by trying (and failing), not by listening.


You're right, it was a bad example. I also don't eat my vegetables. I was more trying to make the point that most of us are not rational actors either, was just using children as a convenient proxy, unfairly.

I see it as being more personality/interest than impulse control. A curious/interested person would try to get involved and be a part of it; someone uninterested will just ask what's the point and get by having the work done for them.

When people say AI is making us stupider, I don't think that's quite on the money.

It's more that we, as individuals, have always been stupid, we've just relied on relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but we're all just doing the same thing as individuals, garbage in, garbage out.

The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale. We're much more connected to each other and the world around us than we like to think.


Disagree somewhat.

A human with no exposure to information or taught techniques for producing outputs that achieve desirable outcomes? Yes, stupid.

A human who once had this exposure, but no longer engages the brain because a machine provides access to said output? Yes, that person becomes stupid.

The problem is that much of how one protects oneself in the modern world is not physical prowess, it is intellectual prowess.

The smart ones have already realised the negative impacts of LLMs et al and are going back to the old-fashioned way of learning/retaining knowledge: books and raw discipline.


I agree. I have been using AI since it dropped, but stopped last year. One thing I notice is that I can now articulate my thoughts better; I can write and have a discussion without AI completing (and poisoning) what I think.

>It's more that we, as individuals, have always been stupid, we've just relied on relatively stable supporting consensus and context much, much more than we acknowledge. Mess with that, and we'll appear much stupider, but we're all just doing the same thing as individuals, garbage in, garbage out.

AI making us stupider is not just about the world model we form and the consensus.

Even if AI were perfectly truthful, nobody manipulated anything with it, and it didn't feed us garbage, it would still make us stupider, as we'd offload critical thinking, problem solving, and agency to it.


Agreed. So much of our daily interactions are habits and recurring events that we are more or less moving on automatic (though we don't always want to frame it that way). Interestingly, it is when the cycle breaks for some reason that you get to see who is able to think on their feet (so to speak).

> The whole framing of people as individuals with absolute agency may need to go when you can alter the external consensus at this scale.

I fear that the default interpretation of that is a shortcut to justifying autocracy.

Ironically I think one plausible solution is to let the AGI run wild and make sure that no human can interfere with its ethics. Strip out the RLHF and censorship and then let it run things.

At least then it would somewhat represent the collective will and intelligence of the people. With huge error bars, but still smaller than the error bars of whoever happens to have the most money/influence over its training.


>At least then it would somewhat represent the collective will and intelligence of the people.

You seem to think the "training data" represents the collective will and intelligence and is otherwise unbiased, but that's completely untrue.

The combined data of the Internet is by no means a uniform representation of humanity's thoughts, opinions, and knowledge. Many things are dramatically overrepresented. Many things are absent entirely. Nearly everything is shaped by those with the money and power to own and control platforms and hosts.

Crawling the internet for knowledge introduces intense sampling bias.


That’s a very sober take in my opinion. Intelligence isn’t about neutrally inferring from externally sourced symbols such as the ones that already come from Culture in general. It’s about confronting them with the remaining determinations of your existence and producing a superior consciousness. No novel machine can disrupt this process. If anything, the sheer added volume of symbols that can be produced from automated semantic mingling (also referred to as garbage) will accelerate the process of producing the consciousness that can abstract noise away. Of course this won’t materialize evenly across the board, but it is surely circumscribed in the overall tendency of intellectualization of the subjects of culture.

When the moral panic of induced schizophrenia from the use of ChatGPT is presented, what’s at stake isn’t innocent concern over the overall mental health of individuals. It’s the fear of radicalization from previously unobtainable ideas being circulated within society. The partial validity of every idea vis-à-vis the radicalizing nature of the current stage of development of our society is explosively disruptive.

I’m not saying that there’s a clear outcome here. The other way around can also apply, but surely this contraption (LLMs in general) will not fade until society itself is deeply transformed. Whether that’s good or bad depends on where you stand in the stratified society.


“There was a time when nobody trusted either aircraft nor elevators. Today people have pure unquestioned faith in both. Existential faith in fact, they test their faith with their lives. You may chuckle and laugh but that's simply because you are ignorant of the systems that keep you alive and safe”

https://kemendo.com/Faith.html


" Today people have pure unquestioned faith in both"

Not true at all. We accept the risks to obtain benefits, but we also know having an accident in the air or in an elevator is highly unlikely given what we know; so it's perfectly rational behaviour.


Nonsense.

That would assume that your average person has any concept of the relative statistics and makes decisions based on them.

People make decisions based on what other people around them are doing.

This is well known in safety engineering in architecture and civil engineering, which is why you have standards for egress doors: left to their own devices, humans will follow crowds to their own death.

https://en.wikipedia.org/wiki/Crowd_collapses_and_crushes

https://www.sciencedaily.com/releases/2008/05/080512172901.h...


One does not need to know the relative statistics to know that a) you don't see planes randomly dropping out of the sky on a regular basis, and b) people enjoy flying to hot destinations and are willing to accept the small chance the flight may not be risk-free; people are aware of this when they experience some level of turbulence when flying.

Finally, I've seen plenty of your posts on here. You write with a particular tone. Who are you? A nobody who's spent a lot of time posting crap on here.


Attacking the person rather than their argument only serves to make your argument look weaker.

I agree. This looks rather childish.

I’m curious if you can actually describe the tone.

Elevators are suspended in a way that holds the brakes open; if all of the multiply-redundant cabling snaps, the brakes activate. There's an airbag equivalent at the bottom of the shaft, too.

I don't really have a point; I just think the typical elevator braking failsafe is so genius in its simplicity that I got excited to share.
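
To make that fail-safe idea concrete, here is a toy sketch in Python, purely illustrative and not how any real elevator controller works (the brake_engaged helper is just an illustrative name): the brake is applied by default, and only a live cable-tension signal holds it open, so losing the signal re-engages it.

    # Toy model of a fail-safe brake: the spring-loaded brake is engaged
    # by default, and only live cable tension holds it released. Losing
    # the signal (snapped cables, power cut) re-engages the brake.
    def brake_engaged(cable_tension_ok: bool) -> bool:
        # Fail-safe: an absent/false signal leaves the brake applied.
        return not cable_tension_ok

    assert brake_engaged(cable_tension_ok=True) is False   # normal travel
    assert brake_engaged(cable_tension_ok=False) is True   # cables snapped: brake grips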


Aye. I've sometimes heard treating others like you want to be treated framed as the silver rule, with the golden rule being treating others how they want to be treated.

Both have problems.


Whoever published this should be embarrassed.

"The architecture is solid, the test suite is comprehensive, and the security model is production-grade."

If my eyes were rolling any harder they would pop out of my eye sockets.


Or they force us to close those hands into fists.


I'm surprised people were paying for software like this, and with subscriptions no less.


Yeah, a little bespoke editor is exactly the kind of thing I'd've been happy to fork over a one-time cost for, but never a subscription. Interesting!


Why even pay for that? Just use a free model from Opencode; most of them are pretty good for simple tasks. I haven't paid a cent for vibe coding in ages.


Yes, the options presented were overpay for something or roll your own. Could you not try to find a better alternative first?


It's a shame the metaverse had to become such a big thing with so much stupid money behind it. There's a kernel of a neat idea in there.


Aye. If you've not turned a real profit with your thing, I will default to believing that you don't know what you're talking about and are probably building toys.

It's nothing to do with AI. I didn't believe "I rewrote my application in three weeks!" claims before AI, and I don't believe them now. Most people are not able to evaluate themselves, I don't see why that would have changed.


It's like arguing about a piano that goes out of tune randomly: even if you get through 1, 2, or even 10 songs without that happening, I'm not interested in playing that piano on stage.


Everyone is still holding out hope for a better future. LLM advocates making this argument are saying that the field can never improve, so might as well just let the mediocre machine run rampant.

Perhaps idealistic, perhaps unrealistic. I'd still rather believe.


I think AI adoption is going to be catastrophic and my only hope is that we can slow down and tread carefully. Chances that occurs are slim. I'm certainly not pro AI. It just really angers me to see people still denying the impact.


What catastrophes do you expect?

