Hacker News | 9x39's comments

Actually, that’s what the data and the preponderance of victims allege: immigration enforcement and policing interlocked to systematically deprioritize investigations into the abuse of working-class white girls by an over-represented ethnic group.

In the local data that the audit examined from three police forces, they identified clear evidence of “over-representation among suspects of Asian and Pakistani-heritage men”.

It’s unfortunate to watch people and entire countries twist themselves into logic pretzels to avoid ever suggesting that immigration has any ills, and we’re just being polite about it here.

https://www.aljazeera.com/news/2025/6/17/what-is-the-casey-r...

https://celina101.substack.com/p/the-uks-rape-gang-inquiry


Odds vs stakes argument, kinda. Is it perfect? No. Should you do something? Probably.

In personal protective gear, you have ballistic helmets. They don't cover the face. They often have cutouts around the ears. They don't cover the neck. They can generally stop a low-velocity handgun round, but anything more energetic, short of a glancing rifle round, is usually going right through. Even if it doesn't penetrate, backface deformation may be lethal. They're still generally worn as the only game in town.


You can see the same mechanisms - albeit with less available cash - in South African farms, such as general property defenses and safe rooms.

Spending a little as a hedge against anarcho-tyranny and its collateral damage showing up in your (gated) neighborhood seems rational for those who can afford it.


> I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.

Advancing a sympathetic case to win a precedent-setting decision is a well-known tactic for creating the conditions of success for a separate goal.

It's possible you can find genuine belief in the people who advance the cause. Charitably, they're perhaps naive or coincidentally aligned; uncharitably, they're sometimes useful idiots brought in line, directly or indirectly, with various powerful donors' causes.


There was a meme going around that the fall of Rome was an unannounced, anticlimactic event: one day someone went out and the bridge just wasn't ever repaired.

Maybe AGI's arrival is when one day someone is given an AI to supervise instead of a new employee.

Just a user who's followed the whole mess, not a researcher. I wonder if scaffolding and bolt-ons like reasoning will turn out to be an asymptote short of 'true AGI'. I kept reading about the limits of transformers around GPT-4 and Opus 3 time, and those limits seem quaint compared to today.

I gave up trying to guess when the diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than humans in many cases, and once capital familiarizes itself with this idea, it's going to get interesting.


But it's already like that: models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors, especially the kind who can't identify the model's mistakes.

This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching it carefully, but it is certainly good enough that non-experts cannot tell the difference.

My dad is retired and enamored with ChatGPT. He’s been teaching classes to seniors and evangelizing it to all his friends. Every time he calls he gives me an update on who he’s converted into a ChatGPT user. He seems disappointed with anyone who doesn’t use it for everything after he tells them about it.

A couple days ago he was telling me one lady he was trying to sell on it wouldn’t use it. She took the position that if she can’t trust the answers all the time, she isn’t going to trust or use it for anything. My dad almost seemed offended by this idea, he couldn’t understand why someone wouldn’t want the benefits it could offer, even if it wasn’t perfect.

I think her position was very sound. We see how much misinformation spreads online and how vulnerable people are to it. Wanting a trusted source of information is not a bad thing. Getting information more quickly is of little value if it isn’t reliable data.

If I prod my dad enough about it, he will admit that ChatGPT has made some mistakes that he caught. He knew enough to question it more when it was wrong. The problem is, if he already knew the answer, why was he asking in the first place… and if it was something he wasn’t well versed on, how does he know it’s giving him good data?

People are defaulting to trust, unless they catch the LLM in a lie. How many times does someone have to lie to a person before they are labeled a liar and no longer trusted at face value? For me, these LLMs have been labeled a liar and I don’t trust them. Trust takes a long time to rebuild once it’s broken.

I mostly use LLMs to augment search, not replace it. If it gives me an answer, I’ll click through to the cited reference and see what it says there, and evaluate whether it’s a source worth trusting. In many cases the LLM will get me to the right page, but it will jumble up the details and get them wrong, like a bad game of telephone.


How do you know that it’s a source worth trusting?

I think the expectation of AI being perfect all the time is probably driven by the hype and marketing of “1 million PhDs in your pocket”.

If you compare AI to an average person or a random website you’d come across on Google, I would wager that AI is more likely to be accurate in almost every scenario.

Hyper-specific areas, niche domains, and rapidly evolving data that isn't being published: a lot less so.


Thanks for sharing that anecdote. I think everyone is susceptible to misinformation, and seniors might be especially unprepared to adapt to an LLM's tricks.

The problem becomes your retirement. Sure, you've earned "expert" status, but all the junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and not know deeper techniques.

We are currently at a point where the master furniture craftsmen are doing quality assurance at the new automated furniture factory. Eventually, everyone working at the factory will have never made any furniture by hand and will have grown up sitting on janky chairs, and they will be the ones supervising.

This is a great example...

Designing and building chairs (good chairs, that is) is actually a skill that takes a lot of time and effort to develop. It's easy to whip up a design in CAD, but something comfortable? Takes lots of iterations, user tests etc. The building part would be easy once the design is hammered out, but the design is the tough part.


The majority can be like that but the few can set the tone for many.

You can get experience without an actual job.

Can I rephrase this as "you can get experience without any experience"? Certainly, there's stuff you can learn that's adjacent to doing the thing; that's what happens when juniors graduate with CS degrees. But the lack of doing the thing is what makes them juniors.

>that's what happens when juniors graduate with CS degrees

A CS degree is going to give you much less experience than building projects and businesses yourself.


How much time will someone realistically dedicate to this if they need to have a separate day job? How good will they get without mentors? How much complexity will they really need to manage without the bureaucracy of an organization?

Are senior software engineers of the future going to be waiting tables along side actors for the first 10+ years of their adult life, working on side projects on nights and weekends, hoping to one day jump straight to a senior position in a large company?

The questions I instinctively ask myself when looking at a new problem, having worked in an enterprise environment for 20 years, are much different than what I’d be asking having just worked on personal projects. Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch. Most of the questions I’m asking are influenced by having that access, along with some of the personalities I’ve had to deal with.

How will people get that kind of experience?

There is also the big issue of people not knowing what to build. When a person gets a job, they no longer need to come up with their own ideas. Or they generate ideas based on the needs of the environment they’re in. In the context of my job, I have no shortage of ideas. For solo projects, I often draw a blank. The world doesn’t need a hundred more todo apps.


>How much time will someone realistically dedicate to this if they need to have a separate day job?

Typically parents subsidize the living of their children while they are still learning.

>Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch

That's already true today. Most developers are React developers. If hired for something else, they will have to pick that up on the job. When you have niche tech stacks, you already need to compromise on the kind of experience people have. With AI, having exact experience in the technology is not that necessary, since AI can handle most of it.


Parents can only subsidize children if they are doing well themselves, most aren’t.

That “learning” phase used to end in the 18-25 range. Getting rid of juniors and making someone get enough experience on side projects to be considered a senior would take considerably longer. Exactly how long are parents supposed to be subsidizing their children’s living expenses? How can the parents afford to retire when they still have dependents? And all of this is built on the hope that the kid will actually land that job in 10 years? That feels like a bad bet. What happens if they fail? Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills and have been living off their parents.

The difference is there are juniors getting familiar with those enterprise products today. If they go away, they will step into it as senior people and be unprepared. It’s not just about the syntax of a different language, I’m talking more about dealing with things like Active Directory, leveraging ITSM systems effectively, reporting, metrics, how to communicate with leadership, how to deal with audits. AI might help with some of this, but not all of it. For someone without experience with it, they don’t know what they don’t know… in which case the AI won’t help at all.

I even see this when dealing with people from a small company being acquired by a larger company. They don’t know what is available to them or the systems that are in place, and they don’t even know enough to ask. Someone from another large company knows to ask about these things, because they have that experience.


>Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills

Let's say someone started building products since 10. By the time they were 27 they would have 17 years of experience. By 40 they would have 30 years of experience. That is more than enough time for one to gain a marketable skill that people are looking for.

>they don’t know what they don’t know… in which case the AI won’t help at all.

I think you are underestimating AI's ability to suss out such unknown unknowns.


You’re expecting kids in 5th grade to pick a career and start building focused projects on par with the experience one would get in a full time position at a company?

This can’t be serious?

How does AI solve the unknown unknowns problem?

Even if someone may hear about potential problems or good ideas from AI, without experience very few of those things are incorporated into how a person operates. They have never felt the pain of missing those steps.

There are plenty of signs at the pool that say not to run, but kids still try to run… until they fall and hurt themselves. That’s how they learn to respect the sign.


>You’re expecting kids in 5th grade to pick a career and start building focused projects on par with the experience one would get in a full time position at a company?

Yes, I am. Do not underestimate how smart 5th graders are and what they can do with all of the free time they have.

>How does AI solve the unknown unknowns problem?

You can ask it what it thinks you should know. You can ask it for what pitfalls to look out for. You can ask it to roleplay to play out scenarios and get practice with them. I think such practice is enough to get them to a state of being hirable.


I’m sure there are some exceptional 5th graders doing amazing things. The number that will keep that same interest into adulthood is exceptionally low. Kids also need a chance to be kids. Expecting them to be heads down working on their career ambitions at 10 is dystopian.

It’s not about just getting hired. It’s about being effective once hired. I expect a senior to have preferences and opinions, informed by experience, on how things can and should run… while also being able to adapt to the local culture. We should be able to debate ideas in real time without having to run to the LLM to read the next reply. If that’s all someone is bringing to the table, just tell the team to use an LLM during brainstorming sessions.


From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that output as good enough, and will likely learn the hard way that your software is full of bugs and incorrect logic.

I'd always imagined that AGI meant an AI was given other AIs to manage.

I don't think this is how it'll play out, and I'm generally a bit skeptical of the 'agent' paradigm per se.

There doesn't seem to be a reason why AIs should act as these distinct entities that manage each other or form teams or whatever.

It seems to me way more likely that everything will just be done internally in one monolithic model. The AIs just don't have the constraints that humans have in terms of time management, priority management, social order, all the rest of it that makes teams of individuals the only workable system.

AI simply scales with the compute resources made available, so it seems like you'd just size those resources appropriately for a problem, maybe even on demand, and have a singular AI entity (if it's even meaningful to think of it as such; even that's kind of an anthropomorphisation) just do the thing. No real need for any organisational structure beyond that.

So I'd think maybe the opposite, seems like what agents really means is a way to use fundamentally narrow/limited AI inside our existing human organisations and workflows, directed by humans. Maybe AGI is when all that goes away because it's just obviously not necessary any more.


>These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.

This is the key I think that Altman and Amodei see, but get buried in hype accusations. The frontier models absolutely blow away the majority of people on simple general tasks and reasoning. Run the last 50 decisions I've seen locally through Opus 4.6 or ChatGPT 5.2 and I might conclude I'd rather work with an AI than the human intelligence.

It's a soft threshold. I think people saw it spit out some answers during the first chat-with-an-LLM hype wave and missed that the majority of white collar work (I mean all of it, not just the top software industry architects and senior SWEs) seems to come out better when a human is pushed further out of the loop. Humans are useful for spreading out responsibility and accountability, for now, thankfully.


LLMs are very good at logical reasoning in bounded systems. They lack the wisdom to deal with unbounded systems efficiently, because they don't have a good sense of what they don't know or good priors on the distribution of the unexpected. I expect this will be very difficult to RL in.

Why the super-high bar? What's unsatisfying is that even the 'dumbest' humans are still a general intelligence, and we're nearly past them, depending on how you squint and measure.

It feels like an arbitrary bar, perhaps meant to make sure we aren't putting AIs above humans, when they are already most certainly superhuman on a rapidly growing number of tasks.


Doesn't seem like a very good clone. I wonder if he's hoping he's in their training data for a payout, if he can force that to be disclosed.

I think a few random samples trivially shows NotebookLM is higher pitched, although if you generalize to "deep male voice with vocal fry" you could lump them together with half the radio and podcast voices.


This view reduces countries to nothing more than oversized hotels or economic zones, as if they don’t have communities that go back many generations and who would fight or die to defend the borders.

Think this through: If the world likes your real estate, they can just come in and take it over overnight? Borders suddenly don’t matter?

Pop caps can easily be understood as visa or naturalization buffers. Hysteria doesn’t help.


Do you own that land? If you don't, then it's not your land and not for you to decide what to do with it. Where has anyone proposed taking over someone's land?

The Swiss own Switzerland, to clear this up.

Which Swiss own Switzerland? Switzerland is 42,000 sq km. Can you show me land deeds by owners that cover the entire area?

Well, you can start with Wikipedia: https://en.wikipedia.org/wiki/Cantons_of_Switzerland#Constit...

Then, you could reach out to the Cantons and ask about individual parcels or titles.

We can measure whether they have ownership by testing if they respond to trespass, maybe by constructing a building and seeing they mobilize a response.

Where do you want to move the goalposts next?


You are the one who made this really odd claim; you do the homework and show me, otherwise retract your nonsense. Who is "they"? Unless they physically own a title to the land the building was being made on, why should they get any say? You are the one who said the Swiss "own" Switzerland, so you need to show me that the sum total of all private land deeds covers the entirety of Switzerland. Not your land, not your decision. And unless you can produce an actual land deed, "trespass" is complete bullshit.

Nationalism and borders have done very little for humanity.

The world would be a better place if we defined our communities by how we welcome people and not by who we exclude.


If that was the case, there would be no need to worry about migrating to the area formerly-known-as-Switzerland to be among its people, would there?

The people who clamor for moving there now could simply remake what they imagine liking about it in another area - careful not to erect borders or engender any kind of pride or loyalty to what they build, naturally.


I support anti-nationalism all over the world. The rest of the countries are shitty too. I have yet to see what good nationalism and religion have given us in balance against the huge negatives they have bred.

Pause working visas and citizenship applications?

Organic population growth doesn’t have to be criminalized or authoritarian-controlled like China tried.

