This is amazing. My entire web browser session state for every private and personal website I sign onto every day will be used for training data. It's great! I love this. This is exactly the direction humans should be going in to not self-destruct. The future is looking bright, while the light in our brains dims to eventual darkness. Slowly. Tragically. And for what purpose exactly. So cool.
> Who gets to decide on the exact definition of a “Torment Nexus”?
It's a joke that is intentionally vague so that the reader could come up with their own definition. Hopefully no one tries to make one so we don't need to define it.
> Presupposing whether everyone reading HN likes or dislikes something not even agreed on yet seems silly.
Yep! Ever seen Pulp Fiction? Do you know what's in the briefcase? No one does. It's a MacGuffin. The Torment Nexus is also a MacGuffin: a poorly defined thing whose actual definition is irrelevant to the plot; all that matters is how it creates motivation for the characters.
In this joke, the "Torment Nexus" is, by name, clearly something you don't want: no one wants to be tormented. It's also a MacGuffin, because something named a Torment Nexus would HOPEFULLY be something no one would build, but the joke is "hey, this guy went and built the horrible thing we didn't want!"
Ever heard anyone make a Soylent Green joke? Same thing. All we know is that "soylent green in people" but we don't know HOW it's people, but it doesn't matter because we simply don't want to eat people under any condition.
Small but important correction: the actual quote is “Soylent Green _is_ people!”. Agree with your broader point about the Torment Nexus being a McGuffin, but we do actually know how Soylent Green is people in the film - it’s explicitly revealed that it’s made from processed corpses due to overpopulation.
Yep, that’s a typo, you’re completely right. But when I say how, I mean literally “what is the manufacturing process that turns dead humans into a food product people don’t know is former people?” If it’s protein bars or “synthetic tofu” or maybe what looks like blueberry muffins, it doesn’t matter to the story.
It's already defined as an abstract archetype! Those aren't supposed to be concrete individual objects in the first place, they are shaped placeholders. In particular, "Torment Nexus" is a placeholder for any hypothetical (or real) high-tech invention which involves a disturbing amount of human suffering.
In other words, it's just like discussing "the Hero's Magic Weapon" or "the Wise Wizard" or "the Weird Place Where Ships Vanish".
Suppose I stated: "Captains hate to pilot their ships near the Weird Place Where Ships Vanish." Does it make sense for someone to complain that the coordinates of the Place haven't been defined, or that nobody has done a statistical analysis of Ship Vanishing rates?
"Torment Nexus" comes from (and is used as a concise reference to) a two-sentence tweet [0], and I think it makes clear what the joke (or, perhaps, "dystopian observation") is about:
Sci-Fi Author: In my book, I invented the Torment Nexus as a cautionary tale.

Tech Company: At long last, we have created the Torment Nexus from the classic sci-fi novel, Don't Create The Torment Nexus.
The movie that doesn't get enough credit at predicting the future, or what is now the present, is Captain America: The Winter Soldier. DOGE, Palantir, Larry Ellison's vision of nonstop AI surveillance, and all the data-sucking tech companies swearing fealty to the orange authoritarian are bringing the plot of that movie directly into reality, and I'm always surprised that it never gets mentioned.
Ha. That's the most outlandish part of the plot. In terms of enforcement and control, Black Mirror's Metalhead episode seems the more likely vision, where the robotic dogs are comparable to drones.
I hate to break it to you, but Palantir was founded 11 years before Winter Soldier came out. It was commentary on the current world and near-term sci-fi, not a far off warning. We've been operating under surveillance capitalism for over two decades now. Which is probably why it doesn't get mentioned: people had already acclimated to it as the unacceptable yet inevitable future, doing their best to perpetuate it while voicing disdain.
I hate to break it to you, but I'm well aware of surveillance capitalism and when Palantir was founded. And I didn't say Winter Soldier was a "far off warning". Maybe you need to check your reading comprehension skills.
The unification of government data on citizens under DOGE and the push to use AI for surveillance under an authoritarian government bring us far closer to the plot of Winter Soldier than the bread and butter surveillance capitalism we'd already been living under. I regret that I had to spell that out for you.
> The unification of government data on citizens under DOGE and the push to use AI for surveillance under an authoritarian government bring us far closer to the plot of Winter Soldier than the bread and butter surveillance capitalism we'd already been living under.
I never disagreed with this point.
My comment was only about the point that Winter Soldier "doesn't get enough credit." My point was that the setting was not just "not novel" but already commonplace, meaning Winter Soldier does not stand out as a unique representation. I want to stress "not a unique representation" != "not a representation".
It sounds like you're saying the setting was already commonplace in fictional media? But you don't reference such representations in your prior comment at all.
At any rate, while themes of technological surveillance and authoritarianism certainly predate Winter Soldier, I'm not aware of anything in popular culture prior to 2014 that really matches the moment _to the degree_ Winter Soldier does. And if you simply meant the actual state of America circa 2014, sure, Palantir existed, but DOGE and an authoritarian-controlled US military institution did not. ML, yes, but not the quasi-AGI of today that's a much closer match for the computerized Arnim Zola.
When I ask 20-somethings whether they've seen The Matrix, the answer is usually 'no'. They have little idea what they're working towards, but are happy to be doing it so they have something to eat.
Yet they have seen Black Mirror and the like, which also portrays the future we're heading towards. I'd argue even better, because The Matrix is still far off.
But also, it's not the 20-somethings building this; the people making decisions are in their 40s and 50s.
The Matrix was inspired by the Gnostic schools of thought. The authors obviously knew loads about esoteric spirituality and the occult sciences. People have been suggesting that we are trapped in a simulacrum / matrix for over two-thousand years. I personally believe The Matrix was somewhat of a documentary. I'm curious - why do you think a concept such as presented in The Matrix, is still far off?
I think we are close to WALL-E or Fifteen Million Merits, maybe even almost at the Oasis (as seen by IOI). But we have made little progress in direct brain stimulation of the senses. We are also extremely far from robots that can do unsupervised complex work (work that requires a modicum of improvisation).
Of course we might already be in the Matrix or a simulation, but if that's the case it doesn't really change much.
The difference is that we don't have merits the way the characters do in Brooker's universe; we have social clout in the form of upvotes, likes, hearts, retweets, streaming subs, etc., most of which are monetised in some form or are otherwise a path to a sponsorship deal.
The popularity contest this all culminates in is, in reality, much larger in scale than what was imagined in Black Mirror. The platform itself is the popularity contest.
> We are also extremely far from robots that can do unsupervised complex work
Don't worry, they'll just sell teleoperated robots[0]. I'm absolutely positive this definitively 100% won't get outsourced and result in you getting a s̶l̶a̶v̶e̶ s̶e̶r̶v̶a̶n̶t̶ low cost helper from a third world country. The dehumanization is a feature!
[0] I'm not joking, they are openly stating this...
Some would argue that most stories in Western societies are echoing the Bible. The Matrix is in many ways the story of Jesus (Morpheus is John the Baptist).
Brain/computer interface that completely simulates inputs which drive perceptions which are indistinguishable from reality. At least, that’s what is portrayed in the movie. I’m not OP but this to me seems far off.
Fair point and thank you for sharing it! It definitely does feel far off in that aspect. I suppose though, that if we are all trapped in a false reality it is impossible to know (without escaping the false reality) how advanced base reality actually is. I always interpreted the whole jacking into the Matrix thing, metaphorically, but with a literal interpretation the OP's comment makes much more sense to me. Thanks again!
The Matrix was a direct rip-off of the Ghost in the Shell series, which did a much better job of capturing the essence of the issue in depth (the writers have all but admitted it, and there are videos out there that do scene-by-scene comparisons). Ghost in the Shell is heavily influenced by Buddhism. While there are obvious overlaps with Platonism (which forms the core of Gnosticism: salvation through knowledge of the real world, and the current world ~= suffering and not real), it wouldn't be correct to attribute Gnosticism as the influence behind The Matrix.
I enjoyed Silo, but I think in the real world, completely destroying the world's ecosystem and a fraction of mankind surviving in tiny isolated bunkers for generations is more fantasy than scifi...
It's funny how AI companies and national intelligence agencies have the same goal, in much the same way that it used to be funny that social networks got people to publicly volunteer information about themselves that previous generations would not have admitted to a state security service.
I think it's more like: investors are permanently unhappy because they were promised ownership of God, and now that we're built out they're getting a few percent a year instead, at best. Squeeze extra hard this quarter to get them off the Board's backs for another couple of months.
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
Personally I think it would be awesome if we could browse a 1999 version of the web. Better than the crap we have today, even if it is all just AI generated.
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
Hey now, don’t forget how they will just be able to hand over everything you’ve ever done to the government! We know no government or power would ever abuse that.
The government is not your biggest concern (if you are not a brown-skinned immigrant):
* Insurance companies
* Health insurers
* Banks
These all would like a chance to increase their profits at your expense based on your private data. Just because they will get it wrong 25% of the time doesn't mean it won't be profitable for them...
The government does not really care if you pay your taxes
I recommend BrowserOS if you're looking for alternatives.
Open-source agentic browser that uses any LLM provider (including local / Ollama).
That being said... I tried it, and while it was fun and cool, I didn't get enough value out of it to use it regularly (I think this would go for any agentic browser).
Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.
AFAICS this has nothing to do with "open-source personal AI engines".
The recorded history is stored in a SQLite database and is quite trivial to examine[0][1]. A simple script could extract the information and feed it to your indexer of choice. Developing such a script isn't a task for an internet browser engineering team.
The question remains whether the indexer would really benefit from real-time ingestion while browsing.
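To illustrate how trivial the extraction is, here's a minimal sketch, assuming Firefox's `places.sqlite` layout (a `moz_places` table with `url`, `title`, and `last_visit_date` columns; the `DB_PATH` and `recent_history` names are mine, and you'd want to copy the file first since the live one is locked by Firefox):

```python
import sqlite3

# Hypothetical path; the real file lives in your Firefox profile directory.
DB_PATH = "places.sqlite"

def recent_history(db_path, limit=10):
    """Return (url, title, last_visit_date) rows, newest first.

    last_visit_date is microseconds since the Unix epoch in Firefox's schema.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            """
            SELECT url, title, last_visit_date
            FROM moz_places
            WHERE last_visit_date IS NOT NULL
            ORDER BY last_visit_date DESC
            LIMIT ?
            """,
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return rows
```

Piping each row into an indexer is then a for-loop; the open question above (real-time vs. batch ingestion) is orthogonal to the extraction itself.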
Due to the dynamic nature of the Web, URLs don't map to what you've seen. If I visit a URL at a certain time, the content I see is different from the content you see, or even from what I see if I visit the same URL later. For example, if we want to know whether the tweets I'm seeing are the same as the tweets you're seeing and haven't been subtly modified by an AI, how do you do that? In the age of AI programming people, this will be important.
I'm confused, do you want more than the browser history then? ...something like Microsoft's Recall? Browsers currently don't store what they've seen and for good reasons. I was with you for a sec, but good luck convincing Mozilla to propagate rendered pages to other processes then!
I understand GP as wanting to browse normally and have that session's history feed into another indexing process via some IPC like D-Bus. It's meant to receive human events from the browser.
Chrome Devtools MCP on the other hand is a browser automation tool. Its purpose is to make it trivial to send programmed events/event-flows to a browser session.
In all seriousness yes. Except maybe for the last 5 sentences.
I fail to see the issue people have here. I mean, what exactly is the problem with training data here? This is not like advertising, where the data is used against you. It's not information about people that's being collected and extracted here - it's about collecting enough signal to identify patterns of thinking; it's about how human minds in general perceive the world. This is not going to hurt you.
(LLMs ultimately might, when wielded by... the same parties that have been screwing you over for decades or more. It's not OpenAI that's screwing you over here - it's advertisers, marketers, news publishers, and others in the good ol' cohort of exploitative liars.)
Perplexity released theirs earlier, and as far as I know, they do not use any of your data like that for training. It's really a shame if that's how OpenAI is using your data. I was going to try their coding solution, but now I'm just flat out blacklisting them and I'll stick to Claude. For whatever reason Claude Code just understands me fully.
Hate to be the dum-dum, but what's leading to humanity's self-destruction here? Loss of privacy? Outsized corporate power? Or, is this an extreme mix of hyperbole and sarcasm?
Well, you could always focus on the ridiculous environmental impact of LLMs. I read once that asking ChatGPT used 250x as much energy as just googling. But now Google has incorporated LLMs into search, so…
I grew up on the banks of the Hudson River, polluted by corporations dumping their refuse into it while reaping profits. Anthropic/openai/etc are doing the same thing.
Yes. It's horrible. Probably 250x as much as watering your lawn per 1M ChatGPT queries. Except your sprinklers' vendor probably incorporates ChatGPT in their marketing, so they're literally using water to sell you tools to use water!
Oh the humanity!
I can't take those eco-impact threads seriously. Yes, ChatGPT uses compute, and compute uses water and electricity. So does keeping your lawn trimmed and your dog well, and of the three, I bet ChatGPT is actually generating the most value to everyone on the net.
Everything we do uses electricity and water. Everything that lives uses energy and water. The question isn't whether, or how much, but what for. Yes, LLMs use a lot of energy in absolute terms - but that's because they're that useful. Yes, despite what people who deny basic reality would tell you, LLMs actually are tremendously useful. In relative terms, they don't use that much more energy or water vs. things they displace, and they make up for it in the improvements.
Want to talk environmental impact of ChatGPT et al.? Sure, but let's frame it with comparative figures for sportsball, concerts, holiday decorations, Christmas lights, political campaigns, or pets. Suddenly, it turns out the whole thing is merely a storm in a teacup.
Have you read about the impact of data centers in non-US countries? Building a data center that requires potable water in a drought stricken country that lacks the resources to defend itself is incredibly destructive.
And I don’t have a dog but that water usage certainly provides the most benefit. Man’s best friend > online sex bot.
> Have you read about the impact of data centers in non-US countries? Building a data center that requires potable water in a drought stricken country that lacks the resources to defend itself is incredibly destructive.
And? Have you read about the impact of ${production facilities} in non-US countries? That's literally what industrialization and globalization are about. Data centers aren't unique here - same is true about power plants, factories, industrial zones, etc. It all boils down to the fact that money, electricity and compute are fungible.
Note: this alone is not a defense of LLMs, merely me arguing that they're nothing special and don't deserve being singled out - it's just another convoluted scenario of multinational industries vs. local population.
(Also last time I checked, the whole controversy was being stoked up by a bunch of large interest groups that aren't happy about competition disturbing their subsidized water costs - it's not actually a grassroots thing, but an industry-level PR war.)
> Man’s best friend > online sex bot.
That's disingenuous. I could just as well say: {education, empowering individuals to solve more of their own problems, improving patient outcomes} > pets and trimmed lawns. LLMs do all of these and sex bots too; I'm pretty sure they do more of the former than the latter, but you can't prove it either way, because compute is fungible and accurate metrics are hard to come by :P.
That's clearer. I can see how that can be a problem, but destruction of humanity? I think of this as a fun change in circumstance at best and a challenge at worst, rather than a disaster.
Asymmetry of power creates rulers and the ruled. Widespread availability of firearms helped to partly balance out one aspect (monopoly on violence) and the wide availability of personal computers plus the Internet balanced out another (monopoly on information). Only part left is the control of resources (food, housing, etc.).
AI is destabilizing the current balance of knowledge/information which creates the high potential for violence.
Consider a society where everyone has a different reality about something that is normally shared.
Societies are built upon unspoken but shared truths and information (i.e. the social contract). Dissolve this information, dissolve or fragment the society.
This, coupled with profiling and targeting will enable fragmentation of the societies, consolidation of power and many other shenanigans.
This also enables continuous profiling, opening the door for "preemptive policing" (Minority Report style) and other dystopian things.
Think about Cambridge Analytica or election manipulation, but on steroids.
This. Power and control are only viable at scale when the aforementioned tactics are wielded with precision by "invisible hands"...
History has proved that keeping society stupid and disenfranchised is essential to control.
Did you know that in the 1600s the King of England banned coffee?
Simple: fear of better ideas evolving and propagating, and of more intense social fraternity.
"Patrons read and debated the news of the day in coffeehouses, fueled by caffeine; the coffeehouse became a core engine of the new scientific and philosophical thought that characterized the era. Soon there were hundreds of establishments selling coffee."
(the late 1600s was something of a fraught time for England and especially for Charles II, who had spent some time in exile due to the monarchist loss of the English Civil War)
But the impact of AI is going to be even worse than that.
For virtually all of human history, there weren't anywhere near so many of us as there are now, and the success and continuation of any one group of humans wasn't particularly dependent on the rest. Sure, there were large-scale trade flows, but there were no direct dependencies between farmers in Europe, farmers in China, farmers in India, etc. If one society collapsed, others kept going.
The worst historical collapses I'm familiar with - the Late Bronze Age Collapse and the fall of the Roman Empire - were directly tied to larger-scope trade, and were still localized beyond comparison with our modern world.
Until very recently, total human population at any given point in history has been between 100 and 400 million. We're now past 8 billion. And those 8 billion people depend on an interconnected global supply chain for food. A supply chain that, in turn, was built with a complex shared consensus on a great many things.
AI, via its ability to cheaply produce convincing BS at scale (even if it also does other things), is a direct and imminent threat to the system of global trade that keeps 8 billion human beings fed (and that sustains the technology base which allows for AI, along with many other things).
I don't want to invalidate your viewpoint: I'll just share mine.
The shared truth that holds us together, that you mentioned, in my eyes is love of humanity, as cliche as that might sound. Sure it wavers, we have our ups and downs, but at the end, every generation is kinder and smarter than the previous. I see an upward spiral.
Yes, there are those of us who might feel inclined to subdue and deceive, out of feelings of powerlessness, no doubt. But, then there are many of us who don't care for anything less than kindness. And, every act of oppression inches us toward speaking and acting up. It's a self-balancing system: even if one falls asleep at the wheel, that only makes the next wake-up call more intense.
As to the more specific point about fragmented information spaces: we always had that. At all points in history we had varying ways to mess with how information, ideas and beliefs flowed: for better and for worse. The new landscape of information flow, brought about by LLMs, is a reflection of our increasing power, just as a teenager is more powerful than a pre-teen, and that brings its own "increased" challenges. That's part of the human experience. It doesn't mean that we have to ride the bad possibilities to the complete extreme, and we won't, I believe.
Thanks for your kind reply. I wanted to put some time aside to reply the way your comment deserves.
My personal foundations are not very different from yours. I don't care about many things people care about. Being a human being and having your heart in the right place is a good starting point for me, too.
On the other hand, we need to make a distinction between people who live (ordinary citizens) and people who lead (people inside government and managers of influential corporations). There's the saying "power corrupts", now this saying has scientific basis: https://www.theatlantic.com/magazine/archive/2017/07/power-c...
So, the "ruling class", for the lack of better term, doesn't think like us. I strive to be kinder every day. They don't (or can't) care. They just want more power, nothing else.
For the fragmented spaces, the challenge is different from the past. We humans are social animals and were always in social groups (tribes, settlements, towns, cities, countries, etc.); we felt we belonged. As the system got complex, we evolved as a result. But the change was slow, so we were able to adapt over a couple of generations. From the '80s to the '00s it was faster, but we managed it somehow. Now it's exponentially faster, and the more primitive parts of our brains can't handle it as gracefully. Our societies, ideas and systems are strained.
Another problem is that, unfortunately, not all societies, or parts of the same society, evolve at the same pace into the same kinder, more compassionate human beings. Radicalism is on the rise. It doesn't have to be violent, but some parts of the world are becoming less tolerant. We can't ignore these things. See world politics. It's... complicated.
So, while I share your optimism and light, I also want to underline that we need to stay vigilant. Because humans are complicated. Some are naive, some are defenseless and some just want to watch the world burn.
Instead of believing that everything's gonna be alright eventually, we need to do our part to nudge our planet in that direction. We need to feed the wolf which we want to win: https://en.wikipedia.org/wiki/Two_Wolves
Argh, I lost my reply due to a hiccup with my distraction-blocking browser extension. I'll try and summarize what I wanted to say. I'll probably be more terse than I originally would have been.
I appreciate your thoughtful reply. I too think that our viewpoints are very similar.
I think you hit the nail on the head about how it's important that positivity doesn't become an excuse for inaction or ignorance. What I want is a positivity that's a rally, not a withdrawal.
Instead of thinking of power as something that imposes itself on people (and corrupts them), I like to think that people tend to exhibit their inner demons when they're in positions of power (or, conversely, in positions of no power). It's not that the position does something to them, but that they prefer to express their preexisting imbalance (inner conflict) in certain ways when they're in those circumstances. When in power, the inner imbalance manifests as a villain; when out of power, it manifests as a victim.
I think it's important to say "we", rather than "us and them". I don't see multiple factions with fundamentally incompatible needs. Basically, I think that conflict is always a miscommunication. But, in no way do I mean that one should cede to tyranny or injustice. It's just that I want to keep in mind, that whenever there's fighting, it's always in-fighting. Same for oppression: it's not them hurting us, but us hurting us: an orchestration between villains and victims. I know it's triggering for people when you humanize villains and depassify victims, but in my eyes we're all human and all powerful, except we pretend that the 1% is super powerful, while the 99% are super powerless.
I had a few more points I wanted to share, but I have to run. Thanks for the conversation.
Google gave us direct access to much of the world's knowledge base, then snatched it away capriciously and put a facsimile of it behind an algorithmic paywall that they control at the whims of their leadership, engineering, or benefactors.
The despair any rational person will feel upon realizing that they lobotomized the overmind that drove Information Age society might just be traumatic enough, in aggregate, to set off a collapse.
So, yes. Destruction of humanity (at least, as we know it) incoming. That's without the super AI.
Why convince them? If they never go outside, they’ll just be inside anyway. You won’t interact with them. Metaphorically. Real life is a place, not an idea.
You're interacting all day with real people who don't see your face or hear your voice, and you affect each other.
Real life is a place encompassing the "cyberspace", too. They're not separate but intertwined. You argue that the people affecting your life are the ones closest to you, yet you continuously interact with people who are farthest from you distance-wise, and they affect your day at this very moment.
People who want billions of people to be inside and compliant, want those people's vote to go a certain way (at least, while that is even still a thing). Once that part stops being a thing, you stop being allowed to be outside, as that could be a problem.
Not invalidating your viewpoint and I'd bet we are pretty well aligned, I too have a pretty local-first view and that as a country we put too much emphasis, energy, and discussion on national politics and could all benefit from "getting outside". That said, I did want to point out that this comes across as a very self-centric viewpoint, one that would differ greatly depending on who you ask. Even as an anecdotal story, it offers very little to say about the current state of affairs related to how people voted, which would appear to be the intent of the response.
As a bit of a semi-related aside, while everyone has different motivations when voting, as a whole when folks are able to vote for their gov't, one hopes that enough people are thinking about what is good for the majority and society as a whole and not only what is good for themselves. And that has more impact at local and state levels usually. A bit idealistic, admittedly.
I certainly can and do that. Can you please convince the remaining 8 billion to do the same?
Based on, e.g., the election behavior of populations, what you describe is naivety on the level of maybe my 5-year-old son. Can you try to be a bit more constructive?
Ok, let’s break it down for you: What all 8 billion people in the world think does not matter to you. There are people out there cutting heads off in the name of religion, or people who think their dictator is a divine being.
People outside your country have little effect on your daily life.
Even people within your country have a weak effect on your daily life.
What other people believe only really matters for economic reasons. Still, unless you are very dependent on social safety nets even they don’t matter that much. You just find more money and carry on.
You might think that more propaganda will result in people voting for bad politicians, but it is actually possible to have too much propaganda. If people become aware of how easily fake content is generated, which they are rapidly realizing in the age of AI, the result is they become skeptical of everything, and come up with their own wild theories on what the truth really is.
The people whose thoughts matter most are the people you interact with on a daily basis, as they have the most capability to alter your daily life physically and immediately. Fortunately you can control better who you surround yourself with or how you interact with them.
If you turn off the conversation, the world will appear pretty indifferent even to things that seem like a big deal on social media.
You said: "You might think that more propaganda will result in people voting for bad politicians"
In the US at least, the people who vote the most are typically older people, 40+, and those people have very little experience with tech and AI and are easily tricked by fake crap. Add AI to the mix, and they literally have no perception of the real world.
People in their 40s have very little experience with tech? Those are the people who practically invented tech as we know it today. Most AI researchers are in their 40s and 50s and have been experimenting with machine learning and AI for the past decades.
I think your comment is just very ageist. You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
Ironically I would say it is young 20 somethings and below who have no clue how a computer or software even works. Just a magic sheet of glass or black box that spits out content and answers, and sometimes takes pictures and video.
I am 42. I had a Commodore 64 as a kid. From the perspective of my first girlfriend, this made me wildly privileged. My first boyfriend was about a decade older than me, he didn't even have a household phone growing up.
> You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
No, they have not. My dad worked on UK military IFF software solutions and simulations. It still took him years to realise Google (c. 2010) search results had a scroll bar. Mum eventually did get Alzheimer's, but was mixing up reality and fiction decades earlier, in the form of New Age healing crystals, ley lines, etc., and as I grew up with that influence I too believed them, until practicing Popper-like falsification stripped away each magickal belief.
Me, I've been following AI since before the turn of the millennium, and I still get surprised when I see how fast the tech is improving. I also see plenty of people, even on HN, assert AI will take (paraphrasing) "centuries, if ever" to reach performance thresholds it has now reached.
Your mom and dad must be much older, but at 42 you are among the first of the millennials and that’s a pretty tech literate generation. You’ve experienced a huge spectrum of tech and software in various forms, and struggled through their evolutions.
A lot of assumptions, some could be correct, some are plainly not.
Your idea of living in society is something very different from my idea, or the European idea (and reality). Not seeing how everything is interconnected, how ripple effects and secondary/tertiary effects come back again and again... I guess you don't have kids. Well, you do your life, if you think money can push through and solve everything important. I'd call it a sad, shortsighted life if I cared, but that's just me.
This is kind of true - the media environment can be both overwhelming and irrelevant. But eventually it hits. I have some friends who are trans and very familiar with what a hostile propaganda campaign can do to your healthcare.
tl;dr: The more close friends people have, the more polarized societies become.
It's easy to profile people extensively, and pinpoint them to the same neighborhood, home or social circle.
Now, what happens when you feed "what you want" to these groups of people? You can plant polarization with neighbourhood or home precision and control people en masse.
Knowing or seeing these people doesn't matter. After some time you might find yourself proverbially cutting heads off in the name of what you believe. We used to call these flame wars back then. Now this is called doxxing and swatting.
The people you don't know can make life very miserable in a slow-burning way. You might not be feeling it, but this is the core principle of slowly boiling the frog.
Absolute control over what people think and know, which is sort of absolute control overall with power to normalize anything, including what we consider evil now.
Look at the really powerful people of this world: literally every single one of them is a badly broken piece of shit (to be extremely polite), a control freak with a fucked-up childhood, overcompensating for a missing or broken father figure, often malicious, petty, vengeful, feeling above the rest of us.
The whole reason for democracy since ancient times is to limit how much power such people have. The competent, sociopathic among them will always rise to the top of society regardless of the type of system, so we need good mechanisms to prevent them from becoming absolute lifelong dictators (and we need to prevent them from attaining immortality, since that would be our doom on another level).
People haven't changed over the past few thousand years, and any society that failed at this eventually collapsed in very bad ways. We shouldn't want the same for our current global civilization, and should for a change learn from past mistakes, unless you like the idea of a few decades of global warfare and a few billion deaths. I certainly don't.