Anecdotally, lots of people in SF tech hate AI too. _Most_ people outside of tech do.
But enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
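(For the curious, that classification step is simple to wire up. Below is a minimal sketch of the kind of thing I mean, using the OpenAI Python SDK; the topic list, prompt wording, and the `classify_segment` helper are illustrative assumptions, not the exact script I ran.)

```python
# Minimal sketch: label transcript segments by broad subject with gpt-4o.
# The topic list and prompt wording are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TOPICS = ["zoning", "budget", "public safety", "roads", "parks", "other"]

def classify_segment(text: str) -> str:
    """Return the single best-fitting topic label for one transcript segment."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You classify excerpts of village board meeting transcripts. "
                    f"Reply with exactly one label from this list: {', '.join(TOPICS)}."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in TOPICS else "other"

# Example: classify per-minute chunks of a transcript.
segments = [
    "Discussion of the proposed sidewalk repair levy...",
    "Public comment on the new dog park hours...",
]
for minute, segment in enumerate(segments, start=1):
    print(minute, classify_segment(segment))
```

Batching several segments per request or asking for structured output would cut costs, but the one-call-per-segment version is the easiest to sanity-check.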
Or, drive through Worth and Bridgeview in IL, where all the Middle Eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
> The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use
That seems more like a canary than anything. This is the demographic that doesn't even know which tech company they're talking to in Congress. That's not the demographic in touch with tech. They have gotten more excited about even dumber stuff.
For people under 50, it's a wildly common insult to say something seems AI generated. They are disillusioned with the content slop filling the internet, the fact that 50% of the internet is bots, and their future job prospects.
The only people I've seen liking AI art, like fake cat videos, are people over 50. Not that they don't matter, but they are not the driver of what's popular or sustainable.
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what those people are actually building. Their job is to get a question from upper management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting and make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
I live in a medium-sized British town of 100,000 people or so. It may be a slightly more creative town than most — lots of arts and music and a really surprisingly cool music scene — but I can tell you that AI pleases (almost) nobody.
I think a lot of it is actually the crass, unthinking, default-American-college-student manner in which ChatGPT speaks. It's so American and we can feel it. But AI-generated art and music are hugely unpopular, and AI chatbots replacing real customer service is something we loathe.
Generally speaking I would say that AI feels like something that is being done to us by a handful of powerful Americans we profoundly distrust (and for good reason: they are untrustworthy and we can see through their bullshit).
I can tell you that this is so different to the way the internet was initially received even by older people. But again, perhaps this is in part due to our changing perspectives on America. It felt like an exciting thing to be part of, and it helped in the media that the Web was the brainchild of a British person (even if twenty years later that same media would have liked to pretend he wasn't at a European research institution when he did it).
The feeling about AI is more like the feeling we have about what the internet eventually did to our culture: destroying our high streets. We know what is coming will not be good for what makes us us.
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
There's a long list of things that have "replaced" humans, all the way back to the ox-drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
Non-technical people that I know have rapidly embraced it as "better Google where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job, writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about the AI apocalypse every day.
Personally, I’m in between those opinions. I hate it when I’m consuming AI-generated stuff, but I can see the use for myself for work or for asking a bunch of not-so-important questions to get a general idea of things.
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
> enough of the people in tech have their future tied to AI that there are a lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
What’s so striking to me is that these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all, I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts, like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis; in fact, I use them less than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If, for example, AI does what it says on the box but causes goiters for prompt jockeys, do you think the industry will stop then, or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to consumer-convenience-measured progress.
From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and any devil's advocate must be the devil.
The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.
No, not the devil, but years ago I stopped finding it funny or useful when people “played” the part of devil’s advocate, because we all know that the vast majority of the time it’s just a convenient way to be contrarian without ever being held accountable for the opinions espoused in the process. It also tends to distract people from the actual discussion at hand.
People not being assholes and having opinions is not "trolling with virtue signaling". Even where people do virtue signal, it is a significant improvement over the "vice signaling" which you seem to be doing and expecting others to do.
I have an “enabling suicidal ideation” concern for starters.
To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: There have been plenty of high profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them for you and link them, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.
I am not against the use of LLMs, but like social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.
You're being needlessly prescriptive with language here. I am talking about health and safety writ large. I don't appreciate the game you're playing, and it's why these discussions rarely go anywhere. It can't all be flippant retorts and needling words. I am clearly saying that we, as a society, need to be willing to discuss the possible issues with LLMs and make informed decisions about how we want this technology to exist in our lives.
If you don't care about that so be it - just say it out loud then. But I do not feel like getting bogged down in justifying why we should even discuss it as we circle what this is really about.
All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.
If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.
I would disagree. Luddite, to me, is a negative and pejorative label because history has shown Ned Ludd and his followers to have been a short-sighted, self-sabotaging reactionary movement.
I think the same thing of the precautionary movements today, including the AI skeptic position you are advocating for here. The comparison is valid, and it is negative and pejorative because history is on the side of advancing technology.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI.
I think this comment (and TFA) is really just painting in overly broad strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences, or because they work with it and begrudgingly see the writing on the wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as many as I encounter AI haters/skeptics.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.