Hacker News

I think we must make it clear that this is not related to AI at all, even if the product in question is AI-related.

It is a very common problem with modern marketing teams that have zero empathy for customers (even if individual marketers have some, they will never push back on whatever insane demands come from senior management). This is why every email subscription management interface is now as bloated as a dead whale. If too many users unsubscribe, they just add one more category and “accidentally” opt everyone in.

It’s a shame that Proton’s marketing team is just like every other one. Maybe it’s a curse of a growing organization and middle-management creep. The least we can do is push back as customers.





I disagree, inasmuch as I have noticed this *far* more with AI than with any previous advancement / fad (depending on your opinion).

This also tracks with every app and website injecting AI into every one of your interactions, with no way to disable it.

I think the article's point about non-consent is a very apt one, and expresses why I dislike this trend so much. I left Google Workspace, as a paying customer for years, because they injected gemini into gmail etc and I couldn't turn it off (only those on the most expensive enterprise plans could at the time I left).

To be clear I am someone that uses AI basically every day, but the non-consent is still frustrating and dehumanising. Users–even paying users–are "considered" in design these days as much as a cow is "considered" in the design of a dairy farm.

I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.


To add to this, it's the same attitude that they used to create the AI in the first place by using content which they don't own, without permission. Regardless of how useful it may be, the companies creating it and including it have demonstrated time and again that they do not care about consent.

> the same attitude that they used to create the AI in the first place by using content which they don't own, without permission

This was a massive "white pill" for me. When the needs of emerging technology ran head first into the old established norms of ""intellectual property"" it blew straight through like a battle tank, technology didn't even bother to slow down and try to negotiate. This has alleviated much of my concern with IP laws stifling progress; when push comes to shove, progress wins easily.


For big corps yes.

For everyone else, chains.


You haven't taken to the high seas?

How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".

It's not hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided that "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death like "well, I guess knives don't have human values".

Human beings are doing this.


> How can you get a machine to have values?

The short answer is a reward function. The long answer is the alignment problem.

Of course, everything in the middle is what matters. Explicitly defined reward functions are complete, but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but the same goes for humans. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
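To make the contrast concrete, here is a toy sketch (all names hypothetical, not from any real RL library) of the two reward-definition styles: an explicit hand-written reward that is defined for every state but easy to game, and a preference-data reward that tracks the examples it has seen but is silent off-distribution.

```python
# Toy illustration of the two reward-definition styles discussed above.
# Everything here is a hypothetical sketch, not a real alignment method.

def explicit_reward(state: dict) -> float:
    """Hand-written reward: complete (defined for every state),
    but easily inconsistent with what we actually want (reward hacking)."""
    # Rewards raw output volume; an agent can maximize this while
    # producing garbage -- the classic misspecification problem.
    return float(state["emails_sent"]) - 0.1 * float(state["complaints"])

def preference_reward(candidate: str, preferred_examples: list[str]) -> float:
    """Data-defined reward: consistent with the human judgments it has
    seen, but incomplete -- silent on anything outside the examples."""
    # Crude stand-in for a learned reward model: word overlap with examples.
    words = set(candidate.split())
    if not preferred_examples:
        return 0.0  # incomplete: no signal at all off-distribution
    scores = []
    for ex in preferred_examples:
        ex_words = set(ex.split())
        union = words | ex_words
        scores.append(len(words & ex_words) / len(union) if union else 0.0)
    return max(scores)

# The explicit reward happily scores a spammy state highly:
print(explicit_reward({"emails_sent": 100, "complaints": 5}))  # 99.5
# The preference reward says nothing about unseen behavior:
print(preference_reward("buy now", []))  # 0.0
```

The point of the sketch: neither style alone captures intent, which is why the middle ground (and the alignment problem generally) is where the hard work sits.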


Well, it’s pretty clear to me that the current reward function of profit maximization has a lot of down sides that aren’t sufficiently taken into account.

The only thing worse than profit-maximisation is anything-else-maximisation.

That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for solving this, so we simply won’t.”

Maybe if we can't build a machine that isn't a sociopath, the answer should be "don't build the machine" rather than "oh well, go ahead and build the sociopaths".

This has real Torment Nexus[0] energy

[0] Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus


I’d argue that a lot of the scrape-and-train is just the newest and most blatant exploitation of the relationship that always existed, not a renegotiation of it. Stack overflow monetized millions of hours of people’s work. Same thing with Reddit and Twitter and plenty of other websites.

Legally it is different with books (as Anthropic found out) but I would argue morally it is more similar: forum users and most authors write not for money, but because they enjoy it.


I don't know, it feels odd to declare people wrote "because they enjoy it" and then get irritated when someone finds a way to monetize it retrospectively.

Like you're either doing this for the money or you're not, and its okay to re-evaluate that decision...but at the same time there's a whole lot of "actually I was low key trying to build a career" type energy to a lot of the complaining.

Like I switched off from Facebook some years after discovering it, when it increasingly became "look at my new business venture... friends". LinkedIn is at least upfront about it and I can ignore the feed entirely (I use it for job listings only).


The shift from "you just don't understand" to damage control would be funny if it wasn't so transparent.

> We have identified a bug in our system... we take communication consent very seriously

> There was a bug, and we fucked up... we take comms consent seriously

These two actors were clearly coached into the same narrative. I also absolutely don't believe them at all: some PM made the conscious decision to bypass user preferences to increase some KPI that pleases some AI-invested stakeholder.


> only those on the most expensive enterprise plans could at the time I left.

lol. so the premium feature is the ability to turn off the AI? That's one way to monetise AI I suppose.


Hahaha. It's like a protection racket for the new age.

"Nice user experience you got there. Would be a real shame if AI got added to it."


> I left Google Workspace, as a paying customer for years, because they injected gemini into gmail

I wonder if this varies by territory. In the UK, none of the Gmail accounts I use has received this pollution.

> I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.

The latter sounds safer. The former may add "AI" tomorrow.


I am in the UK. TBC this isn't a gmail.com email address, this is a paid "small business" workspace against a custom domain.

Eventually they backtracked and allowed (I think?) all paid customers to disable gemini, but I had already migrated to Fastmail so :shrug:


Ah. My addresses are @gmail.com.

Perhaps the fact you paid got you marked as a likely gull :)


I think in that case you have even less ability to turn that stuff off? If it's not there for you yet, perhaps it's a slow rollout still?

Perhaps yes. We'll see :(

Gmail <> Google Workspaces

Maybe not equal but when I launch Gmail the page says "Google Workspace" and I get Gmail, Docs etc. as per https://workspace.google.com/intl/en_uk/resources/what-is-wo... .

Google has always released to Workspaces and Gmail separately. In this case the Gemini button is in Workspaces (because they’re a paid tier) but not yet Gmail.

Yeah this is not a new thing with AI, you can unsubscribe all you want, they are still gonna email you about "seminars" and other bullshit. AWS has so many of those and your email is permanently in their database, even if you delete your account. I also still get Oracle Cloud emails even though I told them to delete my account as well, so I can't even log in anymore to update preferences!

Fun fact: requiring login to unsubscribe is illegal per the CAN-SPAM Act. The most you can do is force a user to verify their email address to you.

> I disagree: in as much as I have noticed this far more with AI than any other advancement / fad (depending on your opinion) than anything else before

Isn't that because most of the other advancements/fads were not as widely applicable?

With earlier things there was usually only particular kinds of sites or products where they would be useful. You'd still get some people trying to put them in places they made no sense, but most of the places they made no sense stayed untouched.

With AI, if well done, it would be useful nearly everywhere. It might not be well done enough yet for some of the places people are putting it so ends up being annoying, but that's a problem of them being premature, not a problem of them wanting to put AI somewhere it makes no sense.

There have been previous advancements that were useful nearly everywhere, such as the internet or the microcomputer, but they started out with limited availability and took many years to become widely available so they were more like several smaller advancements/fads in series rather than one big one like AI.


This is a very strange argument. If AI were so bloody revolutionary, you wouldn't have to sneak it into your products without consent.

Very often AI seems to be a solution looking for a problem.


> With AI, if well done, it would be useful nearly everywhere.

I fundamentally disagree with this.

I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans.

I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified. Not now, not ever.


Keep in mind I said "if well done". That was not meant to imply that I think the current AI offerings are well done. I'd take "well done" to mean that it performs the tasks it is meant for as well as human assistants perform those tasks.

> I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans. [...] I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified.

That's fine, but generally the tools involved in doing those things are designed to be general purpose.

A word processor isn't just going to be used by people writing personal things for example. It will also be used by people writing documentation and reports for work. Without AI it is common for those people to ask subordinates, if they are high enough in their organization to have them, to write sections of the report or to read source material and summarize it for them.

An AI tool, if good enough to do those tasks, would be useful to those users, and so it makes sense for such tools to be added by the word processor developer.

Again, I'm not saying that the AI tools currently being added to basically everything are good enough.

The point is that

(1) a large variety of tools and products have enough users that would find built-in AI useful (even if some users won't) that it makes a lot of sense for them to include those tools (when they become good enough), and

(2) AI may be unique compared to prior advances/fads in how wide a range of things this applies to and the speed it has reached a point that companies think it has become good enough (again, not saying they have made the right judgement about whether it is good enough).


How about machine translation and fixing grammar in languages you're not very familiar with? That's the only use of "AI" I've found so far. I'd rather read (and write) broken English in informal contexts like this forum, but there are enough more formal situations.

Remember, I am responding to this:

> With AI, if well done, it would be *useful nearly everywhere.*

I'm not saying it doesn't have uses.

Having said that, there are two things I never want AI to do: a) degrade or remove the need for me to express myself as a human being, b) do work I'd have to redo to prove it did it correctly.

On translation, sycophancy is a problem. I can't find it now, but there was an article I read about an LLM mistranslating papers to exclude data it thought the user wasn't interested in. So no, I wouldn't trust it for anything I cared about.

I do use AI: I'm literally reviewing some Claude generated code at the moment. But I can read that and know that it's done it right (or not, as the case often is). This is different from translation or summarisation, where I'd have to do the whole task again to prove correctness.


If you're not familiar, how could you possibly know if what you're conveying is accurate to your intention? And if you don't, why bother at all?

I don't want those added to anything either - if I want to translate something I'll use a dedicated tool.

Even WhatsApp has it in the search bar

For me it’s just a multi-coloured ring like a gamer’s mood light, but it’s literally just slapped in the corner of the UI the same way a shitty Intercom widget would be.

Totally a thing a growth hacking team would do, injecting an interface on top of a design.


>I disagree: in as much as I have noticed this far more with AI than any other advancement / fad

I agree with gp that new spam emails overriding customers' email marketing preferences are not an "AI" issue.

The problem is that once companies have your email address, their irresistible compulsion to spam you is so great that they will deliberately not honor their own "Communication Preferences" that supposedly lets customers opt out of all marketing emails.

Even companies that are mostly good citizens about obeying customers' email marketing preferences still end up making exceptions. Examples:

Amazon has a profile page to opt out of all email marketing and it works... except ... it doesn't work to stop the new Amazon Pharmacy and Amazon Health marketing emails. Those emails do not have an "Unsubscribe" link and there is no extra setting in the customer profile to prevent them.

Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except .. when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

Neither of those aggressive emails have anything to do with AI. Companies just like to make exceptions to their rules to spam you. The customer's email inbox is just too valuable a target for companies to ignore.

That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox. Maybe it's commendable that Google is showing incredible restraint so far. (Or promoting Gemini in Chrome and web apps is enough exposure for them.)


> That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox.

That's because they put their alerts in the gmail web interface :-/

"Try $FOO for business" "Use drive ... blah blah blah"

All of these can be dismissed, but new ones show up regularly.


>That's because they put their alerts in the gmail web interface :-/

I agree and that's what I meant by Google's "web apps" having promos about Gemini.

But in terms of accessing Gmail accounts via the IMAP protocol in Mozilla Thunderbird, Apple Mail client, etc, there are no spam emails about Gemini AI. Google could easily pollute everybody's gmail inboxes with endless spam about Gemini such that all email clients with IMAP access would also see them but that doesn't seem to happen (yet). I do see 1 promo email about Youtube Premium over the last 5 years. But zero emails about Google's AI.


> Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except .. when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

That's "transactional" I'm sure. It makes sense that a company is legally allowed to send transactional emails, but they all abuse it to send marketing bullshit wherever they can blur the line.


How is it transactional in any way? It looks to me like post-transaction upsell, pure and simple.

I 100% agree with you, but it seems like the courts do not. Even while they were functioning.

Has this actually been tested in court, though?

It's not, but it's their justification

> Maybe it's commendable that Google is showing incredible restraint so far.

Or the Gmail spam filter is working.


This is not an issue in Europe, due to effective regulation.

>This is not an issue in Europe, due to effective regulation.

This article's author complaining about Proton overriding his email preferences is from the UK. Also in this thread, more commenters from UK and Germany say companies routinely ignore the law and send unwanted spam. Companies will justify it as "oops it was a mistake", or "it's a different category and not marketing", etc.


Imagine making this argument for other technologies. There is no opt-out button for machine learning, choosing the power source for their datacenters, the coding language in their software, etc. Conceptually there is a difference between opting out of an interaction with another party vs opting out of a specific part of their technology stack.

The three examples you listed are implementation details, so it's not clear if this is a serious post. Which datacenter they deploy code in is an implementation detail (other than the territory it sits in for legal purposes, which is something you may wish to know about and pick).

A better example would be: imagine every single operating system and app you use adds spellcheck. They only let you spell check in American[1]. You will get spell check prompts from your Operating System, your browser, and the webapp you're in. You can turn none of them off.

[1] in this example, you speak the Queen's English, so spell color colour etc


Unrelated, but it's interesting to think about terms like "the Queen's English" now that the Queen is gone. Will we be back to "the King's English" some day? I suppose the monarchy might stay too irrelevant to bother changing phrases.

They’re already calling it the King’s Birthday public holiday in Australia and it just seems wrong.

I believe this is combined with something I call "asymmetry blindness". They may say "but we send a single e-mail per month, this can't be bad".

We the users get a barrage of e-mails every day, because every marketing team thinks we only get their mail, and that it makes our lonely, cold mailbox merrier.

No, users are in constant "Tsunami warning!" mode and these teams are not helping.


If they were sending just one per month I might actually read them occasionally. It's the three a day from the likes of aliexpress that get deleted without a second glance.

But yes, you're absolutely right - "no raindrop considers itself responsible for the flood".


That marketing team only sends 1 email a month, but the 25 other marketing teams at the same company also only send 1 email a month.

Indeed. I received 28 unwanted emails of this kind in January so far (just counted), which is a bit more than once per day, despite quite avidly unsubscribing from this kind of emails. This month I had to unsubscribe from ChatGPT and GitHub emails of this kind too, although I don’t recall opting in to them in the first place and neither of them spammed me until recently.

> although I don’t recall opting in to them in the first place and neither of them spammed me until recently

Dark pattern. They know you'd spot immediate abuse, so they delay until you are likely to have forgotten whether you opted in.


Did you by any chance report them to something like spamcop.net ?

Aggressive spamming => Aggressive reporting.


>unsubscribe from ChatGPT emails

Really? I've never got a spam from them. Hell, I just searched and I'm not really seeing anything from them after the point where I signed up.


On Jan 11th I received "Easy self-care you can start today" advertising how ChatGPT can be used for meal planning or finding a local gym (ending with "Ask ChatGPT for more wellness tips"), and on Jan 19th I received "Use ChatGPT to make life easier" advertising how ChatGPT can for example improve my coffee brewing skills (ending with "Ask ChatGPT for more ways to get it all done"). I certainly consider these "spam", and until recently didn’t receive such emails from them.

I'm pretty sure some people have performance metrics attached to their "newsletter".

Our subscription product costs less than expensive coffee. Unused RAM is wasted.

Again, no raindrop considers itself responsible for the flood: if you buy enough coffee-priced subscriptions, that's unaffordable. Usually people already have their coffee-priced budget allocated to something. Like coffee.

(Incidentally, this is why mobile gaming uses so many anti-patterns, to make people keep making "just one more" tiny purchase)


> if you buy enough coffee-priced subscriptions, that's unaffordable

Yes. This was the point.


I guess the people you quote also missed that not all of us work in Silicon Valley and can afford those expensive coffees every day. I’d like an estimate of how many Nescafé powder coffee cups I’d have to skip per month to use their subscription.

The problem is not just empathy. It is also ethics. The fine distinction between opting out of A and opting out of B described in the post served to justify ignoring the opt out request. That's lazy ethically. The entire US business sector's customer relations are completely compromised ethically. It's taken to extremes in tech contexts.

In large organizations motivated reasoning trumps ethics. Behavior starts working along incentive gradients like an ant heap. Spend enough time in an environment like that and you learn to frame every selfish decision as good for the customer.

I think maintaining ethics in large organizations is one of the main challenges of our time, now that mega corps dominate our time and attention.


> Spend enough time in an environment like that and you learn to frame every selfish decision as good for the customer.

This reminds me of "in order to save the environment, we are going to delete all of your recordings older than 2 years, in 2 weeks. You can't download them."


"Corporations are people, folks," said Mitt Romney (in the wake of the Citizens United case). The whole thing is so cringe on so many levels.

What Romney did not say is that these particular "people" tend strongly towards sociopathic behavior.


> I think we must make it clear that this is not related to AI at all

There are clear AI-specific reasons why it's being crammed down everybody's throats.

Namely: someone in management has bet the entire strategy on it. The strategy is not working and they need to juice the numbers desperately.


It's not really AI itself though, it's just whatever the current hype cycle is - it was crypto and cloud before this.

Cloud is probably the better comparison, since crypto never had the sort of mainstream management buy-in that the other two got. Microsoft's handling of OneDrive in particular foreshadows how AI is being pushed out.

The difference is OneDrive is moderately useful.

I don't like OneDrive very much. I get that it's useful as a pigeonhole; what I really don't like is how it is used. It's the thing that moves files to OneDrive and destroys local copies that I hate, and OneDrive enables that. So I don't hate OneDrive, I just don't like it.

LLMs are also moderately useful.

the comparison is pretty good actually

"AI" agents randomly delete your files

and so does OneDrive


I have never received a Crypto spam email from any place where I opted out from it. Same for cloud. It feels different. With crypto it was everyone wanting to ride the hype train. With AI they spent a bunch of money up front and are desperate to see ROI.

There is at least an order of magnitude difference in the spread.

The idea that the marketing team has the ability to really push back against senior management doesn't align with the reality I have seen. The best they can do is say that this will do brand damage -- but they don't have the ability to really call the shots. In most organizations, marketing is not in a real seat of power -- more like an advisory position.

I'm not trying to be unfair to marketing -- they do have an important role -- but I have hardly seen a company give marketing real power. So if marketing doesn't push back on senior management, it's because they know they don't have the power to do so.


“I was just following orders” is not an excuse. If your job requires you to do immoral things, it is your responsibility to quit.

They may not have power to push back on KPIs, but even just sticking to regulatory compliance would be good enough. Nobody in management will say in writing that marketing should ignore GDPR, for example. And that means that if you, say, introduce a new category, everyone is supposed to be unsubscribed by default. So non-compliance is always a choice.
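As a hedged sketch of that compliance point (hypothetical names, not any real preference-center API): when a new mailing category is introduced, existing users must default to opted-out, and opting in has to be an explicit action they take themselves.

```python
# Hypothetical consent-management sketch. The GDPR-style rule it encodes:
# adding a new marketing category must never silently opt anyone in.

DEFAULT_OPT_IN = False  # consent requires an explicit affirmative action

def add_category(preferences: dict[str, dict[str, bool]], category: str) -> None:
    """Add a new mailing category without changing anyone's existing consent."""
    for user_prefs in preferences.values():
        # setdefault never overwrites an existing choice, and the
        # default for the new category is opted-out.
        user_prefs.setdefault(category, DEFAULT_OPT_IN)

prefs = {"alice@example.com": {"product_news": True}}
add_category(prefs, "ai_features")
print(prefs["alice@example.com"]["ai_features"])  # False: not auto-subscribed
```

The "one more category, everyone opted in" dark pattern described upthread is exactly the inverse of this default.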

> I think we must make it clear that this is not related to AI at all

Yeah, many companies do that. I unsubscribed from newline, and they still keep spamming me. Funny thing is, they realised they had made a mistake and promised to remove unsubs. One week later, the spam started again.

The correct solution is the spam button. Always


> The correct solution is the spam button. Always

The correct solution is filing complaints with your country's relevant authority


In theory. In practice-- I would spend all my time just filing complaints, because today, in 2026, I get more spam from "legitimate" companies than "Nigerian scammer" types

I wish I could without going through a long process involving tons of personal info

The spam button risks false positives.

It's not a false positive to classify a company as a bad actor and move their emails to the spam folder if they refuse to respect user choices. If anything, I wish it would happen more often and at a massive scale, because then maybe companies would have an incentive to stop being so hostile around this.

Agreed, but the false positive I am referring to is the catching of a non-spam message from the source of the previous spam message.

They shouldn't send marketing mail from an address they want to be read. I think that's been the standard for a while, in practice - most actual transactions come from orders@<blank> or something similar while marketing mail comes from a dozen other addresses.

Agreed, but many do. They want all mail to be read. Worst offender here is a bank.

Still happy that Tuta Mail is anti AI, and does not push ads on you via email.

I wonder who told Proton that it’s a good idea to copy big tech tactics.


*I wonder who told Proton that it’s a good idea to copy big tech tactics.*

But people subscribe to Proton because they want to move away from big tech. What’s the point of paying them if they get just as bad?

Though for now I’ll assume that it’s a genuine mistake with things not properly escalated by customer support.


With customer support positions, escalating to engineering is also seen as a negative metric. They might blame customer support for this but it’s likely that they’d have been turned away with “why are you escalating this stupid thing to us?”

Does??

> I wonder who told Proton that it’s a good idea to copy big tech tactics.

The lure of big tech profits.


Genuinely: What profits!?! The only company profiting from AI has been nVidia. Every indicator we've received for this entire alleged industry is companies buying hundreds of millions of dollars in graphics cards that then either sit in warehouses depreciating in value or, worse, are plugged in and immediately start losing money.

The tech industry has coasted on its hypergrowth story for decades, a story laden with as many bubbles as actual industries that sprang up. All the good ideas are done now. All the products anyone actually needs exist, are enshittified, and are selling user data to anyone who will pay, including products that exist solely to remove your data from everyone who bought it, and then probably sell it to some other people.

This shit is stupid at this point. All Silicon Valley has to do is to grow up into a mature industry with sensible business practices and sustainable models of generating revenue that in most other industries would be fantastic, and they're absolutely apoplectic about this. They are so addicted to the easy, cheap services that upended entire other industries and made them rich beyond imagining that they will literally say, out loud, with their human mouths, that it is a bad, undesirable thing to simply have a business that makes some money.

The people at the top of this industry are literally fucking deranged and should be interned at a psychiatric facility for a while, for their own and everyone else's good.


If you're not the shareholder, you're the product.

The business model of any publicly traded corporation, at least in 2025, is to increase the value of its circulating stock. No more and no less. The nominal business model of the company is a cover story to make line go up. The reason why the stock price matters is because of access to capital markets: if a business wants to buy another business, they are not going to dip into the cash on hand. They are going to take out a loan, and that loan is collateralized by... the value of the business. Which is determined by the stock price.

So if you can keep the line going up, you can keep buying competitors. But if you act like a normal, mature business, you can't.

Profit as a concept is a concern for capitalism. But these businesses are not interested in capitalism, they're angling to become the new lords of a growing feudal economy. That's what "going meta" really means.


Right on. You might like the Better Offline podcast by Ed Zitron, although the two of you seem to be so closely aligned that you might not learn anything new.

>All Silicon Valley has to do is to grow up into a mature industry with sensible business practices

Negative sum game: Growing up is easy if it doesn't kill you. The problem with being ethical when everyone else is unethical is that you'll likely go broke.

The next issue we're seeing is not that Silicon Valley is ever going to improve, but that the bullshit is spreading to eat up every other industry in the US. Engaging in outright fraudulent behavior is A-OK in the US (I mean, we even elected a president convicted on a pile of counts of fraud).

Effectively industries cannot manage themselves, we need regulations to prevent them from being bastards. Problem, we elect bastards that cannot keep from committing fraud themselves.

It doesn't get better from here.


> Genuinely: What profits!?!

Those foreseen. :)

(Should have gone to Specsavers.)


Not :)

> It’s a shame that Proton marketing team is just like every other one.

Having gone through the Proton hiring process was an eye opener for me: despite its stated mission, the company isn't special when it comes to its management, it's as bad as any other.


To be fair, they are working to stay in business. None of my business, but did they treat you unfairly?

I am developing a severe anti huge corporation bias, and I try to do business with smaller companies.


> If too many users unsubscribe, they just add one more category and “accidentally” opt-in everyone.

I always "report spam" ("!" key in GMail) before unsubscribing.


In the last two months, several of my connections on LinkedIn have used private messaging for mass-marketing "emails" - "normal", proper companies, not the recruiters/outsourcing/... that have been spamming us for years. There is no limit to the things they will try.

On Proton: I don't get the love they get here. Their ethics I find questionable, and their product (e.g. search) I find unusable.


It is entirely related, because AI marketing is an amped-up version of traditional dark-pattern marketing. And since every tech company is on the AI hype train, they all fall into the same willingness to justify the worst behavior out of a desperate need to be at the forefront of what they've convinced themselves is the only path to growth. But as consumers, since we are confronted with all tech companies following the same dark patterns at once, we feel the impact much more strongly than with past one-at-a-time panicky over-marketing efforts.

AI marketing is not different at all from performance marketing of other products. This is really just an ordinary consent-management and privacy problem. I have participated in large-scale email marketing implementations and know how it looks from the inside. As a tech partner you may even have a good peer on the business side who cares about customers and compliance, but institutional resistance is still hard to overcome. Sometimes it is as dumb as an old spreadsheet of contacts being uploaded again and again into your mass-mail system, without any consent tracking, ignoring any opt-outs that came after the spreadsheet was created.

The spam was advertising AI, the point of the article was how aggressively AI is being shoved down our throats, and it seems very likely that when he went to complain about the AI spam, it was an AI chatbot that gave him the useless answers until it finally "checked with the team" (presumably a human), who lied to him about what counted as AI spam.

It seems like this is very much about AI even though it's ultimately humans pushing AI and disregarding people's spam preferences. Right now, everything "AI" is ultimately humans (like the way humans are using/abusing the AI tools, or the human intellect behind all of the data that was used to train them and all of the knowledge they output, or the humans deciding what they'll allow their AI to be used for, or the humans failing to safeguard the users of their AI products, etc) so this is as much about AI as anything is.


>The spam was advertising AI, the point of the article was how aggressively AI [...] It seems like this is very much about AI

Yes, the gp you responded to already acknowledged that the particular email was about AI (Lumos) when he wrote: >", even if the product in question is AI-related."

To go beyond that, the gp highlighted that the bad behavior is rooted in companies ignoring customers' email preferences, not in the AI itself. The article misdiagnoses the unwanted-email issue as an "AI Consent Problem" when it's fundamentally an "Email Consent Problem". The author deliberately opted out of email marketing and Proton ignored it (by "mistake"), and this is misbehavior companies commonly engaged in before AI. It's worth separating those two factors.

We get unwanted spam about "Amazon Pharmacy" and "Apple TV" that overrides our profile settings to opt-out of those emails but that doesn't mean we misdiagnose it as "Pharmacy Consent Problem" and "Video Streaming Consent Problem". Instead, the generalization is still fundamentally an "email consent" problem. Always has been. The repeated abuse of the customer's email address (with or without AI in the picture) is what the gp was emphasizing.

Likewise, if a future hot household technology such as residential robots spawns email marketing campaigns that blast unwanted spam about Tesla house robots... the issue with that unwanted "Tesla robots 10% off!" spam is still that customers' email preferences were ignored. The unwanted robots themselves would be a separate issue. Companies will continue to make "mistakes" and send out new marketing spam with <HotNewThing> in the subject field that infuriates customers. And the root cause of that future problem still won't be <HotNewThing>, but companies ignoring customers' email preferences because the incentives and greed are too great.


This was my first reaction too. It is a bit ironic that the issue of “overlapping labels” can be applied to the OP as well.

My instinct is to classify this as an email consent issue not because AI needs defending, but because the solution need not be specific to AI. The Next Big Thing will probably have this problem too, because marketing is at odds with making your customers happy with a great product.


> I think we must make it clear that this is not related to AI at all, even if the product in question is AI-related.

Did they ever send Rust related unsolicited emails?


The problem with tech is that there's absolutely zero accountability.

Marketing is, to some extent at least, regulated. There's so little consumer protection in the tech industry, it's a joke. We've got GDPR (in Europe) and I'm really struggling to think what else. Imagine if other forms of engineering had the same level of control.

There's this absolutely fallacious notion that in a free market, customers can just vote with their feet.

From big players with vendor lock-in and network effects, to specialists (I know of few decent competitors to Proton), the average consumer is not sufficiently protected from malpractice.

We may say, "oh, it's just a marketing email", but TFA perfectly encapsulates the relationship we have with our suppliers.


Now that we're at it, let's talk about Google ads. I reported a Google ad because I deemed it political, and in Europe you must make it clear that a political ad is a political ad and not just an ad (this one failed to do so, so it should be corrected or removed).

Google refused to comply and act in any way, because they "don't moderate 3rd party content". Except that EU says you _must_ comply if you're publishing a political ad. I'm bringing this forward with an appeal and then I'm going to escalate to the national authority if they still refuse to act.

The laws are there. It's just that big tech think they can ignore them freely and even if down the road there's a fine it's going to be much less than what they gained by spreading ads.


>then I'm going to escalate to the national authority if they still refuse to act.

You are actually doing this wrong...

Report to the national authority first...

Then report to Google.

Fuck them, it is not in your interest to report to them first, make them react for their bullshit. Over here in the states this is how I ended up dealing with telecom in the ISP industry. "Hello, I have put in an FTC/FCC complaint on $issue, and would like to see about getting it resolved".

It didn't matter that's not the order you're supposed to go in, at the telecom side they send it off to a team that actually gets shit solved before it becomes a regulatory problem.


You might have a stronger case with the national authority if you first do the full "trail" of reporting, appealing, and eventually escalating.

But yes, I feel there's something wrong in having a stronger case if you first do it "gently", when they wouldn't bother if it were the other way around.


>You might have a stronger case with the national authority

At least on the ISP side, we started doing it this way after the telcos would yank our chains for weeks or months when we had issues that needed to be solved quickly. What's more, I started working with our competitor ISPs, because it was very common that we'd all have the same issues. More than one complaint of the same type in the same area tends to get noticed by these agencies and followed up on quickly. The follow-through process also starts to get expensive for the telcos.

My next recommendation on this political-ad bullshit: don't go at it alone. Find as many like-minded people to dig up and complain about these ads as you can. Flood the regulators with the violations that are occurring. Think of it in reverse: these companies breaking the law will have no issue pooling resources and going after you.


Enforcement in the UK is pathetic; e.g. HelloFresh's recent spam campaign cost it <0.2p per message in fines. A bargain.

> I think we must make it clear that this is not related to AI at all, even if the product in question is AI-related.

It is not specific to "AI" but it is very much related to it.

> If too many users unsubscribe, they just add one more category and “accidentally” opt-in everyone

... and "forget" to add its opt-out to the list.


I feel more and more like email should be like DMs.

Do you want to accept emails from xxx?

Yes

No

On the client side...
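A minimal sketch of what that client-side gate might look like, in the spirit of DM requests. Everything here is hypothetical (the `SenderGate` class and the example address are made up for illustration), but the key design point is real: the decision must key on the bare address, not the display name.

```python
# Hypothetical client-side sender gate: mail from unknown senders is
# held until the user answers Yes/No, like accepting a DM request.
from email.utils import parseaddr

class SenderGate:
    def __init__(self):
        self.accepted = set()   # senders the user said Yes to
        self.rejected = set()   # senders the user said No to
        self.pending = []       # (address, message) pairs awaiting a decision

    def classify(self, from_header, message):
        # Decide on the bare address, not the display name, so a spammy
        # display name can't smuggle marketing copy into the prompt itself.
        _, addr = parseaddr(from_header)
        addr = addr.lower()
        if addr in self.accepted:
            return "inbox"
        if addr in self.rejected:
            return "discard"
        self.pending.append((addr, message))
        return "pending"

    def decide(self, addr, accept):
        # Record the user's Yes/No and drop the held messages from the queue.
        addr = addr.lower()
        (self.accepted if accept else self.rejected).add(addr)
        self.pending = [(a, m) for a, m in self.pending if a != addr]

gate = SenderGate()
print(gate.classify('"Save 35% at Fluppsi!" <promo@fluppsi.example>', "msg1"))  # pending
gate.decide("promo@fluppsi.example", accept=False)
print(gate.classify('"Save 35% at Fluppsi!" <promo@fluppsi.example>', "msg2"))  # discard
```

The obvious weakness, as the replies below this in any such discussion note, is that one From address often carries both marketing and critical transactional mail, so a per-address Yes/No is too coarse on its own.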


I think that would lead to this:

Do you want to accept emails from "For a limited time, save up to 35% on orders from Fluppsi! Click Yes for this amazing opportunity!"


Very dangerous, when the same From address may be used for "Log in inside 14 days or your dormant account will be deleted".

It is an error to believe this is only happening in/with marketing. In general, "empathy" and "capitalism" are mutually exclusive. If profit is your goal, you don't care about individuals.

To name and shame two: LinkedIn and MyHeritage. If you ever made an account with either of them, they will never stop spamming you. They have configuration options to select which mail to receive, but they appear to consider them temporary suggestions.

A special dishonourable mention goes to Wal-mart. I never interacted with them in any way whatsoever (nor would I; they don't exist on my continent as far as I know), yet they still send me spam. DKIM-signed and all!


LinkedIn once seemed to somehow go through my (Gmail?) contacts and ask if I should invite my late grandfather to the platform, in the subject of a marketing message.

Left a bitter taste.


I guess you also received the Linkedin Gaming spam a couple of weeks ago?

I opted out of almost every category and I never opted in to a category like that. So why is there now a new category which I have to opt out of?

It seems to me blatant, unpunished disregard of GDPR - but their whole business was founded on abuse of emails and there's no reason to expect a Microsoft acquisition to make a company act more in line with the law.


That gaming email took me mentally straight back to Facebook circa 2009, and not in a good way. LinkedIn always serves as a fantastic example of exactly how not to treat your users.

There’s probably a bigger association with it. I don’t like AI, and I see it everywhere: in every app I use, every service I purchase, my goddamn start bar.

So, when they start sending unwanted emails, it feels like a spam problem, when really it’s insidious on multiple fronts.

I can’t wait for the enshittification phase. When the products royally fuck their fan base.



