I think we must make it clear that this is not related to AI at all, even if the product in question is AI-related.
It is a very common problem with modern marketing teams that have zero empathy for customers (even if they have some, they will never push back on whatever insane demands come from senior management). This is why every email subscription management interface is now as bloated as a dead whale. If too many users unsubscribe, they just add one more category and "accidentally" opt everyone in.
It’s a shame that Proton marketing team is just like every other one. Maybe it’s a curse of growing organization and middle management creep. The least we can do is push back as customers.
I disagree, inasmuch as I have noticed this *far* more with AI than with any other advancement / fad (depending on your opinion) before.
This also tracks with every app and website injecting AI into every one of your interactions, with no way to disable it.
I think the article's point about non-consent is a very apt one, and expresses why I dislike this trend so much. I left Google Workspace, as a paying customer for years, because they injected Gemini into Gmail etc. and I couldn't turn it off (only those on the most expensive enterprise plans could at the time I left).
To be clear I am someone that uses AI basically every day, but the non-consent is still frustrating and dehumanising. Users–even paying users–are "considered" in design these days as much as a cow is "considered" in the design of a dairy farm.
I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.
To add to this, it's the same attitude that they used to create the AI in the first place by using content which they don't own, without permission. Regardless of how useful it may be, the companies creating it and including it have demonstrated time and again that they do not care about consent.
> the same attitude that they used to create the AI in the first place by using content which they don't own, without permission
This was a massive "white pill" for me. When the needs of emerging technology ran headfirst into the old established norms of ""intellectual property"", it blew straight through like a battle tank; technology didn't even bother to slow down and try to negotiate. This has alleviated much of my concern about IP laws stifling progress: when push comes to shove, progress wins easily.
How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".
It wouldn't be hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death, like "well, I guess knives don't have human values"?
The short answer is a reward function. The long answer is the alignment problem.
Of course, everything in the middle is what matters. Explicitly defined reward functions are complete, but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but neither is it for humans. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
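To make the complete-vs-consistent distinction concrete, here is a toy sketch in Python (the reward shapes and labels are invented for illustration, not taken from any real alignment system):

```python
# Toy illustration: an explicit reward function is total -- it scores every
# state -- but can reward the wrong thing; a data-defined reward agrees with
# human judgments where labels exist, but covers nothing else.

def explicit_reward(state: dict) -> float:
    # Complete: defined for every state, but "clicks" is a proxy that can
    # diverge from what we actually value (spam also produces clicks).
    return state.get("clicks", 0) - 0.1 * state.get("complaints", 0)

# Hypothetical human-labeled outcomes.
LABELED = {("helpful_email",): 1.0, ("spam_email",): -1.0}

def data_reward(state_key: tuple):
    # Consistent with the labels where they exist, but incomplete:
    # returns None for any state nobody has labeled.
    return LABELED.get(state_key)

print(explicit_reward({"clicks": 10, "complaints": 5}))  # 9.5 -- spam still scores well
print(data_reward(("spam_email",)))                      # -1.0
print(data_reward(("marketing_newsletter",)))            # None: no coverage
```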
Well, it’s pretty clear to me that the current reward function of profit maximization has a lot of down sides that aren’t sufficiently taken into account.
That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for solving this, so we simply won’t.”
Maybe if we can't build a machine that isn't a sociopath, the answer should be to not build the machine, rather than "oh well, go ahead and build the sociopaths".
I’d argue that a lot of the scrape-and-train is just the newest and most blatant exploitation of the relationship that always existed, not a renegotiation of it. Stack Overflow monetized millions of hours of people’s work. Same thing with Reddit and Twitter and plenty of other websites.
Legally it is different with books (as Anthropic found out) but I would argue morally it is more similar: forum users and most authors write not for money, but because they enjoy it.
I don't know, it feels odd to declare people wrote "because they enjoy it" and then get irritated when someone finds a way to monetize it retrospectively.
Like you're either doing this for the money or you're not, and it's okay to re-evaluate that decision... but at the same time there's a whole lot of "actually I was low-key trying to build a career" energy to a lot of the complaining.
Like I switched off from Facebook a few years after discovering it, when it increasingly became "look at my new business venture... friends". LinkedIn is at least upfront about it and I can ignore the feed entirely (I use it for job listings only).
The shift from "you just don't understand" to damage control would be funny if it wasn't so transparent.
> We have identified a bug in our system... we take communication consent very seriously
> There was a bug, and we fucked up... we take comms consent seriously
These two actors were clearly coached into the same narrative. I also absolutely don't believe them at all: some PM made the conscious decision to bypass user preferences to increase some KPI that pleases some AI-invested stakeholder.
Google has always released features to Workspace and Gmail separately. In this case the Gemini button is in Workspace (because it's a paid tier) but not yet in Gmail.
Yeah this is not a new thing with AI, you can unsubscribe all you want, they are still gonna email you about "seminars" and other bullshit. AWS has so many of those and your email is permanently in their database, even if you delete your account. I also still get Oracle Cloud emails even though I told them to delete my account as well, so I can't even log in anymore to update preferences!
> I disagree, inasmuch as I have noticed this far more with AI than with any other advancement / fad (depending on your opinion) before
Isn't that because most of the other advancements/fads were not as widely applicable?
With earlier things there was usually only particular kinds of sites or products where they would be useful. You'd still get some people trying to put them in places they made no sense, but most of the places they made no sense stayed untouched.
With AI, if well done, it would be useful nearly everywhere. It might not be well done enough yet for some of the places people are putting it, so it ends up being annoying, but that's a problem of those integrations being premature, not a problem of wanting to put AI somewhere it makes no sense.
There have been previous advancements that were useful nearly everywhere, such as the internet or the microcomputer, but they started out with limited availability and took many years to become widely available so they were more like several smaller advancements/fads in series rather than one big one like AI.
> With AI, if well done, it would be useful nearly everywhere.
I fundamentally disagree with this.
I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans.
I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or "yassified". Not now, not ever.
Keep in mind I said "if well done". That was not meant to imply that I think the current AI offerings are well done. I'd take "well done" to mean that it performs the tasks it is meant for as well as human assistants perform those tasks.
> I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans. [...] I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified.
That's fine, but generally the tools involved in doing those things are designed to be general purpose.
A word processor isn't just going to be used by people writing personal things for example. It will also be used by people writing documentation and reports for work. Without AI it is common for those people to ask subordinates, if they are high enough in their organization to have them, to write sections of the report or to read source material and summarize it for them.
An AI tool, if good enough to do those tasks, would be useful to those users, and so it makes sense for such tools to be added by the word processor developer.
Again, I'm not saying that the AI tools currently being added to basically everything are good enough.
The point is that
(1) a large variety of tools and products have enough users that would find built-in AI useful (even if some users won't) that it makes a lot of sense for them to include those tools (when they become good enough), and
(2) AI may be unique compared to prior advances/fads in how wide a range of things this applies to and the speed it has reached a point that companies think it has become good enough (again, not saying they have made the right judgement about whether it is good enough).
How about machine translation and fixing grammar in languages you're not very familiar with? That's the only use of "AI" I've found so far. I'd rather read (and write) broken English in informal contexts like this forum, but there are enough more formal situations.
> With AI, if well done, it would be *useful nearly everywhere.*
I'm not saying it doesn't have uses.
Having said that, there are two things I never want AI to do: a) degrade or remove the need for me to express myself as a human being, b) do work I'd have to redo to prove it did it correctly.
On translation, sycophancy is a problem. I can't find it now, but there was an article I read about an LLM mistranslating papers to exclude data it thought the user wasn't interested in. So no, I wouldn't trust it for anything I cared about.
I do use AI: I'm literally reviewing some Claude generated code at the moment. But I can read that and know that it's done it right (or not, as the case often is). This is different from translation or summarisation, where I'd have to do the whole task again to prove correctness.
For me it’s just a multi-coloured ring like a gamer’s mood light, but it’s literally just slapped in the corner of the UI the same way a shitty Intercom widget would be.
Totally a thing a growth hacking team would do, injecting an interface on top of a design.
>I disagree, inasmuch as I have noticed this far more with AI than with any other advancement / fad
I agree with gp that new spam emails overriding customers' email marketing preferences are not an "AI" issue.
The problem is that once companies have your email address, their irresistible compulsion to spam you is so great that they will deliberately not honor their own "Communication Preferences" that supposedly lets customers opt out of all marketing emails.
Even companies that are mostly good citizens about obeying customers' email marketing preferences still end up making exceptions. Examples:
Amazon has a profile page to opt out of all email marketing and it works... except ... it doesn't work to stop the new Amazon Pharmacy and Amazon Health marketing emails. Those emails do not have an "Unsubscribe" link and there is no extra setting in the customer profile to prevent them.
Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except .. when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"
Neither of those aggressive emails has anything to do with AI. Companies just like to make exceptions to their rules to spam you. The customer's email inbox is just too valuable a target for companies to ignore.
That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox. Maybe it's commendable that Google is showing incredible restraint so far. (Or promoting Gemini in Chrome and web apps is enough exposure for them.)
>That's because they put their alerts in the gmail web interface :-/
I agree and that's what I meant by Google's "web apps" having promos about Gemini.
But in terms of accessing Gmail accounts via the IMAP protocol in Mozilla Thunderbird, the Apple Mail client, etc., there are no spam emails about Gemini AI. Google could easily pollute everybody's Gmail inboxes with endless spam about Gemini such that all email clients with IMAP access would also see them, but that doesn't seem to happen (yet). I have seen 1 promo email about YouTube Premium over the last 5 years, but zero emails about Google's AI.
> Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except .. when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"
That's "transactional" I'm sure. It makes sense that a company is legally allowed to send transactional emails, but they all abuse it to send marketing bullshit wherever they can blur the line.
>This is not an issue in Europe, due to effective regulation.
This article's author complaining about Proton overriding his email preferences is from the UK. Also in this thread, more commenters from UK and Germany say companies routinely ignore the law and send unwanted spam. Companies will justify it as "oops it was a mistake", or "it's a different category and not marketing", etc.
Imagine making this argument for other technologies. There is no opt-out button for machine learning, choosing the power source for their datacenters, the coding language in their software, etc. Conceptually there is a difference between opting out of an interaction with another party vs opting out of a specific part of their technology stack.
The three examples you listed are implementation details, so it's not clear if this is a serious post. Which datacenter they deploy code in is (other than territory for laws etc, which is something you may wish to know about and pick from) an implementation detail.
A better example would be: imagine every single operating system and app you use adds spellcheck. They only let you spell check in American[1]. You will get spell check prompts from your Operating System, your browser, and the webapp you're in. You can turn none of them off.
[1] in this example, you speak the Queen's English, so you spell "color" as "colour", etc.
Unrelated, but interesting to think about terms like "Queen's English" now that the Queen is gone. Will we be back to "King's English" some day? I suppose the monarchy might stay too irrelevant to bother changing phrases.
I believe this is combined with something I call "asymmetry blindness". They may say "but we send a single e-mail per month, this can't be bad".
We the users get a barrage of e-mails every day, because every marketing team thinks we only get their mail, and that it makes our lonely and cold mailbox merrier.
No, users are in constant "Tsunami warning!" mode and these teams are not helping.
If they were sending just one per month I might actually read them occasionally. It's the three a day from the likes of aliexpress that get deleted without a second glance.
But yes, you're absolutely right - "no raindrop considers itself responsible for the flood".
Indeed. I received 28 unwanted emails of this kind in January so far (just counted), which is a bit more than once per day, despite quite avidly unsubscribing from this kind of emails. This month I had to unsubscribe from ChatGPT and GitHub emails of this kind too, although I don’t recall opting in to them in the first place and neither of them spammed me until recently.
On Jan 11th I received "Easy self-care you can start today" advertising how ChatGPT can be used for meal planning or finding a local gym (ending with "Ask ChatGPT for more wellness tips"), and on Jan 19th I received "Use ChatGPT to make life easier" advertising how ChatGPT can for example improve my coffee brewing skills (ending with "Ask ChatGPT for more ways to get it all done"). I certainly consider these "spam", and until recently didn’t receive such emails from them.
Again, no raindrop considers itself responsible for the flood: if you buy enough coffee-priced subscriptions, that becomes unaffordable. Usually people already have their coffee-priced budget allocated to something. Like coffee.
(Incidentally, this is why mobile gaming uses so many anti-patterns, to make people keep making "just one more" tiny purchase)
I guess the people you quote also missed that not all of us work in Silicon Valley and can afford those expensive coffees every day. I’d like an estimate of how many Nescafé powder coffee cups I’d have to skip per month to use their subscription.
The problem is not just empathy. It is also ethics. The fine distinction between opting out of A and opting out of B described in the post served to justify ignoring the opt out request. That's lazy ethically. The entire US business sector's customer relations are completely compromised ethically. It's taken to extremes in tech contexts.
In large organizations motivated reasoning trumps ethics. Behavior starts working along incentive gradients like an ant heap. Spend enough time in an environment like that and you learn to frame every selfish decision as good for the customer.
I think maintaining ethics in large organizations is one of the main challenges of our time, now that mega corps dominate our time and attention.
> Spend enough time in an environment like that and you learn to frame every selfish decision as good for the customer.
This reminds me of "in order to save the environment, we are going to delete all of your recordings older than 2 years, in 2 weeks. You can't download them."
Cloud is probably the better comparison, since crypto never had the sort of mainstream management buy-in that the other two got. Microsoft's handling of OneDrive in particular foreshadows how AI is being pushed out.
I don't like OneDrive very much. I get it, it's useful as a pigeonhole; what I really don't like is how it is used. It's the thing that moves files to OneDrive and destroys local copies that I hate, and OneDrive is something that enables that. So I don't hate OneDrive, I just don't like it.
I have never received a Crypto spam email from any place where I opted out from it. Same for cloud. It feels different. With crypto it was everyone wanting to ride the hype train. With AI they spent a bunch of money up front and are desperate to see ROI.
The idea that the marketing team has the ability to really push back against senior management doesn't align with the reality I have seen. The best they can do is say that something will do brand damage, but they don't have the ability to really call the shots. In most organizations, marketing is not in a real seat of power; it's more like an advisory position.
I'm not trying to be unfair to marketing, they do have an important role, but I have hardly ever seen a company give marketing real power. So the idea that this happens because marketing doesn't push back on senior management misses that they know they don't have the power to do so.
They may not have power to push back on KPIs, but even just sticking to regulatory compliance would be good enough. Nobody in management will say in writing that marketing should ignore GDPR, for example. And that means that if you, say, introduce a new category, everyone is supposed to be unsubscribed by default. So non-compliance is always a choice.
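The default-off behavior described here is trivial to implement. A minimal sketch (hypothetical data model, not any real mailer's API): consent is stored only as explicit opt-ins, so any category absent from the record, including categories invented later, is automatically treated as "not consented".

```python
# Sketch of GDPR-style consent handling: only explicit opt-ins are stored,
# so new or unknown categories fall through to "do not send" by default.

class EmailPreferences:
    def __init__(self):
        self.opted_in: set[str] = set()  # explicit opt-ins only

    def opt_in(self, category: str) -> None:
        self.opted_in.add(category)

    def opt_out(self, category: str) -> None:
        self.opted_in.discard(category)

    def may_send(self, category: str) -> bool:
        # A category added after sign-up is simply not in the set,
        # so it is unsubscribed by default -- no "accidental" opt-in.
        return category in self.opted_in

prefs = EmailPreferences()
prefs.opt_in("product_updates")
print(prefs.may_send("product_updates"))  # True: explicit consent
print(prefs.may_send("ai_newsletter"))    # False: new category, no consent
```

Under this model, adding a new category and mailing everyone requires actively writing opt-ins into the store, which is exactly why "it was a bug" rings hollow.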
> I think we must make it clear that this is not related to AI at all
Yeah, many companies do that. I unsubscribed from newline, and they still keep spamming me. Funny thing is, they realised they had made a mistake and promised to remove unsubscribed users. One week later, the spam started again.
In theory. In practice, I would spend all my time just filing complaints, because today, in 2026, I get more spam from "legitimate" companies than from "Nigerian scammer" types.
It's not a false positive to classify a company as a bad actor and move their emails to the spam folder if they refuse to respect user choices. If anything, I wish it would happen more often and at a massive scale, because then maybe companies would have an incentive to stop being so hostile around this.
They shouldn't send marketing mail from an address they want to be read. I think that's been the standard for a while, in practice - most actual transactions come from orders@<blank> or something similar while marketing mail comes from a dozen other addresses.
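A minimal sketch of filtering on that convention (the addresses and domain are made up for illustration): allow-list the known transactional senders and treat everything else from the same company domain as marketing.

```python
# Route mail from a company's domain: known transactional addresses stay
# in the inbox, everything else from that domain is treated as marketing.

TRANSACTIONAL = {"orders@example-shop.com", "receipts@example-shop.com"}

def is_marketing(sender: str, domain: str = "example-shop.com") -> bool:
    sender = sender.strip().lower()
    # Same company domain, but not a known transactional address.
    return sender.endswith("@" + domain) and sender not in TRANSACTIONAL

print(is_marketing("orders@example-shop.com"))  # False: keep in inbox
print(is_marketing("deals@example-shop.com"))   # True: route to spam folder
print(is_marketing("friend@gmail.com"))         # False: unrelated sender
```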
With customer support positions, escalating to engineering is also seen as a negative metric. They might blame customer support for this but it’s likely that they’d have been turned away with “why are you escalating this stupid thing to us?”
Genuinely: what profits?! The only company profiting from AI has been Nvidia. Every indicator we've received for this entire alleged industry is companies buying hundreds of millions of dollars in graphics cards that then either sit in warehouses depreciating in value or, worse, are plugged in and immediately start losing money.
The tech industry has coasted on its hypergrowth story for decades, a story laden with as many bubbles as actual industries that sprang up. All the good ideas are done now. All the products anyone actually needs exist, are enshittified, and are selling user data to anyone who will pay, including products that exist solely to remove your data from everyone who bought it and probably then sell it to some other people.
This shit is stupid at this point. All Silicon Valley has to do is to grow up into a mature industry with sensible business practices and sustainable models of generating revenue that in most other industries would be fantastic, and they're absolutely apoplectic about this. They are so addicted to the easy, cheap services that upended entire other industries and made them rich beyond imagining that they will literally say, out loud, with their human mouths, that it is a bad, undesirable thing to simply have a business that makes some money.
The people at the top of this industry are literally fucking deranged and should be interned at a psychiatric facility for a while, for their own and everyone else's good.
If you're not the shareholder, you're the product.
The business model of any publicly traded corporation, at least in 2025, is to increase the value of its circulating stock. No more and no less. The nominal business model of the company is a cover story to make line go up. The reason why the stock price matters is because of access to capital markets: if a business wants to buy another business, they are not going to dip into the cash on hand. They are going to take out a loan, and that loan is collateralized by... the value of the business. Which is determined by the stock price.
So if you can keep the line going up, you can keep buying competitors. But if you act like a normal, mature business, you can't.
Profit as a concept is a concern for capitalism. But these businesses are not interested in capitalism, they're angling to become the new lords of a growing feudal economy. That's what "going meta" really means.
Right on. You might like the Better Offline podcast by Ed Zitron, although the two of you seem to be so closely aligned that you might not learn anything new.
>All Silicon Valley has to do is to grow up into a mature industry with sensible business practices
Negative sum game: Growing up is easy if it doesn't kill you. The problem with being ethical when everyone else is unethical is that you'll likely go broke.
The next issue we're seeing is not that Silicon Valley is ever going to improve, but that the bullshit is spreading to eat up every other industry in the US. Engaging in outright fraudulent behavior is A-OK in the US (I mean, we even elected a president convicted on a pile of counts of fraud).
Effectively, industries cannot manage themselves; we need regulations to prevent them from being bastards. Problem is, we elect bastards who cannot keep from committing fraud themselves.
> It’s a shame that Proton marketing team is just like every other one.
Going through the Proton hiring process was an eye-opener for me: despite its stated mission, the company isn't special when it comes to its management; it's as bad as any other.
Over the last two months, several of my connections on LinkedIn used private messaging for mass marketing "emails" - "normal" proper companies, not the recruiters/outsourcing/... that have been spamming us for years. There is no limit to the things they will try.
On Proton: I don't get the love they get here. Their ethics I find questionable, and their product (e.g. search) I find unusable.
It is entirely related, because AI marketing is an amped up version of traditional dark-pattern marketing. And since every tech company is on the AI hype train, then they all fall into the same willingness to justify the worst behavior because of their desperate need to get on the forefront of what they’ve convinced themselves is the only path to growth. But as consumers, since we are confronted with all tech companies all following the same dark patterns, we feel the impact suddenly much stronger than with past one-at-a-time panicky company over-marketing efforts.
AI marketing is not different at all from performance marketing of other solutions. This is really just ordinary consent management and privacy problem. I have participated in large scale email marketing implementations too, and know how it looks from the inside. As a tech partner you may even have a good peer on the business side who cares about customers and compliance, but institutional resistance is still hard to overcome. Sometimes it is just as dumb as an old spreadsheet with contacts being uploaded again and again into your mass mail system, without any consent tracking and ignoring any opt-outs that came after the spreadsheet was created.
The spam was advertising AI, the point of the article was how aggressively AI is being shoved down our throats, and it seems very likely that when he went to complain about the AI spam it was an AI chatbot that gave him the useless answers until it finally "checked with the team" (presumably a human), who lied to him about what counted as AI spam.
It seems like this is very much about AI even though it's ultimately humans pushing AI and disregarding people's spam preferences. Right now, everything "AI" is ultimately humans (like the way humans are using/abusing the AI tools, or the human intellect behind all of the data that was used to train them and all of the knowledge they output, or the humans deciding what they'll allow their AI to be used for, or the humans failing to safeguard the users of their AI products, etc) so this is as much about AI as anything is.
>The spam was advertising AI, the point of the article was how aggressively AI [...] It seems like this is very much about AI
Yes, the gp you responded to already said the same thing, that the particular email was about AI (Lumos), when he wrote: "even if the product in question is AI-related."
To go beyond that, the gp highlighted that the bad behavior is rooted in companies ignoring customers' email preferences instead of the AI. The article is misdiagnosing the unwanted email issue as "AI Consent Problem" when it's actually fundamentally about "Email Consent Problem". The author deliberately opted out of email marketing and Proton ignored it (by "mistake") and this is a common misbehavior companies did before AI. It's worth separating those 2 factors out.
We get unwanted spam about "Amazon Pharmacy" and "Apple TV" that overrides our profile settings to opt-out of those emails but that doesn't mean we misdiagnose it as "Pharmacy Consent Problem" and "Video Streaming Consent Problem". Instead, the generalization is still fundamentally an "email consent" problem. Always has been. The repeated abuse of the customer's email address (with or without AI in the picture) is what the gp was emphasizing.
Likewise, if a future hot household technology such as residential robots causes email marketing campaigns that blast unwanted spam about Tesla house robots, the issue with that unwanted "Tesla robots 10% off!" spam is still about ignoring customers' email preferences. The unwanted robots themselves would be a separate issue. Companies will continue to make "mistakes" and send out new marketing spam with <HotNewThing> in the subject field that will infuriate customers. And the future root cause of that problem still won't be <HotNewThing>, but companies ignoring customers' email preferences, because the incentives and greed are too great.
This was my first reaction too. It is a bit ironic that the issue of “overlapping labels” can be applied to the OP as well.
My instinct is to classify this as an email consent issue not because AI needs defending, but because the solution need not be specific to AI. The Next Big Thing will also probably have this problem, because marketing is at odds with making your customers happy with a great product.
The problem with tech is that there's absolutely zero accountability.
Marketing is, to some extent at least, regulated. There's so little consumer protection in the tech industry, it's a joke. We've got GDPR (in Europe) and I'm really struggling to think what else. Imagine if other forms of engineering had the same level of control.
There's this absolutely fallacious notion that in a free market, customers can just vote with their feet.
From big players with vendor lock-in and network effects, to specialists (I know of few decent competitors to Proton), the average consumer is not sufficiently protected from malpractice.
We may say, "oh, it's just a marketing email", but TFA perfectly encapsulates the relationship we have with our suppliers.
While we're at it, let's talk about Google ads. I reported a Google ad because I deem it political, and in Europe you must make it clear that a political ad is a political ad and not just an ad (this one failed to do so, so it should be corrected or removed).
Google refused to comply and act in any way, because they "don't moderate 3rd party content". Except that EU says you _must_ comply if you're publishing a political ad. I'm bringing this forward with an appeal and then I'm going to escalate to the national authority if they still refuse to act.
The laws are there. It's just that big tech think they can ignore them freely and even if down the road there's a fine it's going to be much less than what they gained by spreading ads.
>then I'm going to escalate to the national authority if they still refuse to act.
You are actually doing this wrong...
Report to the national authority first...
Then report to Google.
Fuck them, it is not in your interest to report to them first, make them react for their bullshit. Over here in the states this is how I ended up dealing with telecom in the ISP industry. "Hello, I have put in an FTC/FCC complaint on $issue, and would like to see about getting it resolved".
It didn't matter that's not the order you're supposed to go in, at the telecom side they send it off to a team that actually gets shit solved before it becomes a regulatory problem.
>You might have a stronger case with the national authority
At least on the ISP side, we started doing it this way after the telcos would yank our chains for weeks or months first, when we had issues that needed to get solved quickly. More so, I started working with our competitor ISPs, because it was very common we'd all have the same issues. More than one complaint of the same type in the same area to these agencies tends to get noticed and followed up quickly. The follow-through process on it starts to get expensive for the telcos too.
My next recommendation on this political ad bullshit is don't go at it alone. Find as many like minded people to dig up and complain on these ads as you can. Flood the regulators with violations that are occurring. When you think of it in reverse, these companies breaking the law will have no issues with pooling resources and going after you.
It is an error to believe this is only happening in/with marketing. In general, "empathy" and "capitalism" are mutually exclusive. If profit is your goal, you don't care about individuals.
To name and shame two: LinkedIn and MyHeritage. If you ever made an account with either of them, they will never stop spamming you. They have configuration options to select which mail to receive, but they appear to consider them temporary suggestions.
A special dishonourable mention goes to Walmart. I never interacted with them in any way whatsoever, nor would I, since they don't exist on my continent as far as I know, yet they still send me spam. DKIM-signed and all!
LinkedIn once somehow went through my (Gmail?) contacts and asked, in the subject line of a marketing message, whether I should invite my late grandfather to the platform.
I guess you also received the Linkedin Gaming spam a couple of weeks ago?
I opted out of almost every category and I never opted in to a category like that. So why is there now a new category which I have to opt out of?
It seems to me blatant, unpunished disregard of GDPR - but their whole business was founded on abuse of emails and there's no reason to expect a Microsoft acquisition to make a company act more in line with the law.
That gaming email took me mentally straight back to Facebook circa 2009, and not in a good way. LinkedIn always serves as a fantastic example of exactly how not to treat your users.
There’s probably a bigger association with it. I don't like AI and I see it everywhere: in every app I use, every service I purchase, my goddamn Start bar.
So, when they start emailing unwanted emails, it feels like a spam problem, when really it’s insidious on multiple fronts.
I can’t wait for the enshittification phase. When the products royally fuck their fan base.