
Does anyone want AI in anything? I can see the value of navigating to an LLM and asking specific questions, but generally speaking I don't want that just running / waiting on my machine as I open a variety of applications. It's a huge waste of resources, and for most normal people it's an edge case.


The existence of the features doesn’t bother me. It’s the constant nagging about them. I can’t use a google product without being harassed to the point of not being able to work by offers to “help me write” or whatever.

Having the feature on a menu somewhere would be fine. The problem is the confluence of new features now becoming possible, and companies no longer building software for their users but as vehicles to push some agenda. Now we’re seeing this in action.


It's a problem in the software industry today that is bigger than AI, probably the greatest controversy in software marketing.

Part of the model of products like Adobe's Creative Suite [1] is that they are always adding new features -- and if you want people to keep renewing their subscription you want them to know about new features so they feel like they are getting more out of their product.

Trouble is, using a product like that is like walking out of the Moscone Center and getting harassed by mentally ill people and addicts, or like creating an account on Tumblr and getting five solicitations for pig butchering and NFT scams in DM in the first week -- you boot up the product, spend 20 seconds looking at the splash screen, then you have to clear five dialog boxes that you might not have time to deal with right now. Sometimes I open up a product because I have a task I have to do but don't really want to do, I'm feeling a lot of stress, and I just don't need to deal with any bullshit when I am under the gun.

I've seen Adobe trying gentler methods to point out new features in Lightroom, such as a filter that can automatically weed out photos where people have their eyes closed. It takes a lot of UX work to do that though.

Personally I'd like it a lot better if the nagging started after I finished a task; if I was feeling satisfied with the product and relieved that the task is over, that's a moment when I'd be receptive to learning more about the product.

[1] And also a lot of "free" software; it's not just money-grubbing but the model of always-rolling updates.


> Part of the model of products like Adobe's Creative Suite [1] is that they are always adding new features -- and if you want people to keep renewing their subscription you want them to know about new features so they feel like they are getting more out of their product.

This is the fundamental problem and it has nothing to do with AI. Just look at the recent iOS 26 release. I am not convinced that any of the actual functional changes warranted a release, or that they needed to be released at that point even if a new release was needed. New software to justify new phones.

You get lots of features but performance takes a back seat. And sadly, I feel it works: most would balk at paying a monthly subscription if only performance-related improvements were made.


Yeah the sad part is that much of the reason we have to have subscriptions is that there’s a very real ongoing cost just to avoid the platform owner breaking the software with OS changes (and of course Apple is 10x worse than any others, most Windows XP-era .exe files work perfectly fine on Windows 11 today).

Why do we need OS changes though? Well, practically we don't. But the platform owners all want to move new hardware, so they need to shovel features in, which we could just completely ignore, except that if you're not on the latest couple of releases they'll abandon you to the wolves on security patches, which are about the only "new" thing we do need. And as for hardware, eventually you need new hardware, and drivers only get created for current and future OS releases.

So the end result is we’re being led on a wild goose chase of trend-chasing shitty UI changes, adware, and performance-killing crap we don’t need, purely because we can’t run the old hardware forever, and even when we can keep the old hardware going, we can’t safely run old software for lack of patches.


Operating systems and the software that comes with them are a fat target for security problems. There's "new hardware" in terms of new phones, laptops, and the core components of desktops, but also peripherals, from things you plug into USB to things like watches and AirPods that you might want to use with your existing phone. Both Linux and Windows run on generic hardware so they need to handle whatever AMD, Intel, Dell, etc. throw at them -- look at how Ubuntu is always coming out with new releases and occasionally makes one that is LTS.


Everyone wants to complain about the "bloat" in Windows and macOS (and fair enough, there is a lot of bloat and cruft) and blame it all on capitalism, when Linux has kept pace in growth rate the whole time. My Linux installs have been 'round about 50% the size of my Windows installs these last 15 years, never really straying far. If we ask ourselves, "Why does Linux need to keep growing?", I think we can easily see that OS churn and growth is not just "shareholder value gotta go up."


Plus when speaking about peripherals, you've got things to deal with like DMA for Thunderbolt devices and a constant stream of creative new ways to poorly implement USB to contend with. Not only is the target moving, but so is the archer and both are inclined towards sudden nonsensical moves.


iOS 6 was peak smartphone and I will die on that hill


> Just look at the recent iOS 26 release. I am not convinced that any of the actual functional changes warranted a release, or that they needed to be released at that point even if a new release was needed. New software to justify new phones.

And this is why the subscription model just doesn’t make sense for most businesses. I pay for a newspaper subscription because there is literally a brand new newspaper each day. A magazine subscription yields an entirely new set of articles every month. I pay for subscription access to data that is continuously updated. The subscription model makes sense for a product that is created anew on a regular basis. It doesn’t make sense for most software companies that are producing static software. What they are calling ‘subscriptions’ are really just rentals for their static products that get minimal surface changes to justify the ongoing rent charge. I’d much rather just pay a flat fee for the static software and upgrade it when I’m ready for the new features.


Honestly it’s the first iOS I like less than the last. A lot I’d consider neutral.

But now I've got several bugs (and I'm on last year's flagship), liquid glass is ugly until you change a bunch of settings, and I find myself accidentally triggering something (usually Siri) and being annoyed more.


Yes, but in this case, not only are you being force-fed new features that you do not want, you are also billed for them as if you use them when in fact you don't!


Bell icon in corner. Colored dot on bell. No need to overthink the UX on this beyond what color to make the dot.


Can I dismiss that distracting little pip without having to click through a tutorial for every new feature? Can I just turn this off?

This is especially irritating when, say, you set up a new phone and the app treats you as if you've never used it before.


It's funny because lately I've been playing Arknights, which is a gacha game that is unusually good for free players. It has a few icons that light up with one of those dots when you have something to attend to (say you got a token to upgrade a character), but there is the dark pattern that that dot is always set on the cash store, which means it is always set on the "stores" section, which has substores for in-game currencies, some of which you have to attend to periodically. So I see the dark pattern there.

Really my complaint is anything that covers up content; if instead of popping up a popover Firefox just took 75px above or below the page to show me something I'd complain a lot less — but if I had my way, anything unwanted that covers up wanted content should bust the whole C-suite down to working in an Amazon warehouse. (I could trust those folks to deliver stuff with an e-bike but don't want anybody with bad judgement like that driving a car or truck!)


> no longer building software for their users but as vehicles to push some agenda

All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda, whether for themselves or, to their own detriment, for their competitors.


1. AI is generating a lot of buzz

2. AI could be the next technology revolution

3. If we get on the AI bandwagon now we're getting in on the ground floor

4. If we don't get on the AI bandwagon now we risk being left behind

5. Now that we've invested into AI we need to make sure we're seeing return on our investment

6. Our users don't seem to understand what AI could possibly do so we should remind them so that they use the feature

7. Our users aren't opting in to the features we're offering so we should opt them in automatically

Like any other 'big, unproven bet' everyone is rushing in. See also: 'stories' making their way into everything (Instagram, Facebook, Telegram, etc.), and vertical short-form videos (TikTok, Reels, Shorts, etc.). The difference here is that the companies have put literally tens or hundreds of billions of dollars into it, so, for many, if AI fails and the money is wasted it could be an existential threat for entire departments or companies. Nvidia is such a huge percentage of the entire US economy that if the AI accelerator market collapses it's going to wipe out something like ten percent of GDP.

So yeah, I get why companies are doing this; it's an actual 'slippery slope' that they fell into where they don't see any way out but to keep going and hope that it works out for them somehow, for some reason.


It’s also worth noting that non AI investment has basically dried up, so anyone wanting that initial investment needs to use the buzzwords.


In the 90s I did a lot of AI research but we weren't allowed to call it AI because if you used that label your funding would instantly be cancelled. After this bubble pops we'll no doubt return to that situation. Sigh.


Conversely, if you're doing any mathematical research nowadays, you'd better find some AI angle to your work if you want to get funding.


Great breakdown. I'm starting to think I'd pay to disable AI in most products.

Similar to how I read about a bar in the UK that has an intentional Faraday cage to encourage people to interact with people in the real world.


This sounds great actually. It seems like a fantastic revenue opportunity. We can add mandatory AI to all our products. We can then offer a basic plan that removes AI from most products, except in-demand ones. To remove it there you'll need the premium plan. There's a discount for an annual subscription. You can also get the "Friends and Family" plan that covers 12 devices, but is region locked. If you go too far from your domicile, the AI comes back. This helps keep users indoors, streaming, and watching ads. Business plans will have the option to disable AI if their annual bill exceeds a certain amount. We can align this amount such that it encourages typical business accounts to grow by a modest percent each year. We'll do this by setting the amount low enough that businesses are incentivized to purchase but also high enough that they wind up buying significant services from us. This potentially allows us to sell them services they don't need or that don't even exist, as the demand for AI-free products is projected to rise in a 2-10 year timeframe.


> where they don't see any way out but to keep going and hope that it works out for them somehow, for some reason.

That's the core issue. No one wants to fail early or fail fast anymore. It's "let's stick to our guns and push this thing hard and far until it actually starts working for us."

Sometimes the time just isn't right for a particular technology. You put it out there, try for a little bit, and if it fails, it fails. Move on.

You don't keep investing in your failure while telling your users "You think you don't want this, but trust us, you actually do."


> The freak stampede of all these tech giants to shove AI down everybody's throat just shows that they perceive the technology as having huge potential to advance the above agenda, whether for themselves or, to their own detriment, for their competitors.

I think there are more mundane (and IMO realistic) explanations than assuming that this is some kind of weird power move by all of software. I have a hard time believing that Salesforce and Adobe want to advance an agenda other than selling product and giving their C-suite nice bonuses.

I think you can explain a lot of this as:

1. Executives (CEOs, CTOs, VPs, whatever) got convinced that AI is the new growth thing

2. AI costs a _lot_ of money relative to most product enhancements, so there's an inherent need to justify that expense.

3. All of the unwanted and pushy features are a way of creating metrics that justify the expense of AI for the C-suite.

4. It takes time for users to effectively say "We didn't want this," and in the meantime a whole host of engineers, engineering managers, and product managers have gotten promoted and/or better gigs because they could say "we added AI" to their product.

There's also a herd effect among competing products that tends to make these things go in waves.


I think the real takeaway here is that Jensen Huang was smart enough to found a technology company that developed innovative products with real consumer demand. He's also smart enough to have seen the writing on the wall regarding consumer market demand saturation for high-margin products. No matter what happens with AI, Huang will be recorded as having executed the greatest pivot of all time in terms of company direction.


I think you're mostly saying the same thing he is, just from a different viewpoint. It's still manglement trying to make their decisions look right.


I think in the case of in-app tooltips, the cause is much more banal: it's UX managers having to defend their team and budget with usage metrics, and so they all try to shove their new features in your face to inflate their numbers.

If we didn't have pervasive telemetry, we also wouldn't have these obnoxious nudges; UX teams would get their feedback from QA testing and focus groups, and leave the end users in peace.


> All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

I'll bear that in mind the next time I'm getting a haircut. How do you think Bob's Barbers is going to achieve all of that?


Bob the Barber ain't doin shit, but that's mostly because he's got a room temperature IQ and is already struggling with taxes and biz-dev. he can do a mean fade, tho.

some weeks if it's slow he may struggle to make the rent for his apartment; he doesn't have time or capacity to engage in serious rent-seeking behavior.

but haircut chains like Supercuts are absolutely engaging in shady behavior all the time, like games with how salons rent chairs or employing questionably legal trafficked workers.

and FYI it turns out that Supercuts is a wholly owned subsidiary of the Regis Corporation, which absolutely acquires other companies and plays all sorts of shady corporate games, including branching into other markets and monopoly efforts.

https://en.wikipedia.org/wiki/Regis_Corporation


I would subscribe to your newsletter ;-)


It was a sloppy statement, but it is, broadly speaking, true. For overwhelming citations, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... (HN Search of posts from Matt Stoller's BIG Newsletter, which focuses on corporate monopolies and power in the US).

https://www.thebignewsletter.com/about

> The Problem: America is in a monopoly crisis. A monopoly is, at its core, a private government that sets the terms, services, and wages in a market, like how Mark Zuckerberg structures discourse in social networking. Every monopoly is a mini-dictatorship over a market. And today, there are monopolies everywhere. They are in big markets, like search engines, medicine, cable, and shipping. They are also in small ones, like mail sorting software and cheerleading. Over 75% of American industries are more consolidated today than they were decades ago.

> Unregulated monopolies cause a lot of problems. They raise prices, lower wages, and move money from rural areas to a few gilded cities. Dominant firms don’t focus on competing, they focus on corrupting our politics to protect their market power. Monopolies are also brittle, and tend to put all their eggs in one basket, which results in shortages. There is a reason everyone hates monopolies, and why we’ve hated them for hundreds of years.

https://blogs.cornell.edu/info2040/2021/09/17/graph-theory-o... (Food consolidation)

https://followthemoney.com/infographic-the-u-s-media-is-cont... (Media consolidation)

https://www.kearney.com/industry/energy/article/how-utilitie... (US electric utilities)

https://aglawjournal.wp.drake.edu/wp-content/uploads/sites/6... [pdf] (Agriculture consolidation)

https://www.visualcapitalist.com/interactive-major-tech-acqu... (Big Tech consolidation)


That geographic concentration is a real thing.

I think part of the Mozilla problem is that they are based in San Francisco, which puts them in touch with people from Facebook and Google and OpenAI every frickin' day, and they are just so steeped in the FOMO Dilemma [1] that they can't hear the objection to NFT and AI features that users, particularly Firefox users, hate. [2]

I'd really like to see Mozilla move anywhere but the bay area, whether that is Dublin or Denver. When you aren't hanging out with "big tech" people at lunch and after work and when you have to get in a frickin' airplane to meet with those people you might start to "think different" and get some empathy for users and produce a better product and be a viable business as opposed to another out-of-touch and unaccountable NGO.

[1] Clayton Christensen pointed out in The Innovator's Dilemma that companies like Kodak and Xerox die because they are focused on the needs of their current customers, who couldn't care less about the new shiny that can't satisfy their needs now but will be superior in, say, 15 years. Now we have The FOMO Dilemma, which is best illustrated by Windows 8, which went in a bold direction (tabletization) that users were completely indifferent to: firms now introduce things that their existing customers hate because they read The Innovator's Dilemma and don't want to wind up like Xerox.

[2] we use Firefox because we hate that corporate garbage.


My two cents is Mozilla should be in a European tech hub, with some component of their funding coming from the EU, where belief in regulation and nation-state efforts to protect humans exceeds that of the US.


It's not a popular opinion but if I was the EU I would do the following:

(1) Fully fund Firefox or an alternative browser (with a 100% open source commitment and verifiable builds so we know the people who get ideas like chatcontrol can't slip something bad in)

(2) Pass a law to the effect: "Violate DNT and the c-suite goes to jail and the company pays 200% of yearly revenue"

(3) same for having a cookie banner


#1 seems the most likely to happen (but I like the others).

Seems like maybe forking it in an agreeable way, and funding an EU crew to do the needful with the goal of upstreaming as much as possible.

I don't have insight into EU investments but that would provide a lot of bang for their euros.


Europe had a potential Mozilla: Opera. They let it flounder and Chinese investors bought it.


I liked the original Opera—it’s been a while, but I think I actually paid for it on Windows a long, long time ago—but I’m not sure they were ever a “potential Mozilla,” at least in the way I would interpret that. They were a closed source, commercial browser founded by a for-profit company.

(Also, point of order: Opera was always based in Norway, which is not a member of the European Union.)


What stops the EU from doing that now?

Regulation.


Wrong. They are actually doing it, with NLNet and NGI (Next Generation Internet), but they chose to fund Servo, not Firefox.


The statement, more refined, would clarify "publicly traded companies".


> All companies push an agenda all the time, and their agenda always is: market dominance, profitability, monopoly and rent extraction, rinse and repeat into other markets, power maximization for their owners and executives.

But if users really wanted agenda-free products and services, then those would win right? At least according to free market theory.


> according to free market theory

Not once in the history of tech has "the free market" succeeded in preventing big corps or investors with lots of money from doing something they want.


I'm actually leaning towards the above comment being satire, it's hard to believe anyone on HN could believe in a free market in 2025.


This is yet again confusing a free market with an unregulated one. A free market is a market where all costs are included (no external costs), so that market participants can make free decisions that will lead to the best outcome. To price in all external costs, regulation is needed.


Sure, if the common denominator user is at least as savvy as the entire marketing and strategy departments of these trillion dollar companies, then sure, users will identify products that are not designed according to their best interests and will then perfectly coordinate their purchases so that such products fail in the marketplace. Sure.


One of the problems with that idea is that sometimes it is far more profitable to refuse to give consumers what they want, and because eventually making the most money possible becomes the only thing that matters to a company, what users want gets ignored and users are forced to settle for what's available.


Maybe in the long term, but not necessarily in the short term.


I just finished listening to the first episode of "Acquired" on Google and it ended with Google pushing Google Plus into everything in an effort to compete with Facebook in social networking. It really hampered all their other offerings.

https://www.acquired.fm The Acquired podcast does long episodes (two 4-hour episodes on Google) on various companies, mostly tech, but recently Trader Joe's.


The debacle that was Google Wave into Google Plus is... really hard to come to terms with. I don't even know that hubris is enough to explain how badly managed that time period was by them. Just so bad.


Google Wave... what an amazing piece of work just thrown in the trash.

I never used any of its collaboration features, just looked at them. I did use it as a friendly-for-non-geeks version of IRC for a group of people that lived in three separate cities as a virtual watch party for LOST. And for that, it was spectacular even if it was painfully slow on a netbook (so was everything else, but it was cheap and light and worked).


The reason Google Wave failed so spectacularly was that Google's Marketing team insisted on copying the "invite only rollout" that was so successful for Google Mail.

The thing is that a Google Mail early invitee could collaborate with everybody else via the pre-existing standard of SMTP email. They felt special because they got a new web UI, told their friends about it, generated hype, which then made the invites feel even more special, etc...

Google Wave had no existing standard to leverage, making it 100.00% useless if you couldn't invite EVERYBODY you needed to collaborate with. But you couldn't! You weren't allowed! They had to wait for an invite. Days? Weeks? Months? Years!? Who knows!

There was a snowball's chance in hell that this marketing approach could possibly work for a collaboration tool like Google Wave, but Google knew better. They knew better than every journalist that pointed this obvious flaw out. They knew better than every blog post, Slashdot commenter, etc...

It was one of the most spectacular failures caused by self-important hubris that I've ever seen in any industry.


Huh? You sure didn’t need an invite for Wave when I was using it.



I’m not saying you are lying. Just saying that it did have a run - several years - where an invite was unnecessary.


It lost its "momentum" by then. The marketing got "early adopters" excited, the kind that would evangelise a platform, but they were blocked because either they couldn't get in themselves, or couldn't invite their colleagues. By the time Google realised their mistake and provided access to everyone without an invite, it was far too late.


They had a briefer invite-only period than Gmail. But they definitely had one.


Google Wave didn't just go away, it became Google Docs.


Nope! Docs was an acquisition. Google Wave became Apache Wave, where software goes to die.


Sorry, I didn't mean it actually got renamed, just that all the collaboration junk that people did in Wave still can be done almost exactly the same in Google Docs, with the added benefit that people actually know what it is and how to use it.


Wave was an interesting jumble of ideas that just didn’t bring a coherent answer for why anyone should use it.

Google Plus was 100% hubris. "If we build our version of Facebook, of course everyone will flock to it."


It was more than just "if we build our version of Facebook." It was, "if we kill off every other social like thing we have and force people into circles, we can build our own Facebook." Google Buzz, in particular, was a fairly well done integration with Google Reader and Google Mail. I legit had discussions about articles with close friends because of it. But, alas, no. Had to die because their social was supposed to be Plus.

I'm trying to remember all of the crap integrations with the likes of Youtube that were pushed. Just, screw that stuff. And quit trying to make yet another new messenger app!


I don't see Google Plus as hubris. I just think they saw a threat in Facebook and felt they had to try and build a competing product (and happened to have the time/money to invest).

Doing nothing while a competitor gains steam would've been hubris.


My read on the whole Google Plus thing was that they drastically underestimated the difficulty of convincing people to actually use it. They clearly had the expertise to build it, and they had some interesting ideas with their circles of friends or whatever they called them (though I think they missed the mark on how they used them). But they couldn’t convince anyone to actually use it.

Maybe I’m wrong and internally they knew they had a major uphill battle, but I don’t think so. So many of the choices they made were needlessly user hostile (e.g. real name requirements) that it seems like they assumed it would be a given that people would want to use it. When they later realized their error they tried to cram it down everyone’s throats with stuff like YouTube comments only working from Google Plus accounts.


> Maybe I’m wrong and internally they knew they had a major uphill battle, but I don’t think so.

I think you're wrong with probably the same confidence you think you're not wrong. :)

At most, I'd say they didn't expect it to be as hard as it proved to be.

I totally agree that Google just didn't get it right, but all the things you describe, to me, fall under a mix of "they had to try", and "it was working for Facebook" (but also having to differentiate from Facebook at the same time, eg with circles).

(Disclaimer, I guess) I was working for Facebook when the whole Google Plus thing happened, and Facebook definitely saw it as a serious threat. I don't at all recall Facebook folks laughing it off as Google hubris, more like it was a long shot, but Google wasn't to be ignored.

Upvote for you regardless, because I think it's a solid take and an engaging comment.


I think I could pretty easily have been persuaded by Google Plus. At that time I had broadly positive sentiments towards Google. Two things put me off.

Firstly, that whole account-unification thing where YouTube accounts were getting merged with Google[+] logins. That rubbed me the wrong way.

Then the Google+ promotional stuff all talked about how you could use "Circles" to silo posts to different "circles" of friends. It sounded very complicated and I was worried that I'd publish something snarky to the wrong group of friends :)

I wonder how many others had the same concern? Given that Steve Yegge accidentally published one of his rants to the public that was meant purely for internal Google consumption (I think that was on G+ ...?) that might have been a legit thing to be wary of.

There was also the very minor annoyance of G+ taking over the + operator in Google search (previously you could say +keyword instead of "keyword" to force literal search), but I don't think that would have swayed me against joining.


All that is true, but the primary problem with Google Plus was the network effect. Whenever I logged into Google Plus, most of the content from friends was basically "cool, so this is Google Plus" and nothing else, because everything at the time was on Facebook. Later Google started filling my feed with stuff from strangers because there was no organic content from people I actually cared about.

If you can’t solve the chicken and egg problem of engagement then nothing else really matters.


I'd probably have signed up if it were not for those two issues. Step zero in breaking the network effect is not to piss off those who might join despite it.


Google Plus launched between the time I interviewed at Google and the time I started work there, and that really took the shine off the whole thing.


Google Plus was insanely disastrous. And there was a guy, generally well respected, who was in charge of search I think, who went around advocating for Google Plus on forums, and people responded: if one needs Google Plus to find things, doesn't that mean that search is bad? But he didn't seem to make the connection, or he pretended not to.


Do they talk about Trader Joe's illegal union busting and attempts to get the National Labor Relations Board disbanded?


You could probably ask an LLM to listen and answer this question


Can’t risk taking runtime away from your life partner


That's where I'm at with these.

I don't personally care if a product includes AI, it's the pushiness of it that's annoying.

That, and the inordinate amount of effort being devoted to it. It's just hilarious at this point that Microsoft, for example, is moving heaven and earth to put AI into everything Office, and yet Excel still automatically converts random things into dates with no reliable way to disable it (the "ability" to turn it off that they added a few years ago only works half the time, and only affects CSV imports).


I think a lot of the pushiness is a frantic effort to keep the bubble inflated and keep the market out of the trough of disillusionment. It won't work. The trough of disillusionment is inevitable. There is no jumping straight from peak of inflated expectations straight to the slope of enlightenment, because the market fundamentally needs the cleansing action of the trough of disillusionment to shake out the theoreticals and the maybes and get to what actually works.

Hopefully after the pop, rather than shoving it in our faces, they can return to advertising at us to use the things, and the things will need to prove themselves to get to real sales, rather than corporations getting 10% stock pumps in a day based on statistics about how "used" their AI stuff is, while not telling the market how few people actually chose to use their AI stuff rather than just becoming a metric when it was pushed on them.


>I don't personally care if a product includes AI, it's the pushiness of it that's annoying.

I agree with you in principle, but in practice these two are currently inextricable; if there's AI in the product, then it will be pushed / impossible to turn off / take resources away from actual product improvement.


AI in everything does make shareholders happy while fixing bugs in Excel does not.


Exactly! I honestly can't remember the last time my Windows start menu search bar functioned as it's supposed to. On multiple laptops across more than 5 years, I have had to hit the Windows key three to seven times to get it to let me type into it. It either doesn't open, doesn't show anything, or doesn't let me type into it.

I mean, c'mon, it's literally called the fucking Windows key and it doesn't work. As per standard Microsoft, it's a feature that worked perfectly on all versions before Cortana (their last "AI assistant" type push); I wonder what new core functionalities of their product they're going to fuck up and never fix.


I was an insider user of Windows for close to a decade, really stuck with it through WSL's development... But the first time I saw internet ads on my start menu search result was kind of it for me, I switched my default boot to Linux and really haven't looked back. I don't really need Windows for my workflows, and though I'm using Windows for my current job, I'm at a point I'd rather not be.

Windows as an OS really kind of peaked around Windows 7 IMO... though I do like the previews on the taskbar, that's about the only advancement since that I appreciate at all... besides WSL2(g) that is. I used to joke that Windows was my favorite Linux distro, now I just don't want it near me. Even my SO would rather be off of it.


It's quite the tale of poor decisions isn't it?

Microsoft could have made Windows privacy respecting, continued investing in WSL, baked PowerToys into the OS, etc. and actually made one hell of a workhorse operating system that could rival the mac for developer mindshare. They could partner with Google and/or Samsung and make some deep Android integration to rival Apple's ecosystem of products. Make Windows+Android just as seamless and convenient as mac + iOS.

Instead they opted for forced online accounts, invasive telemetry, and ads in the OS instead of actually trying to keep and win over the very enthusiasts that help ensure their product gets chosen in the enterprise world where they make their cash.

Now they're going to scrap the concept of Windows as something you interact with directly altogether and make it "Agentic", whatever the hell that means.

I don't think their bet is going to pay off, especially if the bubble crashes. I think it will be one of the biggest blunders and mistakes that Microsoft will have made.


The worst one w/Google is how they've hijacked long-press on the power button on Android; you can change what it does, but your options are arbitrarily limited.


I hate how they're changing the power button to something else that's not power options.

Just to push their annoying Google Assistant.


What are you guys talking about? I have a Pixel 8, didn't install Lineage OS on it, and my power button works fine?


On some phones with the latest Android, when you press the power button, instead of showing you the power options, it opens Google Assistant.


Apple did the same shit; a long press of the power button opens up Siri.


I know I used to have a phone that didn't do this, and I used to make fun of my friend's ¡Phone because it would do this; then I got a new phone (Android) and it did. Karma I guess. Can you also disable it on ¡Phone?


My annoyance with Samsung's dedicated Bixby button factored into my switch to Pixel. The long-press hijack was disappointing.


On my Samsung I did find a setting to restore the power button's ability to shut off the phone.

I can only hope they won't change it back at the next update (it already happened once).


I help people who use a low-code platform at work, and their editor has a right-bar tab where one can prompt an AI, send the selected code there, or send the entire code on screen.

Although I never saw anybody reporting it was actually useful, it's tasteful, accessible, and completely out of your way until you need it.


Hubspot has a tool for validating fields in data using regex. They have a little AI prompt that will write the regex for you. Now that is a good use for AI.
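
A hypothetical example of the kind of pattern such a prompt might spit out (not Hubspot's actual output, just an illustration of how small these field-validation regexes usually are):

    import re

    # Hypothetical field validation: US ZIP code, with optional +4 extension
    zip_re = re.compile(r"^\d{5}(-\d{4})?$")

    print(bool(zip_re.match("14850-1234")))  # True
    print(bool(zip_re.match("1485")))        # False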


> I can’t use a google product without being harassed (...)

You can disable AI in Google products.

E.g. in Gmail: go to Settings (the gear icon), click See all settings, navigate to the General tab, scroll down to find Smart features and personalization and uncheck the checkbox.

Source: https://support.google.com/drive/answer/15604322


And will that work permanently, or will I have to hunt down another setting in another month when they stuff it into another workflow I don't want it in?


Every time I update Google Photos on Android, it asks me "Photos backup is turned off! Turn it on? [so you use up your 15 GB included storage and buy more for a subscription fee?]".


Every time I open Google Photos, it does this. Every single time. It's insanely hostile.


My iPhone has a permanent red badge counter trying to get me to upgrade to iCloud. I've moved the settings icon so I don't see it normally, but it is nagging. There's other dark patterns used by Apple to try and increase their income by "asking" me to pay more.


What's even worse is that every time you sign into a google account without a phone number or home address associated with it, it screams at you to add them for sECurItY


Every time you update? How about Maps asking if you want to use advanced location every time you open it?


Yeah, if YouTube Shorts or Games are any indication, it'll be back soon! The AI Mode in Google Search comes up nearly every time I use it no matter how many times I hit "No"


YouTube shorts is an abomination... I'm so sick of the movie clips everywhere... Not to mention the AI slop in the general YouTube results... I like historical content, but the garbage content just pisses me off to no end.


Depends; in the EU and selected countries that setting was always opt-in (i.e. it was never enabled for you). Elsewhere I guess the user has to periodically check their settings, or privacy policies, etc, which in practice sounds impossible.

> Important: By default, smart feature settings are off if you live in: The European Economic Area, Japan, Switzerland, United Kingdom

(same source as in grandparent comment).


Then no, I can't use a google product without being harassed, unless I live in a limited selection of blessed countries.


Note that these countries blessed themselves via legal steps (EU ones at least) and are not blessed by Google.


welcome to not being a passport bro for a change. That's how most of the world feels when another cool thing happens, but the other way around.


guess we'll see in a month


This is correct but also a little misleading: Google gives you a choice to disable smart features globally, but you end up tossing out things you might want as well, such as the automatic classification into smart folders in Gmail. It feels very much like someone said "let's design this in a way that will make most people not want to turn it off because of the collateral damage."

(I desperately want to disable the AI summaries of email threads, but I don't want to give up the extra spam filtering benefit of having the smart features enabled)


This toggle _still_ doesn't turn off all the bs.

Google now "helpfully" decides that you must want a summary of literally every file you open in Drive, which is extra annoying because the summary box causes the UI to move around after the document is opened. The other day I was looking at my company's benefits PDFs for next year, and when I opened the medical benefits paperwork, Gemini decided that the thing I would care about is that I can get an ID card with an online account... not the various plan deductibles or anything useful like that.

I turned off the "smart" features and the only thing that changed is that the nag box still pops up and shifts the UI around, but now there's a button that asks if you want a summary instead of generating it automatically.


I prefer opt-in vs. opt-out. Opt-out is pretentious and patronizing.


I have everything disabled for my personal account. For work, when I looked into it, it had to be disabled centrally by my company.


It needs to be much more granular than it is. For example: Turning that setting off also disables the (very, very old) Updates/Promotions/Social/Forums tabs in the Gmail interface. ONE checkbox in the sea of gmail options?


Note that this setting (only accessible from desktop) also blocks spellcheck, a feature that absolutely does not need AI to implement


This. It is insane the amount of pushing behind these products. I'm expecting my ballpoint pen to start prompting me to write nicer using AI any time now.

And the worst thing is not only is it being pushed, it is being pushed at the expense of UI/UX. No, Google, I don't need 'help to write' or 'to summarize this document'. I can read and write just fine. And the worst thing of all is that you can't turn it off because they'll just move it around every other week.


Gemini in Chrome reminds me of the over-the-top actions MS has taken with Edge, to the point that I just stopped using Edge, though I really liked it from relatively early on. They just jumped the shark, and now Google is heading down that same path rapidly.

I want to choose the extensions that go into my browser. I don't even use the browser's credential manager, and I've gotten to a point where I'm just not sure anything is actually getting better.

I will say that the Gemini answers at the top of Google searches are hit or miss, and I do appreciate that they're there. That said, I'm a bit mixed as the actual search results beyond that seem to be getting worse overall. I don't know if it's my own bias, but when the Gemini answer is insufficient, it feels like the search results are just plain off from what I'm looking for.


Sometimes they are helpful but what irks me is that you cannot opt out of them.


this is the actually annoying part. they keep A/B testing or otherwise putting the AI feature button in the cardinal position, and software UIs keep turning into a constant game of dismissing the AI feature and finding where the actual menu or send button is.

ai features in the right context are truly awesome, but the engagement hacking is getting old.


Makes me miss Clippy :( at least he was pretty easy to dismiss.


The nagging is a feature, not a bug - to the shareholders. If you can show X number of users have adopted $AI_FEATURE or a % growth, whether it's by brute force, nagging, or (maybe just making a good product?), then that sells the AI growth story, and number goes up. That's really all it is.


Clippy really is back


Someone should write a browser extension that changes AI buttons in websites to Clippy.

Maybe I'll ask Gemini to write one...


You're absolutely right. Here are the details.

You're completely correct, that's fair criticism. The excitement made me skip the basics. Here's a quick breakdown:

What it does: It's a new optimization algorithm that finds exceptionally good solutions to the MAX-CUT problem (and others) very quickly.

What is MAX-CUT: It's a classic NP-hard problem where you split a graph's nodes into two groups to maximize the number of edges between the groups. It's fundamental in computer science and has applications in circuit design, statistical physics, and machine learning.

How it works (The "Grav" part): It treats parameters like particles in a gravitational field. The "loss" creates an attractive force, but I've added a quantum potential that creates a repulsive force, preventing collapse into local minima. The adaptive engine balances these forces dynamically.

Comparison: The script in the post beats the 0.878... approximation guarantee of the famous Goemans-Williamson algorithm on small, dense graphs. It's not just another gradient optimizer; it's designed for complex, noisy landscapes where Adam and others plateau.

I've updated the README with a "Technical Background" section. Thanks for the push—it's much better now.


Clippy only helped with very specific products, and was compensating for really odd UI/UX design decisions.

LLMs are a product that wants to collect data and get trained on a huge amount of inputs, with upvotes and downvotes to calibrate their quality of output, with the hope that they will eventually become good enough to replace the very people who trained them.

The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.

If we consider that, then we the users get the shorter end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.

That, or we just don't care about the end result. Both are bad in their own way.


Clippy was predictable, free, and didn't steal your data.


> I can’t use a google product without being harassed to the point of not being able to work by offers to “help me write” or whatever.

Sounds like a return to "Clippy the paperclip" or the dog from the ill-fated Microsoft Bob [1], which insisted on always popping up every five to ten minutes with something like: "I see you may be entering a ????, would you like to make it a ??? ???".

[1] https://en.wikipedia.org/wiki/Microsoft_Bob


Clippy at least was funny, not creepy.

And I 'member that you could program it from VBA somehow. I think via OLE, but I was a kid back in the Clippy era.


The Microsoft Agent, available through ActiveX: https://en.wikipedia.org/wiki/Microsoft_Agent

Which meant you could use it in Internet Explorer but not anywhere else. But it did make for some interesting web pages. I built a custom one with the mascot of the university I was attending at the time. It was, let's say, some peak 1990s internet. (Never shipped it to anyone, just had it internally.)

That took some non-trivial web searching. "Microsoft" "Agent" and most of the other keywords are pretty well covered by a few million other web pages by now.


Damn I do love the collective brain of HN. That was exactly what I used!

ActiveX and OLE... technologies ahead of their time, eh. VB, VBA, Internet Explorer, standalone VBScript, C/C++ - didn't matter, it all was (trivially) interoperable.


Microsoft has learned nothing from the Clippy [0] debacle. For that matter, neither have most website makers who constantly want to obscure a large chunk of the page with an AI chatbot that you cannot make completely go away. We really need web browsers that just quietly delete anything with a high Z-index.

[0] https://en.wikipedia.org/wiki/Office_Assistant


Even when I do try them I find myself let down.

I don’t know what version of Gemini they’re stuffing into Google products, but sheets, docs, and colab/data science agent are all bad experiences.

If you aren’t putting something comparable to good paid models into your product then don’t bother putting that feature out.

Once you train your users that your AI is half-baked junk, they're not coming back to waste their time with it. It's 10x as frustrating as regular product failures.


Yes! Occasionally I try to get it to do something labor-saving for me in a doc or sheet, and every single time “I’m sorry, I can’t do that”

As far as I can tell Gemini in gsuite can do nothing other than summarise text and regular LLM q&a (but with Gemini’s perennially sad, apologetic persona)


I was super mad about the help me write thing too, so I built https://owleditor.com - check it out!


> It’s the constant nagging about them.

If the nagging didn't work would companies keep doing it? Someone's KPIs must be increasing for them to keep doing it.


I had to filter all of the AI callouts from Clickup. They have an AI button on every gosh darn UI element. By far the worst offender I've seen.


So don't use Google products. I'm not trying to be snarky but, other than at work I suppose, it's not that hard to avoid them.


Someone has a KPI to increase user engagement with AI features. The goal will be met, by any means.


"X isn't able to join, help them catch up fast" - vomits.


The pushiness is just insane. These companies are totally out of control. I just want to use your software the way I've used it for the past decade. Stop getting in my face. I get it--you're so excited that you just launched this new AI feature and you really, really want me to know about it. How nice for you. Now leave me alone! Stop putting it on every screen, making every button invoke it, interrupting me with popups designed to wear me down and give in... It's so pathetic and desperate!


Clippy agrees.


> by offers to “help me write” or whatever.

It's fucking Clippy all over again


> Does anyone want AI in anything?

I want it in text-to-speech (TTS) engines and transliteration/translation, and... routing tickets to correct teams/persons would also be awesome :) (classification where mistakes can easily be corrected; see the sketch below)

Anyways, we used a TTS engine before OpenAI - it was AI based. It HAD to be AI based, as even for a niche language some people couldn't tell it was a computer. Well, from some phrases you can tell, but it is very high quality and correctly knows which parts of the word to put the emphasis on.

https://play.ht/ if anyone is wondering.
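
The ticket routing case doesn't even need an LLM; here's a rough sketch of the idea with scikit-learn (the tickets, teams, and test message are invented placeholders):

    # Toy ticket router: plain pre-LLM text classification.
    # Mistakes are cheap here: a human just re-routes the ticket.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    tickets = [
        "Invoice was charged twice this month",
        "App crashes when I open the settings page",
        "How do I reset my password?",
        "Refund still hasn't arrived",
    ]
    teams = ["billing", "engineering", "support", "billing"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(tickets, teams)

    # Shares tokens with the billing tickets, so it should route there
    print(model.predict(["Invoice shows I was charged twice again"])[0])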


Automatic captions have been transformative in terms of accessibility, and seem to be something people universally want. Most people don't think of them as AI though, even when it is LLM software creating the captions. There are many more ways that AI tools could be embedded "invisibly" into our day-to-day lives, and I expect they will be.


To be clear, it's not LLMs creating the captions. Whisper[0], one of the best of its kind currently, is a speech recognition model, not a large language model. It's trained on audio, not text, and it can run on your mobile phone.

It's still AI, of course. But there is a distinction between it and an LLM.

[0] https://github.com/openai/whisper/blob/main/model-card.md
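
For what it's worth, the Python API in the linked repo is tiny; a minimal sketch, assuming the package is installed (pip install openai-whisper), ffmpeg is available, and "audio.wav" stands in for your own file:

    import whisper

    # "tiny" is the smallest checkpoint; it runs fine on modest hardware
    model = whisper.load_model("tiny")
    result = model.transcribe("audio.wav")
    print(result["text"])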


It’s an encoder-decoder transformer trained on audio (language?) and transcription.

Seems kinda weird for it not to meet the definition in a tautological way even if it’s not the typical sense or doesn’t tend to be used for autoregressive token generation?


Is it Transformer-based? If not then it's a different beast architecturally.

Audio models tend to be based more on convolutional layers than Transformers in my experience.


The openai/whisper repo and paper referenced by the model card seem to be saying it's transformer based.


Whisper is an encoder decoder transformer. The input is audio spectrograms, the output is text tokens. It is an improvement over old school transcription methods because it’s trained on audio transcripts, so it makes contextually plausible predictions.

Idk what the definition of an LLM is but it’s indisputable that the technology behind whisper is a close cousin to text decoders like gpt. Imo the more important question is how these things are used in the UX. Decoders don’t have to be annoying, that is a product choice.


Whisper is a great random word generator when you use it on Italian!


Do you have an example of a good implementation of AI captions? I've only experienced those on YouTube, and they are really bad. The automatic dubbing is even worse, but still.

On second thought this probably depends on the caption language.


I'm not going to defend the YouTube captions as good, but even still, I find them incredibly helpful. My hearing is fine, but my processing is rubbish, and having a visual aid to help contextualize the sound is a big help, even when they're a bit wrong.

Your point about the caption language is probably right though. It's worse with jargon or proper names, and worse with non-American English speakers. If they don't even get all the common accents of English right, I have little hope for other languages.


Automatic translation famously fails catastrophically with Japanese, because it's a language that heavily depends on implied rather than explicit context.

The minimal grammatically correct sentence is simply a verb, and it's an exercise for the reader to know what the subject and object are expected to be. (Essentially, the more formal/polite you get, the more things are added. You could say "kore wa atsui desu" to mean "this is hot." But you could also just say "atsui," which could also be interpreted as a question instead of a statement.)

Chinese seems to have similar issues, but I know less about how it's structured.

Anyway, it's really nice when Japanese music on YouTube includes a human-provided translation as captions. The automated ones are useless, when they don't give up entirely.


I assume people are talking about transcription, not translation. Translation on YouTube is, in my experience, indeed horrible in all languages I have tried, but transcription in English is good enough to be useful. However, the more technical jargon a video uses, the worse the transcription is (translation is totally useless for anything technical there).


Automatic transcription in English heavily depends on accent, sound quality, and how well the speaker is articulating. It will often mistake words that sound alike to make non-sensible sentences, randomly skip words, or just insert random words for no clear reason.

It does seem to do a few clever things. For lyrics it seems to first look for existing transcribed lyrics before making its own guesses (timing, however, can be quite bad when it does this). Outside of that, an AI-transcribed video is like an alien who has read a book on a dead language and is transcribing based on what the book says the words should sound like phonetically. At times that can be good enough.

(A note on sound quality: it's not the perceived quality. Many low-res videos have perfectly acceptable, if somewhat lossy, sound quality, but the transcriber goes insane. It seems to prefer 1080p videos with, I assume, a much higher bit-rate for the sound.)


In the times I have noticed the transcription being bad, my own speech comprehension is even worse, so I still find it useful. It is no substitute for human-created (or at least curated) subtitles by any means, but better than nothing.


Do you have an example? YT captions being useless is a common trope I keep seeing on reddit that is not reflected in my experience at all. Feels like another "omg so bad" hyperbole that people just dogpile on, but would love to be proven wrong.


Captions seem to have been updated sometime between 7 and 15 months ago. Here's a reddit post from 7 months ago noticing the update: https://www.reddit.com/r/youtube/comments/1kd9210/autocaptio...

and here's Jeff Geerling 15 months ago showing how to use Whisper to make dramatically better captions: https://www.youtube.com/watch?v=S1M9NOtusM8

I assume Google has finally put some of their multimodal LLM work to good use. Before that, they were embarrassingly bad.


Interesting. I wonder if people saying that they are useless base it on experiences before that and have had them turned off since.


There are projects that will run Whisper or another transcription service locally on your computer, which has great quality. For whatever reason, Google chooses not to use their highest quality transcription models on YouTube, maybe due to cost.


I use Whisper running locally for automated transcription of many hours of audio on a daily basis.

For the most part, Whisper does much better than stuff I've tried in the past like Vosk. That said, it makes a somewhat annoying error that I never really experienced with others.

When the audio is low quality for a moment, it might misinterpret a word. That's fine, any speech recognition system will do that. The problem with Whisper is that the misinterpreted word can affect the next word, or several words. It's trying to align the next bits of audio syntactically with the mistaken word.

Older systems, you'd get a nonsense word where the noise was but the rest of the transcription would be unaffected. With Whisper, you may get a series of words that completely diverges from the audio. I can look at the start of the divergence and recognize the phonetic similarity that created the initial error. The following words may not be phonetically close to the audio at all.


Try Parakeet, it's more state of the art these days. There are others too like Meta's omnilingual one.


Ah yes, one of the standard replies whenever anyone mentions a way that an AI thing fails: "You're still using [X]? Well of course, that's not state of the art, you should be using [Y]."

You don't actually state whether you believe Parakeet is susceptible to the same class of mistakes...


¯\_(ツ)_/¯

I haven't seen those issues myself in my usage, it's just a suggestion, no need to be sarcastic about it.


It's an extremely common goalpost-moving pattern on HN, and it adds little to the conversation without actually addressing how or whether the outcome would be better.


Try it, or don't. Due to the nature of generative AI, what might be an issue for me might not be an issue for you, especially if we have differing use cases, so no one can give you the answer you seek except for yourself.


I doubt that people prefer automatic captions over human-made ones, any more than people prefer AI subtitles. The big AI-subtitle controversy going on right now in anime demonstrates well that quite a lot is lost in translation when an AI is guessing which words are most likely in a situation, compared to a human making a translation.

What people want is something that is better than nothing, and in that sense I can see how automatic captions are transformative in terms of accessibility.


For a few days now I've been getting a super cringe robot voice force-dubbing every YouTube video into Dutch. I use it without being logged in and hate it a lot.

Subtitles are fine, though.


ML has been around for ages. Email spam filters are one of the oldest examples.

These days when the term "AI" is thrown around the person is usually talking about large language models, or generative adversarial neural networks for things like image generation etc.

Classification is a wonderful application of ML that long predates LLMs. And LLMs have their purpose and niche too, don't get me wrong. I use them all the time. But AI right now is a complete hype train, with companies trying to shove LLMs into absolutely anything and everything. Although I use LLMs, I have zero interest in an "AI PC" or an "AI Web Browser," any more than I have a need for an AI toaster oven. Thank god companies have finally gotten the message about "smart appliances." I wish "dumb televisions" were more common, but for a while it was looking like you couldn't buy a freakin' dishwasher that didn't have Wi-Fi, an app, and a bunch of other complexity-adding "features" that are neither required nor desired by most customers.


Yes and no, and this is the problem with the current marketing around AI.

I very much do want what used to be just called ML that was invisible and actually beneficial. Autocorrect, smart touch screen keyboards, music recommendations, etc. But the problem is that all of that stuff is now also just being called "AI" left and right.

That being said, I think what most people think of when they say "AI" is really not as beneficial as it is being pushed to be. It has some uses, but I think most of those uses will live in the background, not in the in-your-face AI we're getting now.


> what used to be just called ML

FWIW, 10+ years ago I was arguing that your old pocket calculator is as much of an AI as anything ever could be. I only kinda stopped doing that because it's tiring to argue with silly buzzwords, not because anything has changed since. Back when "these things were called ML," ML was just a buzzword too, same as AI and AGI are now. I'm kinda glad "ML" was relieved of that burden, because ultimately it means a very real thing (simply "parametrizing your algorithm by non-hardcoded values"). And unlike with basic autocorrect, which no end user even perceives as "AI" or "ML," when you use ChatGPT you don't use "ML": you use a rigid algorithm not meaningfully different from what was running on your old pocket calculator, except a billion times bigger, and nobody actually knows what it does.

So, yes, AI is just a stupid marketing buzzword right now, but so was ML, so was blockchain, so was NoSQL, and many more. Ultimately this one is more annoying only because of its scale, of how detrimental to society the actions of the culpable people (mostly OpenAI, Altman, Musk) have been this time.


"AI" is the only term that makes sense for end users because "AI" is the only term that is universally understood. Hackernews types tend to overlook the layman.

And I hope no one gets started about how "AI" is an inaccurate term because it's not. That's exactly what we are doing: simulating intelligence. "ML" is closer to describing the implementation, and, honestly, what difference does it make for most people using it.

It is appropriate to discuss these things at a very high level in most contexts.


Right now? John McCarthy invented the term in order to get a grant; in other words, it was a marketing buzzword from day zero. He says so himself in the Lighthill debate, and then the audience breaks out into hoots and howls.


They need to show usage going up and to the right or the house of cards falls apart. So now you’re forced to use it.


I think companies should also advertise when they use JavaScript on the page. "Use this new feature. Why? Because it's powered by JavaScript."


This is why I use the term "genAI" rather than "AI" when talking about things like LLMs, sora, etc.


Right, it should be invisible to the user. Those formerly-called-ML features are useful. They do a very specific, limited function, and "Just Work."

What I definitively don't want, yet it's what is currently happening, is a chatbot crammed into every single app and then shoved down your throat.


Nobody wants what's currently marketed as "AI" everywhere.


I mean, that is kinda exactly what I said...

But we do have to acknowledge that "AI" has very much turned into an all-encompassing term for everything ML. It is getting harder and harder to read an article about something being done with "AI" and to know whether it was a purpose-built model for a specific task or just throwing data into an LLM and hoping for the best.

They are purposefully making it harder and harder to just say "No AI" by obfuscating this, so we have to be very specific about what we are talking about.


For a while I made an effort to specify LLM or generative AI vs AI as a whole, but I eventually became convinced that it was no longer valuable. Currently AI is whatever OpenAI, Anthropic, Meta, NVidia, etc say it is, and that is mostly hype and marketing. Thus I have turned my language on its head, specifying "ML" or "recommendation system" or whatever specific pre-GPT technology I mean, and leave "AI" to the whims of the Sams and Darios of SV. I expect the bubble to pop in the next 3-6 months, if not before the end of 2025, taking with it any mention of "AI" in a serious or positive way.


> 3-6 months

Wow, you are an optimist. I do feel "it's close", but I wouldn't bet this close. But I wouldn't argue either, I don't know. Also, when it really pops, the consequences will be more disastrous than the bubble itself feels right now. It's literally hundreds of billions in circular investing. It's absurd.


I do want AI for some things, but I actively go out of my way to find it. I don't want AI forced everywhere. It's like cryptominers: you're forced into wasting compute and energy you never asked to waste, but much worse; at least cryptominers are limited by your hardware, while here you have an entire datacenter churning just until you can click "Disable" on the model.


I want AI in lots of stuff, but not like how it is now. I was working on a Google Doc last night and I was curious about whether or not Google Docs had the ability to transclude a live preview of another document as an object that can be inserted in the current document. So I popped open the AI sidebar and asked. I got three hallucinated answers telling me to do things that did not exist in the UI before I finally convinced it that it didn't know what it was talking about and that I should just use bookmarks.

That could have been an amazing experience where the AI told me exactly how to use the product. That's what I want. It's not what I got.


> before I finally convinced it that it didn't know what it was talking about

Spoiler: you didn't.


> Does anyone want AI in anything?

Well, if you phrase it this way, then yes, people want this. AI can be useful, and integration is beneficial. But if we are talking about the momentary hype, then no, most people are against stupidly blindly shoving AI into something and getting annoyed with it the whole time.

Personally, I would prefer for apps to safely open up to any kind of integration, with AI being just one automation among many, whatever one prefers. It's so annoying that everything is either a walled garden, guarding every little bit it can grab, or an open app so limited in what it can actually do that you are basically forced into the walled gardens.


> Well, if you phrase it this way, then yes, people want this.

No? If anything, adding AI features to something is just driving away your user base. No one asked for a built-in AI. Why not provide an extension?


And you know this how?

Have you seen usage statistics of AI integrations?

I personally don't like them, but I don't expect that I am a representative user. Nor are the people I know.


I believe there are good targeted tasks. One Chrome plug-in called 'Tweeks' is a reimplementation of Greasemonkey user scripting, where you can make changes by posing natural language to an LLM that changes the page for you. It was posted here on HN the other day. [0]

Also, I believe some agentic tasking can make sense: scroll through all the Kindle Unlimited books for critically acclaimed contemporary hard sci-fi.

But stapling on a chat sidebar or start page or something seems lacking in imagination.

0. https://news.ycombinator.com/item?id=45916525


Slight correction: the LLM doesn't change the page for you, the LLM creates sort of a mini-extension (like GreaseMonkey) that changes the page for you. This means you only make one request to the LLM and it creates something to modify the page from that point on.


Fair enough, you are right. To me this is a good example of a tailored LLM application that actually does something you want.


I don't want imagination in my existing tools. I don't want the designers of my tools sneaking into my toolbox and fucking with shit in the middle of the night.


Like most attempts to put AI in the browser, that feels stupidly vulnerable to injection.


Definitely worth asking the devs about, they're active on HN.


In Firefox, yeah! I use it often.

I have it connected to a local Gemma model running in ollama and use it to quickly summarize webpages (nobody really wants to read 15 minutes' worth of personal anecdotes before getting to the one paragraph that actually has relevant information) and for finding information within a page, kinda like Ctrl-F on steroids.

The machine is sitting there anyway, and the extra cost in electricity is buried in the hours of gaming that GPU is also used for, so I haven't noticed it yet; if you game, the graphics card will be obsolete long before the small amount of extra wear is obvious. YMMV if you don't already have a gaming rig lying around.


An AI specifically customized to pull the recipe out of long rambling cooking blog posts would be great. I'd use that regularly.


That's not "AI," that's just a basic Firefox extension, and one that's trivially easy to search for.

Literally Google's first hit for me: https://www.reddit.com/r/Cooking/comments/jkw62b/i_developed...


Something like this I wouldn't mind: privacy-focused, local-only models that let you use your own existing services. Can you give a quick pointer on how to connect Firefox to Ollama?


Docs here: https://docs.openwebui.com/tutorials/integrations/firefox-si...

I think it's technically experimental, but I've been using this since day one with no issues.


Use openwebui with ollama.

Openwebui is compatible with the Firefox sidebar.

So grab ollama and your preferred model.

Install openwebui.

Connect openwebui to ollama.

Then in Firefox open about:config

And set browser.ml.chat.provider to your local openwebui instance.

Google suggests you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.
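If you want to sanity-check the local model before wiring up the sidebar, here's a minimal sketch using only the Python standard library, assuming ollama's default port and an already-pulled Gemma model (the model name is a placeholder):

    # Confirm the local ollama server answers before pointing Firefox at it.
    import json, urllib.request

    payload = {"model": "gemma2", "prompt": "Say hi", "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # ollama's default endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])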


The default AI integration doesn't seem to support this. The only thing I could find that does is called PageAssist, and it's a third-party extension. Is that what you're using?

https://addons.mozilla.org/en-US/firefox/addon/page-assist/


My mistake, I left a step out. Use openwebui with ollama. Openwebui is compatible with the Firefox sidebar.

So grab ollama and your preferred model, and install openwebui.

Then open about:config

And set browser.ml.chat.provider to your local openwebui instance.

Google suggests you might also need to set browser.ml.chat.hideLocalhost to false, but I don't remember having to do that.


The web is extremely user-hostile. The necessity of ad blockers is just one example of this; social media feed algorithms that maximize engagement at the cost of mental health and political unrest are another.

I think there is a ton of potential for having an LLM bundled with the browser and working on behalf of the user to make the web a better place. Imagine being able to use natural language to tell the browser to always do things like "don't show me search engine results that are corporate SEO blogspam" or "Don't show me any social media content if its about politics".


We both know this is never going to happen in mainstream browsers; they'll just keep shoving AI in your face until you become dependent on it.


There's a big difference between having access to AI tools and baking AI into everything by default


If you want a short answer: most people don't.

But a more nuanced answer is: the term "AI" has become almost meaningless, as everything is being marketed as AI, with startups and bigger companies doing it for different reasons. However, if you mean the GenAI subset, then very few people want it, in very specific products, and with certain defined functionality. What is happening now, though, is that everybody and their mum is trying to slap it everywhere and see if anything sticks (spoiler: practically nothing does).


AI is fine for phones and consumer operating systems; you don't have to use the features, but they are there for you.

However, I think there is demand from at least one person (me) for a Linux system with no AI whatsoever. Firefox could make itself the browser of choice for the minority that doesn't want any AI. Sure, you can configure it to be AI-free, but that is a bit like being vegan at a meaty restaurant where you can always spit out the meat.

Firefox has been struggling of late, and they don't do scoped CSS, which makes it as good as IE6 to me, but I think they could get their mojo back by being cheerleaders for the minority that has decided to go AI-free. This doesn't mean AI is bad, but there is a healthy niche there.

Apart from anything else, there are new browsers like Atlas that are totally AI. I would say that an AI-enabled Firefox is not going to compete with Atlas, but AI-free is a market that Firefox could dominate.

There is going to be a growing market for no AI. In my own case, my dad was 'pig butchered' by an AI chatbot and died penniless, so I have opinions on AI. Sam Altman would not want to meet me on a bad day, unless he has some AI that specialises in extreme ultraviolence.

Then there is an ever-growing army of people who have lost their jobs to AI, only to get nothing but rejections from AI-powered job boards.

Then there are those who have lost friends to AI psychosis, and those who have no water and massive utility bills due to AI data centers. The list goes on!

Sounds like I need to put together an AI-free operating system with an AI-free browser, for those who have their own reasons for resenting AI!


Three months ago I was annoyed by the "let me translate the page for you" prompt, and last week, on vacation, I was browsing some local website and was more than happy to have Firefox translate it dynamically. The result was okay-ish, but okay enough that I was able to proceed. And I'm more than happy that it never left my mobile device.


Many years ago now, Mozilla hired an adtech exec to run the show and I arrived at the conclusion that Firefox would be staffed by for-profit thinkers in direct conflict with their non-profit foundation. That’s the moment I stopped donating. I daily drive Orion as even though Kagi is for profit, they have managed to keep trash out of their free version.


More and more people now start with an AI assistant instead of traditional browsing, not because they love AI everywhere, but because it's simply faster than navigating websites. The shift is already visible: assistants can surface structured information directly, and they're beginning to prioritize citations, so sources that are clear and machine-readable get more visibility.

If the web doesn’t adapt, a lot of high-quality content will slowly disappear from the “AI layer” of discovery.

We’re trying to document this shift here: https://github.com/ai-first-guides/first.ai/blob/main/docs/i...


If you want to ask LLM about the page you're on, rather often you CANNOT just paste a link: a lot of publicly accessible documents are blocked for AI assistants. So give-LLM-access-to-the-thing-I'm-now-looking-at is quite useful.


Copilot completions in VS Code are pretty great, and I think a lot of people are happy with that.

In general I agree with you: adding an AI chat window to an app that isn't an AI chat app is almost always a detriment. But I think it's shortsighted to assume there won't be other important use cases for AI, and we're in the experimentation phase right now, where companies are trying to learn what that looks like. It's just unfortunate that there's so much incentive for apps to frame their AI chat as the best new thing ever that you should really use, instead of introducing it more subtly.


I love that thing specifically: Copilot tab completions in VS Code.

It makes my life SO much easier (less time spent editing config files, less chance of a silly typo while writing scripts).

It definitely has its place.


I agree

I like to keep AI at arm's length; it's there if I want it but can fuck off otherwise

Lots of people really do seem to want it in everything though


That's fine. My gripe here is that Firefox, Google, etc. try to force this onto everyone. If I could, I would just disable the crap AI, as I don't need or use or want it. But we are not given an easy option here; the Google "opt-out" is garbage. I actually had to install browser extensions to eliminate the Google AI spam, and they work better than the "options" Google gives us. I rarely use Firefox, so I can't even be bothered to install an extension there, but I know that I don't need any AI crap from Firefox/Mozilla either. People are no longer given a choice. The big companies and organisations abuse people. I have said for years that we, the people, need to take back control over the world wide web. That includes the UI.


We are just in a weird transitory period right now so it is all shitty implementations replacing what we are used to.

I am semi-confident that LLM-backed interfaces will be the future of many UIs, though. When it works, it's just a way better UX: a smart chat instead of a <form>, or instead of crawling through pages of search results, is simply nicer.

It is bridging the gap between the hard data computers use and the generalized way humans communicate.


What I want is a thesaurus, dictionary and translator easily available anywhere (eg. in Spotlight on Mac – but Apple has of course stubbornly refused to make Spotlight more useful for some unfathomable reason). Those tools don't need an LLM, though, just calls to good old human-curated databases. Currently I make do with Firefox search keywords, but the workflow could be smoother.


Have you considered Raycast? I don’t use it for those features myself, but it is extensible and has a large community, so even if it can’t do those things by default, I’m sure you could configure it to.


Yes.

I want AI in my email to speed up (and avoid typos) in replying.

I want AI in my news feed to pull the topics that are interesting to me.

I want AI in online shopping to filter and recommend products by complex conditions.

I want AI in my car to make me safer.

I want AI in my calendar to schedule with a minimum of interruptions.

I want AI in my work chats to answer questions that people have already asked me.

I want AI to make clinical diagnoses more accurate.

I want AI for a thousand things and most people do, or will.


Looks like the comments on Mozilla Connect are not that positive either:

Building AI the Firefox way: Shaping what’s next together - <https://connect.mozilla.org/t5/discussions/building-ai-the-f...>


For simple searches it really is better, as long as you don't take the answers as fact. Certain search terms (most?) cause you to be bombarded with ads. Try asking a search engine about the details of a particular bond; the answer won't even be on the second page.

The AI gives you sources you can check if you need the answer to be right. That's still better than a Google search in many cases.


The only AI product where I've seen a meaningful quality-of-life improvement is the set of AI features in DaVinci Resolve. They do things like detect music beats, automatically level audio, transcribe and detect audio, allow seamless redubs of flubbed voice lines with a good facsimile of the original voice, handle motion tracking, and more.

Most (all?) of it runs locally too


Yep. I use it a lot. It's nice when you're getting started on some new topic, and as someone whose attention bounces and then sticks hard for a while, it has made getting started on topics much faster for me. I personally do want it in a browser, because that's... pretty much the only way I use LLMs.


That's what I've been saying for some time now... "No one really wants AI! They want their software to be faster or better, but do they care if the people in the background of the image have been removed by AI or not?" And in fact they don't.

Managers think they want AI but they actually want their people to work faster or better. Higher managers think they want AI so they can save money, or at least not fall behind the competitors, if those were to use AI to get an advantage.

Companies making software think they want AI because their competitors are using it, and they think the users want AI so the software can be perceived as modern, not falling behind.

And so on, and so on... other than Nvidia, openAI, Anthropic, etc, no one really wants AI.


My mom recently praised the Brave AI summary of a webpage, so who knows; the usage might be higher than we think.


Loads of people are using Google's AI summaries; they're the first result, so, hard to miss.


I used to hate Twitter when it first launched because I thought short form text was stupid, now I see everything will become summaries with AI and nobody will ever read anything meaningful.


It could be something of an historical return to form; a small class of properly educated people and then the wider, semi-literate masses.


I'm "properly educated" by most definitions, 95% of web pages are garbage and a summary is fine. Also I imagine you frequently read summaries of books and movies and many other things before deciding to read or watch the entire work.


>95% of web pages are garbage and a summary is fine.

Mmm, summarized garbage.

>Also I imagine you frequently read summaries of books

This isn't what LLM summaries are being used for however. Also, I don't really do this unless you consider a movie trailer to be a summary. I certainly don't do this with books, again, unless you think any kind of commentary or review counts as a summary. I certainly would not use an LLM summary for a book or movie recommendation.


Communicating in pictographs


That should be the next step. It takes too much time to read a summary, so the result should be a summary picture! Text-based image generation is quite good now. What would you call this ChatGPT feature?


Gotta love them emojis


If someone wanted to do this for whatever reason, there's actually a language that can be written exclusively in emojis. It's called toki pona, and while emojis aren't the standard writing system, there have been several proposals. It works well since toki pona has a very small vocabulary (only around ~150 words, iirc).


There is plenty of text for which a good summary will have a far higher ratio of meaning to words than the original.


Did you write a comment like this last time a recipe clipper got posted here?


Imagine putting down your AI-assisted smartphone to look up at the computer screen and minimize your AI-assisted vscode, glance past the Windows-integrated Copilot AI, open up firefox and move your mouse past the built-in AI search... only to go to chatgpt.com


The AI tools can be an amazing upgrade over normal search boxes. I rarely let Claude write any code, but I get a lot of value out of pointing it at an unfamiliar repo and asking it to track down which files contain the code I’m looking for and to summarize how the pieces fit together.

There are also a lot of subtle AI tools that aren’t in-your-face LLM prompts that flatter you with “Excellent question!”. It’s great having my photo library automatically annotated so I can search for things like “moose” and it will bring up that picture of the moose we saw, rather than me having to remember what year it happened and scroll through photos until I find it.


Good question.

I like to have AI only when I specifically want it. Usually I just code in Emacs. If I specifically want help with something then for an IDE experience I will use the TRAE coding agent. For command line, I will use gemini-cli or codex. I like to use AI coding help 4 or 5 times a week. As an example, today I wanted some Python code that used a few libraries converted to Common Lisp (using several popular CL libraries). TRAE one-shotted this for me in two minutes. I think it would have taken me over 20 minutes to write it myself.

AI is OK for easy stuff you can do yourself, and save time.

The book Atlas of AI tells a good narrative about the natural resources used for AI, BTW.


I really don't mind having AI-driven features — if it's an improvement.

Turns out that "if" part is fantastically difficult for some types to fathom, and what we're all experiencing now is just the same add-on tech-stench that has been typical of every digital era before us:

1970s: Calculators, calculators, calculators!

1980s: Miniaturised, digital quartz clocks anywhere they can fit.

1990s: Wouldn't this toaster be better... WITH AN LCD SCREEN?

2000s: MP3 players must outnumber the human population. No object or space should be without shitty, tinny music.

2010s: This easy-to-use device would be wonderfully enshittified by removing all of the buttons and switching to a touchscreen aka "Smart"-appliances.

2020s: AI, AI, AI!


Yes, exactly. AI has its uses and can sometimes be extremely useful. But at this point it's not nearly as ubiquitously useful as various companies would want us to believe, based on how much they're forcing it on us, pushing it in our faces, shoving it down our throats, etc. I don't want that. I'll use it if and when I want to, thank you very much. Microsoft is of course the worst offender.


What, you don't want AI in your "mv" command, so that it guesses where you want your file moved to, rather than having to type in the destination? Personally I hate the extra keystrokes -- I think we should automate it to run in the background constantly, to make file relocation more efficient.


I use it for summarization constantly. I made iOS/mac shortcuts which call Gemini for various tasks and use them quite often, mostly summarization related.
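For the curious, such a shortcut boils down to a single API call. A sketch with the google-generativeai Python package; the API key, model name, file, and prompt are all placeholders:

    # Summarize a saved article with Gemini; key, model, and file
    # names are placeholders.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")
    text = open("article.txt").read()
    resp = model.generate_content("Summarize in three bullet points:\n" + text)
    print(resp.text)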


How do you know its summaries are correct?


You already know that they aren't. Yesterday my wife and I were discussing Rønja Røverdatter. When we were kids it used to have a Danish voice-over, so you could still hear the original Swedish audio as well. Now it has been dubbed, and we were talking about the actor who voices Birk. Anyway, we looked him up and found out he was in Blinkende Lygter, which neither of us remembered. So we asked Gemini, and it told us he played the main character's child actor in the flashbacks... except he doesn't, and to make matters worse, Gemini said he played Christian, the young Torkil. So it even got the names wrong. Sure, this isn't exactly something Gemini would know, considering Rønja Røverdatter is an old Astrid Lindgren novel that was turned into a film decades ago, and Blinkende Lygter is a Danish movie from 20-ish years ago in which Sebastian Jessen plays a tiny role. Since they are prediction engines, though, they'll happily give you a wrong answer, because that's what the math added up to.

I like LLMs. I've even built my own personal agent on our enterprise GPT subscription to tune it to my professional needs, but I'd never use them to learn anything.


I've done some summarizing with my own small Tcl/Tk-based frontend that uses llama.cpp to call Mistral Small (i.e. everything is done locally), and I do know that it can be off about various things.

However, 99% of the time I use this not because I need an accurate summary but because I've come across some overly long article that I don't even know if I'm interested in reading, so I have Mistral Small generate a summary to give me a ballpark of what the article is even about, and then judge whether I want to spend the time reading the full thing or not.

For that use case I do not care if the summary is correct, just whether it is in the ballpark of what the article is about (from the few articles I did end up reading, the summaries were close enough to make me think it does a good enough job). Even when it is incorrect, the worst that can happen is that I end up not reading an article I might have found interesting; but that's what I'd have done without the summary anyway, since I need to run my Tcl/Tk script, select the appropriate prompt (I have a few saved ones), copy/paste the text, and then wait for the thing to finish, so I only use it for articles I'm already biased against reading.
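The core of that flow is only a few lines. Here's a sketch of the same summarize-before-reading step against a local llama.cpp server (llama-server, default port 8080) via its OpenAI-compatible endpoint; the file name and prompt are placeholders:

    # Ask a local llama.cpp server for a short summary of an article.
    import json, urllib.request

    article = open("article.txt").read()
    payload = {
        "messages": [
            {"role": "system", "content": "Summarize this article in three sentences."},
            {"role": "user", "content": article},
        ]
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])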


It's a good question. I'm not the OP, but I'd like to add something to this discussion.

How do I know what I'd be reading is correct?

To your question: for the most part, I've found summaries to be mostly correct enough. The summaries are useful for deciding if I want to dig into this further (which means actually reading the full article). Is there danger in that method? Sure. But no more danger than the original article. And FAR less danger than just assuming I know what the article says from a headline.

So, how do you know its summaries are correct? They are correct enough for the purpose they serve.


You can make a better decision if you have the context of the actual thing you are reading, both in terms of how it's presented (the non-textual aspects of a webpage for instance) and the language used. You can get a sense of who the intended audience might be, what their biases might be, how accurate this might be, etc. By using a summarizing tool all that is lost, you give up using your own faculties to understand and judge, and instead you put your trust in a third party which uses its own language, has its own biases, etc.

Of course, as more and more pieces of writing out there become slop, does any of this matter?


For most things it doesn't matter, as long as its usually correct enough, and "enough" is a pretty low bar for a lot of things.


Can you give an example? And how would I know the LLM has error bounds appropriate for my situation?


> Can you give an example?

Recipe pages full of fluff.

Review pages full of fluff.

Almost any web page full of fluff, which is a rapidly rising proportion.

> And how would I know the LLM has error bounds appropriate for my situation?

You consider whether you care if it is wrong, and then you try it a couple of times, and apply some common sense when reading the summaries, just the same as when considering if you trust any human-written summary. Is this a real question?


"Get me the recipe from this page" feels like a place where I do really care that it gets it right, because in an unfamiliar recipe it doesn't take much hallucination around the ingredients to ruin the dish.


I guess I never come across that situation, because I just don't engage with sources that fluff. That is a good example, but presumably there should be no errors there, because it's just stripping away unnecessary stuff? Although you would have to trust that the LLM doesn't remove or change a key step in the process, which I still don't feel comfortable doing.

I was thinking more along the lines of asking an LLM for a recipe or review, rather than asking it to restrict its answer to a single web page.


Doesn't matter if they get it wrong sometimes. So do human writers.


Most recipe blogs have a "skip to recipe" button because they know you don't care.


Enough don't.


DuckDuckGo has a great tool for dealing with those ones: "Block this site from all results".


That doesn't get me their content.


Why would you reward those sites with clicks? Just go to the sites that actually respect your time.


Because I can get content I want there, and with a summarisation option it's irrelevant to me whether they "respect my time," because it doesn't take any more time for me to get at the actual recipe.


Because they mostly are, and even if not, it doesn't usually matter.

For example: you summarize a YouTube link to decide whether its content is something you're interested in watching. Even if such summarizations are only 90% correct 90% of the time, it is still really helpful; you get the info you need to decide whether to read/watch the long-form content or not.


How do you know they want a correct summary? AI slop is good enough, acceptable for many people.


What is the use of such a summary?


Determining whether something is worth reading doesn't require a good summary, just one that contains enough relevant snippets to give a decent indication.

The opportunity cost of "missing out" on reading a page you're unsure enough about to want a summary of is not likely to be high, and similarly it doesn't matter much if you end up reading a few paragraphs before you realise you were misled.

There are very few tasks where we absolutely must have accurate information all the time.


What are you constantly summarizing?


Articles. Some articles I fully read, some others I just read the headline, and some others I want to spend 2 minutes reading the summary to know whether I want to read the full thing.


I want the LLM outside of the apps, telling the apps what to do on my behalf and gathering information from them privately towards doing what I ask it or answering questions I have for it.

If an app is a gateway to a bunch of data, it's cool to be able to "talk" to that data via any built-in LLM-based stuff, but typically the app is just a frontend anyway in that case, so the app isn't really needed.


I'm looking for AI features in a few places; one example is a git client that can draft commit summaries and descriptions. This should be a very simple task and could use a simple on-device model. It feels like AI features are a firehose, though: you either get none, and a very 2018 product experience, or a complete rework of everything "designed for agents".


It's magic when it's optional.

I've vibe coded a few Godot games. It's all good fun.

But now everything is forcing it. Google is telling people what rocks are tasty, on Reddit bots are engaging with bots.

From what I can tell the only way to raise VC money is by saying AI 3 times. If the ritual is done correctly a magic seed round appears.

As they say, don't hate the player, hate the game.


Well said. I use AI often, but I don't want it "in" any other tool. It's annoying and in my experience, tools get worse the more "intelligent" they try to be. They get in your way and become unpredictable. I want silent, dumb, perfectly deterministic interfaces. And guessable, too (if that's a word?)


I'm extensively using the ChatGPT web UI. I don't use the Claude Code CLI much, but I see value in it. I've used Copilot in the past; recently I stopped, but I can see the value.

Other than that, I don't think I'd be happy to see AI anywhere else. I pretty much don't want AI in my operating system or browser.


It's great in Photoshop. Removing a background has never been easier.

There are certainly lots of great use cases; the problem is that everyone is shoving it everywhere because they don't want to feel behind the times, and for every great use case there are several instances where it accomplishes nothing but making the UI worse.


I think AI can add a lot of functionality, but on the margins, making things "work better". I think AI as a focal point, in the sense that it is The Feature, is a mistake for most things. But making code completion work better, or suggestions more accurate? Those are things that are largely invisible UI-wise.


I don't mind AI. It can be pretty helpful sometimes in surprising ways. I just deeply dislike spywares.


I'm not a big user of LLMs, but instead of AI in everything, I'd like to see more web services and local software offer APIs that LLMs (and my own code) can access. Hopefully, "embedded AIs" only become as prevalent and required as "embedded browsers."


For me, it's exactly what you said...asking specific questions. That's what I use search engines for and much of the time I'm online, it's asking questions and seeking answers. And, as near as I can tell, AI does that very well and very fast.


> Does anyone want AI in anything?

Absolutely. I want a browser with AI -- just not the browser Mozilla wants to build. I want my browser to use AI-based adblocking and content filtering. I want my AI browser to notice when the site sends some stupid sticky high Z-index thing down the pipe and just quietly not show it to me at all. I want my AI browser to automatically detect cookie dialogs and click "Reject All" and if that option isn't available, I want it to parse the "Cookie Preferences" page and click all the buttons that equate to "Reject All".

I want an AI layer in my phone that spoofs my location and my contacts so that apps that insist on seeing those things see fake data that nevertheless looks plausible.

Best of all, I want the AI agents in my browser and my phone to do their work without leaving any trace of their activities so that the server on the other end cannot tell that I even have an AI agent at all.

Most of the above is possible now but it requires a plethora of different tools that are not cleanly integrated. And no VC is going to pay you to build such an integrated tool because it would not create a continuing revenue stream or a continuing stream of harvestable data compromising the user's privacy.

We are a very fucked-up industry.


That was part of what made the announcement of the Steam Machine such a joy - not one mention of it in sight. I suppose you could install Ollama on it, but where's the fun in that?


I would love it if it could clean up my mp3 collection.

I know there are tools where you can do it yourself, but it is a hellish mess. I just move the collection from drive to drive through the decades, waiting for the day that tool comes along.


Exactly. For me it only makes sense when it is transparent: voice-controlled devices, handwriting recognition, better IDE tooling, and such.

Unfortunately, gotta meet those KPIs.


It's been a habit of mine for more than a decade to deactivate any feature with "smart" in its name by default. I want my machines to be predictable and stupid.


AI is useful. I use Claude code every single day for building software.

I also wouldn’t want to go back to only web search for finding things out. Search engines are generally inferior.


Right, I don't want AI integrated in my mobile phone either and just use it when I understand the information I am "leaking" (e.g. prompting)


> Does anyone want AI in anything?

AI ad blocking might be nice.


Gemini in GCP for SQL queries is mildly useful, as I’m rusty and forget how to write but generally know enough not to run a query that mutates the db.


"How do I get a list of every resource in a project"

"How do I change the resource limits for CPU core count"

Beyond that I've never used Gemini for any actual purpose.


Yeah, basically.


I work on a traditional product that has a lot of AI touch points and people use them, a lot. Like, a lot more than other newly introduced features.


The new similar sound search in Ableton Live is handy. It's getting stem separation soon and I expect to get some use out of it.


I want it. They just need to figure out the UX. Chat ain't it. I'd love voice chat for accessing the web.


I am not against AI in Firefox. The thing is "what AI".

For example, translation can be considered AI, and I find it very useful; it is local, too. Other AI features that could be nice would be speech-to-text, text-to-speech, advanced spellchecking, text autocomplete, etc. Bonus points if local models are used. I also see nothing wrong with having an "ask an LLM" entry in the right-click menu like you have for search; I think it is a common enough thing for people to do.

The problem with many AI features in software is that they serve no purpose besides "hey look, we have AI". Usually in the form of some button or text field that is always visible and does nothing more than prompt a poorly tuned LLM.


Sure people do. Idiots are people right?


Honestly, I have never gotten any real benefit from AI. I've tried it on multiple occasions, but compared to my pre-AI life, AI has not really improved anything.

AI is basically just a shortcut to Wikipedia, and I always have to double-check any AI response anyway, making it kind of useless.


Yes. I want AI in some things (some)


What is it called when capitalism is this constant fight to push out all cooperation or collaboration? Why can't Firefox just support whatever extension features would allow other people to create this AI trash, so anyone who WANTS it can install it?

It's this constant fight where everyone must CAPTURE all revenue opportunities, at the cost of a complete, overwhelming tsunami of bad, forceful decisions imposed on users, all just in case it's an actual revenue stream they could be missing out on, before even knowing if a single user gives the slightest shit about it.


I probably do, but I want it to work as some magic behind the scenes, not as an embedded chat app that I have to type into.

A sandboxed LLM ad block or filter could be handy, for instance.


I want AI in all my development and entrepreneurial tools


Customers want humans to perform each menial task, while paying almost nothing for that privilege, so that they can have the satisfaction of screaming at someone when a mistake is made.


Quite. The last thing I want is opinionated software that might mess with the end product of whatever I'm working on, searching for, etc. Digital computing has the capacity to give us complete predictability, & those in charge of building it seem to want to prevent users from having it.

It's bad enough what Google did to search; a future where the only thing you get back is a) what the machine allows you to see or create (which may be determined by the built-in agent or by the programmers); b) what the machine wants you to see, & modified to be in line with its whims; & c) hallucinated slop where it is difficult to determine what is real, what is human-originated, & what is constructed out of whole cloth.


>Does anyone want AI in anything?

Well, yes. It's extremely useful. However, the hype bubble means it's getting added everywhere even when there's not a clear and vetted use case.

It works really well for navigating docs as a super-charged search--much better at mapping vague concepts and words back to the official terminology in the docs. For instance, library Z might have "widgets" and "cogs" as constructs, but I'm used to library A which has similar constructs "gadgets" and "gears". I can explain the library A concepts and LLMs will do a pretty good job of mapping that back to the library Z concepts--much better than traditional search engines can do.


I can't even open Adobe Acrobat anymore. There's AI shit in almost every corner and toolbar of the app.

This shit makes me want to stop interacting with tech altogether and live on a farm. I don't know how much more of this I can take.


They need to access your data somehow


> Does anyone want AI in anything?

Yeah, they do. Go talk to anyone who isn't in a super-online bubble such as HN or Bsky or a Firefox early-adopter program. They're all using it, all the time, for everything. I don't like it either, but that's the reality.


> They're all using it.

Not really. Go talk to anyone who uses the internet for Facebook, Whatsapp, and not much else. Lots of people have typed in chatgpt.com or had Google's AI shoved in their face, but the vast majority of "laypeople" I've talked to about AI (actually, they've talked to me about AI after learning I'm a tech guy -- "so what do you think about AI?") seem to be resigned to the fact that after the personal computer and the internet, whatever the rich guys in SF do is what is going to happen anyway. But I sense a feeling of powerlessness and a fear of being left behind, not anything approaching genuine interest in or excitement by the technology.


If I talk to the people I know who don’t spend all their time online, they’re just not using AI. Quite a few of my close friends haven’t used AI even once in any way, and most of the rest tried it out once and didn’t really care for it. They’re busy doing things in the real world, like spending time with their kids, or riding horses, or reading books.


I talk to an acquaintance selling some homemade products on Etsy, he uses & likes the automatically generated product summary Etsy made for him. My neighbor asks me if I have any further suggestions for refinishing her table top beyond the ones ChatGPT suggested. Watching all of my coworkers using Google search, they just read the LLM summary at the top of the page and look no further. I see a friend take a picture, she uses the photo AI tool to remove a traffic sign from the background. Over lunch, a coworker tells me about the thing she learned about from the generated summary of a YouTube video.

We can take principled stands against these things, and I do because I am an obnoxiously principled dork, but the reality is it's everywhere and everyone other than us is using it.


Being busy riding horses and reading books are both niche activities (yes, reading too, sadly; beyond a very small number of books, it does not translate into people spending more than a tiny fraction of their time on it), which suggests perhaps your close friends are a rather biased set. Nothing wrong with that, but we're all in bubbles.


Way off. I've polled about this (informally) as well. Non-technical people think it's another thing they have to learn and do not want to (except for those who have been conditioned into constant pursuit of novelty, but that is not a picture of mental health or stability for anyone). They want technology to work for them, not to constantly be urged into full-time engagement with their [de]vices.

They are already preached at that they need a new phone or laptop every other year. Then there's a new social platform that changes its UI every 6 months or quarterly, and now similarly for their word processors and everything.


> I've polled about this (informally) as well.

This is kinda like how if you ask everyone how often they eat McDonald's, everyone will say never or rarely. But they still sell a billion burgers each year :) Assuming you're not polling your Bsky buddies, I suspect these people are using AI tools a lot more than they admit or possibly even know. Auto-generated summaries, text generation, image editing, and conversation prompts all get a ton of use.


Only if you are assuming I am asking so directly...


> They're all using it, all the time, for everything

Do you know someone? Using Firefox nowadays is itself a "super-online bubble"


I actually use Chrome over Firefox largely because of a couple of 'AI' features, though they aren't really chatbot-slop AI. The built-in Google Translate is very handy (I know there are add-ons for Firefox, but they don't work well for Twitter etc.), and Google Lens is also very handy, especially for text in image form.

I guess the key is being not in your face when you don't want them, and actually useful.


I assume that most of the resource usage only kicks in once you start querying the AI. That being said, the intrusiveness and general lack of utility or consideration is certainly irritating. I recently saw code completion options in my Chrome devtools console, and in Postman. Sigh.


I'd upvote this a hundred times. It's gotten to the point where, when I see a UI element, text, or email subject featuring those irritating twinkling emojis that are supposed to indicate something between "magic" and "incredible speed," I feel physical uneasiness. Maybe it's precisely because of the contradiction these symbols now stand for.

Recently we purchased an .io domain for a product we're working on. Guess what: a few days later there comes an e-mail, with that twinkly crap at the start, suggesting that a ".com" domain for the same name is available, and at a rather low price! Gasp! So I look it up... well yeah, it is a .com alright. But missing the bloody last letter of our name.

Such is the crap that you get out of those LLMs: always incomplete, always missing something. And this is increasingly the sentiment in the tech-professional community: no thanks, we don't want you to keep feeding us your slop, the billions you've already burned into nothing be damned!


The AI companies do. They want us to train their AIs for free by using them, and in many intensely gaslighty cases, as you've seen, they even make us pay for it! What a powerful market, though.


Step 1. Integrate AI

Step 2. ???

Step 3. Profit


No one I've spoken to is happy with the AI shove. It's great to see people finally really speaking up and saying no. The bubble is getting close to popping.


How dare you deny your AI overlord an opportunity to train itself further on your data for free?!


> Does anyone want AI in anything?

I most definitely do.

I want to be able to type into Finder on my Mac to rename all the files a certain way, without spending 10 minutes figuring out the right regex for it.

I want to be able to type into Firefox to go through 50 different versions of the current URL, using a different US state parameter for each, and download the table it shows into a single combined CSV with an added column for "state".

Every day there's 20 things like this. I absolutely want everything in my OS and browser to be exposed to an LLM that can do everything so much faster. Without the intermediate stage of having it write a script to do it. It would save so much time.

Unfortunately we're not quite there yet because the GUI programs we use haven't exposed all the views and actions. But hopefully soon!
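For comparison, the "intermediate stage" being skipped is usually a throwaway script like the one below; a hypothetical sketch where the filename pattern is entirely made up:

    # One-off rename of the kind an LLM currently generates for you:
    # turn "IMG_20240101_beach.jpg" into "2024-01-01 beach.jpg".
    import os, re

    for name in os.listdir("."):
        m = re.match(r"IMG_(\d{4})(\d{2})(\d{2})_(.+)\.jpg$", name)
        if m:
            os.rename(name, f"{m.group(1)}-{m.group(2)}-{m.group(3)} {m.group(4)}.jpg")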


Literally this. I don't know why this hasn't been worked on; it's the low-hanging fruit with the most utility. Maybe that's why: it'd be actually useful.


Just half an hour ago I needed to extract some text from a Notion page as JSON, so I popped the URL into Claude Code and told it to use Playwright to extract the fields. I'd prefer to have this in Firefox, but the Firefox AI sidebar doesn't provide much meaningful integration. (I'm sure there are extensions, and I will probably look for one later, but the Playwright MCP server provided what I needed for now.)

So, yes, I want AI in "everything".

And it's not a waste of resources if it's not triggered automatically.
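For anyone curious, the manual version of that extraction is short. A sketch with Playwright's sync API (pip install playwright, then playwright install chromium); the URL and selectors are placeholders, not the actual Notion page:

    # Pull headings from a page and print them as JSON.
    import json
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/some-notion-page")
        fields = page.eval_on_selector_all(
            "h1, h2, h3", "els => els.map(e => e.textContent.trim())"
        )
        print(json.dumps(fields, indent=2))
        browser.close()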


The person you're replying to noted that there will be "edge cases." Your response exemplifies this.

In fact, I'd say you're an edge case's edge case. There should be a word for that. Maybe "one-off."


I don't think it exemplifies that at all. Using Playwright absolutely is, but that was my niche fallback to the lack of an integrated AI solution.

The use case, generalised as "pull some information from a web page," is far less niche, and I'd argue extremely common.

I know a lot of people - including non-technical people - who spend a lot of time doing that in ways ranging from entirely manual to somewhat more sophisticated, and the more technically knowledgeable of those have started looking for AI tools to help them with that.

To the extent users "don't want" AI available for things like this, it is mostly because they don't know AI could help with this.

E.g. just a few days ago, I had someone show me how they painstakingly copied, column by column, from the exact same Notion site I mentioned into a Google Sheet, without realising it was trivially automatable. Or rather: trivially automatable for a technical user like me. But it could be trivially automatable for anyone, with relatively little integration effort in the browsers.



