Counter argument ... at what point is selling software still profitable?
I am still running Office 2007, and that thing is now almost 20 years old. That was a one-time sale, with no other revenue for Microsoft since.
I am not condoning subscriptions, but selling software for a one-time fee only works well if you're a small team with low overhead. The more you sell, the more support becomes an issue. And normal customers do not pay for support.
Making software has become easier with LLMs, but the same problem remains when it comes to support. Sure, you can outsource that to LLMs, but let's just say that is problematic (being kind).
So unless you plan on making software that does not need heavy support/updates, and keep your solo/team costs low...
Say you sold a program for a one-time fee of ... $39.
What if somebody now sells the same thing for $29 with LLMs? And the next guy in China does it even cheaper because his overhead is even smaller. Eventually you end up with abandonware, where software is made just to eat sales from the bigger guy and that is it.
Unless you focus on companies, who have far less issue paying for subscriptions (if support is included). You see the issue. People tend to overlook the cost of actually being self-employed or running a company (and that cost becomes MAJOR the moment you need to hire somebody).
So no, I do not see subscriptions going away, because companies will pay for them. And on the normal consumer level, is paid support the solution?
I also buy the argument that a lot of the time people are actually paying for cloud storage. While I'd love to see a generic protocol for cloud or self-hosted storage that every app can sync to, I expect we'll continue to see subscription software persist by locking down and gatekeeping cloud storage and sync, too.
But really I would be happy for that to go away.
I don't use much software that's sold in any way[0], and I'd prefer it to be none. The ideal situation is for it to always be better to collaborate on open source software than to build in private and keep it to yourself.
[0] I do donate to projects I like and use, though
>The ideal situation is for it to always be better to collaborate on open source software than to build in private and keep it to yourself.
This works for some software (developer tools are the prime example) but not so much for other things. Who is going to maintain the MTD software I used for my VAT returns without recompense? Who is going to update the PAYE payroll software I relied on?
Even with developer tools I feel we will lose something without companies like JetBrains. There would be no Kotlin without people paying for their software.
That's before you think about huge corporations leeching off of our free work or the AI companies vacuuming up open source only to regurgitate it for $200 a month.
Developers should consider that most people value things by how much they pay and if they aren't paying anything then you and your work can't have much value.
> Developers should consider that most people value things by how much they pay and if they aren't paying anything then you and your work can't have much value.
Most people in most situations pay as little as they can get away with, not as much as they value the product or service.
The only time this is arguably somewhat untrue is when the point of having the thing is to signal wealth, but even someone buying e.g. a Rolex wants to do so as cheaply as possible, so it's only really true when they're directly spending the money in front of people (think bar, restaurant, nightclub, etc.)
I agree that right now it's mostly developer tools that are doing best in terms of open source. But browsers, operating systems, 3D modelling software, image/photo editing, and many others are not so far behind either.
My assertion/belief[0], though, is that the direction of travel is for open source to become dominant in more and more classes of software, especially as AI reduces the cost of contribution and collaboration, and disincentivises closed, proprietary software.
[0] Based on what I and others around me have been able to do with AI already and how fast it is moving.
> I think you overestimate the ability of AI to write perfectly secure apps. Humans can't do it, and AI is trained on their work.
Ironically, AI tends to be better at securing code, because unlike the squishy human, it is much more capable of creating tons of tests and figuring out weaknesses.
That's before you get to the issue of lots of meatbags with different skill levels working on the same codebase.
I have barely seen any codebase that has been in production for a long time that did not have glaring issues.
But if you try to do a code audit, you're paying for somebody's time (assuming this is a pro), and for a long time. Whereas an AI, with the correct hints on what to look for, can do insane amounts of work, testing, etc...
Ironically, when you security-test a codebase with multiple different LLMs, you get a very interesting list of issues they can find. Many of them are probably present in tons of production-level software.
But it's up to you, as the person instructing the LLM, to actually tell it to do regular security audits of the codebase.
> Ironically, AI tends to be better at securing code, because unlike the squishy human, it is much more capable of creating tons of tests and figuring out weaknesses.
Sentences like this make me think AI is honestly the best thing that happened for my imposter syndrome. AI is great for simulating test cases, and that's it. If you leave it alone, it writes the most basic, useless tests (I mean, half of them might be useful when you refactor, but that's about it). It can't design reusable test components and has trouble with test doubles, which I would think is the easiest test case for AI. Even average devs like me write test doubles faster than AI, and I'm shit at writing tests.
AI is also extremely bad at understanding versioning, and will use a deprecated API for no reason except increasing the attack surface.
AI is great for writing CLI scripts, boilerplate and autocomplete. I use it for frontend because I'm shit at it (even though I have to clean up its mess afterwards), and to rewrite small pieces of functionality from libraries I want to avoid loading (which allowed us to remove legacy dependencies). It's good at writing prototypes (my main use nowadays), and a very good way to use it is to ask it for a plan to improve/refactor your code (it's _very_ bad at doing the refactoring itself, but since it recognizes patterns, it is able to suggest interesting refactors. Half the time it's wrong, so use the "plan" mode).
I'm on a network security and cybersecurity tooling team; I guarantee you AI is shit at securing code (and at understanding networks).
Frankly, I feel like the people downvoting my comment are still using older LLMs. When Opus 4.5 entered the picture, there was (for me) a noticeable improvement in the way the LLM interacted with the codebase and in the issues it was able to find.
I ran Opus on some public source code, and let's just say the picture was less rosy for the whole "humans as the security layer" idea.
I understand people have an aversion to LLMs, but it rubbed me the wrong way to see the amount of downvotes on here just because people disagree with an opinion. It's starting to become like Reddit. As I stated before, it's still your task, as the person working with the LLM, to guide it on security practices. But as somebody now 30 years in the industry, the amount of absolute crap I have seen produced as code (and the security issues), makes LLMs look frankly like security wizards.
Stupid example: I have yet to see an LLM not use placeholders to prevent SQL injection (despite it being trained on a lot of bad code).
The amount of code I have seen where humans just injected variables directly into the SQL... Yeah, what a surprise that SQL database contents get stolen like it's nothing. When doing a security audit on some public code, one of the items always found by the LLMs is, yep ... SQL-injectable code everywhere.
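To make the placeholder point concrete, here is a minimal Python/sqlite3 sketch (my own illustration with made-up table and column names, not something from those audits); the first query splices the input into the SQL text and is injectable, the second uses a placeholder so the value is passed as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    name = "x' OR '1'='1"  # attacker-controlled input

    # Vulnerable: the input becomes part of the SQL statement itself.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

    # Safe: the '?' placeholder passes the value as data, not as SQL.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    print(leaked)  # [('hunter2',)] -- every secret in the table leaks
    print(safe)    # [] -- no user is literally named "x' OR '1'='1"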
A lot of these practices are easy, but anybody can overlook something in their own codebase. This is where LLMs are so great. Audit with multiple LLMs and you will find weak points or places where you forgot something, even if you code with security in mind.
So yeah, I have no issue having discussions, but the ridiculous downvotes, which seem to come from people with no clue, are amazing. Going to take a break from here.
I must only work with geniuses (or rather, extremely competent seniors) who keep their codebase very clean, because that has never happened to me. Even in my worst job, at a bank, with idiotic product devs who couldn't read a Java stack trace to save their lives, security was the only thing that mattered.
But like I said, this whole discussion on LLMs since Opus came out is _great_ for my ego. At first I thought I was using it wrong, then my company ran weekly meetings on "how to use AI" with devs who swore by it, and now I'm confident I might be a bit above average after all.
Maybe it's different for tooling/network/security devs than for product devs, but I doubt our backends are _that_ complex.
Even something like MS Paint can make a laptop sound like an aircraft.
The issue is actually very simple. In order to gain more performance, manufacturers like AMD / Intel have for a long time been in a race for the highest frequency, but if you have some know-how in hardware, you know that the higher you clock, the more power you draw (roughly, dynamic power scales with C·V²·f, and higher clocks also need higher voltage, so power rises much faster than linearly with frequency).
So you open MS Paint, and ... your CPU boosts to 5.2GHz and gets fed 15W on a single core. This creates a heat spike at the sensors, and laptop fans are all too often set to react very fast. And VROOOOEEEEM goes your fan as the CPU temperature sensor hits 80C on a single core, just for a second. But wait, now MS Paint is open, and down goes the fan. And repeat, repeat, repeat ...
Notice how Apple focused on running their CPUs no higher than 4.2GHz or so... So even if their CPU boosts to 100%, that thermal peak will be maybe 7W.
Now combine that with Apple using a much more tolerant fan / temperature sensor setup. They say: 100C is perfectly acceptable. So when your CPU boosts, it's not dumping 15W, but only 7W. And because the fan reaction threshold is so high, the fans do not react on any Apple product, unless you run a single-threaded or multi-threaded workload for a LONG time.
And even then, the fans will only ramp up slowly once that 100C has been going on for a few seconds. And yes, your CPU will be thermal throttling while the fans spin up, but you do not feel this effect.
That is the real magic of Apple. Yes, their CPUs are masterpieces at how they get so much performance from a lower frequency, but the real kicker is their thermal / fan profile design.
The wife has an old Apple-clone laptop from 2018. The thing is silent 99.9% of the time. No fans, nothing. Because Xiaomi used the same tricks on that laptop, allowing it to boost to the max without triggering the fan ramp-up. And when the fans do kick in during a long-running process, they use a very low RPM until the temperature goes way too high. I had laptops with the same CPU from other brands in the same time period, and they all had annoying fan profiles. That showed me that a lot of the Apple magic is good design around the hardware/software/fans.
But ironically, that magic has been forgotten in later models by Xiaomi ... Tsk!
Manufacturers think: it's better if millions of people suffer more noise than if a few thousand laptops die / get damaged from too much heat. So ramp up the fans!!!
And as a cherry on top, Apple uses custom fans designed to emit noise at less annoying frequencies, and when multiple fans are in play, it slightly varies their speeds to keep them from harmonizing. So even when they do run, they're not perceived as being as loud at most speeds.
This seems like intuition from 8+ years ago and an HN culture of Mac fetishism. It's just not like that. You can find some loud gaming laptops. And sure, occasionally there's an individual device with outlier fan behavior. But really, the PC laptop world is pretty calm and boring these days. No, nothing goes "VROOOOEEEM". Chromebooks are even better.
You can mostly fix this by running your CPU in "battery saving" mode. CPUs should basically never boost to the 5GHz+ range unless they're doing something that's absolutely latency-critical. It's a huge waste of energy for a negligible increase in performance.
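On Linux, the rough equivalent of that "battery saving" knob is the cpufreq scaling governor; here is a minimal sketch (assuming the standard sysfs cpufreq interface is exposed by your kernel/driver; on Windows you would use a power plan instead):

    import glob

    # Show the current scaling governor for every core.
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor")):
        with open(path) as f:
            print(path, f.read().strip())
        # Uncomment to switch that core to "powersave" (needs root):
        # with open(path, "w") as f:
        #     f.write("powersave")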
> Notice how Apple focused on running their CPUs no higher than 4.2GHz or so... So even if their CPU boosts to 100%, that thermal peak will be maybe 7W.
If you max out your processor it will be more than happy to draw 20W+
You're forgetting a little detail ... While you may not need a lot of new stuff, companies need buyers. A lot of companies work on rather thin margins, and losing potentially 10 to 20% of sales can result in people getting fired, or companies shutting down.
Remember, it's not just about "oh, big brand X sells less, they can deal with it". A lot of brands have suppliers who feed that system. Or PC component makers ... heatsinks, fans, cases ... seeing 20% fewer sales because people buy fewer new PCs.
People do not realize how interlinked the industry is. Smaller GPU card makers are literally saying that they may be forced to leave the industry because of drops in sales and the memory prices making the products too expensive.
We can live a long time on old hardware, but hardware also has its limits. Hey, the wife's laptop is from 2019, just before Covid (2020, when a lot of people bought new laptops). The battery is barely holding on. Replacement? None (reputable) ... So in a year that laptop is dead.
How about phones? Same issue ... the battery is the built-in obsolescence maker.
You see the issue. It goes beyond what most people realize.
Wait until a recession hits when the whole AI bubble bursts and cascades down the already weakened industry. Unlike previous bubbles, the hardware being built is so specialized that little of it will hit the normal consumer market. So there will not be a flood of cheap GPUs or memory being dumped on the market.
> the memory prices making the products too expensive.
There seems to be a very simple (if expensive and time-consuming) solution to this: increase capacity. If the market had real confidence in AI, manufacturers would have started on new fabs yesterday. Yet we're seeing the opposite: companies exiting the consumer space altogether and nobody rushing in to claim that void in the market. It feels like everyone is in wait-and-see mode.
I'm honestly not worried. My laptop battery is at ~70% health, same for phone. It's not great but it's enough to be useful.
Hopefully the current companies failing, and yes closing down, might in fact be the opportunity for new companies focusing precisely on a different dimension, namely repairability rather than top specs. I have in mind MNT, Pine64, Librem, FairPhone, or projects like LiFePO4wered/Pi+ or battery adapters.
These are new actors that are not traditional computer manufacturers but that reconsider the value chain instead of solely focusing on higher specs.
The problem is that Postgres has something like 24 bytes of overhead per row (the heap tuple header). That is not an issue with small tables, but when you have a few billion rows around, each byte starts to add up fast. Then you need link tables that explode that number even more, etc ... It really eats a ton of storage.
At some point you end up with binary columns and custom-encoded values to save space by reducing row count, which kind of does away with the benefits of a DB.
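To put a number on it, here is a rough back-of-the-envelope sketch (my own illustrative figures, assuming the ~24-byte per-row header and a narrow 16-byte payload per row):

    # Rough per-row overhead math for a narrow Postgres table.
    ROW_HEADER_BYTES = 24   # heap tuple header (~23 bytes, padded)
    PAYLOAD_BYTES = 16      # e.g. two bigint columns of actual data

    for rows in (1_000_000, 1_000_000_000, 5_000_000_000):
        header_gb = rows * ROW_HEADER_BYTES / 1e9
        payload_gb = rows * PAYLOAD_BYTES / 1e9
        print(f"{rows:>13,} rows: {header_gb:7.1f} GB headers, "
              f"{payload_gb:7.1f} GB payload")

    # At 5 billion rows that is ~120 GB of headers alone -- more than
    # the actual data -- before indexes, link tables and padding.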
Yeah, Postgres and MariaDB have some different design choices. I'd say use either one until it doesn't work for you. One of the differences is the large row header in Postgres.
If you try to reimplement something in a clean room, it's a step-by-step process, using your own accumulated knowledge as the basis. The knowledge you hold in your brain all too often includes code that may be under copyright, from the companies you worked for.
Is it any different for an LLM?
The fact that the LLM is trained on more data does not change that when you work for a company, leave it, and take that accumulated knowledge to a different company, you are by definition taking that knowledge (which may be copyrighted) and implementing it somewhere else. It is only an issue if you copy the code directly, or make the implementation a 1:1 copy. LLMs do not make 1:1 copies of the original.
At what point is being trained on copyrighted data any different from a human being trained on copyrighted data and reimplementing it in a transformative way? The big difference is that the LLM can hold more data across more fields than a human, true... But if we look at specializations, this can come back to the same thing, no?
Clean-room design is extremely specific. Anyone who has so much as glanced at Windows source code[1] (or even ReactOS code![2]) is permanently banned from contributing to WINE.
This is 100% unambiguously not clean-room unless they can somehow prove it was never trained on any C compiler code (which they can't, because it most certainly was).
If you have worked on a related copyrighted work, you can't work on a clean-room implementation. You will be sued. There are lots of people who have tried and found out.
Sure, they didn't have trillion-dollar AI companies to bankroll the defense. But invoking clean-room design while using copyrighted material is not even an argument; it's just nonsense to try to prove something when no one asked.
It's not ... The problem is that people do not realize that devices like the Steam Deck are also counted as Linux desktop devices in those numbers. ChromeOS tends to inflate those numbers too. Yes, they are Linux desktops, but not in the way people mean when comparing Windows to Linux.
The real number is somewhere closer to 2.5%. Which is still growth, but nowhere near the "year of the Linux desktop".
You tend to see a rather vocal minority that makes it feel like there is some major switch going on, but looking at the comments here, the people that switched 8, 12, or 20 years ago are already part of the old statistics. There are some new converts, but not as many as you would expect, despite Linux now being much more gaming-compatible.
It still has minor issues (beyond anti-cheat) that involve people fixing things themselves, less than in the past, but it's still often not click-and-play, working under every resolution, with no graphics issues, etc. That is the part people often do not tell you, because a lot of these people are tinkerers: an issue pops up, they fix it and forget about it.
Ironically, macOS just dominates as the real alternative to Windows in so many respects. If Apple actually got their act together about gaming, it could become an actual strong contender to Windows.
> The problem is that people do not realize that devices like the Steam Deck are also counted as Linux desktop devices in those numbers.
Are people even browsing on Steam Decks? Because everybody in this thread seems to be referring to stats published by a rather obscure web tracking solutions company. "High-traffic sites using Statcounter include khabarban.com, codelist.cc, and download.it"
The Steam Deck is a Linux desktop device. It is literally a thin laptop with a built-in screen and joysticks running Linux. Does my Linux system stop being one when I turn on Big Picture mode in Steam? You can run the Steam Deck as your daily driver, hooked up to a keyboard and a monitor.
The Steam Deck is not a desktop ... That is like saying that every Android smartphone is a desktop. Sure, you can use it as a desktop, but 99.99% of people are using it as a handheld console.
And nice downvotes... Typical in Linux Desktop topics.
I didn't downvote, but it might have to do with the fact that you appear to be just inventing numbers like 2.5%. If Steam Decks are only used for gaming, why would they account for 1.38% of the Statcounter numbers?
> The problem is that a paid operating system ships with ads in the first place.
Have you never bought a laptop or pre-built PC? They are often full of ads that are not built into Microsoft Windows but added on by the OEM.
Now, I agree that ads in an OS you paid for are a big no-no. I never understood why Microsoft treats Home and Pro as almost exactly the same. Sell Home cheaper and with ads, but keep the more expensive Pro clean. Microsoft could do that easily, because Windows Server is just that ...
But on the Linux front, I have never been happy with the desktop experience. Often a lot of small details are missing, if the DE itself does not outright crash (KDE, master of Plasma/widget crashes!). And so many other desktops feel like they were made in the 90s (probably were) and never got updated.
And I do not run W11; I am still on old and very stable W10. There is no reason to upgrade that I can see. I did the same with W7, for years after support ended (and by that time W10 was well polished and less buggy).
The problem is, what does the Linux desktop offer me beyond removing a few annoyances that I can strip out after a fresh install anyway? Often a lot more trouble, with the need to use the terminal for things that have been solved in Windows for ages. That is the problem ... With Apple, you at least get insanely good M-series hardware (yes, the memory/storage pricing is insane) in exchange for the OS/desktop switch.
I noticed that the people who switch to Linux are often more likely to spend time fine-tuning their OS, tinkering around, etc... aka people with more time on their hands. But when you get a bit older, you simply want something that works and gives you no trouble. I can literally upgrade my PC here from Nvidia to AMD or vice versa, and it will simply work with the correct full-performance drivers. It's that convenience that is the draw to keep using (even an older) Windows.
For 25 years now, every few years I look at switching to Linux permanently, install a few distros, and go back. The Linux desktop does not feel like you gain a massive benefit, if that makes sense? Especially not if you're like me, someone who simply rides out Microsoft's bad OS releases. What is the killer feature that makes you say: hey, the Linux desktop is insanely good, it has X, Y, Z that Microsoft does not have... That is the issue in my book. Yes, it has no ads, but that is like 5 minutes of work on a fresh install, a 2-minute job of copy/pasting a cleanup script to remove the spyware and other crap, and you're good for a year. So again, killer features?
Often a lot of the programs are less developed or stripped down compared to their Windows counterparts, and way too often they have that 90s-style feel. You can tell they are often made by developers, with no GUI / graphic designers involved lol
I have said it a 1000 times, but the Linux desktop suffers from a lot of distros redoing the same thing over and over again. Resulting in this lag ...
That is my yearly Linux rant hahaha. And yes, I know, W11 is a disaster, but I am simply waiting it out on W10 to see what the future brings once the whole AI hype dies down and Microsoft loses too many customers. I am betting that somebody at MS is going to get scared and we will then get a better W12 again.
I've been pretty happy with Pop in general. I did upgrade to the COSMIC pre-release about 6 months ago, and although there have been rough edges, fewer than in some of my Win11 experiences. I don't really fiddle that much in practice; I did spend a year with Budgie, but only the first week fiddling. Pop's out-of-the-box experience is about 90% of what I want, which is better than most.
I do use a MacBook Air M1 as my personal laptop and have used Macs for work off and on over the years... I'm currently using a very locked-down Windows laptop assigned by work. Not having WSL and Docker has held me back a lot, though.
In the end, I do most of my work in Linux anyway... it's where what I work on tends to get deployed, and I don't really do much that doesn't work on Linux without issue at this point. Windows, specifically since Win11, has continued to piss me off, and I jumped when I saw something that was just too much for me to consider dealing with. I ran Insiders for years to get the latest WSL integrations and features. That bit me a few times, but it was largely worth it, until it wasn't anymore.
C# work is paying the bills... would I rather work in Rust or TS, sure... but I am where I am. I'm similar to you in that I looked at Linux every few years, kicked the tires, ran it for a month or a couple of weeks, and always went back. Then, a couple of years ago... it stuck. Ironically, my grandmother used Linux much longer than I ever did, on a computer that I maintained for her. For her, it just worked, and she didn't need much beyond the browser.
> Have you never bought a laptop or pre-built PC? They are often full of ads that are not built into Microsoft Windows but added on by the OEM.
This was never acceptable, but we tolerated it because it subsidised the cost of the laptop; the OEMs decided the trade-off, and you could vote with your wallet for cleaner experiences (often from the same manufacturer).
Show me the ThinkPad T or X series (or EliteBook, or Precision/Latitude) that shipped with ads and I'll take it as a valid point. Otherwise, it's not valid.
> if the DE itself does not outright crash (KDE, master of Plasma/widget crashes!). And so many other desktops feel like they were made in the 90s (probably were) and never got updated.
All modern Linux desktops feel more advanced than the corresponding Windows version, IMO. I just installed standard Raspbian on a bunch of Raspberry Pi 5s, and it already feels snappier and more advanced than Windows.
Switching OSes is a major undertaking for power users, which I assume you are. Less so for someone who just uses the browser and email and plays some games.
As a power user, there's no point trying out OSes occasionally, unless that's your hobby. Think of it as switching between flying Boeings or Airbuses as a pilot; there's going to be a learning curve that you have to commit to if you want the full benefits. I use the analogy to illustrate the point; for users, OSes are definitely not nearly as complex to operate.
That said, the unstable experiences you're describing are odd. Maybe you're running into some edge case, because that kind of instability hasn't been the norm for mainstream Linux users for a couple of decades.
Nor is there a need to tinker with the big mainstream distros. Most are install-and-forget these days and have been for a while.
> I noticed that the people who switch to Linux are often more likely to spend time fine-tuning their OS, tinkering around, etc... aka people with more time on their hands. But when you get a bit older, you simply want something that works and gives you no trouble.
> Yes, it has no ads, but that is like 5 minutes of work on a fresh install, a 2-minute job of copy/pasting a cleanup script to remove the spyware and other crap, and you're good for a year. So again, killer features?
The first thing you do after you install Windows is fine-tune it lol. For what it's worth, I just installed the latest Debian on a Minisforum mini PC and it was clean and easy. Everything works out of the box, including Bluetooth and gaming (surprisingly well, given it only has an integrated GPU). Same experience with two of my wife's laptops.
Now, I did have issues with my desktop due to running bleeding-edge hardware, but those all got resolved on their own within months, and a clean install is now no hassle at all.
In short, I'm now older and don't have time to tinker with my PCs. That includes reverting whatever bullshit Microsoft decides to foist upon me, so now I run base Debian and won't be buying bleeding-edge hardware anymore.
Do not get me started on airport security staff in the Netherlands who cracked some insulting jokes about my nationality. I was not amused...
Or the idiotic "remove your shoes so we can X-ray them"... What next, go naked? Oh wait, that is what those new scanners that see past your clothing are for.
If I can avoid flying, I will ... It's not the flying, it's the security. You feel like a criminal every time you need to pass through and they do extra checks. Shoes, bomb test, shoes, bomb test ... and you do get targeted.
The number of times I got "randomly" checked in China as a white guy really put me off going anymore.
Arriving, a 50% chance of a check. Departing, 100% sure I am getting 1 check, 50% chance I am getting two... I even won the lottery with 3 ... (one at the entrance in Beijing: "random" bomb check, one at luggage drop-off, and one at security) .... So goddamn tiring ...
And there is nothing special about me, it's not like I am a 2m tattooed biker or something lol. But yeah, they see me, and it's "here we go again, sigh"...
I'm sure this exists too, but isn't the mundane rationale more likely? That gruffness is inevitable because the work sucks?
Overworked, understaffed, the days blur together because it is boring, mostly sedentary work. They are ground down from dealing with the juxtaposition of their role; internally, TSA agents are told they are important because their vigilance is heroic and prevents catastrophe, yet the general public views them with annoyance if not disdain. _Everyone_ they interact with is impatient, and at that scale of human interaction nobody is really a person anymore, just a complication to throughput.
Probably an issue with PFAS contamination. The stuff was used in firefighting foam and has contaminated just about every airport and the surrounding area's groundwater, all over the world. So while the water is microbiologically safe, it has PFAS issues.