Is it unrealistic to think the companies that benefit from orgs such as this could donate a fraction of a percent of their wealth to keep them going? The responsibility always seems to fall most on those with the least resources.
It seems the open-source experiment has failed. Hundreds of billion-dollar companies have been built on millions of hours of free labor, on the backs of tens of thousands of now-burnt-out maintainers. Yet, apart from token gestures, these exploiting entities have never shared substantial or equitable profits back.
For the next generation of OSS, it would be wise to stand together and introduce a new licensing model: if a company builds a product using an open-source library and reaches a specific revenue threshold (e.g., $XX million), they must compensate the authors proportional to the library's footprint in their codebase and/or its execution during daily operations.
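To make the proposal concrete, here is a minimal sketch of how such a compensation rule might compute what a company owes. Everything here is hypothetical: the threshold, the royalty rate, and the "footprint" metric are illustrative placeholders, not terms from any real license.

```python
# Hypothetical sketch of the proposed compensation rule. The threshold,
# rate, and "footprint" measure are all placeholder assumptions.

REVENUE_THRESHOLD = 50_000_000  # e.g., $50M annual revenue
ROYALTY_RATE = 0.001            # e.g., 0.1% of revenue per unit of footprint

def library_fee(annual_revenue: float, footprint_fraction: float) -> float:
    """Fee owed to one library's authors.

    footprint_fraction: the library's share of the product's codebase
    and/or its runtime (0.0 to 1.0), however the license chose to
    measure it.
    """
    if annual_revenue < REVENUE_THRESHOLD:
        return 0.0  # below the threshold, nothing is owed
    return annual_revenue * ROYALTY_RATE * footprint_fraction

# A company below the threshold owes nothing:
print(library_fee(10_000_000, 0.3))   # 0.0
# A $200M company whose product is 30% one library:
print(library_fee(200_000_000, 0.3))  # 60000.0
```

Of course, the hard part is not the arithmetic but defining and auditing "footprint", which is exactly where the objections below (Hollywood accounting, enforcement) come in.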
People have been saying this since the 80s. Reality is that without open source, this industry would be tiny compared to what it is. So many times open source has enabled an entire sub-industry (e.g., ISPs in the 90s, databases, SaaS in the 2010s, now AI). And most of it is someone solving a problem that was worth solving for their own use, and for whatever reason made no sense to commercialize by selling licenses.
> on the backs of tens of thousands of now-burnt-out maintainers.
Money isn't the motivation for most "free" open source. If it was, the authors would release as commercial software and maybe as "source available". That someone can use open source to build businesses has been the engine for the entire industry. In other words, the thought that maintainers quitting maintaining is some problem that can be fixed if we only paid them is a non sequitur. A lot of it is that people age out, get bored with their project, or simply want to do something else. Not accepting money for maintaining open source is a good way to ensure it stays something you can walk away from and something where the people attached to the money have zero leverage.
I do think that a lot of maintainers struggle with pushy and sometimes nasty people that take the fun out of what is a "labor of love."
> exploiting entities have never shared substantial or equitable profits back.
If I want to make money, I sell commercial software, SaaS or PaaS.
> they must compensate the authors proportional to the library's footprint in their codebase and/or its execution during daily operations
One of the more interesting uses of open source is to level the playing field. For example, there was a time when databases were silly expensive. Several open source products emerged that never would have been viable commercially without the long-term promise of "free" and the assurance of having source code. To have a license with a cost bomb on it would just ensure that people would use another choice.
We have a solution for that: GPL + commercial dual-licensing. The problem is that a) there is an entire anti-GPL crowd; although I'd just not give a shit about them, it's worth mentioning, b) who's gonna enforce the license?, c) how are you going to monetize internal use? What if your tech (e.g. a build system) is only really useful internally?
The AI companies claim that training is fair use, so no license can prevent what is happening with AIs. It would require a clarification in copyright law that AI models are considered derivative works, and that AI generated code is a derivative work of the model and the prompt. That seems unlikely to ever happen though.
> Especially as the cost of producing code drops, the value of libraries decreases.
Does it? If the cost of slop that (1) no one understands, and (2) no one can be sued for if it misbehaves drops to zero, what have we gained? A "library" is code plus reliability and accountability. (Yes, GPL disclaims liability, but that's why consultants exist.)
And then you'd be getting things like Hollywood accounting, where companies will claim that the "footprint" is not that large or simply find ways to hide their usage of FOSS.
Without teeth (and the resources to initiate the bite), companies will just freeload. Any attempts to monitor will require some degree of telemetry or proprietary solutions, with the associated blowback that generates.
The only model I've seen work in reality is open core (aside from the very few projects that have been successful with patronage).
Keep in mind giving stuff away for free is going to be much more popular than any other price. Don't discount that these projects are popular _because_ they are OSS, and if you introduce another model then one potential outcome is that no one with money actually uses them.
There are licenses like that, just don't call them open source. They're just another form of proprietary software albeit sometimes also being source available.
If you want to make money, make commercial software and sell it. It's funny to see people complain about people taking what they gave out for free, it's like having a lemonade stand with a huge sign saying "free" and being surprised people take the lemonade.
The decision of the market seems pretty clear. We've been able to co-operate and build a software commons for decades, iterating on and improving shared infrastructure and solutions to problems common and niche. The work done for these commons, though, benefits everyone, and that's a hard sell for a profit-driven organization. So the commons are enriched with
a) volunteers
b) brief windows in which corporate decision makers are driven by ideology and good intentions, where those decisions carry momentum or license obligations (see Android, and how Google tries to claw it back)
c) corporations attempting to shape the larger landscape or commoditize their complement, see Facebook's work on React, or contributions to the Linux kernel
Of the above, only (a) or rarely and temporarily (b) are interested in collective wellbeing. Most of the labor and resources go into making moats and doing the bare minimum to keep the shared infrastructure alive.
Now companies selling LLM coding agents enter the scene, promising to eliminate their customers' dependence on the commons, and whatever minimal obligations they had to support it. Why use a standard solution when what used to be a library can now be generated on the fly and be part of your moat? Spot a security bug? Have an agent diagnose and fix it. No need to contribute to any upstream. Hell, no upstream would even accept whatever the LLM made without a bunch of cleanup and massaging to get it to conform with their style guides and standards.
Open source, free software, they're fundamentally about code. The intended audience for such code is machine and human. They're not compatible with a development cycle where craft is not a consideration and code is not meant to be read and understood. That is all to say: yes, it is unrealistic to expect companies to donate anything to the commons if they can find any other avenue. They prefer a future where computer programs are purchased by the token from model providers to one where they might have to unintentionally help out a competitor.
> Now companies selling LLM coding agents enter the scene, promising to eliminate their customers' dependence on the commons, and whatever minimal obligations they had to support it.
This is misguided. Maintenance of LLM code has a far greater cost than generating it.
> They prefer a future where computer programs are purchased by the token from model providers to one where they might have to unintentionally help out a competitor.
I don't think that's even a thought. The thought is that "no one can tell me no".
The longevity of code depends at least on whether it's a product or a service.
Services are what the majority of devs already work on and maintain. There's almost no incentive for anyone to use LLMs for that outside of startups. They do indeed last a long time because the code is as fundamental to the recurring revenue of the business as their legal or accounting or marketing. Devs make changes according to the evolving needs of the business, and "productivity" isn't as much of a priority as accuracy and reliability. The implementation details are very relevant to the business, especially for B2B services that need to meet compliance requirements.
Products, however, have always been disposable code written by people being thrown into a meat grinder. I don't think LLM-generated code is better, but it's probably not that much worse either.
> This is misguided. Maintenance of LLM code has a far greater cost than generating it.
I agree. I'm just observing what they're doing.
> I don't think that's even a thought. The thought is that "no one can tell me no".
I doubt there's any one thought driving things. I didn't mean to imply the existence of some grand strategy or scheme. The preference I speak of isn't of any person, it's the direction pointed at by incentives and circumstance. Companies will make decisions to steer clear of helping competitors. Separately, they signal great interest in replacing costs spent on labor with costs spent on services. See the transition to cloud. The result is the preference of a world where code is like gasoline, purchased from a handful of suppliers for metered cost.
We really need to stop this misconception about FOSS. Free software is provided as is, with no obligations on either party (minus the viral clause of copyleft). The user is not obligated to "contribute" in any way, and the provider is not obligated to support in any way. It is a single one off donation of work from the author to the public.
I think Matrix as a protocol has been pretty ineffective, as their top priority seems to be keeping data permanent and duplicated. Both performance and privacy are at the bottom of their priority list. The one good thing I can say about it is that encryption of message contents is enabled by default in conversations and available in groups, but that's about it - nothing else is, or can be, encrypted. In other words, every participating server knows who is talking to who, and how much, and when, and in what rooms, and what those rooms' names are, and what those rooms' descriptions are, and who moderates them, etc.
Meanwhile, an app like Signal can do none of that, and that's by design.
If you're looking for a privacy oriented messaging system, you'd best look elsewhere.
I'm new to Matrix and found this comment on Reddit. How much of it is accurate, and does it actually contribute to whether or not the future of the protocol is promising?
@Arathorn would be an objectively better person to discuss this, but the Redditor isn't completely off the mark: metadata is (currently) not nearly as well-guarded on Matrix compared to Signal.
However, work is ongoing to improve the situation; more importantly, Matrix is a different threat model (in my opinion), and allows for different trade-offs.
When I use Signal, I have to trust Signal's servers and their admin team. With Matrix, we get to keep trust circles smaller (friends and family on smaller servers, where we already trust the people running them). We have no hard requirement to federate either - if I want something just for people I know, we leak less data than Signal does to the outside world. We also get to host Matrix servers in areas we're comfortable with, whether that's our living room, or any nation that isn't America.
Matrix isn't perfect, but I appreciate how quickly they're improving, and the areas they're focusing on.
Matrix and Signal have very different objectives. Matrix wants to be an encrypted IRC or Slack. Signal wants to be a secure messenger you can entrust your life to. They are both worthy projects; there's not as much overlap as people think.
I trust my life to the server I host in my own closet. People can lecture me all day long about the superiority of Signal's encryption, and I'll just slowly rotate my chair to point my index finger at the Dell OptiPlex behind me.
That's fine. You'll pardon me if I'm unwilling to trust my own safety to your Dell OptiPlex. Whatever you think about Signal, the fact is that Matrix --- which is what the thread is about --- makes decisions that serve the IRC/Slack use case at the expense of the "absolute most possible safety" use case. That makes sense: some of larger-scale group chat's goals are in tension with "absolute most possible safety".
I wouldn't characterize Signal as "absolute most possible safety" as you are implicitly doing here.
I would probably characterize Signal as "most possible safety for the average nontechnical user" which entails trade-offs against absolute safety for certain UX affordances (and project governance structures that allow for these decisions to be made), because if said affordances are not given, the average nontechnical user either simply won't use Signal or will accidentally end up making themselves even less secure.
I couldn't be less interested in arguing with you about Signal. My point is that it doesn't make as much sense to compare Signal and Matrix as people think it does. Large-scale group chat is intrinsically less safe than the kind of chats most people use Signal for. You can substitute whichever other secure messenger you prefer.
This "average nontechnical user" stuff, though, miss me with. For 2 decades people have been encouraging the "average nontechnical user" to do incredibly unsafe things on the premise that any kind of message encryption is the best alternative to sending plaintext messages. No: telling people not to send those kinds of messages at all, unless you're dead certain the channel they're using is safe, is the only responsible recommendation.
I have started using Signal for large group chats in the past year or so, after spending many years using it as an encrypted replacement for SMS texting. Signal has gotten noticeably better at the UX of group chats during that time, although I am still annoyed that they basically require you to use their client to access the network in the name of security. I can't easily run a legitimate 3rd party Signal client on my server, and when I've tried I've accidentally broken my access to my account on my phone, which is quite annoying since I use Signal pretty frequently.
I want there to be something like Matrix that is designed first and foremost as a large-group realtime chat program (really, as a meaningful FOSS alternative to Discord), and it should make different tradeoffs than Signal. I'm actually willing to entirely forego encryption, at least at first, to make this happen - IRC wasn't encrypted and Discord isn't either, and these are things I want to replace with something better. Matrix's UX is still noticeably worse than Discord's, and I'm skeptical that the ostensible security gains from the encryption are worth it, especially given the problems with device verification UX, metadata leakage, and the fact that as the number of people in a group chat grows the possibility that they will take a screenshot of the encrypted message sent to them and leak it to the press grows higher and higher.
> This "average nontechnical user" stuff, though, miss me with. For 2 decades people have been encouraging the "average nontechnical user" to do incredibly unsafe things on the premise that any kind of message encryption is the best alternative to sending plaintext messages. No: telling people not to send those kinds of messages at all, unless you're dead certain the channel they're using is safe, is the only responsible recommendation.
Eh. You misunderstand me. I don't really have too much of a view on this personally. Unless you specifically think that the term "average nontechnical user" is a bad term.
N.B. for other readers of this thread to flesh out my initial point:
Signal specifically didn't make that recommendation until they got a sufficient critical mass of users in 2022. In particular, Signal gracefully degraded to unencrypted SMS if the other side didn't have Signal.
Likewise Signal required phone numbers until 2024 when it shifted over to usernames, with all the security vulnerabilities that entails.
Signal has repeatedly made trade-offs that prioritize UX over absolute security even in 1-1 chat settings. That's not to criticize those trade-offs, there's a variety of reasons why they make sense or don't. But Signal has consistently demonstrated that it is not willing to make severe compromises to the UX and understandability in the name of absolute security and that it will balance the two.
This is basically the same logic for why I often recommend Plex over Jellyfin to people. Yes, Plex is not proper self hosting. Yes, Plex the org is making increasingly questionable decisions. But for people who want to get away from the major streaming services and maybe even want to dip their toes into something that resembles self hosting, there really is no other option like Plex. It’s so insanely turnkey and easy to install on every device. You also don’t have to worry about exposing your network if you don’t know what you’re doing.
If nothing else it’s an incredible foot in the door for a lot of people to make the leap to something like Jellyfin later.
I obviously can't speak for you, but there's not a freaking chance I'd trust my life to the servers I run.
To go maybe too literal: when I'm working on machines that could physically eat me, I don't trust myself with just one off switch -- I want redundancy. And since computers are horrible piles of ridiculous complexity, the closest I can get (and not really get close) is trusting some of the top minds to overthink the crap out of it in a way that I can't do with the systems I manage.
Well, when US-EAST-1 went down, my family was still chatting. Same with Cloudflare. Even if I lose internet, we can all chat so long as we’re on the network.
That said, the uptime is still probably worse than Signal. I didn’t mean trust the reliability. I meant the security.
Matrix's users want it to be a decentralized/encrypted IRC/Slack, but unfortunately Matrix's maintainers believe their mandate is to build a next-gen TCP/IP (or something very close to that).
In the real world friends and family aren’t running their own matrix servers. At most they are signed up for whatever random one came up first in the search results.
So you end up with a similar problem to Mastodon where either you are facing problematic or inexperienced admins, servers shutting down, and everyone centralising on the main server.
It's pretty accurate. I was a bit shocked when I saw that room names were not encrypted. I thought that was such a basic privacy requirement, and it's not hard to implement when you already have message encryption.
Matrix seems to have a lot of these structural flaws. Even the encryption praised in the Reddit post has had problems for years where messages don't decrypt. These issues are patched slowly over time, but you shouldn't need to show me a graph demonstrating how you have slowly decreased the decryption issues. There shouldn't be any to begin with! If there are, the protocol is fundamentally broken.
They are slowly improving everything, with the emphasis on "slowly". It will take years until everything is properly implemented. To answer the question of whether the future of the protocol is promising, I would say yes. This is in no small part because there are currently no real alternatives in this area. If you want an open system, this is the best option.
The decryption problems I've experienced were fixed a while ago. There was a push to fix these last year or the year before that, and at this point I'm pretty sure only some outdated or obscure clients with old encryption libraries still suffer from these problems.
The huge amount of unencrypted metadata is pretty hard to avoid with Matrix, though. It's the inevitable result of stuffing encryption into an unencrypted protocol later, rather than designing the protocol to be encrypted from the start.
I've had similar issues with other protocols too, though. XMPP wouldn't decrypt my messages (because apparently I used the wrong encryption for one of the clients), and Signal got into some funky state where I needed to re-setup and delete all of my old messages before I could use it again. Maintained XMPP clients (both of them) seem to have fixed their encryption support and Signal now has backups so none of these problems should happen again, but this stuff is never easy.
Yes, messaging protocols, especially federated ones, are never easy. I just wish we could have skipped the three or four years when Matrix was basically unusable for the average user because end-to-end encryption was switched on by default. Perhaps a clean redesign would have been better. Now they have to change the wheels on a moving car.
> These issues are patched slowly over time, but you shouldn't need to show me a graph demonstrating how you have slowly decreased the decryption issues. There shouldn't be any to begin with! If there are, the protocol is fundamentally broken.
This is wrong, because afaik these errors happen due to corner cases and I really don't like the attitude here.
It's not just a corner case. The issue was so prevalent for years that if it was limited to just a few corner cases, the entire protocol must consist of nothing but corner cases.
It frequently occurred on the "happy path": on a single server that they control, between identical official clients, in the simplest of situations. There really is no excuse.
I'm not saying that building a federated chat network with working encryption is easy. On the contrary, it is very hard. I'm sure the designers had the best intentions, but they simply lacked the competence to overcome such a challenge and ensure the protocol was mostly functional right from the outset.
> The issue was so prevalent for years that if it was limited to just a few corner cases, the entire protocol must consist of nothing but corner cases.
for me it wasn't really; occasionally it would hit me, but mostly it worked, and I have been using it for encrypted communication since 2020.
> It frequently occurred on the "happy path": on a single server that they control, between identical official clients, in the simplest of situations. There really is no excuse.
There still can be technical corner cases in the interaction of clients
> I'm sure the designers had the best intentions, but they simply lacked the competence to overcome such a challenge and ensure the protocol was mostly functional right from the outset.
Well, even if this was true, they still were brave enough to try and eventually pull it off. Perhaps complain to the competent people who haven't even tried.
> for me it wasn't really; occasionally it would hit me, but mostly it worked, and I have been using it for encrypted communication since 2020.
I think the statistic said that around 10% of users receive at least one "unable to decrypt" message on any given day. That's a lot. Perhaps not for devs who are accustomed to technical frustrations, but for non-technical people, that's far too frequent. Other messaging systems worked much better.
> There still can be technical corner cases in the interaction of clients
You linked to a German political talk show. If you wanted to show me the talk in which the guy listed reasons such as "network requests can fail and our retry logic is so buggy that it often breaks" and "the application regularly corrupts its internal state, so we have to recover from that, which is not always easily possible", let's just say I wasn't that impressed.
> well, even if this was true, they still were brave enough to try and eventually pull it off eventually. Perhaps complain to the competent people who haven't even tried.
It isn't a problem that the Matrix team are not federated networking experts. At the time, they had already received millions in investment. That's not FAANG money, but it's still enough to contract the right people to help design everything properly.
I'm not mad at them. Matrix was a bold effort that clearly succeeded in its aims. I'm just disappointed that it was so unreliable for such a long time, and still is to some extent.
To be fair: signal means everybody trusts one central authority. Doesn't matter that it's a foundation or non-profit or whatever.
And: a phone number is still required, a PIN is not, so by default it's susceptible to phone/SIM spoofing attacks. This one really boggles my mind, it's not that I personally am afraid of this vector, but I don't understand why they would insist on phone numbers at this point.
I think part of the problem may be that Matrix is just pretty complex, because of its modular and decentralised design. Meanwhile, Signal is much more centralised and monolithic. And while they have added a few features over the years, its core functionality is relatively simple, and they were initially just focussed on getting that right.
The "decentralization" of Matrix is true in some respects, and false in others. Which would be ok, but if all of the complex architecture and issues are in the support of being decentralized, then this seems like an early planning failure.
My suspicion is the real problem that exists now originated from the bifurcation of desktop and mobile. Mobile broke the true p2p decentralization which was easy on desktop, and the split between Android and iOS makes it worse. Users expect an experience on iOS and Android which has parity with desktop. And the entire thing has to be as good as Discord.
I've taken a hard look at all of the truly open source alternative messaging options, and almost nothing handles multi-platform very well. Even when you expand it to commercial options, for a very long time, all of the Slack clones had mediocre mobile apps -- which basically was a death sentence if you weren't Microsoft. This is true today, but I expect it will change in 2026 and onward with the rapid increase in software development driven by AI agents.
I remember reading some of the pdf on state management in matrix. The math and logic behind working out what the current name of the group chat is made my head spin.
It's pretty on point: it's mostly a "trusted" platform, as long as you trust the host, with the messages between two people (or more?) being (optionally) encrypted.
I wish FOSS communities that want an alternative to Discord or Slack ditched Matrix altogether. It sucks for that. Better to use Zulip or Mattermost, both of which are self-hostable.
Edit: I looked it up, and apparently Mattermost would be out of the question given the recent feature downgrades in its community version...
Correct me if I'm wrong, but I believe Zulip's licensing de facto restricts self-hosted instances to 10 users (beyond that, others won't see notifications on their mobiles or something like that). This is important for non-commercial communities.
No. See the "Sponsorship and discounts" section on the pricing page, which makes clear the 10 users limit for free usage of the mobile notifications service is for workplace use, not communities.
As an N4 student of Japanese, I found the origin story of Hiragana listed here quite interesting. However, the overall layout/design of the characters and page leaves much to be desired.
Good riddance. Domm Holland is a textbook example of the modern-day tech bubble shyster. Reading about his previous towing startup in Australia tells you everything you need to know about him.
No, they have a point. It makes for a more difficult overall reading experience. Just because you put up with it doesn't mean everyone else has something wrong with them.