
My (admittedly a bit tinfoil) take on the recent self-hosting boom is that it's highly compatible with individualist suburban capitalism; and that while there are elements of it that offer an alternative path to techno-feudalism, by itself it doesn't really challenge the underlying ideology. It's become highly consumerist, and seems more like a way of expressing taste/aesthetics than something that's genuinely revolutionary. Cooperative services (as you describe) seem like they offer a way more legitimate challenge, but I feel like that's a big reason why they don't see as much fete-ing in the mainstream tech media and industry channels.

I say all this as someone who's been self-hosting services in one form or another for almost a decade at this point. The market incorporation/consumerization of the hobby has been so noticeable in the last five years. Even this AI thing seems like another step in that direction; now even non-experts can drop $350+ on consumer hardware and maybe $100 on some network gear so that they can control their $50/bulb Hue lights and manage their expansive personal media collection.


Interesting! I'm not sure how severe the consumerisation really is, but yeah I can totally see the whole home-automation thing playing into it too.

I don't think mainstream tech media is deliberately omitting co-ops from their reporting because co-ops challenge the status quo. I think it's rather that there just aren't many initiatives in the space.

And I think that is due to a lot of tech people thinking that if only the technology becomes good enough, the problem will be solved and, finally, everyone can have their own cloud at home.

I think that's wrong though. I think the solution in this case is that we organize the service differently, with power structured in a different way. We don't need more software to solve the problem. We know how to build cloud services, technically. We know how to do it well. It's just that if the service is run for-profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but just with organizing it differently.

It works for housing; in some areas it also works for utilities like internet; there are also co-ops for food. Why shouldn't it also work for modern-day utilities like cloud storage and email?

As a techie, don't be content with just running your own self-hosted service. Run it for your family, run it for your friends, run it for your neighborhood! Band together!


> It's just that if the service is run for-profit, counter to the interests of the users, it will eventually become a problem for the users. That's the problem to fix, and it's not one to fix with technology, but just with organizing it differently.

100% agree with you here, and yeah I'm definitely leaning a bit too conspiratorial about it. It's probably not actually intentional, and instead just a product of the larger dynamics.

A while ago I read some interesting economic analysis about why more co-ops hadn't popped up specifically in the gig worker space, since it seems so natural to cut out the platform rent that e.g. Uber extracts as profit. I'm failing to recall the specific conclusions, but IIRC the authors seemed to feel that there were some structural obstacles preventing co-ops from growing in that space. Something something capex and unit costs. It's certainly an area I'd be interested to see further analysis in.

Also, you sound like you might get a kick out of mayfirst.coop (if you're not familiar with them already). It's not exactly what you're describing, but the spirit is there. I use them for my web-hosting needs and have been extremely satisfied.


I think this is a good idea so long as you ensure you've got a good backup going or don't put anything super critical on there. I think it's seriously outside odds that Claude `rm -rf /`s your server, but definitely not 0%.
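
For the backup piece, even a dumb nightly snapshot job covers most of the blast radius. A minimal sketch with restic (the repo path is a placeholder, and it assumes RESTIC_PASSWORD_FILE etc. are already configured):

    # crontab: nightly snapshot of /srv into a separately-mounted repo
    0 3 * * * restic -r /mnt/backup/repo backup /srv

Snapshots rather than a plain mirror matter here: a mirror taken after the agent torches things just faithfully copies the wreckage.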

Managing the wg.conf is a colossal PITA, especially if I'm trying to provision a new client and don't have access to my main laptop. It's crying out for a CRUD app on top of it, and I think Tailscale is basically that plus a little. The value add seems obvious.
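
For anyone who hasn't felt this pain: every new client means hand-editing something like the below on the server, plus a mirror-image config on the device (keys and IPs here are placeholders):

    [Peer]
    # client's public key, generated on the device with `wg genkey | wg pubkey`
    PublicKey = <client-public-key>
    # an unused tunnel address you have to track yourself
    AllowedIPs = 10.0.0.12/32

There's no central ledger of which addresses are allocated or which keys belong to which device; that bookkeeping is exactly the CRUD layer Tailscale sells.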

Also Plex is way more than sugar on top of file sharing; it's like file sharing, media management, and a CDN rolled into one product. Soulseek isn't going to handle transcoding for you.


I use Tailscale for exactly those reasons, plus the easy SSL certificates and clients for Android and iOS.

From this thread, I've learned about Pangolin:

https://github.com/fosrl/pangolin

Which seems very compelling to me too. If it has apps that allow various devices to connect to the VPN, it might be worth it to me to trial using it instead of Tailscale...


> Observability made us very good at producing signals, but only slightly better at what comes after: interpreting them, generating insights, and translating those insights into reliability.

I'm a data professional who's kind of SRE-adjacent for a big corpo's infra arm, and wow does this post ring true for me. I'm tempted to just say "well duh, producing telemetry was always the low-hanging fruit; it's the 'generating insights' part that's truly hard", but I think that's too pithy. My more reflective take is that generating reliability from data lives in a weird hybrid space of domain knowledge and data management, and most orgs' headcount strategies don't account for this. SWEs pretend that data scientists are just SQL jockeys minutes from being replaced by an LLM agent; data scientists pretend like stats is the only "hard" thing and all domain knowledge can be learned with sufficient motivation and documentation. In reality I think both are equally hard, that it's rare to find someone who can do both, and that doing both is really what's required for true "observability".

At a high level I'd say there are three big areas where orgs (or at least my org) tend to fall short:

* extremely sound data engineering and org-wide normalization (to support correlating diverse signals with highly disparate sources during root-cause; see the sketch after this list)

* telemetry that's truly capable of capturing the problem (ie. it's not helpful to monitor disk usage if CPU is the bottleneck)

* true 'sleuths' who understand how to leverage the first two things to produce insights, and have the org-wide clout to get those insights turned into action
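
To make the first bullet concrete: the payoff of normalization is that cross-source joins become trivial. A toy sketch with hypothetical data, assuming host names and UTC timestamps are already normalized across sources:

    import pandas as pd

    # Two telemetry sources that only join cleanly because host naming
    # and timestamps were normalized org-wide.
    cpu = pd.DataFrame({
        "ts": pd.to_datetime(["2024-05-01 12:00", "2024-05-01 12:05", "2024-05-01 12:10"]),
        "host": ["web-1"] * 3,
        "cpu_pct": [35.0, 92.0, 95.0],
    })
    deploys = pd.DataFrame({
        "ts": pd.to_datetime(["2024-05-01 12:03"]),
        "host": ["web-1"],
        "version": ["v2.4.1"],
    })

    # For each CPU sample, find the most recent deploy on the same host --
    # the basic move in a root-cause correlation.
    joined = pd.merge_asof(cpu.sort_values("ts"), deploys.sort_values("ts"),
                           on="ts", by="host", direction="backward")
    print(joined)

Without the normalization the join keys don't exist, and a human 'sleuth' ends up doing this by eyeball across five dashboards.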

I think most orgs tend to pick two of these, and cheap out on the third, and the result is what you describe in your post. Maybe they have some rockstar engineers who understand how to overcome the data ecosystem shortcomings to produce a root-cause analysis, or maybe they pay through the nose for some telemetry/dashboard platform that they then hand over to contract workers who brute-force reliability through tons of work hours. Even when they do create dedicated reliability teams, it seems like they are more often than not hamstrung by not having any leverage with the people who actually build the product. And when everything is a distributed system it might actually be 5 or 6 teams who you have no leverage with, so even if you win over 1 or 2 critical POCs you're left with an incomplete patchwork of telemetry systems which meet the owning team's (teams') needs and nothing else.

All this to say that I think reliability is still ultimately an incentive problem. You can have the best observability tooling in the world, but if you don't have folks at every level of the org who (a) understand what 'reliable' concretely looks like for your product and (b) have the power to effect the necessary changes, then you're going to get a lot of churn with little benefit.


This is a super insightful comment & there is a bunch that I want to respond to but I can't do it all neatly in one comment. Hahaha

I'll choose this point:

> reliability is still ultimately an incentive problem

This is a fascinating argument and it feels true.

Think about it. Why do companies give a shit about reliability at all? They only care b/c it impacts the bottom line. If the app is "reliable enough" such that customers aren't complaining and churning, it makes sense that the company would not make further investments in reliability.

This same logic is true at all levels of the organization, but the signal gets weaker as you go down the chain. A department cares about reliability b/c it impacts the bottom line of the org, but that signal (revenue) is not directly attributable to the department. This is even more true for a team, or an individual.

I think SLOs are, to some extent, a mechanism that is designed to mitigate this problem; they serve as stronger incentive signals for departments and teams.
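
And part of why SLOs work as incentive signals is that the arithmetic is brutally concrete. A quick sketch (the numbers are illustrative):

    # error budget implied by a 99.9% availability SLO over 30 days
    slo_target = 0.999
    window_minutes = 30 * 24 * 60            # 43,200 minutes
    budget = window_minutes * (1 - slo_target)
    print(f"{budget:.1f} minutes of allowed downtime")  # -> 43.2

A team can't argue with "43 minutes left this window" the way it can argue with "revenue impact"; the signal survives the trip down the chain.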


I'd +1 incentives, primarily P&L/revenue/customer acquisition/retention, with a small carve-out for "culture." I've worked places, and for people, where the culture was to "do the right thing" or to treat user experience as the objective, which influenced decisions like paying more (time and money) for better support. For the SDEs and line teams it wasn't about revenue or someone yelling at them; they just emulated the behavior they saw around them, which led to better observability/introspection/reliability/support. Which, of course, we'd like to believe leads long-term to success and $$$$.

I also like the call-out of SLOs (or OKRs or SMART goals or whatever) as a mechanism to broadcast your priorities and improve visibility. BUT I've also worked places where they didn't work because the ultimate owner with a VP title didn't care about or understand them enough to buy in.

And of course there's the hazard of principal-agent problems: those selling, buying, building, and running are probably different teams, and may not have any meaningful overlap in directly responsible individuals.


It's a long-running topic in a lot of areas. I remember back when data warehousing was the hot thing: collecting and cleaning all this data was supposed to be the key to insights that would unlock juicy profits. Basically didn't happen.

I would add that "extremely sound data engineering" is also necessary to make observability cost-effective. Some of these otel platforms can burn 10%-25% of your cloud budget to show you your logs. That is insane.
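
Part of why: many deployments ship essentially all traces and logs by default. Sampling is the standard lever; roughly the relevant fragment of an OpenTelemetry Collector config (assuming the contrib build, with the 10% purely illustrative and receivers/exporters depending on your setup):

    processors:
      probabilistic_sampler:
        sampling_percentage: 10    # keep ~10% of traces

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [probabilistic_sampler, batch]
          exporters: [otlp]

Which loops back to the 'sound data engineering' point: deciding what you can afford to drop is itself a data problem.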

I think you're making the mistake of assuming that any of their points are cogent or intended to function as proper arguments. It's just bullshit chaff to make you waste time, and to provide a patina of legitimacy for the fact that they really just want the US to be an ethnostate and will adopt whatever policy stance is convenient to that end. Note how at the start they're complaining about tax-dollar spending, and then later in the thread they hit you with "My concern goes far beyond monetary cost, the problem is really [SOME_OTHER_BULLSHIT]". There's no consistency; it's just sound and fury, signifying nothing.

I am aware of this. I just don't see what else to do.

1. I actually believe in many of these lofty ideals that are being dishonestly abused by the fascists.

2. Discussing things in terms of abstract ideals is a Schelling point that at least creates a chance for people from disparate tribes to find common ground.

3. There are other people reading along who might be swayed by disingenuous chaff left standing unquestioned.

4. I'd say it's going too far to write off most people spouting this nonsense as fully consciously aware of a contradictory agenda they keep hidden. I'd say it's more like they bought into feel-good nonsense posed as opposition to the blue head of the authoritarian hydra, and then basically haven't examined it too hard. And I'd say much of the opposition groupthink framed in terms of directly clashing overt values doesn't help either. So I think it's valuable to point out the glaring hypocrisy even if many of them have learned to revel in it.


You'll spend way more tax money to haul them across the border than you would to just print them a permit to live and work in the community they've been contributing to for years. Every study ever conducted on the issue has concluded that undocumented immigrants contribute far more to the economy than they consume in public welfare dollars. You've let the actual tax-dollar parasites pawn the blame off on a scapegoat because you're addicted to being angry.

> You'll spend way more tax money to haul them across the border than you would to just print them a permit to live and work in the community they've been contributing to for years

At the expense of legal immigrants who bothered to do it the right way.

Law enforcement isn't free, unfortunately.

> Every study ever conducted on the issue has concluded that undocumented immigrants contribute far more to the economy than they consume in public welfare dollars

Some of these studies exist for legal immigrants; can you cite one making this case for illegal immigrants?

Do these "studies" account for second-order effects on housing, local job markets, etc.?


> At the expense of legal immigrants who bothered to do it the right way.

How is it at their expense? The end result is a growing economy, which benefits everybody.


> Do these "studies" account for second-order effects on housing, local job markets, etc.?

Yes everything improves. Displaced workers find new jobs, markets and economies expand, etc. etc.

> At the expense of legal immigrants who bothered to do it the right way.

This is just nonsense, immigration isn't a zero-sum game.

> Some of these studies exist for legal immigrants, cite the one making this case for illegals?

Google it, I'm at work

edit: had a lull, here you go https://www.epi.org/publication/unauthorized-immigrants/

The money quote:

> If we examine just the net fiscal impact of unauthorized immigrants, even this is positive, despite the fact that lacking work authorization also means being trapped in low-wage work and being unable to adequately assert one’s labor and employment rights. A prime reason the net contribution is, nonetheless, positive is that many unauthorized immigrants pay income taxes and have Social Security taxes withheld yet are generally ineligible for government benefits and services.


It's got that analog warmth


This is an important clarification; from the abstract and title I was super confused about how they identified a "subspace" that could be consistently identified across model structures (I was assuming they meant that they saw stability in the dimension of the weight subspace or something), but if they're just referring to one model class that clears things up substantially. It's definitely also a much weaker result IMO, basically just confirming that the model's loss function has a well-posed minimum, which... duh? I mean I guess I'm glad someone checked that, but calling it "the universal weight subspace hypothesis" seems a bit dramatic.


Basic rule of MLE is to have guardrails on your model output; you don't want some high-leverage training data point to trigger problems in prod. These guardrails should be deterministic and separate from the inference system, basically a stack of user-defined policies. LLMs are ultimately just interpolated surfaces, and the rules are the same as if it were LOESS.
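
A minimal sketch of the "deterministic policy stack" shape, with made-up policies purely for illustration:

    import re

    # Deterministic guardrails, applied after and independently of inference:
    # each policy is a plain predicate over the model's output.
    POLICIES = [
        (lambda out: len(out) < 4000, "output too long"),
        (lambda out: not re.search(r"rm\s+-rf\s+/", out), "destructive command"),
        (lambda out: "BEGIN PRIVATE KEY" not in out, "credential leak"),
    ]

    def guard(model_output: str) -> str:
        for ok, reason in POLICIES:
            if not ok(model_output):
                raise ValueError(f"guardrail tripped: {reason}")
        return model_output

The point being that nothing in there consults the model: the same checks you'd bolt onto a LOESS fit's predictions, just with string-shaped output.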


O(n^(~2.8)) because fast matrix mult?
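
(If that's a nod to Strassen-style fast multiplication, the exponent falls out of the divide-and-conquer recurrence, assuming the classic 7-multiplication split:

    T(n) = 7\,T(n/2) + O(n^2) \;\Rightarrow\; T(n) = O(n^{\log_2 7}) \approx O(n^{2.807})

i.e. seven half-size multiplies instead of eight.)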

