Hacker News

Most of the time, self-hosting sites focus on installing the tools, but most of an admin's time isn't the initial setup; it's the long-term maintenance: the updates, the fixes when something doesn't work as expected, the disk full, the disk failure... and most of the time I find that those nice websites don't help enough. How will you deal with your self-hosted service when your friends or family rely on it and it is down for an obscure reason on an obscure component you've never heard of?


> when your friends or family rely on it

I would very much discourage people reading "how to self-host" articles from hosting for anyone but themselves. As soon as you involve other people, your stress will grow immensely; there are all kinds of expectations and possible disappointments. It's one thing to share a few photo folders, but providing online services is a dev job, and you don't want that relationship with them.

Note that I specifically mean the situation where you share your own self-hosted setup with some close people so they can also benefit. If you want to self-host to specifically serve a community, go for it, you're awesome.


FWIW, I'd go a bit stronger and say that, if you are running a service for other people, you aren't "self-hosting" anymore, you're just regular-old "hosting" at that point ;P.


It makes much more sense to form a little co-op, pay some pros to look after it (managed hosting) and contribute to the core, imo. Extract an SLA from the managed service that will cover restoring service. It doesn't feel as cool, but it is much less stressful as well as fairer to the nontechnical folk.


True, OP is a SaaS provider.


Self hosting is hosting users yourself.

If it’s production grade even just for you, the most boring and reliable way for systems to update themselves is mandatory. Luckily there’s way more out there now than in the past.


I grew up hosting PHPBB, game servers, ratio'd FTPs among many other things in the 90s. Dozens of friends used my "services". It didn't contribute to anxiety. It taught me a lot of skills because I wanted the things we used collectively to be available. Were they always? Nope. That was the contract.

Fast forward to today and it's no different. The complexity has increased, yes. But in many ways some of those complexities make things easier. The hosted solutions I provide friends and family with are rarely offline. Do I hit 5 nines annually? The real question is - does anyone care? No. But if I were to guess my downtime is cumulatively less than a few hours per year.
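
For scale (my arithmetic, not the poster's numbers): "a few hours a year" of downtime is roughly three nines, while five nines allows only about five minutes. A quick back-of-the-envelope check:

```shell
# Allowed downtime per year at each availability level ("nines").
for n in 3 4 5; do
  awk -v n="$n" 'BEGIN { printf "%d nines: %.1f min downtime/yr\n", n, 365.25 * 24 * 60 / (10 ^ n) }'
done
# prints:
# 3 nines: 526.0 min downtime/yr
# 4 nines: 52.6 min downtime/yr
# 5 nines: 5.3 min downtime/yr
```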

If you're drawn to it, don't be distracted by the opinions of others who project these sorts of narratives. For some it may create anxiety, so don't do it; but others may find a path in their career through learning via the self-host route.

And if the people you're providing ancillary services for are jerks about a bit of downtime here and there, and those folks are not contributing to your endeavors, then tell them to go fly a kite. Nobody I've shared self-hosting with has ever truly complained. I've had years of fun banter, especially in the earlier days, around uptime. But those people weren't trying to sow discord. Those people are still friends.

I enjoy following the self-host community, contributing in minimal ways, and self-hosting useful things for friends and family. It doesn't keep me up at night. I'm constantly improving because of it.


Depending on what you are hosting, I recommend hosting just for yourself for perhaps a year. After you have the redundancies, backups, processes and other knowledge, you can share it with your community. I would never expect my family and friends to have the skill or desire to self-host, but I do not like to see them abused by platforms. It is a labor of love for me.


I host a bunch of stuff for friends completely free with the expectation that if there is an outage I'll do my best, on my own schedule to fix it. If I plan on sunsetting something I'll let them know a bit of time in advance, and that they're responsible for backing up the data they can't afford to lose.

So far everyone is happy with that arrangement, and I don't mind sharing the extra resources I have with people I care for.


Also if you do host for other people they need to know that shit might go wrong and they should have their own backups (which is a good thing to teach them with any service).


I use Cloudron for this. While I agree that there is _some work_, technology and services have improved to the extent that a reasonably good developer can host apps for themselves, their family, and others.


Yunohost is not a site, it's a Linux distribution focused on making popular self-hosting apps more accessible. It tries to solve exactly what you're talking about - long-term maintenance and excessive complexity of small scale/personal self-hosting.


I think OP is using the term site loosely.


Because the cloud remains the same?

All of what you mention is something you consider during install. Which arguably means the install is where most of the time goes.

Same should be true for the cloud. Because you surely are not just jumping into something without analyzing what you are locking yourself into etc.?

Yes, it sucks if your PSU goes into smoke at an inopportune time. So you don't put critical services on it. Just as you don't on the cloud either. AWS has a much worse uptime than my homeserver. Github, slack, netflix, etc too.

Also, you get to do maintenance when you know noone will be bothered by it.


> AWS has a much worse uptime than my homeserver.

I find this extremely hard to believe.


I absolutely do not. I see AWS outages and issues with provisioning all the time at work. Last time my home server went down was because the power went out and we don't have a generator.


I think what gets lost here is how we're defining uptime. If you're regularly experiencing EC2 outages that result in your apps and services being unavailable, I find it very hard to believe you're using EC2 correctly.

Running a bare one-off EC2 instance on AWS is a strange choice if you care about uptime.


The statement doesn’t even make sense. "AWS" isn’t a single service, and measuring uptime across every AWS service is nonsensical. Obviously they mean EC2, but even then, lumping all regions and AZs together is weird.


Self-hosting is a hobby, more or less. You do it to get something practical out of it, but you also do it because you learn something on the way.

Self-hosting is a low risk way of learning lessons which otherwise would be high risk.

The effort self-hosting takes shrinks with the experience you gain.

There is terrible software to self-host (both as a beginner and as an expert) and there is software that is a breeze to work with.

There are battles to be chosen and battles to be avoided.

I host 20 websites and multiple services, and it costs me less than a day of work per month (realistically maybe 4 hours or less a month).


I would be interested in hearing examples of software that is a breeze to self-host and software that is terrible to self-host. And most importantly, why? What can software devs do to make something a breeze? And what should they avoid? Thanks.


Yup, add a bit of automation to the mix, and you will also not need to repeat much in the rare case where you need to reinstall something from scratch or add a new machine.

> There is terrible software to self-host (both as a beginner and as an expert)

Any examples? Just so I can avoid them.


It doesn’t have to be a hobby. There are production-grade tools like Proxmox that lift self-hosting to a comparable level.


This is the first time I've heard of Yunohost, but their documentation instills a lot of confidence in me.

I've been thinking of setting up NextCloud or similar to start self-hosting our HOA's email server and file host, but it felt like a huge lift for someone who has never done it before.

Seeing that this is a full-on OS distro (Debian based), has been used and improved for over a decade, has docs that address backups, restores, and various attack vectors, has all app users integrated via LDAP, I feel comfortable at least giving it a try.


Setting up an email server is easy. Having messages sent from that server actually appear in other people's inboxes is very hard.
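
For anyone weighing this up: deliverability mostly comes down to a handful of DNS records plus the sending IP's reputation. A sketch of the usual TXT records, with example.com, the "mail" selector, and the key as placeholders (your ESP or opendkim generates the real values):

```
; SPF: only hosts listed in our MX records may send as example.com
example.com.                 IN TXT "v=spf1 mx -all"
; DKIM: public key for the "mail" selector
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

A matching PTR (reverse DNS) record on the sending IP matters just as much, and that part is in your hosting provider's hands.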


Using outgoing ESPs (email service providers) and self-hosting the rest trivializes this.

Or you can just run something like MDaemon and let it trivialize self-hosting mail.

Self-hosting is the original devops.


IP reputation matters. If you self host, for the love of god don't use DO/AWS/Linode/etc. Small, established hosts are best, in my experience - shoutout to Mythic Beasts and (in times gone by) Bytemark for that.


Updates can be handled by things like the unattended-upgrades package in Debian (and derivatives). Then you're left with a distro version upgrade once every 2 years, which on Debian is just a few commands and 5-15 minutes of updating, plus the occasional config fix if an app changed enough that some config options you used are deprecated. Adding non-distro repositories can make it a bit iffy though, as some packages are just made badly and have problems upgrading, but they will generally not break your system.
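
For reference, the Debian setup described above is tiny: install the unattended-upgrades package, then `dpkg-reconfigure -plow unattended-upgrades` writes this stock `/etc/apt/apt.conf.d/20auto-upgrades`, which turns on the daily list refresh and security upgrades:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

`unattended-upgrade --dry-run --debug` shows what it would do without touching anything.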

If you're using Ubuntu or an Ubuntu derivative, it's the same except it's Russian roulette whether the upgrade will actually work. In my experience, just fucking don't, or prepare for a full reinstall every few years.

Then there is also a slew of "appliance"-type OSes, like FreeNAS, that also allow you to run containers. That's probably the best choice if you don't want to play Linux admin once every month or two.

There is a definite lack of an "all in one panel" for someone who wants "their own Linux server" but doesn't want to set up monitoring and a bunch of little stuff around it. When you want some alert generation, some metric monitoring, plus a few automated things like "resize this partition when it fills up, up to a limit", it's usually a bunch of scripts plus disparate apps.
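
The "bunch of scripts" in question can be very small. A minimal sketch of the disk-usage side (the 90% threshold and the idea of piping the output into mail or a notifier are my assumptions, not anything a particular panel does):

```shell
#!/bin/sh
# Print a line for any filesystem at or above THRESHOLD percent full.
# Run it from cron and pipe the output to mail, ntfy, etc.
THRESHOLD=90
df -P | awk -v limit="$THRESHOLD" 'NR > 1 {
  use = $5; sub(/%/, "", use)
  if (use + 0 >= limit) printf "%s is %s%% full (%s)\n", $6, use, $1
}'
```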

Then again, it's the "different 20%" problem: 20% of the features of those commonly used stacks (Grafana/ELK/Icinga/etc.) is enough for 80% of the users, but that 20% differs from person to person.

For example, some might only care to have rough stats, to draw a line and see "okay, my hard drive will be full in 2 months, gotta do something about it." But someone else will want the same metrics solution to also ingest a bunch of stuff from their IoT gadgets.

> How will you deal with your self-hosted service when your friends or family rely on it and it is down for an obscure reason on an obscure component you never heard of?

... the same thing happens with "all in one" stuff that automates everything you'd do manually on a plain Linux box. Maybe rarer, but also harder to find a solution for. In the end, if you want it to not be a problem, you need to pay to make it someone else's problem (a managed/hosted solution).


> Updates can be handled by stuff like unattended-upgrade package in debian

That won't keep up with the breaking changes this type of software usually has.

> If you're using Ubuntu or ubuntu derivative it's same except its russian roulette whether upgrade will actually work. In my experience just fucking don't or prepare for full reinstall every few years.

When was the last time you did that? That hasn't been true for literally years.


>> Updates can be handled by stuff like unattended-upgrade package in debian

>That won't keep up with the breaking changes this type of software usually has.

This only updates to the current stable, so there will be no breaking changes, as those are essentially security updates.

The "breaking changes" part is the once-every-2-years distro upgrade, but it's generally very little, although that heavily depends on the software you use, of course. That's from experience with a few hundred machines at work and half a dozen private ones.

But if you wrote a custom config, you will have to change it if the format changed; you won't get away from that no matter how much automation you throw at the thing.

>> If you're using Ubuntu or ubuntu derivative it's same except its russian roulette whether upgrade will actually work. In my experience just fucking don't or prepare for full reinstall every few years.

> When was the last time you did that? That hasn't been true for literally years.

Just recently we fixed a user's machine where they tried to upgrade and some random shit broke. It also filled /boot with a bunch of kernels (it did not remove old ones), which made the machine unbootable because it ran out of space on a kernel update. It could still boot the old kernel, but of course the user didn't know that. We've actually migrated a few people over the years to Debian precisely because they broke their Ubuntu during an update somehow.

I imagine it happens much less with actually competent users and on servers, not desktops.


>> If you're using Ubuntu or ubuntu derivative it's same except its russian roulette whether upgrade will actually work. In my experience just fucking don't or prepare for full reinstall every few years.

> When was the last time you did that? That hasn't been true for literally years.

20.04 to 22.04 dist-upgrade... kept getting weird errors from apt post-upgrade; it was just easier to blow the VM away and start fresh.


Not GP, but 20.04 to 22.04 fucked something up (I can't remember what offhand) such that it ended up being quicker and easier to just format and reinstall Ubuntu.

Admittedly I did it right after 22.04 dropped rather than waiting for 22.04.1 (even a week would have been fine), so that's a little on me.


> How will you deal with your self-hosted service when your friends or family rely on it

That’s not self-hosting anymore, now you are the cloud.


But that is what that distro is proposing you do:

> With YunoHost, you can easily manage a server for your friends, association or enterprise.


A substantial number of YNH devs are French; I'm wondering if they're just more likely to think in terms of associations, etc.

It is a good idea, to be fair: non-techy people don't get anything but big companies if nobody will host for them.


Self-hosting really should be understood as "hosting for myself" rather than "doing the hosting myself". Hence I'd never host for anyone but me. The stress of having to make services reliable, even if only for my wife and kids, is a no in my book.


This is probably a matter of perspective. I wouldn't recommend it to everybody.

I don’t find hosting services for a couple of friends and my family stressful.

I’m also a full-time system admin, so it is really a more-fun, more-freedom, less-pressure, less-bullshit version of the work, and I'd love to be doing that instead of whatever management and the team came up with.


I have also been a sysadmin for a long time, so maybe we're in a bubble, but I don't find it stressful.

If something's down, it's down. If someone complains, I offer them a full refund of the $0 they pay, which stops complaints fairly quickly.


Hosting for yourself shouldn't be less important than hosting for others.

In moments of unplanned crisis it will really become apparent that you self-host because you value yourself as a user more, not less.

In a way we self host lots of things on our phones.


I'm working on a project to address the issues you're mentioning (long-term maintenance, updates...) -> https://nua.rocks/

Not ready for prime time, though.


Cloudron and yunohost are distributions. They provide managed updates.


Another problem, with Yunohost at least, is that it has no focus at all on security. There are others, like Sandstorm, that at least try to have some basic security.


Can you expand on that a bit? I don't think it's true. Yunohost has a firewall, fail2ban, user management, access management for its installed apps, and documentation on the topic: https://yunohost.org/en/security


I am not the poster, but the comparison with Sandstorm leads me to believe that they meant security as in service isolation, e.g. running potentially untrusted services on one system.

IIUC, YunoHost helps deploy and manage services in a more traditional way and assumes that each service is trusted; they aren't that rigorously isolated from each other.


Yes, that's what I was referring to. With Yunohost if a single service is compromised, then they're all compromised. That makes for a large attack surface that grows with each app you install.


That's not entirely true. Every service runs as its own user with (mostly) access only to its own data.


That was best practice in the 2000s.

Today, best practice is using the zero-trust security model and sandboxing everything.


You might be surprised by Proxmox with TurnKey Linux images, or Rocky Linux containers.



