This is something I have thought about for many years. If we have small form factor computers that sell for, say, $5-$25, why not distribute pre-configured, free, open source servers on well-understood hardware^1 instead of having to test on various hardware possibilities, not all of them well-understood, or making assumptions about what hardware people have available? IMO, there is value in server configuration. A purchaser could choose to compile and install the software herself, but having an example of a working configuration can be invaluable. For me, good examples are usually worth more than countless pages of verbose documentation. The question I have is the dollar amount of that value.
The example I have thought about in the past, rightly or wrongly, is the WRT54G and what became OpenWRT. To me, focusing on one item of hardware initially has advantages. That is generally what vendors of off-the-shelf products do. Yet with open source software, there is usually an expectation that it must work on a variety of hardware, which is likely to make things more complicated.
1. I am not implying this has never been done or that it isn't still happening.
Having a working DNS configuration is generally a prerequisite to sending email.
There is, of course, smtpd software that allows receiving email using only IP addresses. Email predates DNS. Two people running their own smtpds could exchange email directly using only IP addresses, with no need for DNS. But the way most people use email today, delegating total control over it to third parties, makes email highly dependent on DNS.
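For the curious, here is roughly what that looks like in practice: a minimal sketch using Python's smtplib, assuming a peer smtpd is listening on the given address. The addresses use RFC 5321 address literals (`user@[192.0.2.25]`) instead of domain names, and the 192.0.2.x IPs are documentation-range placeholders standing in for two real hosts:

```python
import smtplib
from email.message import EmailMessage

# Illustrative sketch only: RFC 5321 "address literals" let two MTAs
# exchange mail with no DNS at all.
msg = EmailMessage()
msg["From"] = "alice@[192.0.2.10]"
msg["To"] = "bob@[192.0.2.25]"
msg["Subject"] = "hello, no DNS required"
msg.set_content("Delivered by connecting straight to your smtpd.")

# Skip the MX lookup entirely and connect to the peer's smtpd by IP.
with smtplib.SMTP("192.0.2.25", 25) as server:
    server.send_message(msg)
```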
Another example that comes to mind: there are routers that come with WireGuard pre-installed. No technical support, just online instructions, a forum, and a custom GUI.
You're not wrong. But I feel that this type of (paid) offering is the next best thing. What we need is many, many more paid providers, which helps decentralize things a little at least (and of course helps those less techie than some)... until your proposed solution is more easily implementable by a greater swath of the interested population.
I believe it will come with time; we already have IPv6 and microservers like the Pi. But then users have to secure, update, back up, monitor, and upgrade the software and hardware regularly... There is no solution yet for that.
Exactly. The key is that it's a paid service. And it looks to be based on Pleroma, and they will export data and transfer domains upon request. As long as it stays like that, and that's the bar across providers, everyone wins.
I have Mastodon, mail, XMPP, Coturn, and Nextcloud instances all running on a single Raspberry Pi, set up quite easily thanks to yunohost.org. They have ISOs for just about any hardware or a VPS.
Hostman and Mastohost are among the other managed hosters of fediverse instances.
> all I would need to do is plug it into my router and open up a port.
I like this vision of how things should work, but finding an ISP that _doesn't_ firewall incoming connections or put you behind carrier-grade NAT is challenging.
In my [limited] experience, neither tech support nor sales have any idea what any of these terms mean, so they can't advise at all.
What are the odds the router itself can act as a Mastodon server with some external storage? I'm not sure how resource intensive Mastodon is in practice.
I don't think the average router would be able to run much of anything, not to mention how insecure the average router is since people don't update them.
Mastodon is very resource heavy; I couldn't get it to run on a Raspberry Pi 3B. Pleroma is an alternative to Mastodon that is very light on resources; I'd recommend that personally.
I can easily run Pleroma on a 1 GB VPS and on Pi 3s; the heaviest part is the Postgres server, which can itself be tuned to be pretty light. It relies on Postgres's native JSON data type, which is cool. I run a couple of instances and have poked around the code for a few hours. I'd love to see SQLite support in Pleroma, given that SQLite allegedly has JSON support now; there is a two-year-old ticket on their GitLab [1] mentioning it, though I haven't dug in to see whether a port is 100% plausible. Pleroma is written in Elixir, whose DB abstraction library (Ecto) supports SQLite; I was just looking into it this week.
Even if SQLite's JSON isn't sufficient, it could be realized with plain string columns for now. And SQLite is recommended by the Library of Congress for archival storage and is used on the iPhone, so I think it's a solid way to expand Pleroma even if its native JSON support falls short.
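To illustrate the point about SQLite's JSON support: a minimal sketch using Python's built-in sqlite3 module, assuming a SQLite build with the JSON1 functions available. The table and fields here are made up for illustration, not Pleroma's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (id INTEGER PRIMARY KEY, data TEXT)")

# JSON is stored as ordinary TEXT; the JSON1 functions query into it.
conn.execute(
    "INSERT INTO activities (data) VALUES (?)",
    ('{"type": "Create", "actor": "https://example.social/users/alice"}',),
)

# json_extract pulls fields out of the stored document, loosely
# analogous to querying Postgres's native JSON type.
row = conn.execute(
    "SELECT json_extract(data, '$.actor') FROM activities "
    "WHERE json_extract(data, '$.type') = 'Create'"
).fetchone()
print(row[0])  # https://example.social/users/alice
```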
If anyone wants to pay me to do this as open source, I'd jump; contact in bio. I don't know Elixir yet, but I'm motivated and have 15+ years of dev experience in all kinds of systems, plus experience in the adjacent parts.
It's a Ruby application. Take all the background processes to build timelines, trending hashtags, etc., and it adds up. Perhaps a single-user instance might do with less, but still...
It probably depends on the load? It's a Rails app AFAIK, so if the load is low (which is likely for a family or group of friends) it should be able to run anywhere.
If you don't federate the public feed (and so only receive toots from accounts you've explicitly followed, from people who mention you, or from their shares/boosts), and you don't follow a huge number of highly active accounts, then maybe. There are two delivery paths:
1. For anything where you are mentioned specifically, or where you're a follower, the toot gets distributed automatically.
2. For anything where the toot is marked public but you are neither a follower nor mentioned, your instance will only receive the toot if you've explicitly configured federation of the public feed with one or more reasonably well-connected instances.
The former is "cheap" unless you follow lots of people. The latter is like sucking on a firehose, because you get pretty much everything. If you do that, it matters relatively little how large your userbase is, because you're still processing nearly every public toot.
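In pseudocode terms, the two paths look roughly like this. This is a deliberate simplification of the rules above, not Mastodon's actual implementation, and all the names are made up:

```python
# Rough sketch of the delivery rules described above; illustrative only.
def instance_receives(toot, instance) -> bool:
    # Path 1: mentions and followers get the toot pushed automatically.
    if any(m.instance == instance for m in toot.mentions):
        return True
    if any(f.instance == instance for f in toot.author.followers):
        return True
    # Path 2: other public toots arrive only if this instance has
    # explicitly federated the public feed (the "firehose").
    if toot.is_public and instance.federates_public_feed:
        return True
    return False
```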
I would love to do this, but it's a lot more work to maintain that kind of thing for most users. At least with Togethr we let you choose from many different data centres so the instances won't all be in the same place.
I think it's something that a vanity subdomain would be fine for the base case, with an optional upsell for a custom domain. The self-hosted box would need to have some way to dynamically update DNS, as the average consumer doesn't have a static IP.
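As a sketch of how that dynamic DNS piece might work: many DDNS providers accept a simple authenticated HTTP request carrying the hostname and current address. Everything here (the update endpoint `ddns.example.com`, the hostname, and the token) is hypothetical; api.ipify.org is a real service for discovering your public IP:

```python
import urllib.request

HOSTNAME = "mybox.example.net"  # hypothetical vanity subdomain
TOKEN = "secret-api-token"      # hypothetical provider credential

def current_public_ip() -> str:
    # Ask an external service which public address we appear from
    # (the box itself usually only knows its private LAN address).
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode()

def update_dns(ip: str) -> None:
    # Hypothetical update endpoint; real providers differ in detail,
    # but the shape of the request is typically this simple.
    url = (f"https://ddns.example.com/update"
           f"?hostname={HOSTNAME}&ip={ip}&token={TOKEN}")
    with urllib.request.urlopen(url) as resp:
        print(resp.status, resp.read().decode())

if __name__ == "__main__":
    # Run from cron on the self-hosted box to keep the record fresh.
    update_dns(current_public_ip())
```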
I.e., if we are going to decentralize with Mastodon-style software, let's actually decentralize the hardware used to host the system.